---
abstract: 'We investigate the dynamical stability of a fully coupled system of $N$ inertial rotators, the so-called Hamiltonian Mean Field model. In the limit $N \to \infty$, and after proper scaling of the interactions, the $\mu$-space dynamics is governed by a Vlasov equation. We apply a nonlinear stability test to (i) a selected set of spatially homogeneous solutions of the Vlasov equation, qualitatively similar to those observed in the quasi-stationary states arising from fully magnetized initial conditions, and (ii) numerical coarse-grained distributions of the finite-$N$ dynamics. Our results are consistent with previous numerical evidence of the disappearance of the homogeneous quasi-stationary family below a certain energy.'
address: |
Centro Brasileiro de Pesquisas Físicas, R. Dr. Xavier Sigaud 150,\
22290-180, Rio de Janeiro, Brazil
author:
- 'Celia Anteneodo and Raúl O. Vallejos'
title: Vlasov stability of the Hamiltonian Mean Field model
---
[^1]
long-range interactions, nonextensivity, Vlasov equation; 05.20.-y, 05.70.-a
Introduction {#sec:introduction}
============
Nonextensive Statistical Mechanics (NSM) was proposed in 1988 [@ct88] as an attempt to account for many phenomena that cannot be described within the standard Boltzmann-Gibbs scenario (see [@review] for reviews). NSM starts from a generalized entropy functional, and, using suitable constraints, a generalization of the usual canonical ensemble thermostatistics is obtained. Instead of the exponential weight, in NSM one has a $q$-exponential probability distribution in $\Gamma$-space:
$$\label{qcanonical}
P(x) \propto \exp_q[-\beta H(x)]
\equiv \left\{ 1- \beta (1-q) H(x) \right\}^{1/(1-q)}$$
(if the quantity between curly brackets is negative then $P=0$). Here, $x$ is a point in the $N$-body phase space of a system described by the Hamiltonian $H(x)$, $\beta$ is a generalized inverse temperature, and $q$ is the entropic index [@review]. Unlike the canonical distribution function \[Eq. (\[qcanonical\]) with $q=1$\], which applies to equilibrium situations, the $q$-canonical distribution (with $q\neq 1$) is believed to be associated with meta-equilibrium or even out-of-equilibrium regimes [@review].
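For concreteness, the $q$-exponential weight of Eq. (\[qcanonical\]) is straightforward to evaluate numerically. The following is a minimal NumPy sketch (our illustration, not part of the original analysis) implementing the cutoff convention stated above.

```python
import numpy as np

def exp_q(x, q):
    """q-exponential: exp(x) for q = 1; zero where 1 + (1 - q) x < 0 (Tsallis cutoff)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    out[base > 0] = base[base > 0] ** (1.0 / (1.0 - q))
    return out

# q-canonical weight (up to normalization) for a configuration of energy H >= 0:
# P = exp_q(-beta * H, q)
```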
The predictions of the theory can in principle be derived from Eq. (\[qcanonical\]). However, explicit calculations for the anomalous high-dimensional systems of interest for NSM have not yet been performed, owing to formidable technical difficulties. For instance, single-particle distribution functions require the integration of $P(x)$ over $N\!-\!1$ particles.
On the other hand, there are many experimental and numerical data well described by $q$-exponential functions [@review]. This agreement is considered indirect evidence of the applicability of NSM to those systems. Moreover, $q$-exponentials have also been observed in non-Hamiltonian and even non-physical (biological, economic, etc.) systems [@interdisciplinary], but in these cases the connection with Statistical Mechanics (from first principles) is less clear.
A model system potentially important for NSM is the so-called Hamiltonian Mean Field model (HMF), an inertial $XY$ ferromagnet with infinite-range interactions [@inagaki93; @antoni95]:
$$\label{ham0}
H = \frac{1}{2} \sum_{i=1 }^N p_{i}^{2} +
\frac{1}{2N} \sum_{i,j=1}^N
\left[ 1-\cos(\theta_{i}-\theta_{j}) \right] \;,$$
where the coordinates of each particle are the angle $\theta_i$ and its angular momentum $p_i$. This model can be considered a simple prototype for complex, long-range systems like galaxies and plasmas. In a certain sense, the HMF is a descendant of the mass-sheet gravitational model [@antoni95].
The HMF has been extensively studied in the last few years (see [@reviewHMF] for a review), especially due to its long-lived quasi-equilibrium states, which exhibit breakdown of ergodicity, anomalous diffusion, aging, and non-Maxwell velocity distributions. In some cases $q$-exponential behaviors were reported: two-time correlation functions [@montemurro03; @pluchino03], single-particle velocity distributions (truncated $q$-Gaussians) [@qss_vd] and temperature relaxation [@latora04]. The simplicity of the HMF, which made possible a full analysis of its statistical properties both in the standard canonical [@antoni95] and microcanonical ensembles [@antoni02], supports the hope that it may be possible to derive explicit first-principles predictions, e.g., for single-particle velocity distributions.
The most widely investigated quasi-stationary states (QSS) of the HMF arise from special initial conditions (water-bag) that evolve to states with well defined macroscopic characteristics and whose lifetimes increase with the system size [@qss_vd]. For a given energy, the QSS is different from the corresponding equilibrium state to which the system eventually evolves. These states have been observed for specific energies $\varepsilon\equiv H/N$ below the critical value $\varepsilon_{\rm cr}=0.75$ corresponding to a ferromagnetic phase transition. In a magnetization diagram (see [@qss_vd] and Fig. \[fig:m2vse\]), a first part of the QSS family lies on the continuation of the high-energy equilibrium line (having zero magnetization) down to the energy $\varepsilon_{\rm min} \simeq 0.68$. At $\varepsilon_{\rm min}$ the $m=0$ line ceases to exist, being substituted by a curve of magnetized states.
![Squared magnetization versus energy for quasi-stationary states (circles). System size is $N=10^4$. We averaged over 10 initial conditions and in the time window $500<t<2000$ (see Sec. \[sec:results\] for numerical details). The full line is the analytical result for thermal equilibrium [@antoni95]. []{data-label="fig:m2vse"}](f1vlasov.eps){width="75.00000%"}
This paper is an attempt to understand the origin of this discontinuity. We base our analysis on the associated Vlasov equation, which describes the continuum limit of the Hamiltonian particle dynamics. Using the stability criterion recently developed by Yamaguchi et al. [@yamaguchi], we show that the [*coarse-grained*]{} homogeneous (zero-magnetization) QSS family becomes unstable at $\varepsilon_{\rm min}$.
Vlasov approach {#sec:taylor}
===============
Introducing the radial and tangent unit vectors $\hat{\bf r}_i = (\sin \theta_i, \cos \theta_i)$ and $\hat{\bf \theta}_i=( \cos \theta_i,-\sin \theta_i)$, Hamiltonian (\[ham0\]) leads to the equations of motion
$$\begin{aligned}
\nonumber
\dot{\theta}_i &=& p_i, \\ \label{motioneqs}
\dot{p}_i &=& {\bf m} \cdot \hat{\bf \theta}_i,\end{aligned}$$
for $i=1,\ldots,N$, with the magnetization per particle $${\bf m} = \frac{1}{N} \sum_{i= 1}^N \hat{\bf r}_i \; .$$ Eqs. (\[motioneqs\]) describe the two-dimensional motion of a cloud of $N$ points in the fluctuating mean-field ${\bf m}(t)$. In the continuum approximation, the cloud becomes a fluid governed by the Liouville equation for the single-particle probability distribution $f(\theta,p,t)$ $$\label{liouville}
\frac{\partial f}{\partial t} \,+\, p\frac{\partial f}{\partial \theta}
\,+\, {\bf m}\cdot \hat{\bf \theta} \; \frac{\partial f}{\partial p}\,=\,0 \;,$$ with $${\bf m}(t)=\int d\theta \,\hat{\bf r}(\theta)\int dp \,f(\theta,p,t) \;.$$ Liouville equation (\[liouville\]) has to be solved self-consistently with the calculation of ${\bf m}(t)$, and is thus formally equivalent to the Vlasov-Poisson system found in plasma and gravitational dynamics [@balescu]. The continuum approximation will be progressively better as $N$ grows, especially when the cloud of particles is spread over a finite region in $\mu$-space. This is the case of the distributions we will analyze in this paper (see Fig. \[fig:pdfs\]). Detailed discussions of Vlasov approximation can be found in [@braun77; @spohn80; @yamaguchi].
Dynamical stability {#sec:stability}
===================
Every spatially homogeneous distribution $f(p)$ is a stationary solution of the Vlasov equation (\[liouville\]). The reason is that homogeneity leads to ${\bf m}=0$ and thus momenta do not change. The interesting point is whether these homogeneous solutions are stable or not. The well-known Landau analysis [@balescu] concerns [*linear*]{} stability. This linear criterion was recently applied to the HMF by Choi and Choi to discuss ensemble equivalence and quasi-stationarity [@choi03]. Besides the linearity restriction, Landau analysis has the disadvantage of requiring the knowledge of detailed analytical properties of $f(p)$. This excludes, in principle, numerically obtained momentum distributions.
A more powerful stability criterion for homogeneous equilibria has been proposed by Yamaguchi et al. [@yamaguchi]. This is a nonlinear criterion specific to the HMF. It states that $f(p)$ is stable if and only if the quantity
$$I\,=\,1\,+\,\frac{1}{2} \int_{-\infty}^\infty {\rm d}p\, \frac{f^\prime(p)}{p}$$
is positive (it is assumed that $f$ is an even function of $p$). Note that this condition is equivalent to the zero frequency case of Landau’s recipe [@antoni95; @choi03].
The authors of Ref. [@yamaguchi] showed that a distribution which is spatially homogeneous and Gaussian in momentum becomes unstable below the transition energy $\varepsilon_{\rm cr}=3/4$ (see also [@inagaki93; @choi03]), in agreement with analytical and numerical results for finite $N$ systems. They also showed that homogeneous states with zero-mean uniform $f(p)$ (water-bag) are stable above $\varepsilon=7/12=0.58...$ (see also [@antoni95; @choi03]), and verified that the stability of these Vlasov solutions has a counterpart in the associated particle dynamics.
In the same spirit, it is also worth analyzing the stability of the family of $q$-exponentials $$\label{qgaussian}
f(p) \propto \exp_q(-\alpha p^2) =\left[ 1- \alpha (1-q) p^2 \right]^{1/(1-q)} \; ,$$ which allow one to scan a wide spectrum of probability distributions, from finite-support to power-law tailed ones, containing as particular cases the Gaussian ($q=1$) and the water-bag ($q=-\infty$). In Eq. (\[qgaussian\]), the normalization constant has been omitted and the parameter $\alpha>0$ is related to the second moment $\langle p^2 \rangle$, which only exists for $q<5/3$. In the homogeneous states of the HMF one has $\langle p^2 \rangle = 2\varepsilon-1$, as can be easily derived from Eq. (\[ham0\]). Then, the stability indicator $I$ as a function of the energy for the $q$-exponential family reads $$I = 1-\frac{3-q}{2(5-3q)(2\varepsilon-1) } \;.$$ Stability occurs for energies above $\varepsilon_{\rm q}$ $$\label{threshold}
\varepsilon_{\rm q}(q) = \varepsilon_{\rm cr} +\frac{q-1}{2(5-3q)} \; .$$ It is easy to verify that one recovers the known stability thresholds for the uniform and Gaussian distributions, i.e., $\varepsilon_{\rm q}(-\infty)=7/12$ and $\varepsilon_{\rm q}(1)=3/4$. We remark that Eq. (\[threshold\]) states that only finite-support distributions, corresponding to $q<1$, are stable below $\varepsilon_{\rm cr}$. This agrees with numerical studies in the QSS regimes of the HMF (see Fig. \[fig:pdfs\]).
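The threshold (\[threshold\]) and the indicator $I$ for the $q$-exponential family are easily tabulated; the short sketch below (ours, for illustration only) recovers the water-bag and Gaussian limits quoted above.

```python
import numpy as np

def stability_indicator(q, eps):
    """I(q, eps) for a homogeneous q-exponential momentum distribution (q < 5/3)."""
    return 1.0 - (3.0 - q) / (2.0 * (5.0 - 3.0 * q) * (2.0 * eps - 1.0))

def threshold_energy(q):
    """Stability threshold eps_q(q) of Eq. (threshold)."""
    return 0.75 + (q - 1.0) / (2.0 * (5.0 - 3.0 * q))

print(threshold_energy(1.0))        # Gaussian: 0.75
print(threshold_energy(-1.0e9))     # water-bag limit (q -> -infinity): ~7/12 = 0.5833...
print(stability_indicator(0.0, 0.70) > 0)   # finite-support case (q = 0), stable at eps = 0.70
```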
![Momentum distributions for energies indicated on the figure. Histograms (bin size $\simeq$ 0.07) were generated averaging on the time interval (500,2000) and 10 different initial conditions (very small random perturbations of the fully magnetized regular water-bag). In all cases $N=10^4$. See Sec. \[sec:results\] for further details. Full lines are water-bag-plus-cosine fittings to the data.[]{data-label="fig:pdfs"}](f2vlasov.eps){width="100.00000%"}
The QSS we discuss in this paper are obtained from particular initial conditions: all angles are equal and the momenta are distributed uniformly with zero mean; the energy is chosen slightly below $\varepsilon_{\rm cr}$. After an initial “violent relaxation" (the initial state is highly inhomogeneous) the system evolves to a QSS characterized by spatial homogeneity and momentum distributions with cutoffs [@qss_vd]. However, the resulting distributions of momenta are no longer uniform. Some structure appears on top of a slightly smoothed water-bag (see Fig. \[fig:pdfs\]). A very simple family of functions exhibiting the basic features of the observed $f(p)$ is the water-bag plus cosine: $$f(p)=
\begin{cases}
1/(2p_{\rm max}) + B \cos(\pi p/p_{\rm max}), & \mbox{for $p \in [-p_{\rm max},p_{\rm max}]$}, \\
0, & \mbox{otherwise}.
\end{cases}$$ The stable functions $f(p)$ occupy a region in the parameter space $(p_{\rm max}, B)$ defined by three boundary curves (see Fig. \[fig:cosine\]). The left-top boundary is given by $I=0$, and the other two are associated with positivity, i.e., $f(p)=0$. The metastable states obtained from numerical simulations of finite systems (to be described below) can be approximately fitted by water-bag-plus-cosine functions. Remarkably, the resulting pairs $(p_{\rm max},B)$ fall close to the boundary of Vlasov stability, and exit the stability region for energies below the limiting value $\varepsilon \simeq 0.68$.
![Stability diagram for water-bag-plus-cosine momentum distributions. The shadowed area corresponds to the stability domain. Also shown are three iso-energetic subfamilies with $\varepsilon=0.50,0.60,0.69$ (dotted lines). Symbols correspond to quasi-stationary states obtained numerically, for energies $\varepsilon=0.67,0.69,0.72$ and $N=10^4$ (see Sec. \[sec:results\] for numerical details). []{data-label="fig:cosine"}](f3vlasov.eps){width="65.00000%"}
Fig. \[fig:cosine\] also shows that for low energies, $\varepsilon<7/12$, there is a qualitative change in the shape of the stable homogeneous distributions: the maximum at $p=0$ turns into a minimum, since $B<0$. As $\varepsilon\to 1/2$, the stable distributions resemble a pair of delta functions approaching the origin.
Numerical results {#sec:results}
=================
The application of the stability criterion to a discrete distribution requires some coarse-graining. We will consider smoothed distributions $$f_\gamma(p)\,=\,\frac{1}{N} \sum_{i=1}^N \delta_\gamma(p-p_i),$$ with Lorentzian delta functions
$$\delta_\gamma(x) \;=\; \frac{\gamma}{\pi} \frac{1}{x^2+\gamma^2} .$$
The coarse-graining parameter $\gamma$ corresponds to an intermediate scale, between microscopic and macroscopic (to be specified). We will also impose parity, $f(p)=f(-p)$, so that our smoothed distribution will be calculated as
$$\label{fgamma}
f_\gamma(p)\,=\,\frac{\gamma}{\pi N} \sum_{p_i>0} \left[
\frac{1}{ (p-p_i)^2 +\gamma^2} \,+\, \frac{1}{ (p+p_i)^2 +\gamma^2}
\right] \;.$$
Accordingly, the stability indicator becomes simply
$$\label{indicator}
I\,=\,1\,+\,\frac{1}{N} \sum_{p_i>0} \frac{p_i^2-\gamma^2}{ (p_i^2 +\gamma^2)^2} \;.$$
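In practice Eqs. (\[fgamma\]) and (\[indicator\]) amount to a few lines of code. The sketch below (our illustration; the sample and the value of $\gamma$ are arbitrary choices) evaluates the smoothed distribution and the coarse-grained indicator from a set of momenta $\{p_i\}$.

```python
import numpy as np

def smoothed_distribution(p, momenta, gamma):
    """Symmetrized Lorentzian coarse-graining of Eq. (fgamma), evaluated at points p."""
    pi_pos = momenta[momenta > 0][:, None]      # only p_i > 0 enter; parity is imposed
    return (gamma / (np.pi * momenta.size)) * np.sum(
        1.0 / ((p - pi_pos) ** 2 + gamma ** 2)
        + 1.0 / ((p + pi_pos) ** 2 + gamma ** 2), axis=0)

def stability_indicator(momenta, gamma):
    """Coarse-grained stability indicator of Eq. (indicator)."""
    pi_pos = momenta[momenta > 0]
    return 1.0 + np.sum((pi_pos ** 2 - gamma ** 2)
                        / (pi_pos ** 2 + gamma ** 2) ** 2) / momenta.size

# Example: a water-bag sample at eps = 0.69, i.e. <p^2> = 2*eps - 1, p_max = sqrt(3*(2*eps - 1)).
rng = np.random.default_rng(0)
p_max = np.sqrt(3.0 * (2.0 * 0.69 - 1.0))
p_sample = rng.uniform(-p_max, p_max, size=10_000)
# Expected to be positive (eps = 0.69 > 7/12), up to finite-N fluctuations:
print(stability_indicator(p_sample, gamma=0.05))
```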
All simulations have been done with $N=10^4$ particles. We have considered water-bag initial conditions. All particles are set at $\theta=0$, corresponding to maximum magnetization modulus $m=1$. Momenta were distributed on a regular lattice, symmetrically around $p=0$. Our experience indicates that this “crystalline" distribution behaves like a larger system, say $N=10^5$, where the $p$’s are drawn randomly from a uniform distribution. Initial states were propagated according to Hamilton's equations, integrated with a fourth-order symplectic scheme [@yoshida90]. We recorded the quantities of interest $m(t)$ (homogeneity indicator) and $p_i(t)$ (input to the stability indicator). Strictly speaking, $m=0$ does not imply that the states are homogeneous. However, the sudden relaxation that leads to the present QSS mixes particles [@pluchino03] in such a way that $m=0$ and spatial homogeneity are expected to be synonymous.
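For reference, a minimal sketch of such a run is given below (our illustration only; the time step and the plain composition of leapfrog steps with the fourth-order Yoshida weights [@yoshida90] are our own choices). The force is the mean-field term of Eqs. (\[motioneqs\]), and the initial state is the fully magnetized regular water-bag, for which $\varepsilon=\langle p^2\rangle/2$ since $m=1$.

```python
import numpy as np

def force(theta):
    """Mean-field force m . theta_hat of Eqs. (motioneqs), with m = <(sin th, cos th)>."""
    mx, my = np.mean(np.sin(theta)), np.mean(np.cos(theta))
    return mx * np.cos(theta) - my * np.sin(theta)

def leapfrog(theta, p, dt):
    p = p + 0.5 * dt * force(theta)
    theta = theta + dt * p
    p = p + 0.5 * dt * force(theta)
    return theta, p

# Fourth-order scheme: Yoshida composition of three leapfrog steps
W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
W0 = 1.0 - 2.0 * W1

def yoshida4(theta, p, dt):
    for w in (W1, W0, W1):
        theta, p = leapfrog(theta, p, w * dt)
    return theta, p

N, eps, dt = 10_000, 0.69, 0.05
p_max = np.sqrt(6.0 * eps)                          # m = 1: all the energy is kinetic, eps = p_max^2/6
theta = np.zeros(N)                                 # all angles equal, |m| = 1
p = p_max * (2.0 * (np.arange(N) + 0.5) / N - 1.0)  # regular ("crystalline") lattice of momenta

for step in range(int(2000 / dt)):
    theta, p = yoshida4(theta, p, dt)
    # record m(t) and the momenta p_i(t) here as needed
```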
Concerning the calculation of smoothed distributions, we chose a coarse-graining parameter $\gamma=0.05$, after verifying that such a choice erases microscopic fluctuations but preserves macroscopic features of the QSS momentum distributions (like those shown in Fig. \[fig:pdfs\]). Nevertheless, some tests were run with $\gamma=0.025$ and $\gamma=0.1$ to check that our results are not substantially changed by the precise choice of the smoothing (see inset in Fig. \[fig:ivse\]). We also eliminated rapid time fluctuations by performing an additional average in the time window $500<t<2000$, a typical domain of quasi-stationarity after the initial relaxation of the water-bag (see Fig. \[fig:ivst\]).
![Example of stability as a function of time, for the fully magnetized regular water-bag initial condition, with $\varepsilon=0.69$, $N=10^4$, and smoothing $\gamma=0.05$. The full line corresponds to a running average over a time window of width $\Delta t=100$, indicating quasi-stationarity in the interval $500<t<2000$. []{data-label="fig:ivst"}](f4vlasov.eps){width="75.00000%"}
The sensitivity to initial conditions (of the microscopic state of the system) after times like those considered here ($t \approx 1000$) is enormous. To be sure that the effect we are observing is statistically robust, we also performed an average over initial conditions. These were obtained by slightly perturbing the regular water-bag with a small random component: $p_i'=p_i (1+10^{-4} \xi)$, where $\xi$ is a uniform random number in $[-1,1]$. Our stability analysis of the coarse-grained quasi-stationary states of the HMF is summarized in Fig. \[fig:ivse\]. There one sees that the stability indicator is positive for energies above $\varepsilon\simeq 0.68$, except for the small region $0.73 < \varepsilon < 0.75$. We believe that this is probably a spurious effect due to the proximity of the ferromagnetic phase transition; further analyses, involving longer times or larger system sizes, are necessary to elucidate this question. The fact that the stability indicator becomes negative below $\varepsilon\simeq 0.68$ signals the disappearance of the homogeneous metastable phase at that energy. Simultaneously with $I$ crossing zero at that point, the magnetization rises from negligible values to finite ones, of the order of those corresponding to equilibrium (Fig. \[fig:m2vse\]). This stability picture is consistent with the analysis of the water-bag-plus-cosine family (Sec. \[sec:stability\]), showing that distributions like those observed in QSS live close to a stability border, and that this border is traversed when the energy drops below a limiting value. The symbols in Fig. \[fig:cosine\] were obtained by fitting a water-bag-plus-cosine to the numerical histograms of Fig. \[fig:pdfs\].
![Stability versus energy for quasi-stationary states. Note that $I$ goes through zero around $\varepsilon \simeq 0.68$. The smoothing parameter is $\gamma=0.05$ ($N=10^4$). We averaged over 10 initial conditions and in the time interval $500<t<2000$. Inset: $I$ vs $\varepsilon$ for $\gamma=0.025$ ($\nabla$), $0.05$ ($\bullet$) and $0.1$ ($\triangle$). []{data-label="fig:ivse"}](f5vlasov.eps){width="75.00000%"}
A comment about negative stability is in order. The present stability test only applies to homogeneous states. However, below $\varepsilon=0.68$ the measured distributions are evidently inhomogeneous ($m \neq 0$). In these cases, negative stability refers to hypothetical homogeneous states having the measured $f(p)$.
Conclusions {#sec:conclusions}
===========
Yamaguchi et al. conjectured that stable homogeneous solutions of the Vlasov equation lead, via discretization, to quasi-stationary states of Hamiltonian dynamics, and showed that this is indeed the case for some families of states of the HMF [@yamaguchi]. We have shown that the converse may also be true: metastable states of the discrete Hamiltonian dynamics lead, after smoothing, to stable solutions of the Vlasov equation (this was verified for the homogeneous QSS).
Moreover, our analysis indicates that the disappearance of the homogeneous family (resulting from the violent relaxation of special water-bag initial conditions) at $\varepsilon \simeq 0.68$ can be interpreted as a reflection of the loss of Vlasov stability at that critical energy. This interpretation is supported by similar evidence resulting from the analytical study of the water-bag-plus-cosine family.
Despite providing interesting information, the results we presented are not definitive. Some points remain unclear and deserve further investigation, e.g., the stability properties close to the ferromagnetic transition. Another point, perhaps more important, concerns the quantification of spatial homogeneity. A null squared magnetization indicates the absence of spatial structures on the largest scale. A more stringent test would require the verification that higher-order Fourier coefficients of the particle density tend to zero in the large $N$ limit.
Acknowledgements {#acknowledgements .unnumbered}
================
We are grateful to Constantino Tsallis for the very enthusiastic discussions which stimulated this and many other works. We thank T. Dauxois and S. Ruffo for interesting conversations and for communicating their results prior to publication (thanks also to J. Barré, F. Bouchet and Y. Yamaguchi). This work had financial support from Brazilian Agencies CNPq, FAPERJ and PRONEX.
[99]{}
C. Tsallis, J. Stat. Phys. [**52**]{}, 479 (1988).
, edited by S. R. A. Salinas and C. Tsallis, Braz. J. Phys. [**29**]{} 1999; [*Nonextensive Statistical Mechanics and its Applications*]{}, edited by S. Abe and Y. Okamoto, Lecture Notes in Physics Vol. 560 (Springer-Verlag, Heidelberg, 2001); [*Non Extensive Thermodynamics and Physical Applications*]{}, edited by G. Kaniadakis, M. Lissia, and A. Rapisarda, Physica A [**305**]{} (Elsevier, Amsterdam, 2002); [*Anomalous Distributions, Nonlinear Dynamics and Nonextensivity*]{}, edited by H. L. Swinney and C. Tsallis, Physica D (2004), in press.
[*Nonextensive Entropy – Interdisciplinary Applications*]{}, edited by M. Gell-Mann and C. Tsallis (Oxford University Press, New York, 2004), in press.
S. Inagaki, Prog. Theor. Phys. [**90**]{}, 577 (1993).
M. Antoni and S. Ruffo, Phys. Rev. E [**53**]{}, 2361 (1995).
T. Dauxois, V. Latora, A. Rapisarda, S. Ruffo and A. Torcini, in [*Dynamics and Thermodynamics of Systems with Long Range Interactions*]{}, edited by T. Dauxois, S. Ruffo, E. Arimondo and M. Wilkens, Lecture Notes in Physics Vol. 602, Springer (2002).
M. Montemurro, F. A. Tamarit and C. Anteneodo, Phys. Rev. E [**67**]{}, 031106 (2003).
A. Pluchino, V. Latora and A. Rapisarda, cond-mat/0303081, to appear in Physica D.
V. Latora, A. Rapisarda, and C. Tsallis, Phys. Rev. E [**64**]{}, 056134 (2001) and Physica A [**305**]{}, 129 (2002).
V. Latora and A. Rapisarda, in [*Nonextensive Entropy – Interdisciplinary Applications*]{}, edited by M. Gell-Mann and C. Tsallis (Oxford University Press, New York, 2004), in press.
M. Antoni, H. Hinrichsen, and S. Ruffo, Chaos, Solitons and Fractals [**13**]{}, 393 (2002).
W. Braun and K. Hepp, Comm. Math. Phys. [**56**]{}, 101 (1977).
H. Spohn, Rev. Mod. Phys. [**53**]{}, 569 (1980).
Y.Y. Yamaguchi, J. Barré, F. Bouchet, T. Dauxois and S. Ruffo, cond-mat/0312480, to appear in Physica A.
R. Balescu, [*Statistical Dynamics*]{} (Imperial College Press, London, 2000).
M.Y. Choi and J. Choi, Phys. Rev. Lett. [**91**]{}, 124101 (2003).
H. Yoshida, Phys. Lett. A [**150**]{}, 262 (1990).
[^1]: e-mail: [email protected]; [email protected]
---
abstract: 'Some new results on the geometry of classical parabolic Monge-Ampere equations (PMA) are presented. PMAs are either *integrable* or *nonintegrable*, according to the integrability of their characteristic distributions. All integrable PMAs are locally equivalent to the equation $u_{xx}=0$. We study nonintegrable PMAs by associating with each of them a $1$-dimensional distribution on the corresponding first order jet manifold, called the *directing distribution*. According to some property of these distributions, nonintegrable PMAs are subdivided into three classes, one *generic* and two *special* ones. Generic PMAs are uniquely characterized by their directing distributions. To study directing distributions we introduce their canonical models, *projective curve bundles* (PCB). A PCB is a $1$-dimensional subbundle of the projectivized cotangent bundle to a $4$-dimensional manifold. Differential invariants of projective curves composing such a bundle are used to construct a series of contact differential invariants for the corresponding PMAs. These give a solution of the equivalence problem for PMAs with respect to contact transformations.'
address:
- 'Dipartimento di Matematica, Università di Milano, via Saldini 50, I-20133 Milano, Italy'
- 'Dipartimento di Matematica e Informatica, Università di Salerno, Via Ponte don Melillo, 84084 Fisciano (SA),Italy'
author:
- D Catalano Ferraioli
- A M Vinogradov
title: 'Differential invariants of generic parabolic Monge-Ampere equations'
---
Introduction
============
Since Monge’s “Application de l’Analyse à la Géométrie”, Monge-Ampere equations have periodically attracted the attention of geometers. This is not only due to their numerous applications to geometry, mechanics and physics. The geometry of these equations, being tightly related to various parts of modern differential geometry, deserves to be studied in its own right. The last two or three decades have seen a return of interest in the geometry of Monge-Ampere equations, mostly elliptic and hyperbolic ones. The reader will find an account of recent results together with an extensive bibliography in [@KLR].
In this article we study geometry of classical parabolic Monge-Ampere equations (PMAs) on the basis of a new approach sketched in [@V]. According to it, a PMA $\mathcal{E}\subset
J^{2}(\pi), \,\pi$ being a 1-dimensional fiber bundle over a bidimensional manifold, is completely characterized by its *characteristic distribution* $\mathcal{D_{\mathcal{E}}}$, which is a 2-dimensional Lagrangian distribution on $J^{1}(\pi)$, and vice versa. Such distributions and, accordingly, the PMAs corresponding to them, are naturally subdivided into four classes, *integrable, generic* and two types of *special* ones (see [@V] and sec. 4). All integrable Lagrangian foliations are locally contact equivalent. A consequence is that a PMA $\mathcal{E}$ is locally contact equivalent to the equation $u_{xx}=0$ iff the distribution $\mathcal{D_{\mathcal{E}}}$ is integrable. This exhausts the integrable case. On the contrary, nonintegrable Lagrangian distributions are very diverse, and our main goal here is to describe this diversity, i.e., more precisely, the equivalence classes of PMAs with respect to contact transformations.
To this end we associate with a Lagrangian distribution a *projective curve bundle* (PCB for short) over a 4-dimensional manifold $N$. A PCB over $N$ is a 1-dimensional smooth subbundle of the “projectivized” cotangent bundle $PT^{\ast}(N)$ of $N$. Under some regularity conditions such a bundle possesses a canonical contact structure and, as a consequence, a Lagrangian distribution canonically inscribed in it. There exists a one-to-one correspondence between generic Lagrangian distributions and *regular* PCBs. In this way the equivalence problem for generic PMAs is reformulated as the equivalence problem for PCBs (see [@V]), and this is the key point of our approach. The fiber of such a bundle over a point $x\in N$ is a curve $\gamma_{x}$ in the projective space $PT_{x}^{\ast}(N)$. The curve $\gamma_{x}$ can be characterized by its scalar differential invariants with respect to the group of projective transformations. By putting such invariants for the individual curves $\gamma_{x}$ together for all $x\in
N$ one obtains scalar differential invariants for the considered PCB and, consequently, for the corresponding PMA.
Invariants of this kind resolve the equivalence problem for generic PMAs on the basis of the “principle of $n-$invariants” (see [@AVL; @SDI]). We have chosen them among others for their transparent geometrical meaning. It should be stressed, however, that there are other choices, maybe less intuitive but more efficient in practice. We shall discuss this point separately.
Special Lagrangian distributions admit a similar interpretation in terms of 2-dimensional distributions on 4-dimensional manifolds supplied with additional structures, called *fringes* (see [@V]). Differential invariants of fringes, also coming from projective differential geometry, allow one to construct basic scalar differential invariants for special PMAs. They will be discussed in a separate paper. It is worth mentioning that all linear PMAs are special.
Our approach is based on the theory of solution singularities for nonlinear PDEs (see [@VinSing]). Indeed, a Lagrangian distribution or, equivalently, the associated PCB, represents equations that describe fold-type singularities of multivalued solutions of the corresponding PMA. An advantage of this point of view is that it allows a similar analysis of higher order PDEs and, in particular, makes it possible to understand what the higher order analogues of Monge-Ampere equations are. These topics will be discussed in a forthcoming joint paper by M. Bachtold and the second author.
The paper is organized as follows. The notations and generalities concerning jet spaces and Monge-Ampere equations that we need throughout the paper are collected in sections 2 and 3, respectively. In particular, the interpretation of PMAs as Lagrangian distributions is presented there. Section 4 contains some basic facts on the geometry of Lagrangian distributions. The central one is the notion of the *directing distribution* of a Lagrangian distribution. The above-mentioned subdivision of nonintegrable PMAs into generic and special types reflects some contact properties of this distribution. Projective curve bundles that are canonical models of directing distributions are introduced and studied subsequently in section 6. For completeness, in section 5 we give a short proof of the known fact that integrable PMAs are locally equivalent to one another. Finally, basic scalar differential invariants of generic PCBs and hence of generic PMAs are constructed and discussed in section 7.
Throughout the paper we use the following notations and conventions:
- all objects in this paper, e.g., manifolds, mappings, functions, vector fields, etc, are supposed to be smooth;
- $C^{\infty}(M)$ stands for the algebra of smooth functions on the manifold $M$ and $C^{\infty}(M)$-modules of all vector fields and differential $k$-forms are denoted by $D(M)$ and $\Lambda^{k}(M)$, respectively;
- the evaluation of $X\in D(M)$ (resp., of $\alpha\in\Lambda^{k}(M)$) at $p\in M$ is denoted by $\left.
X\right\vert _{p}$ ( resp., $\left. \alpha\right\vert _{p}$);
- $d_{p}f:T_{p}M\longrightarrow T_{f(p)}N$ stands for the differential of the map $f:M\rightarrow N$, $M$ and $N$ being two manifolds;
- For $X,Y\in D(M)$ and $\alpha\in\Lambda^{k}(M)$ we shall use the shortened notation $X^{r}(Y)$ for $L_{X}^r(Y)$ and $X^{r}(\alpha)$ for $L_{X}^r(\alpha)$ (with $X^{0}(Y)=Y$ and $X^{0}(\alpha)=\alpha$), $L_{X}^{r}$ being the $r$-th power of the Lie derivative $L_{X}$.
- depending on the context by a *distribution* on a manifold $M$ we understand either a subbundle $\mathcal{D}$ of the tangent bundle $TM$, whose fiber over $p\in M$ is denoted by $\mathcal{D}(p)$, or the $C^{\infty}(M)$-module of its sections. In particular, $X\in\mathcal{D}$ means that $X_{p}\in\mathcal{D}(p)$, $\forall p\in M$;
- we write $\mathcal{D}=\left\langle
X_{1},...,X_{r}\right\rangle $ if the distribution $\mathcal{D}$ is generated by vector fields $X_{1},...,X_{r}\in D(M)$; similarly, $\mathcal{D}=Ann(\alpha _{1},...,\alpha_{s})$ means that $\mathcal{D}$ is constituted by vector fields annihilated by forms $\alpha_{1},...,\alpha_{s}\in\Lambda^{1}(M)$;
- if $M$ is equipped with a contact distribution $\mathcal{C}$ and $\mathcal{S}\subset\mathcal{C}$ is a subdistribution of $\mathcal{C}$, then $\mathcal{S}^{\bot}$ denotes the $\mathcal{C}$-orthogonal complement to $\mathcal{S}$.
Preliminaries
=============
In this section the notations and basic facts we need throughout the paper are collected. The reader is referred to [@AVL; @KLV; @VK] for further details.
Jet bundles
-----------
Let $E$ be an $(n+m)$-dimensional manifold. The manifold of $k$-th order jets, $k\geq0$, of $n$-dimensional submanifolds of $E$ is denoted by $J^{k}(E,n)$ and $\pi_{k,l}:J^{k}(E,n)\longrightarrow
J^{l}(E,n)$, $k\geq l$, stands for the canonical projection. If $E$ is fibered by a map $\pi:E\rightarrow M$ over an $n$-dimensional manifold $M$, then $J^{k}\pi$ denotes the $k$-th order jet manifold of local sections of $\pi$. $J^{k}\pi$ is an open domain in $J^{k}(E,n)$. The $k-$th order jet of an $n-$dimensional submanifold $L\subset E$ at a point $z\in L$ is denoted by $\left[ L\right] _{z}^{k}$. Similarly, if $\sigma$ is a (local) section of $\pi$ and $x\in M$, then $\left[
\sigma\right] _{x}^{k}=\left[ \sigma(U)\right]
_{\sigma(x)}^{k}$, $U$ being the domain of $\sigma$, stands for the $k-$th order jet of $\sigma$ at $x$. The correspondence $z\mapsto\left[ L\right] _{z}^{k}$ defines the $k-$th *lift* of $L$ $$\begin{array}
[c]{cccc}j_{k}L: & L & \longrightarrow & J^{k}(E,n).
\end{array}$$ Similarly, the $k-$th *lift* of a local section $\sigma$ of $\pi$ $$j_{k}\sigma:U\rightarrow J^{k}\pi$$ sends $x\in U$ to $[\sigma]_{x}^{k}$, i.e., $j_{k}\sigma=j_{k}(\sigma (U))\circ\sigma$. Put $$L^{(k)}=\operatorname{Im}(j_{k}L),\qquad\qquad M_{\sigma}^{k}=\operatorname{Im}(j_{k}\sigma).$$
Let $\theta_{k+1}=\left[ L\right] _{z}^{k+1}$ be a point of $J^{k+1}(E,n)$. Then the $\emph{R-plane}$ associated with $\theta_{k+1}$ is the subspace $$R_{\theta_{k+1}}=T_{\theta_{k}}(L^{(k)})$$ of $T_{\theta_{k}}(J^{k}(E,n))$ with $\theta_{k}=\left[ L\right]
_{z}^{k}$. The correspondence $\theta_{k+1}\mapsto
R_{\theta_{k+1}}$ is biunique. Put $$V_{\theta_{k+1}}=T_{\theta_{k}}(J^{k}(E,n))/R_{\theta_{k+1}}.$$ and denote by $$pr_{\theta_{k+1}}:T_{\theta_{k}}(J^{k}(E,n))\longrightarrow V_{\theta_{k+1}}$$ the canonical projection. The vector bundle $$\nu_{k+1}:V_{(k+1)}\longrightarrow J^{k+1}(E,n),\qquad k\geq0,$$ whose fiber over $\theta_{k+1}$, is $V_{\theta_{k+1}}$ is naturally defined. By $\nu_{k,r}$ denote the pullback of $\nu_{k}$ via $\pi_{r,k},r\geq k$.
Let $\mathcal{C}(\theta_{k})\subset T_{\theta_{k}}(J^{k}\pi)$ be the span of all $R-$planes at $\theta_{k}$. Then $\theta_{k}\mapsto\mathcal{C}(\theta _{k})$ is the *Cartan distribution* on $J^{k}(E,n)$ denoted by $\mathcal{C}_{k}$. This distribution can be alternatively defined as the kernel of the $\nu_{k}$-valued *Cartan form* $U_{k}$ on $J^{k}(E,n)$:$$U_{k}(\xi)=pr_{\theta_{k}}(d_{\theta_{k}}\pi_{k,k-1}(\xi)) \in
V_{\theta_{k}}, \quad\xi\in T_{\theta_{k}}(J^{k}(E,n)).$$
A diffeomorphism $\varphi:J^{k}(E,n)\rightarrow J^{k}(E,n)$ is called *contact* if it preserves the Cartan distribution. Similarly, a vector field $Y$ on $J^{k}(E,n)$ is called *contact* if $\left[ Y,\mathcal{C}_{k}\right]
\subset\mathcal{C}_{k}$. A contact diffeomorphism $\varphi$ (respectively, a contact field $Y$) canonically lifts to a contact diffeomorphism $\varphi^{(l)}$ (respectively, a contact field $Y^{(l)}$) on $J^{k+l}(E,n)$.
Below the above constructions will be mainly used for $n=2,m=1,k=1,2$. In this case $\mathcal{C}_{1}$ is the canonical contact structure on $J^{1}(E,2)$, $\operatorname{dim}E=3$, and the bundle $\nu_{1}$ is $1$-dimensional. $\nu_{1}$ is canonically isomorphic to the bundle whose fiber over $\theta\in J^{1}(E,2)$ is $T_{\theta}(J^{1}(E,2))/\mathcal{C}(\theta)$.
A vector field $X$ on $E$ defines a section $s_{X}\in\Gamma(\nu_{1}), s_{X}(\theta)= pr_{\theta}(X)$. Since $\nu_{1}$ is $1$-dimensional, the $\nu_{1}$-valued form $U_{1}$ can be presented as $$\label{UX}
U_{1}=U_{X}\cdot s_{X}, \quad U_{X}\in\Lambda^{1}(J^{1}(E,2)),$$ in the domain where $s_{X}\neq0$.
Let $\mathcal{M}$ be a manifold supplied with a contact distribution $\mathcal{C}$. An almost everywhere nonvanishing differential form $U\in\Lambda^{1}(\mathcal{M})$ is called *contact* if it vanishes on $\mathcal{C}$. A vector field $Y\in D(\mathcal{M})$ is contact iff $$L_{Y}(U)=\lambda U,\quad\lambda\in C^{\infty}(\mathcal{M}),
\label{infinitesimal_cont_inv}$$ for a contact form $U$. For instance, $U_{X}$ (see (\[UX\])) is a contact form.
If a vector field $X\in D(J^{1}(E,2))$ is contact, then $f=X\,\lrcorner\, U_{1}\in\Gamma(\nu_{1})$ is called the *generating function* of $X$. $X$ is completely determined by $f$ and is denoted by $X_{f}$ in order to underline this fact. If $U$ is a contact form on a contact manifold $(\mathcal{M},\mathcal{C})$ and $X\in D(\mathcal{M})$ is contact, then $f=X\,\lrcorner\, U\in C^{\infty}(\mathcal{M})$ is the *generating function* of $X$ with respect to $U$.
Vector fields $X,Y\in D(\mathcal{M})$ belonging to $\mathcal{C}$ are called *$\mathcal{C}$-orthogonal* if $\left[ X,Y\right]
$ also belongs to $\mathcal{C}$. Obviously, this is equivalent to $dU(X,Y)=0$ for a contact form $U$. Observe that $\mathcal{C}$-orthogonality is a $C^{\infty}(\mathcal{M})$-linear property. A subdistribution $\mathcal{D}$ of $\mathcal{C}$ is called *Lagrangian* if any two fields $X,Y\in\mathcal{D}$ are $\mathcal{C}$-orthogonal and $\mathcal{D}$ is not contained in another distribution of bigger dimension possessing this property. If $\dim\;\mathcal{M}=2n+1$, then $\dim\;\mathcal{D}=n$.
A local chart $\left( x,y,u\right) $ in $E$, where $\left(
x,y\right) $ are interpreted as independent variables and $u$ as the dependent one, extends canonically to a local chart $$\left( x,y,u,u_{x}=p,u_{y}=q,u_{xx}=r,u_{xy}=s,u_{yy}=t\right) \label{carta}$$ on $J^{2}(E,2)$. Functions $\left( x,y,u,p,q\right) $ form a (standard) chart in $J^{1}(E,2)$. The local contact form $U=U_{\partial_{u}}$ (see (\[UX\])) in this chart reads $$U=du-pdx-qdy.$$ Accordingly, in this chart the contact vector field corresponding to the generating with respect to $U$ function $f$ reads $$X_{f}=-f_{p}\partial_{x}-f_{q}\partial_{y}+(f-pf_{p}-qf_{q})\partial
_{u}+(f_{x}+pf_{x})\partial_{p}+(f_{y}+qf_{x})\partial_{q}. \label{contact_VF}$$
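As a quick consistency check (a small symbolic sketch of ours, not part of the paper), one can verify that the field (\[contact\_VF\]) satisfies the infinitesimal contact condition (\[infinitesimal\_cont\_inv\]) with $\lambda=f_{u}$:

```python
import sympy as sp

x, y, u, p, q = sp.symbols('x y u p q')
coords = [x, y, u, p, q]
f = sp.Function('f')(x, y, u, p, q)
fx, fy, fu, fp, fq = [sp.diff(f, v) for v in coords]

# Components of X_f from Eq. (contact_VF) in the chart (x, y, u, p, q)
X = [-fp, -fq, f - p*fp - q*fq, fx + p*fu, fy + q*fu]
# Components of the contact form U = du - p dx - q dy
U = [-p, -q, 1, 0, 0]

# Lie derivative of a 1-form: (L_X U)_i = X^j d_j U_i + U_j d_i X^j
LXU = [sum(X[j]*sp.diff(U[i], coords[j]) + U[j]*sp.diff(X[j], coords[i])
           for j in range(5)) for i in range(5)]

# Check L_X U = f_u * U componentwise, i.e. X_f is contact with lambda = f_u
print([sp.simplify(LXU[i] - fu*U[i]) for i in range(5)])   # expect [0, 0, 0, 0, 0]
```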
In the sequel we shall use $\mathcal{C}$ and $U$ for the contact distribution and a contact form in the current context, respectively.
Parabolic MA equations
======================
MA equations
------------
Let $E$ be a $3$-dimensional manifold. A $k$-th order differential equation imposed on bidimensional submanifolds of $E$ is a hypersurface $\mathcal{E}\subset J^{k}(E,2)$. In a standard jet chart it is seen as a $k$-th order equation for one unknown function in two variables. In the sequel we shall deal only with second order equations of this kind. In a jet chart (\[carta\]) on $J^{2}(E,2)$ such an equation reads $$F(x,y,u,p,q,r,s,t)=0. \label{SDE}$$
The standard subdivision of equations (\[SDE\]) into hyperbolic, parabolic and elliptic ones is intrinsically characterized by the nature of singularities of their multi-valued solutions (see [@VinSing]). Some elementary facts from solution singularity theory that we need in this paper are collected below.
Let $\theta_{2}\in J^{2}(E,2)$, $\theta_{1}=\pi_{2,1}(\theta_{2})$ and $F_{\theta_{1}}=\pi_{2,1}^{-1}(\theta_{1})$. Recall that an $R$-plane at $\theta_{1}$ is a *Lagrangian* plane in $\mathcal{C}(\theta_{1})$, i.e., a bidimensional subspace $R\subset\mathcal{C} (\theta_{1})$ such that $d\omega|_{R}=0$ for a contact 1-form $\omega$ on $J^{1}(E,2)$. If $P\subset\mathcal{C}(\theta_{1})$ is a 1-dimensional subspace, then $$\label{ray}l(P)=\left\{ \theta\in
F_{\theta_{1}}|R_{\theta}\supset P\right\}
\subset F_{\theta_{1}}$$ is the 1-*ray* corresponding to $P$.
A local chart on $F_{\theta_{1}}$ is formed by restrictions of $r,s,t$ to $F_{\theta_{1}}$. Abusing notation, we shall use $r,s,t$ for these restrictions as well. Denote by $(\tilde{r},\tilde{s},\tilde{t})$ the coordinates in $T_{\theta_{2}}(F_{\theta_{1}})$ with respect to the basis $\partial _{r}|_{\theta_{2}},
\partial_{s}|_{\theta_{2}},\partial_{t}|_{\theta_{2}}$. Cones $$\mathcal{V}_{\theta_{2}}=\left\{
\tilde{r}\tilde{t}-\tilde{s}^{2}=0\right\} \subset
T_{\theta_{2}}(F_{\theta_{1}}),\,\theta_{2}\in F_{\theta_{1}},$$ define a distribution of cones $\mathcal{V}:\theta_{2}\mapsto\mathcal{V}_{\theta_{2}}$ on $F_{\theta_{1}}$ called the *ray distribution*. This distribution is invariant with respect to contact transformations.
If $P$ is spanned by the vector $$w=\zeta_{1}\partial_{x}+\zeta_{2}\partial_{y}+\mu\partial_{u}+\eta_{1}\partial_{p}+\eta_{2}\partial_{q} \in T_{\theta_{1}}(J^{1}(E,2))$$ then the 1-ray $l(P)$ is described by equations $$\left\{
\begin{array}
[c]{c}\zeta_{1}r+\zeta_{2}s=\eta_{1},\\
\zeta_{1}s+\zeta_{2}t=\eta_{2}.
\end{array}
\right. \label{eq_l(P)}$$ In particular, $l(P)$ is tangent to $\mathcal{V}$.
Put $p_{0}=p(\theta_{1}), q_{0}=q(\theta_{1}). $ Since $P\subset
\mathcal{C}(\theta_{1})$ we have $$w\,\lrcorner\, U=0\Leftrightarrow
\mu=\zeta_{1}p_{0}+\zeta_{2}q_{0}, \label{P_contatto}$$ and hence $$w=\zeta_{1}(\partial_{x}+p_{0}\partial_{u})+\zeta_{2}(\partial_{y}+q_{0}\partial_{u})+\eta_{1}\partial_{p}+\eta_{2}\partial_{q}.$$ Moreover, $R_{\theta_{2}}\supset P$ iff $$w\,\lrcorner\,(dp-rdx-sdy)_{\theta_{2}}=w\,\lrcorner\,(dq-sdx-tdy)_{\theta_{2}}=0$$ and these relations are identical to (\[eq\_l(P)\]).
Obviously, the components of a vector tangent to $l(P)$ are $$(\tilde{r},\tilde{s},\tilde{t})=\left(
\zeta_{2}^{2},-\zeta_{1}\zeta
_{2},\zeta_{1}^{2}\right) \label{coord-cono}$$ and manifestly satisfy the equation $\tilde{r}\tilde{t}-\tilde{s}^{2}=0$.
Put $$\mathcal{E}_{\theta_{1}}=\mathcal{E}\cap F_{\theta_{1}}.$$ An equation $\mathcal{E}$ is of *principal type* if it intersects the fibers of the projection $\pi_{2,1}$ transversally. In such a case $\mathcal{E}_{\theta_{1}}$ is a bidimensional submanifold of $F_{\theta_{1}},\;\forall\;\theta_{1}\in J^{1}(E,2)$. Further on we assume $\mathcal{E}$ to be of principal type.
The *symbol* of $\mathcal{E}$ at $\theta_{2}\in\mathcal{E}$ is the bidimensional subspace $$Smbl_{\theta_{2}}(\mathcal{E}):=T_{\theta_{2}}\mathcal{E}_{\theta_{1}}$$ of $T_{\theta_{2}}\left( F_{\theta_{1}}\right) $.
A point $\theta_{2}\in\mathcal{E}$ is *elliptic* (resp., *parabolic*, or *hyperbolic*) if $\mathcal{V}_{\theta_{2}}$ intersects $Smbl_{\theta
_{2}}(\mathcal{E})$ in its vertex only (resp., along a line, or along two lines). So, if $\theta_{2}$ is parabolic, then $Smbl_{\theta_{2}}(\mathcal{E})$ is a plane tangent to the cone $\mathcal{V}_{\theta_{2}}$. In other words, in this case $\mathcal{E}_{\theta_{1}}$ is tangent to the ray distribution on $F_{\theta_{1}}$.
\[PE\] An equation $\mathcal{E}$ is called *elliptic* (resp., *parabolic*, or *hyperbolic*) if all its points are *elliptic* (resp., *parabolic*, or *hyperbolic*).
If a $1$-ray $l(P)$ is tangent to a parabolic equation $\mathcal{E}$ at a point $\theta_{2}$, then $l(P)\subset\mathcal{E}_{\theta_{1}}$.
The $1$-ray distribution on $F_{\theta_{1}}$ may be viewed as the distribution of Monge’s cones of a first order PDE for one unknown function in two variables. As is easy to see, this equation in terms of the coordinates $x_{1}=r,x_{2}=t,y=s$ on $F_{\theta_{1}}$ is $\frac{\partial y}{\partial x_{1}}\cdot\frac{\partial y}{\partial x_{2}}=\frac{1}{4}$. A straightforward computation then shows that the characteristics of this equation are exactly the $1$-rays and hence its solutions are ruled surfaces composed of $1$-rays.
\[ruled\] If $\mathcal{E}$ is a parabolic equation, then $\mathcal{E}_{\theta_{1}}$ is a ruled surface in $F_{\theta_{1}}$ composed of $1$-rays.
For a parabolic equation $\mathcal{E}$ and a point $\theta_{1}\in
J^{1}(E,2)$ consider all 1-dimensional subspaces $P\subset\mathcal{C}(\theta_{1})$ such that $l(P)\subset\mathcal{E}_{\theta_{1}}$. This is a 1-parametric family of lines and, so, their union is a bidimensional conic surface $\mathcal{W}_{\theta_{1}}$ in $\mathcal{C}(\theta_{1})$. Then $$\theta_{1}\mapsto\mathcal{W}_{\theta_{1}},\quad\theta_{1}\in
J^{1}(E,2),$$ is the *Monge distribution* of $\mathcal{E}$. Integral curves of this distribution are curves along which multivalued solutions of $\mathcal{E}$ fold up. It is worth mentioning that tangent planes to a surface $\mathcal{W}_{\theta_{1}}$ are all Lagrangian. We omit the proof but note that Lagrangian planes are simplest surfaces possessing this property.
Singularities of a given type of multivalued solutions of a PDE are described by corresponding subsidiary equations. If $\mathcal{E}$ is a parabolic equation, then the integral curves of the Monge distribution corresponding to $\mathcal{E}$ describe the loci of fold-type singularities of its solutions.
Intrinsically, the class of Monge-Ampere (MA) equations is characterized by the property that these subsidiary equations are as simple as possible. More precisely, this means that the conic surfaces $\mathcal{W}_{\theta_{1}}$ must be geometrically the simplest. As we have already noticed, for parabolic equations the simplest ones are Lagrangian planes. Thus parabolic MA equations (PMAs) are conceptually defined as parabolic equations whose Monge distributions are distributions of Lagrangian planes. It will be shown below that this definition coincides with the traditional one.
Recall that, according to the traditional descriptive point of view, MA equations are defined as equations of the form$$\label{MA_1}N(rt-s^{2})+Ar+Bs+Ct+D=0$$ with $N,A,B,C$ and $D$ being some functions of variables $x,y,u,p,q$. MA equations with $N=0$ are called *quasilinear*. $\Delta=B^{2}-4AC+4ND$ is the *discriminant* of (\[MA\_1\]).
Equation (\[MA\_1\]) is elliptic (resp., parabolic, or hyperbolic) if $\Delta<0$ (resp., $\Delta=0$, or $\Delta>0$).
As it is easy to see, the symbol of equation (\[MA\_1\]) at a point $\theta_{2}$ of coordinates $(r,s,t)$ is described by the equation$$\label{coord_symbl}N(t\tilde{r}+r\tilde{t}-2s\tilde{s})+A\tilde{r}+B\tilde
{s}+C\tilde{t}=0$$ with $(\tilde{r},\tilde{s},\tilde{t})$ subject to the relation $\tilde {r}\tilde{t}-\tilde{s}^{2}=0$. Now one directly extracts the result from this relation and (\[coord\_symbl\]).
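This computation is easy to check symbolically. The sketch below (our illustration) restricts the symbol (\[coord\_symbl\]) to the cone parametrized as in (\[coord-cono\]) and verifies that the resulting binary quadratic form in $(\zeta_1,\zeta_2)$ degenerates exactly when $\Delta=B^{2}-4AC+4ND$ vanishes on the equation.

```python
import sympy as sp

N, A, B, C, D = sp.symbols('N A B C D')
r0, s0, t0, z1, z2 = sp.symbols('r0 s0 t0 zeta1 zeta2')

# Symbol (coord_symbl) at the point (r0, s0, t0), restricted to the cone
# (r~, s~, t~) = (zeta2**2, -zeta1*zeta2, zeta1**2) of Eq. (coord-cono)
rt, st, tt = z2**2, -z1*z2, z1**2
Q = sp.expand(N*(t0*rt + r0*tt - 2*s0*st) + A*rt + B*st + C*tt)

# Q is a binary quadratic form in (zeta1, zeta2); a double root means the symbol
# plane meets the cone along a single line, i.e. the point is parabolic
P = sp.Poly(Q, z1, z2)
disc = P.coeff_monomial(z1*z2)**2 - 4*P.coeff_monomial(z1**2)*P.coeff_monomial(z2**2)

# On the equation surface, eliminate D via N*(r0*t0 - s0**2) + A*r0 + B*s0 + C*t0 + D = 0
D_on_E = sp.solve(sp.Eq(N*(r0*t0 - s0**2) + A*r0 + B*s0 + C*t0 + D, 0), D)[0]
Delta = B**2 - 4*A*C + 4*N*D
print(sp.expand(disc - Delta.subs(D, D_on_E)))    # expect 0
```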
Finally, we observe that all above definitions and constructions are contact invariant.
\[parab\_come\_lagr\]Parabolic MA equations as Lagrangian distributions
-----------------------------------------------------------------------
In this section it will be shown that the conceptual definition of PMA equations coincides with the traditional one.
First of all, we have
\[equiv\] Equation (\[MA\_1\]) is parabolic in the sense of definition \[PE\] if and only if $\Delta=0$. The Monge distribution of parabolic equation (\[MA\_1\]) is a distribution of Lagrangian planes, i.e., a Lagrangian distribution.
First, let $\mathcal{E}$ be equation (\[MA\_1\]). A simple direct computation shows that $\mathcal{E}$ is tangent to the $1$-ray distribution iff $\Delta=0$.
Second, coefficients of equation (\[MA\_1\]) may be thought as functions on $J^{1}(E,2)$. Let $A_{0},\ldots,N_{0}$ be their values at a point $\theta _{1}\in J^{1}(E,2)$. Then $$N_{0}(t\tilde{r}+r\tilde{t}-2s\tilde{s})+A_{0}\tilde{r}+B_{0}\tilde{s}+C_{0}\tilde{t}=0$$ is the equation of $\mathcal{E}_{\theta_{1}}$ in $F_{\theta_{1}}$. If $N_{0}\neq0$ this equation describes a standard cone with the vertex at the point $\theta_{2}$ of coordinates $r=-C_{0}/N_{0},s=B_{0}/2N_{0},t=-A_{0}/N_{0}$. So, $\mathcal{E}_{\theta_{1}}$ is the union of $1$-rays $l(P)$ passing through $\theta_{2}$. By definition this implies that $P\subset R_{\theta_{2}}$ and hence $\mathcal{W}_{\theta_{1}}$ is the union of lines $P$ that belong to $R_{\theta_{2}}$. This shows that $\mathcal{W}_{\theta_{1}}=R_{\theta_{2}}$.
Finally, note that the case $N_{0}=0$ can be brought to the previous one by a suitable choice of jet coordinates.
From now on we shall denote by $\mathcal{D}_{\mathcal{E}}$ the Monge distribution of a PMA equation $\mathcal{E}$. From the proof of the above Proposition we immediately extract the geometrical meaning of the correspondence $\mathcal{E}\mapsto\mathcal{D}_{\mathcal{E}}$.
\[vertex\] The distribution $\mathcal{D}_{\mathcal{E}}$ associates with a point $\theta_{1}\in J^{1}(E,2)$ the $R$-plane $R_{\theta_{2}}$ with $\theta_{2}$ being the vertex of the cone $\mathcal{E}_{\theta_{1}}$.
A coordinate description of $\mathcal{D}_{\mathcal{E}}$ is as follows.
\[coord\] If $N\neq0$, then $$\mathcal{D}_{\mathcal{E}}=\left\langle\partial_{x}+p\partial_{u}-\dfrac{C}{N}\partial_{p}+\dfrac{B}{2N}\partial_{q},\;\partial_{y}+q\partial_{u}+\dfrac
{B}{2N}\partial_{p}-\dfrac{A}{N}\partial_{q}\right\rangle
. \label{parab_4}$$
If $N=0$, then $$\mathcal{D}_{\mathcal{E}}=\left\langle
A\partial_{x}+\frac{B}{2}\partial
_{y}+\left(Ap+\frac{B}{2}q\right)\partial_{u}-D\partial_{p},\;\frac{B}{2}\partial_{p}-A\partial_{q}\right\rangle. \label{quasilin_1}$$
If $(x_{0},\dots,t_{0})$ are coordinates of $\theta_{2}\in
J^{2}(E,2)$, then the $R$-plane $R_{\theta_{2}}$ is generated by vectors $$\partial_{x}+p_{0}\partial_{u}+r_{0}\partial_{p}+s_{0}\partial_{q}\qquad\mathrm{{and}\qquad\partial_{y}+q_{0}\partial_{u}+s_{0}\partial
_{p}+t_{0}\partial_{q}}$$ at the point $\theta_{1}$. By Corollary \[vertex\] one gets the needed result for $N\neq0$ just by specializing coordinates of $\theta_{2}$ in these expressions to that of the vertex of the cone $\mathcal{E}_{\theta_{1}}$ (see the proof of Proposition \[equiv\]). The case $N=0$ is reduced to the previous one by a suitable transformation of jet coordinates.
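As a small symbolic sanity check (ours, not from the paper), one can verify that the generators in (\[parab\_4\]) lie in the contact distribution and are $\mathcal{C}$-orthogonal, i.e., span a Lagrangian plane:

```python
import sympy as sp

x, y, u, p, q = sp.symbols('x y u p q')
A, B, C, N = sp.symbols('A B C N')     # coefficients, regarded as functions evaluated at a point

# Generators of D_E from Eq. (parab_4), as (d_x, d_y, d_u, d_p, d_q) components
X1 = [1, 0, p, -C/N, B/(2*N)]
X2 = [0, 1, q, B/(2*N), -A/N]

U = [-p, -q, 1, 0, 0]                  # contact form U = du - p dx - q dy
def U_of(X):                           # U(X)
    return sum(U[i]*X[i] for i in range(5))
def dU(X, Y):                          # dU = dx^dp + dy^dq evaluated on (X, Y)
    return X[0]*Y[3] - X[3]*Y[0] + X[1]*Y[4] - X[4]*Y[1]

print(sp.simplify(U_of(X1)), sp.simplify(U_of(X2)))   # 0, 0: both generators lie in C
print(sp.simplify(dU(X1, X2)))                        # 0: the plane is Lagrangian
```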
In turn, the distribution $\mathcal{D}_{\mathcal{E}}$ completely determines the equation $\mathcal{E}$. More exactly, we have
\[cond\_PMA\]Let $\mathcal{D}$ be a Lagrangian subdistribution of $\mathcal{C}$. Then the submanifold $$\mathcal{E}_{\mathcal{D}}=\{\theta_{2}\in J^{2}(E,2):\dim(R_{\theta_{2}}\cap\mathcal{D}(\theta_{1}))>0\}$$ of $J^{2}(E,2)$ is a parabolic Monge-Ampere equation and $\mathcal{D}=\mathcal{D}_{\mathcal{E}_{\mathcal{D}}}$.
There is only one point $\theta_{2}\in\left( \mathcal{E}_{\mathcal{D}}\right) _{\theta_{1}}$ such that $R_{\theta_{2}}=\mathcal{D}(\theta_{1})$. With this exception, $\dim(R_{\theta_{2}}\cap\mathcal{D}(\theta_{1}))=1$. Hence $\left(
\mathcal{E}_{\mathcal{D}}\right) _{\theta_{1}}$ is the union of all $1$-rays $l(P)$ such that $P\subset\mathcal{D}(\theta_{1})$. They all pass through the exceptional point $\theta_{2}$ and hence constitute a cone in $F_{\theta_{1}}$. But cones composed of $1$-rays are tangent to the $1$-ray distribution on $F_{\theta_{1}}$ and, so, all their points are parabolic. Finally, the last assertion directly follows from Corollary \[vertex\].
Results of this section are summed up in the following Theorem which is the starting point of our subsequent discussion of parabolic Monge-Ampere equations.
\[MAC\] The correspondence $\mathcal{D}\longmapsto\mathcal{E}_{\mathcal{D}}$ between Lagrangian distributions on $J^{1}(E,2)$ and parabolic Monge-Ampere equations is one-to-one.
The meaning of this Theorem is that it decodes the geometrical problem hidden under analytical condition (\[MA\_1\]). Namely, this problem is to find Legendrian submanifolds $S$ of a $5$-dimensional contact manifold $(\mathcal{M},\mathcal{C})$ that intersect a given Lagrangian distribution $\mathcal{D}\subset\mathcal{C}$ in a nontrivial manner, i.e., $$\dim\{T_{\theta}(S)\cap\mathcal{D}(\theta)\}>0,\,\,\forall\;\theta\in
S.$$ The triple $\mathcal{E}=(\mathcal{M},\mathcal{C},\mathcal{D})$ encodes this problem. For this reason, in the rest of this paper the term parabolic Monge-Ampere equation will refer to such a triple. In particular, equivalence and classification problems for PMAs are interpreted as such problems for Lagrangian distributions on 5-dimensional contact manifolds.
Geometry of Lagrangian distributions
====================================
In this section we deduce some basic facts about Lagrangian distributions on 5-dimensional contact manifolds which allow us to single out four natural classes of them. We fix the notation $(\mathcal{M},\mathcal{C})$ for the considered contact manifold and $\mathcal{D}$ for a Lagrangian distribution on it.
First of all, Lagrangian distributions are subdivided into *integrable* and *nonintegrable* ones. Accordingly, the corresponding parabolic Monge-Ampere equations are called *integrable* or *nonintegrable*. In the next section it will be shown that all integrable PMAs are locally contact equivalent to the equation $u_{xx}=0$, and we shall concentrate on nonintegrable PMAs.
If $\mathcal{D}$ is nonintegrable, then its *first prolongation* $\mathcal{D}_{(1)}$, i.e., the span of all vector fields belonging to $\mathcal{D}$ and their commutators, is 3-dimensional. Moreover, we have
------------------------------------------------------------------------
1. $\mathcal{D}_{(1)}\subset\mathcal{C}$;
2. the $\mathcal{C}$-orthogonal complement $\mathcal{R}$ of $\mathcal{D}_{(1)}$ is a $1$-dimensional subdistribution of $\mathcal{D}$.
\(1) If (locally) $\mathcal{D}=\left\langle X,Y\right\rangle $, then (locally) $\mathcal{D}_{(1)}=\left\langle
X,Y,[X,Y]\right\rangle $. But, by definition of $\mathcal{C}$-orthogonality, $[X,Y]\in\mathcal{C}$.
\(2) The form $dU$ restricted to $\mathcal{C}$ is nondegenerate. So, the assertion follows from the fact that in a symplectic linear space a Lagrangian subspace in a hyperplane contains the skew-orthogonal complement of the hyperplane.
The $1$-dimensional distribution $\mathcal{R}$ will be called *the directing distribution of* $\mathcal{D}$ (alternatively, of $\mathcal{E}$).
This way one gets the following flag of distributions $$\mathcal{R}\subset\mathcal{D}\subset\mathcal{D}_{(1)}\subset\mathcal{C}.$$
The directing distribution $\mathcal{R}$ uniquely defines $\mathcal{D}_{(1)}$, which is its $\mathcal{C}$-orthogonal complement, and the distribution $$\mathcal{D}^{\prime}=\left\{ X\in\mathcal{D}_{(1)}\,|\,\left[ X,\mathcal{R}\right] \subset\mathcal{D}_{(1)}\right\} .$$
Since $\left[ X,\mathcal{R}\right] \subset\mathcal{D}_{(1)}$ for any $X\in
\mathcal{D}$, we see that $\mathcal{D}\subseteq\mathcal{D}^{\prime}\subseteq\mathcal{D}_{(1)}$. So, by obvious dimension arguments, only one of the following two possibilities may occur locally: either $\mathcal{D}^{\prime }=\mathcal{D}$, or $\mathcal{D}^{\prime}=\mathcal{D}_{(1)}$. A non-integrable Lagrangian distribution $\mathcal{D}$ is called *generic* in the first case and *special* in the second. Accordingly, the corresponding PMAs are called generic, or special.
The following assertion directly follows from the above definitions.
\[generic-MA\] A generic Lagrangian distribution $\mathcal{D}$ is completely determined by its directing distribution $\mathcal{R}$. This is no longer so for a special distribution; in that case $\mathcal{R}$ is the characteristic distribution of $\mathcal{D}_{(1)}$.
In this paper we shall concentrate on generic PMA equations, with special attention to the equivalence problem. In view of Proposition \[generic-MA\] this problem becomes part of the equivalence problem for 1-dimensional subdistributions of $\mathcal{C}$. To localize this part one needs a criterion allowing one to distinguish directing distributions of generic PMA equations from other $1$-dimensional subdistributions of $\mathcal{C}$. The notion of the *type* of a $1$-dimensional subdistribution of $\mathcal{C}$, introduced below, gives such a criterion.
Fix a 1-dimensional distribution $\mathcal{S}\subset\mathcal{C}$ and put $$\mathcal{S}_{r}=\left\{ Y\in\mathcal{C}\;|\;X^{s}(Y)\in\mathcal{C},\forall\;X\in\mathcal{S}\;\mathrm{{and}\;0\leq s\leq r}\right\}
.$$ In the following lemma we list without a proof some obvious properties of $\mathcal{S}_{r}$.
\[simple\]
------------------------------------------------------------------------
- $\mathcal{S}_{r+1}\subset\mathcal{S}_{r}$ and $\mathcal{S}\subset\mathcal{S}_{r}$;
- If (locally) $\mathcal{S}=\left\langle X\right\rangle $, then (locally) $$\mathcal{S}_{r}=\left\{
Y\in\mathcal{C}\;|\;X^{s}(Y)\in\mathcal{C}, \;0\leq s\leq
r\right\}$$
- $\mathcal{S}_{r}$ is a finitely generated $C^{\infty}(\mathcal{M})$-module;
- If (locally) $\mathcal{C}=Ann(U),\;U\in\Lambda^{1}(\mathcal{M})$, and (locally) $\mathcal{S}=\left\langle X\right\rangle $, then $$\mathcal{S}_{r}=Ann(U,X(U),...,X^{r}(U)).$$
Let $\mathcal{T}$ be a $C^{\infty}(\mathcal{M})$-module. An open domain $B\subset\mathcal{M}$ is called *regular* for $\mathcal{T}$ if the localization of $\mathcal{T}$ to $B$ is a projective $C^{\infty}(B)$-module. If $\mathcal{T}$ is finitely generated (see [@nestruev]), then this localization is isomorphic to the $C^{\infty}(B)$-module of smooth sections of a finite dimensional vector bundle over $B$. In such a case the dimension of this bundle is called the *rank* of $\mathcal{T}$ on $B$ and denoted by $\mathrm{{rank}_{B}\mathcal{T}}$. Moreover, the manifold $\mathcal{M}$ is subdivided into a number of open domains that are regular for $\mathcal{T}$ and the set of its *singular points* which is closed and thin. In particular, vector bundles representing localizations of $\mathcal{S}_{r}$ to its regular domains are distributions contained in $\mathcal{C}$ and containing $\mathcal{S}$ (Lemma \[simple\]).
\[rank\] Let $B\subset\mathcal{M}$ be a common regular domain for modules $\mathcal{S}_{r}$ and $\mathcal{S}_{r+1}$. Then either $\mathrm{{rank}_{B}\mathcal{S}_{r}={rank}_{B}\mathcal{S}_{r+1}+1}$, or $\mathrm{{rank}_{B}\mathcal{S}_{r}={rank}_{B}\mathcal{S}_{r+1}}$. In the latter case $B$ is regular for $\mathcal{S}_{p},p\geq r$, and $\mathrm{{rank}_{B}\mathcal{S}_{p}={rank}_{B}\mathcal{S}_{r}}$.
The first alternative takes place iff $X^{r+1}(U)$ is $C^{\infty}(\mathcal{M})$-independent of $U,X(U),\dots,X^{r}(U)$, as is easily seen from the last assertion of Lemma \[simple\]. In the second case the equality of ranks implies that the localizations of $\mathcal{S}_{r}$ and $\mathcal{S}_{r+1}$ to $B$ coincide. This shows that the localization of $\mathcal{S}_{p}$ to $B$ stabilizes starting from $p=r$.
With the exception of a thin closed set the manifold $\mathcal{M}$ is subdivided into open domains, each of which is regular for all $\mathcal{S}_{p},p\geq0$. Moreover, in such a domain $B$, $\mathrm{{rank}_{B}} \mathcal{S}_{p}=4-p$, if $p\leq r$, and $\mathrm{{rank}_{B}}\mathcal{S}_{p}=4-r$, if $p\geq r$, for an integer $r=r(B), \,1\leq r\leq 3$.
It immediately follows from Lemma \[rank\] that the function $p\mapsto \mathrm{{rank}_{B}\mathcal{S}_{p}}$ steadily decreases up to the instance, say $p=r$, when\
$\mathrm{{rank}_{B}\mathcal{S}_{r}={rank}_{B}\mathcal{S}_{r+1}}$ occurs for the first time, and stabilizes afterwards. Since $\mathcal{S}\subset\mathcal{S}_{p}$, the last assertion of Lemma \[simple\] shows that this instance happens at most for $p=3$, i.e., that $r\leq3$. On the other hand, the forms $U$ and $X(U)$ are independent. Indeed, the equality $X(U)=fU$ for a function $f$ means that $X$ is a contact field with generating function $i_{X}(U)\equiv0$. But only the zero field has this property. Hence $r\geq1$.
Finally, observe that regular domains for $\mathcal{S}_{p+1}$ are obtained from those for $\mathcal{S}_{p}$ by removing from the latter some thin subsets. Since, as we have seen before, the situation stabilizes after at most four steps starting from $p=0$, the existence of common regular domains for all $\mathcal{S}_{p}$’s whose union is everywhere dense in $\mathcal{M}$ is guaranteed.
If $\mathcal{M}$ is the only regular domain for all $\mathcal{S}_{p}$’s, then $\mathcal{S}$ is called *regular* on $\mathcal{M}$, and the integer $r$ starting from which $\mathrm{{rank}_{B}\mathcal{S}_{r}}$ stabilizes is called the *type* of $\mathcal{S}$ and also the *type* of $X$, if $\mathcal{S}=\left\langle X\right\rangle $.
Hence Lemma \[rank\] tells that the type of $\mathcal{S}$ can be only one of the numbers $1,2$, or $3$, and that the union of domains in which $\mathcal{S}$ is regular is everywhere dense in $\mathcal{M}$.
It is not difficult to exhibit vector fields of each of these three types. For instance, vector fields of the form $X_{f}-fX_{1}$ on $\mathcal{M}=J^{1}(E,2)$ with everywhere non-vanishing function $f$ are of type 1. Fields $\partial
_{x}+p\partial_{z}+q\partial_{p}$ and $\partial_{x}+p\partial_{z}+(xy+q)\partial_{p}$ are of type $2$ and $3$, respectively.
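This can be checked mechanically. The following sympy sketch (an illustration added here, not part of the original argument; the coordinates $(x,y,z,p,q)$, the contact form $U=dz-p\,dx-q\,dy$ and all helper names are our assumptions) determines the type of a field from the ranks of $\mathcal{S}_{r}=Ann(U,X(U),\dots,X^{r}(U))$ at a generic point, in accordance with the last assertion of Lemma \[simple\]:

```python
# Illustrative sketch (assumptions: coordinates (x, y, z, p, q) on J^1(E,2),
# contact form U = dz - p dx - q dy; helper names are ours).
import sympy as sp

x, y, z, p, q = sp.symbols('x y z p q')
coords = (x, y, z, p, q)

def lie_form(X, w):
    # components of the Lie derivative L_X w of a 1-form w along a vector field X
    return [sum(X[j]*sp.diff(w[i], coords[j]) + w[j]*sp.diff(X[j], coords[i])
                for j in range(5)) for i in range(5)]

def type_of(X, point):
    U = [-p, -q, 1, 0, 0]                       # dz - p dx - q dy
    forms = [U]
    for _ in range(3):
        forms.append(lie_form(X, forms[-1]))    # U, X(U), X^2(U), X^3(U)
    # rank of S_r = 5 - rank of the matrix built from U, X(U), ..., X^r(U)
    ranks = [5 - sp.Matrix(forms[:r + 1]).subs(point).rank() for r in range(4)]
    for r in range(3):
        if ranks[r] == ranks[r + 1]:            # first stabilization step
            return r
    return 3

pt = {x: 1, y: 1, z: 0, p: 1, q: 1}             # a generic point
X_a = [1, 0, p, q, 0]                           # d_x + p d_z + q d_p
X_b = [1, 0, p, x*y + q, 0]                     # d_x + p d_z + (xy + q) d_p
print(type_of(X_a, pt), type_of(X_b, pt))       # expected: 2 3
```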
Now we can characterize directing distributions of generic PMA equations.
\[caratt\] A $1$-dimensional regular distribution $\mathcal{S}\subset\mathcal{C}$ is the directing distribution of a generic PMA equation iff it is of type $3$.
Let $\mathcal{S}$ be a regular distribution of type $3$ and (locally) $\mathcal{S}=<X>$. Then by definition $\mathcal{S}_{1}=\mathcal{S}^{\bot}$ and $\mathcal{D}=\mathcal{S}_{2}$ is a bidimensional subdistribution of $\mathcal{C}$. $\mathcal{D}$ is Lagrangian, since $\mathcal{D}\subset \mathcal{S}_{1}=\mathcal{S}^{\bot}$ and $\mathcal{S}\subset\mathcal{D}$. If (locally) $\mathcal{D}=\left\langle X,Y\right\rangle $, then $\left[
X,Y\right] \notin\mathcal{D}$. Indeed, assuming that $\left[
X,Y\right]
\in\mathcal{D}$ one sees that $X^{p}(Y)\in\mathcal{D}\subset\mathcal{C},\forall p$, and hence $Y\in\mathcal{S}_{p},\forall p$. In particular, this implies that $\mathcal{D}=\left\langle X,Y\right\rangle \subset\mathcal{S}_{3}$ in contradiction with the fact that $\mathrm{{dim}\;\mathcal{S}_{3}=1}$ if $\mathcal{S}$ is of type 3. So, $\mathrm{{dim}\left\langle X,Y,\left[ X,Y\right]
\right\rangle =3}$, and $\left\langle X,Y,\left[ X,Y\right] \right\rangle \subset\mathcal{S}^{\bot}$, since $X(Y),X^{2}(Y)\in\mathcal{C}$. Now, by dimension arguments, we conclude that $\left\langle X,Y,\left[ X,Y\right]
\right\rangle =\mathcal{S}^{\bot}$ and, therefore, $\mathcal{S}$ is the directing distribution of $\mathcal{D}=\mathcal{S}_{2}$.
Conversely, assume that (locally) $\mathcal{R}=\left\langle
X\right\rangle $ is the directing distribution of a generic PMA equation corresponding to the Lagrangian distribution $\mathcal{D}=\left\langle X,Y\right\rangle $ (locally). Then, by definition, $\mathcal{R}_{1}=\mathcal{R}^{\bot}$. Moreover, $\mathcal{D}=\mathcal{D}^{\prime}$ implies that $X^{2}(Y)\notin\mathcal{R}^{\bot}$ and hence fields $X,Y,X(Y),X^{2}(Y)$ form a local basis of $\mathcal{C}$. This shows that $X^{3}(Y)\notin\mathcal{C}$. Indeed, the assumption $X^{3}(Y)\in\mathcal{C}$ implies $\left[ X,\mathcal{C}\right]
\subset\mathcal{C}$, i.e., that $X$ is a nonzero contact field with zero generating function. Hence $Y\notin\mathcal{R}_{3}\Leftrightarrow\left[ X,Y\right]
\notin\mathcal{R}_{2}$. On one side, this shows that $\mathcal{D}=\mathcal{R}_{2}$ and, on the other side, that $\mathcal{R}_{2}\neq\mathcal{R}_{3}$. So $\mathcal{R}_{3}=\mathcal{R}$, i.e., $\mathcal{R}$ is of type $3$.
From now on we denote by $Z$ a local generator of the directing distribution $\mathcal{R}$ of the considered PMA $\mathcal{E}$. A coordinate description of $\mathcal{D}=\mathcal{D}_{\mathcal{E}}$, $\mathcal{D}_{(1)}$ and $Z$ is easily obtained by a direct computation (see Proposition \[coord\]):
\[quasi\_linear\]Let $\mathcal{E}$ be a quasilinear nonintegrable PMA of the form (\[MA\_1\]) with $A\neq0$. By normalizing its coefficients to $A=1$ one has: $\mathcal{D}=\left\langle X_{1},X_{2}\right\rangle $ and $\mathcal{D}_{(1)}=\left\langle X_{1},X_{2},X_{3}\right\rangle $ with $$\begin{array}
[c]{l}X_{1}:=\partial_{x}+p\partial_{u}+\dfrac{B}{2}(\partial_{y}+q\partial
_{u})-D\partial_{p},\\
X_{2}:=\dfrac{B}{2}\partial_{p}-\partial_{q}\rule{0pt}{19pt},\\
X_{3}:=[X_{1},X_{2}]=M_{1}\left(
\partial_{y}+q\partial_{u}\right)
+M_{2}\partial_{p}\rule{0pt}{19pt},
\end{array}$$ where $$M_{1}=-\dfrac{1}{2}X_{2}(B),\qquad
M_{2}=\dfrac{1}{2}X_{1}(B)+X_{2}(D),$$ and $\mathcal{R}=\left\langle Z\right\rangle $ with $$Z:=M_{1}X_{1}-M_{2}X_{2}.$$
Let $\mathcal{E}$ be a nonintegrable PMA of the form (\[MA\_1\]) with $N\neq0$. By normalizing its coefficients to $N=1$ one has: $\mathcal{D}=\left\langle X_{1},X_{2}\right\rangle $ and $\mathcal{D}_{(1)}=\left\langle X_{1},X_{2},X_{3}\right\rangle $ with $$\begin{array}
[c]{l}X_{1}:=\partial_{x}+p\partial_{u}-C\partial_{p}+\dfrac{B}{2}\partial_{q},\\
X_{2}:=\partial_{y}+q\partial_{u}+\dfrac{B}{2}\partial_{p}-A\partial
_{q}\rule{0pt}{19pt},\\
X_{3}:=[X_{1},X_{2}]=M_{1}\partial_{q}+M_{2}\partial_{p}\rule{0pt}{19pt},
\end{array}$$ where $$M_{1}=-X_{1}(A)-\dfrac{1}{2}X_{2}(B),\qquad M_{2}=\dfrac{1}{2}X_{1}(B)+X_{2}(C),$$ and $\mathcal{R}=\left\langle Z\right\rangle $ with$$Z:=M_{1}X_{1}-M_{2}X_{2}.$$
\[classif\_integr\]Classification of integrable PMAs
====================================================
For completeness we shall prove here the following, essentially known, result in a manner that illustrates the idea of our further approach.
\[INT\] With the exception of singular points all integrable PMA equations are locally contact equivalent to each other and, in particular, to the equation $u_{xx}=0$.
Let $\mathcal{E}=\left(
\mathcal{M},\mathcal{C},\mathcal{D}\right) $ be an integrable PMA equation, i.e., the Lagrangian distribution $\mathcal{D}$ is integrable and, as such, defines a $2$-dimensional Legendrian foliation of $\mathcal{M}$. Locally this foliation can be viewed as a fibre bundle $\Pi:\mathcal{M}\rightarrow W$ over a $3$-dimensional manifold $W$. The differential $d_{\theta}(\Pi):T_{\theta}\mathcal{M} \rightarrow T_{y}W,
y=\Pi(\theta)$, sends $\mathcal{C}(\theta)$ to a bidimensional subspace $P_{\theta}\subset T_{y}W$, since $\mathrm{{ker}\,d_{\theta}(\Pi
)=\mathcal{D}(\theta)}$. This way one gets the map $\Pi_{y}:\Pi^{-1}(y)\rightarrow G_{3,2}(y), \,\theta\mapsto P_{\theta}$, where $G_{3,2}(y)$ is the Grassmanian of $2$-dimensional subspaces in $T_{y}W$. Note that $\mathrm{{dim}\,\Pi^{-1}(y)=
{dim}\,G_{3,2}(y)=2}$ and, so, the local rank of $\Pi_{y}$ may vary from $0$ to $2$. We shall show that, with the exception of a thin set of singular points, $\Pi_{y}$’s are of rank $2$, i.e., $\Pi_{y}$’s are local diffeomorphisms.
First, assume that this rank is zero for all $y\in W$, i.e., $\Pi_{y}$’s are locally constant maps. In this case $P_{\theta}$ does not depend on $\theta\in\Pi^{-1}(y)$ and we can put $\mathcal{P}(y)=P_{\theta}$, for a $\theta\in\Pi^{-1}(y)$. Hence $y\mapsto\mathcal{P}(y)$ is a distribution on $W$ and $\mathcal{C}$ is its pullback via $\Pi$. This shows that the distribution tangent to fibers of $\Pi$, i.e., $\mathcal{D}$, is characteristic for $\mathcal{C}$. But a contact distribution does not admit nonzero characteristics.
Second, if the rank of $\Pi_{y}$’s equals one for all $y\in W$, then $\mathcal{M}$ is foliated by curves $$\gamma_{P}=\{\theta\in\Pi^{-1}(y)|d_{\theta}\Pi(\mathcal{C}(\theta))=P\}$$ with $P$ being a bidimensional subspace of $T_{y}W$. Locally, this foliation may be seen as a fibre bundle $\Pi_{0}:\mathcal{M}\rightarrow N$ over a 4-dimensional manifold $N$, and $\Pi$ factorizes into the composition $$\mathcal{M}\overset{\Pi_{0}}{\rightarrow}N\overset{\Pi_{1}}{\rightarrow}W$$ with $\Pi_{1}$ uniquely defined by $\Pi$ and $\Pi_{0}$. By construction the $3$-dimensional subspace $d_{\theta}\Pi_{0}(\mathcal{C}(\theta))\subset
T_{z}N, \,z=\Pi_{0}(\theta)$, does not depend on $\theta\in\Pi^{-1}_{0}(z)=\gamma_{P}$ and one can put $\mathcal{Q}(z)= d_{\theta}\Pi
_{0}(\mathcal{C}(\theta))$ for a $\theta\in\Pi^{-1}_{0}(z)$. As before we see that $\mathcal{C}$ is the pullback via $\Pi_{0}$ of the $3$-dimensional distribution $z\mapsto\mathcal{Q}(z)$ in contradiction with the fact that $\mathcal{C}$ does not admit nonzero characteristics.
Thus, except at singular points, $\Pi_{y}$ is of rank $2$ and hence a local diffeomorphism. So, locally, $\Pi_{y}$ identifies $\Pi^{-1}(y)$ with an open domain in $G_{3,2}(y)$. By observing that $G_{3,2}(y)=\pi_{1,0}^{-1}(y)$, with $\pi_{1,0}:J^{1}(W,2)\rightarrow W$ being a natural projection, one gets a local identification of $\mathcal{M}$ with an open domain in $J^{1}(W,2)$. It is easy to see that this identification is a contact diffeomorphism. In other words, we have proven that any integrable Lagrangian distribution on a $5$-dimensional contact manifold is locally equivalent to the distribution of tangent planes to fibers of the projection $J^{1}(\mathbb{R}^{3},2)\rightarrow\mathbb{R}^{3}$.
Finally, we observe that $\mathcal{D}_{\mathcal{E}}=\left\langle
\partial_{x},
\partial_{y}\right\rangle $ for the equation $\mathcal{E}=\left\{
u_{xx}=0\right\} $ and hence this equation is integrable.
Projective curve bundles and non-integrable generic PMAs
=======================================================
In this section non-integrable generic Lagrangian distributions and, therefore, the corresponding PMAs are represented as $4$-parameter families of curves in the projective $3$-space or, more exactly, as *projective curve bundles*. Differential invariants of single curves composing such a bundle (say, projective curvature, torsion, etc.) put together give differential invariants of the whole bundle and consequently of the corresponding PMA. This basic geometric idea is developed in detail in the subsequent section.
Let $N$ be a $4$-dimensional manifold. Denote by $PT_{a}^{\ast}N$ the $3$-dimensional projective space of all $1$-dimensional subspaces of the cotangent space $T_{a}^{\ast}N$ to $N$ at the point $a\in N$. The *projectivization* $p\tau^{\ast}:PT^{\ast}N\rightarrow N$ of the cotangent bundle $\tau^{\ast}:T^{\ast}N\rightarrow N$ is the bundle whose total space is $PT^{\ast}N={\bigcup}_{a\in N}PT_{a}^{\ast}N$ and whose fiber over $a\in N$ is $PT_{a}^{\ast}N$, i.e., $(p\tau^{\ast})^{-1}(a)=PT_{a}^{\ast}N$. A *projective curve bundle* (PCB) over $N$ is a $1$-dimensional subbundle $\pi:K\rightarrow N$ of $p\tau^{\ast}$: $$\begin{array}
[c]{ccc}\quad K & \hookrightarrow & PT^{\ast}N\quad\\
\pi\downarrow & & \downarrow p\tau^{\ast}\\
\quad N & \overset{id}{\longrightarrow} & N\quad\quad
\end{array}$$ The fiber $\pi^{-1}(y),\,y\in N$, is a smooth curve in the projective space $PT_{y}^{\ast}N$. A diffeomorphism $\Phi:N\rightarrow N^{\prime}$ lifts canonically to a diffeomorphism $PT^{\ast}N\rightarrow PT^{\ast}N^{\prime}$. This lift sends a PCB $\pi$ over $N$ to a PCB over $N^{\prime}$, denoted by $\Phi\pi$. A PCB $\pi$ over $N$ and a PCB $\pi^{\prime}$ over $N^{\prime}$ are *equivalent* if there exists a diffeomorphism $\Phi:N\rightarrow N^{\prime}$ such that $\pi^{\prime}=\Phi\pi$.
Let $\pi:K\rightarrow N$ be a PCB and $\theta=<\rho>\in K$ with $\rho\in T_{\pi(\theta)}^{\ast}N$. Denote by $W_{\theta}$ the $3$-dimensional subspace of $T_{\pi(\theta)}N$ annihilated by $\theta$, i.e.,$$W_{\theta}=\left\{ \xi\in
T_{\pi(\theta)}N\,|\,\rho(\xi)=0\right\} .$$ Two distributions are canonically defined on $K$. The first of them is the $1$-dimensional distribution $\mathcal{R}_{\pi}$ formed by all vectors vertical with respect to $\pi$. The second one, denoted by $\mathcal{C}_{\pi}$, is defined by $$(\mathcal{C}_{\pi})_{\theta}=\left\{ \eta\in T_{\theta}K\,|\,d_{\theta}\pi(\eta)\in W_{\theta}\right\} .$$
Obviously, $\rm{dim}\,\mathcal{C}_{\pi}=4$ and $\mathcal{R}_{\pi}\subset\mathcal{C}_{\pi}$. If, locally, $\mathcal{R}_{\pi}=\left\langle Z\right\rangle ,\,Z\in D(K)$, and $\mathcal{C}_{\pi}=Ann(U_{\pi}),\,U_{\pi}\in\Lambda^{1}(K)$, then the *osculating distributions* of $\pi$ are defined as $$\mathcal{Z}_{s}^{\pi}=Ann(U_{\pi},Z(U_{\pi}),...,Z^{s}(U_{\pi})),\qquad
s=0,1,2,3.$$ It is easy to see that this definition does not depend on the choice of $Z$ and $U_{\pi}$. Note that $\mathcal{C}_{\pi}=\mathcal{Z}_{0}^{\pi}$. Also, $\mathcal{R}_{\pi}\subset\mathcal{Z}_{s}^{\pi},\,\forall\,s\geq0$, as easily follows from $\left[ i_{Z},L_{Z}\right] =0$ and $U_{\pi}(Z)=0$. Moreover, generically the forms $U_{\pi},Z(U_{\pi}),...,Z^{3}(U_{\pi})$ are independent and so, by dimension arguments, $\mathcal{R}_{\pi}=\mathcal{Z}_{3}^{\pi}$.
We say that $\pi$ is a *regular PCB* iff the following two conditions are satisfied: (i) $\mathcal{R}_{\pi}=\mathcal{Z}_{3}^{\pi}$ and (ii) $\mathcal{C}_{\pi}$ is a contact structure on $K$. We emphasize that regularity is a generic condition. Moreover, conditions (i)-(ii) are equivalent to the fact that $\mathcal{R}_{\pi}$ is of type 3 with respect to the contact distribution $\mathcal{C}_{\pi}$. So, by Proposition \[caratt\], the distribution $$\mathcal{D}_{\pi}=\{X\in\mathcal{R}_{\pi}^{\mathcal{\bot}}|L_{X}(\mathcal{R}_{\pi})\subset\mathcal{R}_{\pi}^{\mathcal{\bot}}\},$$ with $\mathcal{R}_{\pi}^{\bot}$ being the $\mathcal{C}_{\pi}$-orthogonal complement of $\mathcal{R}_{\pi}$, is bidimensional and Lagrangian for a regular PCB $\pi$. Thus we have
\[PCB\_equiv\_PMA\]\[GNR\]If $\pi$ is a regular PCB, then $\mathcal{D}_{\pi}$ is a Lagrangian subdistribution of $\mathcal{C}_{\pi}$ and $(K,\mathcal{C}_{\pi},\mathcal{D}_{\pi})$ is a generic PMA whose directing distribution is $\mathcal{R}_{\pi}$. Conversely, a generic PMA locally determines a regular PCB.
The first assertion of the Theorem is already proved. It remains to represent a generic PMA $\left(
\mathcal{M},\mathcal{C},\mathcal{D}\right) $ as a regular PCB. Integral curves of its directing distribution $\mathcal{R}$ foliate $\mathcal{M}$. Locally, this foliation may be considered as a fiber bundle $\pi:\mathcal{M}\rightarrow N$ over a 4-dimensional manifold $N$. Since $\mathcal{R}(\theta)\subset\mathcal{C}(\theta)$, the subspace $V_{\theta }=d_{\theta}\pi(\mathcal{C}(\theta))\subset
T_{y}N,\,y=\pi(\theta)$, is $3$-dimensional. Put $\Gamma_{y}=\pi^{-1}(y)$. The map $\pi_{y}:\Gamma _{y}\rightarrow
G_{4,3}(y),\,\theta\mapsto V_{\theta}$, with $G_{4,3}(y)$ being the Grassmanian of $3$-dimensional subspaces in $T_{y}N$, is almost everywhere of rank $1$. Indeed, the assumption that locally this rank is zero leads, as in the proof of Theorem \[INT\], to the conclusion that locally the contact distribution $\mathcal{C}$ is the pullback via $\pi$ of a $3$-dimensional distribution on $N$, which is impossible.
The correspondence $\iota_{y}:G_{4,3}(y)\rightarrow
PT_{y}^{\ast}N$ that sends a $3$-dimensional subspace $V\subset
T_{y}N$ to $Ann(V)$ is, obviously, a diffeomorphism. Hence the composition $\iota_{y}\circ\pi_{y}$ is a local embedding with exception of a thin subset of singular points. Now, it is easy to see that images of $\Gamma_{y}$’s via $\iota_{y}\circ\pi_{y}$’s give the required PCB.
The above construction associating a PCB with a given PMA is manifestly functorial, i.e., an equivalence $F:\left( \mathcal{M},\mathcal{C},\mathcal{D}\right) \longrightarrow\left( \mathcal{M}^{\prime},\mathcal{C}^{\prime},\mathcal{D}^{\prime}\right) $ of PMAs induces an equivalence $\Phi:\left( N,K,\pi\right)
\longrightarrow\left( N^{\prime },K^{\prime},\pi^{\prime}\right)
$ of associated PCBs. Indeed, $F$ sends $\mathcal{R}$ to $\mathcal{R}^{\prime}$ and hence integral curves of $\mathcal{R}$ (locally, fibers of $\pi$) to integral curves of $\mathcal{R}^{\prime}$ (locally, fibers of $\pi^{\prime}$). This defines a map $\Phi$ of the variety $N$ of fibers of $\pi$ to the variety $N^{\prime}$ of fibers of $\pi^{\prime}$, etc. Thus we have
The problem of local contact classification of generic PMAs is equivalent to the problem of local classification of regular PCBs with respect to diffeomorphisms of base manifolds.
Now we observe that there is another, in a sense, dual PCB associated with a given PMA equation. Namely, associate with a point $\theta\in K$ the line $L_{\theta}=d_{\theta}\pi(D_{\pi}(\theta))\subset
T_{y}N,\,y=\pi(\theta)$. The correspondence $\theta\mapsto
L_{\theta}\in PT_{y}N$, where $PT_{y}N$ denotes the projective space of lines in $T_{y}N$, defines a map of $\pi^{-1}(y)$ to $PT_{y}N$, i.e., a (singular) curve in $PT_{y}N$. As before this defines locally a $1$-dimensional subbundle in the projectivization $PTN$ of $TN$. It will be called the *second* PCB associated with the considered PMA.
PCBs may be considered as canonical models of PMAs. Among other things, they suggest a geometrically transparent construction of scalar differential invariants of PMAs.
Let $\mathcal{I}$ be a scalar projective differential invariant of curves in $\mathbb{R}P^{3}$, say, the *projective curvature* (see [@Wilc; @SuB]), $\theta\in K$ and $y=\pi(\theta)$. The value of this invariant for the curve $\Gamma_{y}=\pi^{-1}(y)$ in $PT_{y}^{\ast}N$ is a function on $\Gamma_{y}$. Denote it by $\mathcal{I}_{\pi,y}$ and put $\mathcal{I}_{\pi}(\theta
)=\mathcal{I}_{\pi,y}(\theta)$ if $\theta\in\Gamma_{y}\subset K$. Then, obviously, $\mathcal{I}_{\pi}\in C^{\infty}(K)$ is a differential invariant of the PCB $\pi$ and hence of the PMA represented by $\pi$.
The differential invariants of the form $\mathcal{I}_{\pi}$ are sufficient for a complete classification of generic PMA equations on the basis of the “principle of $n$-invariants”.
According to the “principle of $n$-invariants”, it is sufficient to construct $n=\mathrm{{dim}\,\mathcal{M}=5}$ independent differential invariants of PMAs in order to solve the classification problem. Such invariants of the required form will be constructed in the next section.
For the “principle of $n$-invariants” the reader is referred to [@AVL; @SDI].
Differential invariants of generic PCBs
=======================================
Let $\pi:K\rightarrow N$ be a regular PCB and, as before, $\mathcal{R}_{\pi}=<Z>$, $\mathcal{D}_{\pi}=<Z,X>$ and $\mathcal{C}_{\pi }=Ann(U)$. Here the vector field $Z$ and the 1-form $U$ are unique up to a nowhere vanishing functional factor, while $X$ is unique up to a transformation $X\longmapsto
gX+\varphi Z,\,g,\varphi\in C^{\infty}(K)$ with nowhere vanishing $g$.
Since the considered PCB is regular we have the following flag of distributions $$\mathcal{R}_{\pi}\subset\mathcal{D}_{\pi}\subset\mathcal{R}_{\pi}^{\bot
}\subset\mathcal{C}_{\pi}\subset\mathcal{D}(K)$$ of dimensions increasing from $1$ to $5$, respectively. In terms of $X,Z$ and $U$ they are described as follows:
\[theorem\_1\]Locally, with the exception of a thin set of singular points we have:
1. $\mathcal{R}_{\pi}=Ann(Z^{i}(U):i=0,1,2,3)=\left\langle
Z\right\rangle ; $
2. $\mathcal{D}_{\pi}=Ann(U,Z(U),Z^{2}(U))=\left\langle
Z,X\right\rangle ;$
3. $\mathcal{R}_{\pi}^{\bot}=Ann(U,Z(U))=<Z,X,Z(X)>;$
4. $\mathcal{C}_{\pi}=Ann(U)=<Z,Z^{i}(X):i=0,1,2>;$
5. $\{Z,X,Z(X),Z^{2}(X),Z^{3}(X)\}$ is a base of the $C^{\infty}(K)$-module $\mathcal{D}(K)$;
6. for some functions $r_{i}\in C^{\infty}(K)$ $$Z^{4}(U)+r_{1}Z^{3}(U)+r_{2}Z^{2}(U)+r_{3}Z(U)+r_{4}U=0
\label{curva_proj_esterna}$$
Assertions (1)-(3) are direct consequences of Lemma \[simple\], Proposition \[caratt\] and the definitions. Assertion (4) follows from the independence of $Z^{2}(X)$ from $Z,X,Z(X)$. This is so because otherwise $Z$ would be a characteristic of $\mathcal{R}_{\pi}^{\bot}=<Z,X,Z(X)>$ in contradiction with the fact that $\mathcal{R}_{\pi}$ is of type 3. Similarly, $Z^{3}(X)$ is independent of $Z,X,Z(X),Z^{2}(X)$. Indeed, otherwise $Z$ would be a characteristic of $\mathcal{C}_{\pi}$. This proves (5).
Finally, the forms $Z^{s}(U),\,s\geq0$, are annihilated by $Z$. Since $\mathrm{{dim}\,K=5}$ this implies that $Z^{4}(U)$ depends on $Z^{s}(U),\,s\leq 3$. But this is equivalent to (\[curva\_proj\_esterna\]).
\[pippo\] If $0\leq k,l\leq3$ and $k+l=3$, then $$Z^{k}(X)\lrcorner Z^{l}(U)=(-1)^{k+1}Z^{3}(X)\lrcorner U\neq0.$$
Assertions (4) and (5) of the above Proposition show that $Z^{3}(X)$ completes a basis of $\mathcal{C}$ to a basis of $D(K)$. So, $Z^{3}(X)\lrcorner U$ is a nowhere vanishing function.
It follows from the standard formula $\left[ i_{X},L_{Y}\right]
=i_{\left[ X,Y\right] }$ that $$\begin{aligned}
Z^{k}(X)\lrcorner Z^{l}(U) & =\left[ i_{Z^{k}(X)},L_{Z}\right]
(Z^{l-1}(U))+L_{Z}(Z^{k}(X)\lrcorner Z^{l-1}(U))=\nonumber\label{r+s}\\
& -Z^{k+1}(X)\lrcorner Z^{l-1}(U)+L_{Z}(Z^{k}(X)\lrcorner
Z^{l-1}(U))\end{aligned}$$ According to Proposition \[theorem\_1\], (1)-(3), $Z^{r}(X)\lrcorner Z^{s}(U)=0$ if $r+s=2$. So, for $k+l=3$ relation (\[r+s\]) becomes $$Z^{k}(X)\lrcorner Z^{l}(U)=-Z^{k+1}(X)\lrcorner Z^{l-1}(U)$$
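To illustrate the corollary on a concrete case (our own example): take the type-3 field $Z=\partial_{x}+p\partial_{z}+(xy+q)\partial_{p}$ considered earlier as a generator of the directing distribution and complete it with $X=\partial_{y}+q\partial_{z}-x\partial_{q}$, a field we chose inside $Ann(U,Z(U),Z^{2}(U))$. The following sympy sketch confirms $Z^{k}(X)\lrcorner Z^{l}(U)=(-1)^{k+1}Z^{3}(X)\lrcorner U$ for $k+l=3$:

```python
# Illustrative check of Corollary [pippo] on a concrete type-3 example
# (assumptions: coordinates (x, y, z, p, q), U = dz - p dx - q dy; Z, X as above).
import sympy as sp

x, y, z, p, q = sp.symbols('x y z p q')
coords = (x, y, z, p, q)

def lie_form(V, w):      # L_V w for a 1-form w, componentwise
    return [sp.simplify(sum(V[j]*sp.diff(w[i], coords[j]) + w[j]*sp.diff(V[j], coords[i])
                            for j in range(5))) for i in range(5)]

def bracket(V, W):       # [V, W], componentwise
    return [sp.simplify(sum(V[j]*sp.diff(W[i], coords[j]) - W[j]*sp.diff(V[i], coords[j])
                            for j in range(5))) for i in range(5)]

def pair(V, w):          # interior product V _| w
    return sp.simplify(sum(Vi*wi for Vi, wi in zip(V, w)))

U = [-p, -q, 1, 0, 0]                 # dz - p dx - q dy
Z = [1, 0, p, x*y + q, 0]             # directing field (type 3)
X = [0, 1, q, 0, -x]                  # completes Z to the Lagrangian distribution D

ZkX, ZlU = [X], [U]
for _ in range(3):
    ZkX.append(bracket(Z, ZkX[-1]))   # Z(X), Z^2(X), Z^3(X)
    ZlU.append(lie_form(Z, ZlU[-1]))  # Z(U), Z^2(U), Z^3(U)

base = pair(ZkX[3], U)                # Z^3(X) _| U  (nowhere vanishing)
for k in range(4):
    lhs = pair(ZkX[k], ZlU[3 - k])
    print(k, sp.simplify(lhs - (-1)**(k + 1)*base) == 0)   # True for k = 0,...,3
```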
By decomposing $Z^{4}(X)$ with respect to the base in (5) of Proposition \[theorem\_1\] we get $$Z^{4}(X)+\rho_{1}Z^{3}(X)+\rho_{2}Z^{2}(X)+\rho_{3}Z(X)+\rho_{4}X+\rho_{5}Z=0
\label{curva_proj_interna_*}$$ for some functions $\rho_{i}\in C^{\infty}(K)$ and hence $$Z^{4}(X\wedge Z)+\rho_{1}Z^{3}(X\wedge Z)+\rho_{2}Z^{2}(X\wedge
Z)+\rho
_{3}Z(X\wedge Z)+\rho_{4}X\wedge Z=0. \label{curva_proj_interna}$$
The latter is a relation involving the bivector $X\wedge Z$, which generates the distribution $\mathcal{D}_{\pi}$. This bivector is unique up to a functional factor.
Functions $r_{i}$’s in decomposition (\[curva\_proj\_esterna\]) are expressed in terms of iterated Lie derivatives $Z^{i}(U),Z^{j}(X)$ as follows: $$\begin{array}
[c]{l}r_{1}=-\dfrac{X\lrcorner Z^{4}(U)}{X\lrcorner Z^{3}(U)},\qquad r_{2}=-\dfrac{Z(X)\lrcorner Z^{4}(U)+r_{1}Z(X)\lrcorner
Z^{3}(U)}{Z(X)\lrcorner
Z^{2}(U)},\vspace{0.1in}\\
r_{3}=-\dfrac{Z^{2}(X)\lrcorner Z^{4}(U)+r_{1}Z^{2}(X)\lrcorner Z^{3}(U)+r_{2}Z^{2}(X)\lrcorner Z^{2}(U)}{Z^{2}(X)\lrcorner Z(U)},\vspace{0.1in}\\
r_{4}=-\dfrac{Z^{3}(X)\lrcorner Z^{4}(U)+r_{1}Z^{3}(X)\lrcorner Z^{3}(U)+r_{2}Z^{3}(X)\lrcorner Z^{2}(U)+r_{3}Z^{3}(X)\lrcorner Z(U)}{Z^{3}(X)\lrcorner U}.
\end{array}$$
By successively inserting the fields $Z^{s}(X),\,0\leq s\leq3$, in (\[curva\_proj\_esterna\]) one easily gets the result by taking into account Proposition \[theorem\_1\] and Corollary \[pippo\].
Similarly, we have
Functions $\rho_{i}$ in decomposition (\[curva\_proj\_interna\]) are expressed in terms of iterated Lie derivatives $Z^{i}(U),Z^{j}(X)$ as follows: $$\begin{array}
[c]{l}\rho_{1}=-\dfrac{Z^{4}(X)\lrcorner U}{Z^{3}(X)\lrcorner
U},\qquad\rho
_{2}=-\dfrac{Z^{4}(X)\lrcorner Z(U)+\rho_{1}Z^{3}(X)\lrcorner Z(U)}{Z^{2}(X)\lrcorner Z(U)},\vspace{0.1in}\\
\rho_{3}=-\dfrac{Z^{4}(X)\lrcorner
Z^{2}(U)+\rho_{1}Z^{3}(X)\lrcorner
Z^{2}(U)+\rho_{2}Z^{2}(X)\lrcorner Z^{2}(U)}{Z(X)\lrcorner Z^{2}(U)},\vspace{0.1in}\\
\rho_{4}=-\dfrac{Z^{4}(X)\lrcorner
Z^{3}(U)+\rho_{1}Z^{3}(X)\lrcorner
Z^{3}(U)+\rho_{2}Z^{2}(X)\lrcorner Z^{3}(U)+\rho_{3}Z(X)\lrcorner Z^{3}(U)}{X\lrcorner Z^{3}(U)}.
\end{array}$$
As before we get the desired result by successively inserting the vector field on the left hand side of (\[curva\_proj\_interna\_\*\]) into the $1$-forms $Z^{s}(U), \,0\leq s\leq3$.
By introducing the functions $\alpha_{kl}=Z^{k}(X)\lrcorner Z^{l}(U)$ we see that $r_{i}$’s and $\rho_{i}$’s are rational functions of $\alpha_{kl}$’s: $$r_{1}=-\frac{\alpha_{04}}{\alpha_{03}},\qquad r_{2}=\frac{\alpha_{04}\alpha_{13}-\alpha_{03}\alpha_{14}}{\alpha_{03}\alpha_{12}},\qquad etc.$$
The differential invariants we are going to construct are projective differential invariants of the curves composing the considered PCB. Clearly, it is not possible to describe these curves explicitly. So, the problem is how to express these invariants in terms of the data at our disposal, i.e., $X,Z$ and $U$. In what follows this problem is solved on the basis of a rather transparent analogy. For instance, the field $Z$ restricted to one of these curves may be thought of as the derivation with respect to a parameter along this curve, and so on. So, with similar interpretations in mind it is sufficient just to mimic a known construction of projective differential invariants for curves in order to obtain the desired result. In doing that we follow the classical book of Wilczynsky [@Wilc]. To stress the analogy used, we pass to Wilczynsky’s $p_{i}$’s and $q_{j}$’s instead of the above $r_{i}$’s and $\rho_{j}$’s: $$r_{1}=4p_{1},\qquad r_{2}=6p_{2},\qquad r_{3}=4p_{3},\qquad
r_{4}=p_{4}
\label{Wilc_coeff_esterna}$$ and$$\rho_{1}=4q_{1},\qquad\rho_{2}=6q_{2},\qquad\rho_{3}=4q_{3},\qquad\rho
_{4}=q_{4}. \label{Wilc_coeff_interna}$$ In terms of these functions relations (\[curva\_proj\_esterna\]) and (\[curva\_proj\_interna\]) read$$Z^{4}(U)+4p_{1}Z^{3}(U)+6p_{2}Z^{2}(U)+4p_{3}Z(U)+p_{4}U=0,
\label{curva_proj_esterna_adatt}$$$$Z^{4}(X\wedge Z)+4q_{1}Z^{3}(X\wedge Z)+6q_{2}Z^{2}(X\wedge
Z)+4q_{3}Z(X\wedge
Z)+q_{4}X\wedge Z=0. \label{curva_proj_interna_adatt}$$ These relations are identical to Wilczynsky’s formulas (see equation (1), page 238 of [@Wilc]).
$Z$ and $U$ are unique up to a “gauge” transformation $(Z,U)\longmapsto
(\overline{Z},\overline{U})$$$Z=f\overline{Z},\hspace{0.3in}U=h\overline{U} \label{gauge}$$ with nowhere vanishing $f,h\in C^{\infty}(K)$. The corresponding transformation of coefficients $\{p_{i}\}\longmapsto\{\bar{p}_{i}\}$ can be easily obtained from (\[curva\_proj\_esterna\_adatt\]) by a direct computation: $$\begin{array}
[c]{l}p_{1}\mapsto\bar{p}_{1}={\dfrac{Z(h)}{h}}+{\dfrac{p_{1}}{f}}+\,{\dfrac
{3Z(f)}{2f},\rule[-0.2in]{0in}{0.3in}}\\
p_{2}\mapsto\bar{p}_{2}={\dfrac{p_{2}}{{f}^{2}}+\dfrac{2p_{1}\,Z(f)}{{f}^{2}}}+{\dfrac{2p_{1}\,Z(h)}{3hf}}+3\,{\dfrac{Z(h)Z(f)}{hf}}+{\dfrac{7Z(f)^{2}}{6{f}^{2}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad+{\dfrac{2Z^{2}(f)}{3f}}+{\dfrac{Z^{2}(h)}{h},\rule[-0.2in]{0in}{0.3in}}\\
p_{3}\mapsto\bar{p}_{3}={\dfrac{p_{3}}{{f}^{3}}+{\dfrac{3p_{2}Z(h)}{h{f}^{2}}}+{\dfrac{3p_{2}Z(f)}{2{f}^{3}}}+\dfrac{p_{1}\,Z(f)^{2}}{{f}^{3}}+\dfrac{p_{1}Z^{2}(f)}{{f}^{2}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad{+{\dfrac{6p_{1}Z(h)Z(f)}{h{f}^{2}}}}+{\dfrac{3p_{1}Z^{2}(h)}{hf}+{\dfrac{Z^{3}(f)}{4f}}+{\dfrac{Z(f)^{3}}{4{f}^{3}}}+{\dfrac{Z^{3}(h)}{h}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad{+{\dfrac{9Z(f)Z^{2}(h)}{2hf}}}+{\dfrac{7Z(h)Z(f)^{2}}{2h{f}^{2}}+{\dfrac{2Z(h)Z^{2}(f)}{hf}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad{+{\dfrac{Z^{2}(f)Z(f)}{{f}^{2}}},\rule[-0.2in]{0in}{0.3in}}\\
p_{4}\mapsto\bar{p}_{4}={\dfrac{p_{4}}{{f}^{4}}+{\dfrac{4p_{3}Z(h)}{h{f}^{3}}}}+{\dfrac{6p_{2}Z^{2}(f)}{h{f}^{2}}}+{\dfrac{6p_{2}Z(h)Z(f)}{h{f}^{3}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad{+{\dfrac{4p_{1}\,Z^{3}(h)}{hf}+\dfrac{4p_{1}Z(h)Z^{2}(f)}{h{f}^{2}}}}+{{\dfrac{4p_{1}Z(h)Z(f)^{2}}{h{f}^{3}}}+{\dfrac{Z^{4}(h)}{h}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad+{\dfrac{12p_{1}Z(f)Z^{2}(h)}{h{f}^{2}}+{\dfrac{6Z(f)Z^{3}(h)}{hf}}+{\dfrac{7Z(f)^{2}Z^{2}(h)}{h{f}^{2}}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad{+{\dfrac{Z(h)Z^{3}(f)}{hf}}+{\dfrac{4Z(h)Z^{2}(f)Z(f)}{h{f}^{2}}}+{\dfrac{Z(f)^{3}Z(h)}{h{f}^{3}}}\rule[-0.16in]{0in}{0.3in}}\\
\qquad\qquad+{\dfrac{4Z^{2}(h)Z^{2}(f)}{hf}.}\end{array}
\label{trasf_Pi}$$
Now the problem is to combine $p_{i}$’s in a way to obtain expressions which are invariant with respect to transformations (\[trasf\_Pi\]). To this end we first normalize $(Z,U)$ by the condition $p_{1}=0$. This can be easily done with $f=1$ and a solution $h$ of the equation$${Z(h)}+p_{1}h{=0.}$$ After this normalization, equation (\[curva\_proj\_esterna\_adatt\]) takes a simpler form$$Z^{4}(\overline{U})+6P_{2}Z^{2}(\overline{U})+4P_{3}Z(\overline{U})+P_{4}\overline{U}=0 \label{curva_esterna_semi_can}$$ with $$\begin{array}
[c]{l}P_{2}:=p_{2}-Z(p_{1})-p_{1}^{2},\vspace{0.13in}\\
P_{3}:=p_{3}-Z(Z(p_{1}))-3p_{1}p_{2}+2p_{1}^{3},\vspace{0.13in}\\
P_{4}:=p_{4}-4p_{1}p_{3}-3p_{1}^{4}-Z(Z(Z(p_{1})))+3Z(p_{1})^{2}\vspace
{0.1in}\\
\qquad+6p_{1}^{2}Z(p_{1})+6p_{1}^{2}p_{2}-6Z(p_{1})p_{2}.
\end{array}
\label{Ps_per_tipo_3}$$
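These expressions can be tested in a one-dimensional model where $Z$ acts as $d/dt$ on functions of a parameter $t$. In the sympy sketch below (our own illustration; we take $h=e^{g}$ with an arbitrary function $g$ and set $p_{1}=-Z(g)$, so that $Z(h)+p_{1}h=0$ holds identically) the left-hand side of (\[curva\_proj\_esterna\_adatt\]) applied to $U=h\overline{U}$ reduces exactly to (\[curva\_esterna\_semi\_can\]) with the coefficients (\[Ps\_per\_tipo\_3\]):

```python
# Illustrative 1-D check of the normalization p_1 = 0 (assumptions: Z = d/dt,
# h = exp(g) with p_1 = -g', arbitrary p_2, p_3, p_4; names are ours).
import sympy as sp

t = sp.symbols('t')
g, Ub = sp.Function('g')(t), sp.Function('Ubar')(t)
p2, p3, p4 = [sp.Function(n)(t) for n in ('p2', 'p3', 'p4')]

p1 = -g.diff(t)                      # so that h' + p1*h = 0 with h = exp(g)
h = sp.exp(g)
U = h*Ub

lhs = U.diff(t, 4) + 4*p1*U.diff(t, 3) + 6*p2*U.diff(t, 2) + 4*p3*U.diff(t) + p4*U

P2 = p2 - p1.diff(t) - p1**2
P3 = p3 - p1.diff(t, 2) - 3*p1*p2 + 2*p1**3
P4 = (p4 - 4*p1*p3 - 3*p1**4 - p1.diff(t, 3) + 3*p1.diff(t)**2
      + 6*p1**2*p1.diff(t) + 6*p1**2*p2 - 6*p1.diff(t)*p2)

rhs = h*(Ub.diff(t, 4) + 6*P2*Ub.diff(t, 2) + 4*P3*Ub.diff(t) + P4*Ub)
print(sp.simplify(sp.expand(lhs - rhs)) == 0)   # True
```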
Transformations (\[gauge\]) preserving the normalization $p_{1}=0$ are subject to the condition$$Z(h)+{\dfrac{3}{2}Z(\ln(f))h=0.} \label{gauge_cond}$$
A direct computation.
Now the problem reduces to finding invariant combinations of $P_{2},P_{3}$ and $P_{4}$ with respect to normalized, i.e., respecting condition (\[gauge\_cond\]), transformations (\[gauge\]). This can be done, for instance, by mimicking the construction of projective curvature and torsion in [@Wilc]. Namely, introduce first the functions$$\begin{array}
[c]{l}\Theta_{3}=P_{3}-\frac{3}{2}Z(P_{2}),\vspace{0.1in}\\
\Theta_{4}=P_{4}-2Z(P_{3})+\frac{6}{5}Z(Z(P_{2}))-\frac{81}{25}P_{2}^{2},\vspace{0.1in}\\
\Theta_{3\cdot1}=6\Theta_{3}Z(Z(\Theta_{3}))-7Z(\Theta_{3})^{2}-\frac{108}{5}P_{2}\Theta_{3}^{2}\end{array}
\label{P_tipo_3}$$ They are *semi-invariant* with respect to normalized transformations (\[gauge\]), i.e., they are transformed according to formulas $$\overline{\Theta}_{3}=\frac{\Theta_{3}}{f^{3}},\hspace{0.45in}\overline
{\Theta}_{4}=\frac{\Theta_{4}}{f^{4}},\hspace{0.45in}\overline{\Theta}_{3\cdot1}=\frac{\Theta_{3\cdot1}}{f^{8}}. \label{theta_transformation}$$
Obviously, the following combinations of the $\Theta_{i}$’s $$\kappa_{1}=\frac{\Theta_{4}}{\Theta_{3}^{4/3}},\hspace{0.45in}\kappa_{2}=\frac{\Theta_{3\cdot1}}{\Theta_{3}^{8/3}} \label{first_proj_inv}$$ are invariant with respect to normalized transformations (\[gauge\]).
Thus we have
$\kappa_{1}$ and $\kappa_{2}$ are scalar differential invariants of parabolic Monge-Ampere equations (\[MA\_1\]) with respect to contact transformations.
Explicit expressions of $\kappa_{1}$ and $\kappa_{2}$ in terms of coefficients of PMA (\[MA\_1\]) can be straightforwardly obtained from those of $Z$ and $U$. However, they are not very instructive and too cumbersome to be reported here.
Another invariant, which can be readily extracted from (\[theta\_transformation\]), is the *invariant vector field* $$\label{N1}N_{1}=\Theta_{3}^{-1/3}Z.$$
Another set of scalar differential invariants can be constructed in a similar manner by starting from equation (\[curva\_proj\_interna\]). Indeed, the vector field $Z$ and the bivector field $X\wedge Z$ generating distributions $\mathcal{R}$ and $\mathcal{D}$, respectively, are unique up to transformations $$Z=f\overline{Z},\hspace{0.45in}X\wedge
Z=g\overline{X}\wedge\overline{Z}
\label{gauge_interno}$$ with nowhere vanishing $f,g\in C^{\infty}(K)$.
It is easy to check that coefficients $q_{i}$’s are transformed according to formulas (\[trasf\_Pi\]) and one can repeat what was already done previously in the case of equation (\[curva\_proj\_esterna\]). In particular, equation (\[curva\_proj\_interna\]) can be normalized as $$Z^{4}(\overline{X}\wedge\overline{Z})+6Q_{2}Z^{2}(\overline{X}\wedge
\overline{Z})+4Q_{3}Z(\overline{X}\wedge\overline{Z})+Q_{4}\overline{X}\wedge\overline{Z}=0 \label{curva_proj_interna_ad}$$ with $$\begin{array}
[c]{l}Q_{2}=q_{2}-Z(q_{1})-q_{1}^{2},\vspace{0.13in}\\
Q_{3}=q_{3}-Z(Z(q_{1}))-3q_{1}q_{2}+2q_{1}^{3},\vspace{0.13in}\\
Q_{4}=q_{4}-4q_{1}q_{3}-3q_{1}^{4}-Z(Z(Z(q_{1})))+3Z(q_{1})^{2}\vspace
{0.1in}\\
\qquad+6q_{1}^{2}Z(q_{1})+6q_{1}^{2}q_{2}-6Z(q_{1})q_{2}.
\end{array}
\label{Qs_curva_proj_int}$$
This way we obtain the following scalar differential invariants of PMAs $$\tau_{1}:=\frac{\Lambda_{4}}{\Lambda_{3}^{4/3}},\hspace{0.45in}\tau_{2}:=\frac{\Lambda_{3\cdot1}}{\Lambda_{3}^{8/3}} \label{second_inv}$$ with *semi-invariants* $\Lambda_{i}$’s defined by $$\begin{array}
[c]{l}\Lambda_{3}=Q_{3}-\frac{3}{2}Z(Q_{2}),\vspace{0.1in}\\
\Lambda_{4}=Q_{4}-2Z(Q_{3})+\frac{6}{5}Z(Z(Q_{2}))-\frac{81}{25}Q_{2}^{2},\vspace{0.1in}\\
\Lambda_{3\cdot1}=6\Lambda_{3}Z(Z(\Lambda_{3}))-7Z(\Lambda_{3})^{2}-\frac
{108}{5}Q_{2}\Lambda_{3}^{2}.
\end{array}
\label{Lambdas}$$
Similarly, $$N_{2}=\Lambda_{3}^{-1/3}Z$$ is an invariant vector field.
The observed parallelism in the construction of the two sets of differential invariants is explained by the fact that in both cases we compute the same invariants in two different PCBs, namely, the first and the second ones, associated with the considered PMA.
Since the vector fields $N_{1}$ and $N_{2}$ are invariant and $N_{1}=\lambda N_{2}$, the factor $\lambda=\Theta_{3}^{-1/3}\Lambda_{3}^{1/3}$ is a scalar differential invariant. So, $$\gamma_{3}:=\lambda^{3}=\frac{\Lambda_{3}}{\Theta_{3}}$$ is a scalar differential invariant of a new “mixed” kind as well as the ratios $$\gamma_{4}:=\frac{\Lambda_{4}}{\Theta_{4}},\qquad\gamma_{3\cdot1}:=\frac{\Lambda_{3\cdot1}}{\Theta_{3\cdot1}}.$$
By applying to the already constructed scalar differential invariants $$\kappa_{1}, \quad\kappa_{2}, \quad\tau_{1}, \quad\tau_{2},
\quad\gamma_{3},
\quad\gamma_{4}, \quad\gamma_{3\cdot1}$$ various algebraic operations and arbitrary compositions of the invariant vector fields $N_{1}$ and $N_{2}$ one can construct many other scalar differential invariants of PMAs. These do not exhaust all invariants. Nevertheless, various quintuples of (functionally) independent invariants can be composed from them, and this is all one needs in order to apply the “principle of $n$ invariants”. For instance, we have
Quintuples $\left(
\kappa_{1},\kappa_{2},\tau_{1},\tau_{2},\gamma_{3}\right) $ and $\left( \gamma_{3},N_{1}(\gamma_{3}),N_{1}^{2}(\gamma_{3}),\kappa
_{1},\kappa_{2}\right) $ are composed of independent invariants.
It is sufficient to exhibit an example for which the invariants composing each of these two quintuples are independent. For instance, for both quintuples such is the equation determined by the directing distribution $\mathcal{R}$ generated by $$Z=q\partial_{x}+y\partial_{y}+(qp+yq)\partial_{u}+x\partial_{p}-xu\partial
_{q}.$$ In this case the corresponding Lagrangian distribution $\mathcal{D}$ is generated by $$X_{1}=a_{1}(\partial_{x}+p\partial_{u})+a_{2}\partial_{p}+a_{3}\partial
_{q},\qquad X_{2}=a_{1}(\partial_{y}+q\partial_{u})+a_{3}\partial_{p}+a_{4}\partial_{q}$$ with $$\begin{array}
[c]{l}a_{1}=xyu-(x-y)q,\qquad a_{2}=-(x-y)x-(u+xp)y^{2},\\
a_{3}=x^{2}u+(u+xp)yq,\qquad a_{4}=-(q+xu)xu-(u+xp)q^{2},
\end{array}$$ and the corresponding PMA is$$\begin{array}
[c]{l}a_{1}^{2}(rt-s^{2})+a_{1}a_{4}r+2a_{1}a_{3}s-a_{1}a_{2}t\\
\qquad+xa_{1}(xyup-xu-xpq+yu^{2}-uq)=0.
\end{array}
\label{example}$$
Explicit expressions of invariants $\left(
\kappa_{1},\kappa_{2},\tau _{1},\tau_{2},\gamma_{3}\right) $ and $\left(\gamma_{3},N_{1}(\gamma _{3}),\right. \allowbreak\left.
N_{1}^{2}(\gamma_{3}), \kappa_{1},\kappa _{2}\right)$ for equation (\[example\]) are too cumbersome to be reported here. A direct check shows that they are functionally independent in each of the above two quintuples.
Thus, according to the principle of $n$-invariants (see [@AVL; @SDI]), the proven existence of five independent scalar differential invariants solves in principle the equivalence problem for generic PMA equations. It should be stressed, however, that a practical implementation of this result could meet some tedious computational problems.
Concluding remarks
==================
Representation of a PMA equation $\mathcal{E}$ by means of the associated PCB makes clearly visible the nature of its nonlinearities. For example, if all curves of this bundle are projectively nonequivalent to each other, then $\mathcal{E}$ does not admit contact symmetries, etc. Instances of this can be detected by means of the invariants constructed in the previous section. On the contrary, it may happen that all curves composing a PCB are projectively equivalent, i.e., the nonlinearities of the corresponding PMA $\mathcal{E}$ are “homogeneous”. The above constructed invariants are not sufficient to distinguish one PMA that is homogeneous in this sense from another, and a need for new, finer invariants arises. It is remarkable that in similar situations PCBs themselves give an idea of how such invariants can be constructed. For instance, in the above homogeneous case one can observe that the bundle $PT^*N\rightarrow N$ is naturally supplied with a full parallelism structure which immediately furnishes the required new invariants. It is not difficult to imagine various intermediate situations, which demonstrate the diversity and complexity of the world of parabolic Monge-Ampere equations. In particular, the problem of describing all strata of the characteristic diffiety (see [@SDI]) for parabolic Monge-Ampere equations is a task of a rather large scale. Further results in this direction will appear in a series of forthcoming publications.
D. V. Alekseevsky, V. V. Lychagin, A. M. Vinogradov: Basic ideas and concepts of differential geometry, Encyclopedia of Math. Sciences, **28** (Springer-Verlag, Berlin 1991).
Su Buchin: The General Projective Theory of Curves (Science Press, Peking 1973).
I. S. Krasil’shchik, V. V. Lychagin and A. M. Vinogradov: Geometry of Jet Spaces and Nonlinear Partial Differential Equations (Gordon and Breach, New York 1986).
I. S. Krasil’shchik, A. M. Vinogradov et al.: Symmetries and Conservation Laws for Differential Equations of Mathematical Physics, Translation of Mathematical Monographs Series 182 (Amer. Math. Soc., Providence, RI 1999).
A. Kushner, V. Lychagin, V. Rubtsov: Contact Geometry and Nonlinear Differential Equations (Cambridge Univ. Press, Cambridge 2007).
T. Morimoto: *Monge-Ampere equations viewed from contact geometry*, Banach Center Publ., **39** (1997) 105-121.
J. Nestruev: Smooth Manifolds and Observables (Springer-Verlag, Berlin - New-York 2002).
A. M. Vinogradov: *On geometry of Second Order Parabolic Differential Equations in Two Independent Variables*, Dokl. Math. **423** (2008).
A. M. Vinogradov: *Geometric Singularities of Solutions of Nonlinear Partial Differential Equations*, in Proceedings of Conference “Differential Geometry and Applications”, Brno, 1986.
A. M. Vinogradov: in *Mechanics, Analysis and Geometry: 200 Years after Lagrange*, Ed. by M. Francaviglia (North Holland, Holland 1991).
E. J. Wilczynsky: Projective Differential Geometry of Curves and Ruled Surfaces (Chelsea, 1961).
---
abstract: 'In the search for exact solutions to Einstein’s field equations the main simplification tool is the introduction of spacetime symmetries. Motivated by this fact we develop a method to write the field equations for general matter in a form that fully incorporates the character of the symmetry. The method is expressed in a covariant formalism using the framework of a double congruence. The basic notion on which it is based is that of the geometrisation of a general symmetry. As a special application of our general method we consider the case of a spacelike conformal Killing vector field on the spacetime manifold for special types of matter fields. New perspectives in General Relativity are discussed.'
---
**INCORPORATION OF SPACETIME SYMMETRIES IN EINSTEIN’S FIELD EQUATIONS**
Elias Zafiris
*Theoretical Physics Group, Imperial College, The Blackett Laboratory, London SW7 2BZ, U.K.*
e-mail: [email protected]
IMPERIAL/TP/96-97/06, PACS 04
Introduction
============
In recent years there has been a lot of research work in symmetries in General Relativity. Originally, the motivation was the need to simplify Einstein’s field equations in the search for exact solutions, and the introduction of symmetries or collineations served as the basic tool.
The types of symmetries dealt with are those which arise from the existence of a Lie algebra of vector fields on the spacetime manifold which are invariant vector fields of certain geometrical objects on this manifold. The symmetries can be expressed in relations of the form $L_{\xi}W=Y$ where $W$ and $Y$ are two geometrical objects on the spacetime manifold and $\xi^{a}$ is the vector field generating the symmetry \[1\].
The most important and common symmetries are those for which $W$ and $Y$ are one of the fundamental tensor fields of Riemannian geometry, namely $$g_{ab},\quad {\Gamma^{a}}_{ bc},\quad R_{ab},\quad
R_{abcd}$$ A diagram defining these symmetries and giving their relative hierarchy is given in \[2\].
The most extensively studied symmetries in the context of General Relativity are the Killing vectors and the Conformal Killing vectors \[3,4,5,6,7\]. Some work has also been done on Affine collineations \[8,9,10\], Curvature collineations \[2,11,12\] and contracted Ricci collineations \[13,14,15\]. The latter symmetries are much more difficult to study, because besides the metric they involve other geometrical objects obeying concrete conditions, which make the handling and the interpretation of the equations difficult. A unified approach to all these symmetries from a purely differential geometric point of view has recently been given in \[16\].
It seems to us that although the motivation to study all the above mentioned symmetries was the simplification of Einstein’s equations in the search for exact solutions, this initial aim has remained partially unfulfilled. More concretely, there does not seem to appear in the literature, so far as we know, a general framework which permits the incorporation of a symmetry of every possible kind in Einstein’s equations for general matter.
Towards this direction we develop a method to write the field equations for general matter in a form that fully incorporates the character of the symmetry. Our method is based on the notion of geometrisation of a general symmetry, that is, we employ its description as necessary and sufficient conditions on the geometry of the integral lines of the vector field which generates the symmetry. Attempts in this direction have been made for timelike Conformal Killing vectors \[17,18\], using the theory of timelike congruences \[19,20,21,22,23\], and for spacelike Conformal Killing vectors \[24\], using the theory of spacelike congruences \[25,26\], and recently a step towards a generalisation of this idea has been presented in \[27\].
The most important aspect of the “geometrisation” method of studying a symmetry is that the symmetry is expressed in a form that is suited to the simplification of the field equations in a direct and inherent way.
The method we develop may be outlined as follows:
The introduction of a symmetry is most conveniently studied if we consider the Lie derivative of the field equations with respect to the vector $\xi^a$ which generates the symmetry. After doing this we obtain an expression which on the left-hand side contains the Lie derivative of the Ricci tensor and on the right-hand side contains the Lie derivative of the energy momentum tensor. Eventually, we obtain the field equations as Lie derivatives along the symmetry vector of the dynamical variables. The $L_{\xi}R_{ab}$ can be calculated directly from the symmetry in terms of $\xi_{a;b}$. If we employ the corresponding theory of congruences we can express the $\xi_{a;b}$ in terms of the kinematical quantities (expansion, vorticity, shear) which characterise the congruence generated by the symmetry vector. The next step is to use the expression of a general symmetry as an equivalent set of conditions on the kinematical quantities characterising the congruence, namely we apply the geometrisation of a symmetry. If we substitute these conditions into the general expression for $L_{\xi}R_{ab}$ we manage to write directly the field equations, for any type of matter, in a way that they inherit the symmetry of $\xi_{a}$.
In this work we compute the $L_{\xi}T_{ab}$ using the most general form of the energy-momentum tensor $T_{ab}$, and the method is demonstrated assuming only that $\xi_{a}$ is a spacelike vector orthogonal to the 4-velocity $u^{a}$ of the observers. This choice is justified by the fact that we wish to keep the length of the equations as short as possible and at the same time exhibit all the steps of the method in a transparent manner, without falling into complications irrelevant to our purposes. Although we work with a spacelike symmetry vector it is plausible that our approach can be applied to an arbitrary (non-null) $\xi_{a}$, tilted with respect to $u^{a}$. (We note that the case in which $\xi_{a}$ is timelike is much simpler.)
Having considered general matter it is evident that all the studies referring to various simplified types of matter like perfect fluids, charged fluids, anisotropic fluids and so on, constitute special cases of our general scheme and they offer the ground to check the validity of our results. Thus we finally manage to recover and extend the results of the current literature, for example those of references \[28,29,30\] and \[31\].
The structure of the paper is as follows:
In section 2 we introduce the double congruence covariant framework permitting the introduction of arbitraty reference frames. In section 3 we present the generic formulation of symmetries in General Relativity and then we study the geometrisation of spacetime symmetries. The key formal results of the paper are contained in section 4 where we construct the symmetries-incorporated Einstein’s field equations for general matter. Section 5 shows how these results are used in the situation of a spacelike Conformal Killing vector symmetry discussing the perfect fluid case. Finally we summarize and conclude in section 6, discussing further avenues of research. The applications considered here are taken just far enough to demonstrate and provide a familiar context for the techniques developed in this paper. I expect to return in later papers to the new applications which these techniques make possible.
The notation will be the usual one. [**M**]{} will denote the spacetime manifold with metric [**g**]{} of Lorentz signature (–,+,+,+), which is assumed to be smooth. The Riemann and Ricci tensors are denoted by $R_{abcd}$ and $R_{ab}$, respectively, whilst a semi-colon denotes a covariant derivative, a comma a partial derivative and L is the Lie derivative. Round and square brackets will denote the usual symmetrisation and skew-symmetrisation of indices. Latin indices take the values 0,1,2,3 and units are used for which the speed of light and Einstein’s gravitational constant are both unity.
The double congruence framework
===============================
In order to develop our method we are going to use the most general approach to the description of reference frames, namely the theory of congruences. This approach is explicitly generally covariant at each of its steps, permitting the use of abstract representations of tensor quantities.
Timelike Congruences
--------------------
The notion of reference frames is considered in the scope of classical physics, namely under the assumption that the observation and measurement procedures do not disturb the spacetime geometry. This means that the reference bodies are test ones and the only limitation on their motion is due to the relativistic causality principle: since a reference body represents an idealisation of sets of measuring devices and local observers, the worldlines of its points should be timelike. Therefore we shall assume that the motion of a reference body is described by a congruence of timelike worldlines.
It is also obviously possible for an observer to move together with a frame of reference, locally or in a spacetime region (globally). When an object moves together with a reference frame, it is geometrically identified with the latter, so that the worldlines of its mass points form a congruence. Such a congruence presents a complete characterisation of the reference frame, and with any reference frame a conceptual object (thus a test one) of the above type can be associated, which is called the body of reference. Since it models a set of observers and their measuring devices, but no photons, the reference frame congruence should be timelike.
We conclude that in the spacetime region where such a reference frame is realised, we consider a congruence of integral curves of a unit timelike vector field which is naturally interpreted as a field of 4-velocities of local observers, or equivalently of the worldlines of particles forming a reference body. Every local observer represents such a test particle.
It is important to note that the congruence concept is essential because, for the sake of regularity of the mathematical description of the frame, these lines must not mutually intersect, and they must cover completely the spacetime region under consideration, so that at every world point one has to find one and only one line passing through it. The simplest way to describe a reference frame is to identify the congruence of local observers with a congruence of the time coordinate lines, which should be timelike \[32\].
Thus all the tensor quantities, as well as all the differential operators, defined on the spacetime manifold, can be projected onto the physical time direction of a frame of reference and its local three-space with the help of appropriate projectors.
Let us assume that $u$ is a future pointing unit timelike vector field ($u^a u_a =-1$) representing the 4-velocity field of a family of test observers filling the spacetime or some open submanifold of it.
The observer-orthogonal decomposition or \[1+3\] decomposition of the tangent space, and, in turn, of the algebra of spacetime tensor fields, is accomplished by the temporal projection operator $v(u)$ along $u$ and the spatial projection operator $h(u)$ onto the orthogonal local rest space, which may be identified with mixed second rank tensors acting by contraction $${\delta^a}_b={v(u)^a}_{b} + {h(u)^a}_{b}$$ where $${v(u)^a}_{b} = -u^a u_b$$ $${h(u)^a}_{b}={\delta^a}_b + u^a u_b$$
These satisfy the usual orthogonal projection relations $$v(u)^2=v(u), \quad h(u)^2=h(u)$$ and $$v(u)h(u)=h(u)v(u)=0$$
If $S$ is a general tensor, then the “measurement” of $S$ by the observer congruence is the family of spatial tensor fields which result from the spatial projection of all possible contractions of $S$ by any number of factors of $u$. For example, if $S$ is a (1,1)-tensor, then its measurement $${S^a}_b \rightarrow (u^d u_c {S^c}_d, {h(u)^a}_{c} u^d {S^c}_d, {h(u)^d}_{a} u_c {S^c}_d, {h(u)^a}_{c}{h(u)^d}_{b} {S^c}_d)$$ results in a scalar field, a spatial vector field, a spatial 1-form and a spatial (1,1)-tensor field. It is exactly this family of fields which occur in the orthogonal decomposition of $S$ with respect to the observer congruence $$\begin{aligned}
{S^a}_b&=&[{v(u)^a}_{c}+ {h(u)^a}_{c}][{v(u)^d}_{b}+ {h(u)^d}_{b}] {S^c}_d \nonumber \\
&=&[u^d u_c {S^c}_d]u^a u_b + \ldots + {h(u)^a}_{c}{h(u)^d}_{b} {S^c}_d\end{aligned}$$
The reference frame generated by $u^a$ is described in the theory of timelike congruences by the introduction of its kinematical quantities $$\sigma_{ab},\quad \omega_{ab},\quad \theta,\quad \dot{u_a}$$ which are defined by the irreducible decomposition or “measurement” of the covariant derivative $u_{a;b}$ with respect to $u^a$ \[22,23\] $$u_{a;b}=\sigma_{ab} + \omega_{ab} +\frac{1}{3} \theta h_{ab} -\dot{u_a}u_b$$ where $\dot{u_a}$ denotes the acceleration vector of reference frame; $\theta$ the volume rate of expansion scalar; $\sigma_{ab}$ is the rate of shear tensor, with magnitude $$\sigma^2=\frac{1}{2}\sigma_{ab}
\sigma^{ab}$$ and $\omega_{ab}$ is the vorticity tensor. It is convenient to define a vorticity vector $$\omega^a= \frac{1}{2}
\eta^{abcd} \omega_{bc} u_d$$ denoting the angular velocity vector of frame of reference where $\eta^{abcd}$ is the totally skew-symmetric permutation tensor.
We note that an overdot over a kernel letter means derivation with respect to $u_a$, hence $\dot{u_a}=u_{a;b}u^b$.
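As a concrete illustration (our own example, not taken from the references), the sympy sketch below computes $\theta$, $\sigma_{ab}$, $\omega_{ab}$ and $\dot{u}_{a}$ for comoving observers $u^{a}=(1,0,0,0)$ in a spatially flat Robertson-Walker metric, recovering $\theta=3\dot{a}/a$ and vanishing shear, vorticity and acceleration:

```python
# Illustrative [1+3] decomposition of u_{a;b} (assumptions: spatially flat
# Robertson-Walker metric diag(-1, a^2, a^2, a^2), comoving u^a = (1,0,0,0)).
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
a = sp.Function('a')(t)
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

# Christoffel symbols Gamma^c_{ab}
Gamma = [[[sum(ginv[c, d]*(sp.diff(g[d, i], coords[j]) + sp.diff(g[d, j], coords[i])
                           - sp.diff(g[i, j], coords[d]))/2 for d in range(4))
           for j in range(4)] for i in range(4)] for c in range(4)]

u_up = sp.Matrix([1, 0, 0, 0])
u = g*u_up                                         # u_a
Du = sp.Matrix(4, 4, lambda i, j: sp.diff(u[i], coords[j])
               - sum(Gamma[c][i][j]*u[c] for c in range(4)))   # u_{a;b}

h = g + u*u.T                                      # h_ab = g_ab + u_a u_b
udot = Du*u_up                                     # \dot{u}_a = u_{a;b} u^b
theta = sp.simplify(sum(ginv[i, j]*Du[i, j] for i in range(4) for j in range(4)))
omega = sp.simplify((Du - Du.T)/2 + (udot*u.T - u*udot.T)/2)              # vorticity
sigma = sp.simplify((Du + Du.T)/2 + (udot*u.T + u*udot.T)/2 - theta*h/3)  # shear

print(theta)                                       # 3*Derivative(a(t), t)/a(t)
print(all(e == 0 for e in sigma), all(e == 0 for e in omega), all(e == 0 for e in udot))
```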
These concepts were borrowed by the reference frame theory from hydrodynamics, where they play an important role. They are all also important in the theory of null congruences, often used in the classification of gravitational fields by principal null directions (the Petrov types) and also in the generation of exact Einstein-Maxwell solutions \[33\].
Finally, we note that a partial splitting of spacetime based only on a timelike congruence (splitting off time alone) is referred to as the congruence or “time plus space” or \[1+3\] decomposition, whereas a spacelike slicing of spacetime (splitting off space alone) is referred to as the hypersurface splitting or “space plus time” or \[3+1\] decomposition \[34,35,36\]. The two formalisms coincide in the case of the observer-orthogonal decomposition of the tangent space of the spacetime manifold.
If we had chosen the vector field generating a spacetime symmetry to be timelike, then the covariant formalism provided by the theory of timelike congruences would be enough to develop our method. Instead, we have chosen the more complicated case in which we have a spacelike symmetry vector. In this case a more sophisticated covariant formalism is needed and we are naturally led to use the concept of a double congruence.
Double Congruences
------------------
For our purposes we consider a double congruence which involves two vector fields: a timelike vector field $u^a$ representing the 4-velocity of a family of test observers and a spacelike vector field $\xi^a$, which corresponds to a physical observable vector field, for example electric field or magnetic field. We demonstrate the previous point by considering the electromagnetic field strength tensor $F_{ab}=F_{[ab]}$.
Then $$E^a={F^a}_b u^b \quad and \quad H^a=\frac{1}{2} \eta^{abcd} u_b
F_{cd}$$ Hence $$E^a u_a=H^a u_a=0$$
and both the electric and magnetic field as measured by the observers $u^a$ have spacelike character.
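This orthogonality is easy to confirm numerically. The small numpy sketch below is our own illustration (Minkowski metric, a randomly chosen antisymmetric $F_{ab}$ and a boosted observer; the overall normalization of the permutation tensor is immaterial for the check):

```python
# Illustrative check that E^a and H^a measured by u are orthogonal to u
# (assumptions: Minkowski metric, arbitrary antisymmetric F_ab, boosted observer).
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0])
ginv = np.linalg.inv(g)

A = rng.normal(size=(4, 4))
F = A - A.T                                          # F_ab = F_[ab]
u_up = np.array([np.cosh(0.3), np.sinh(0.3), 0.0, 0.0])   # u^a u_a = -1
u_dn = g @ u_up

eps = np.zeros((4, 4, 4, 4))                         # permutation symbol
for perm in permutations(range(4)):
    eps[perm] = np.linalg.det(np.eye(4)[list(perm)])

E_up = ginv @ F @ u_up                               # E^a = F^a_b u^b
H_up = 0.5*np.einsum('abcd,b,cd->a', eps, u_dn, F)   # ~ (1/2) eta^{abcd} u_b F_cd
print(np.isclose(E_up @ u_dn, 0.0), np.isclose(H_up @ u_dn, 0.0))   # True True
```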
We set $\xi^a=\xi \eta^a$, where $\eta^a$ is a unit spacelike vector $\eta^a \eta_a=1$ normal to the 4-velocity vector $u^a$.
To observe the given spacelike curves generated by $\eta$ in the vicinity of the spacetime point $P$, we introduce, at that point, an observer moving with a 4-velocity $u^a$. We further suppose that the spacelike curve $C$ is orthogonal to $u^a$ at the point $P$; then $$u^a \eta_a=0$$ It is important to emphasize that given a spacelike vector there is not a unique, orthogonal, timelike unit vector associated with it. We may add to the vector $u^a$ any vector $t^a$ such that $$w^a=u^a+t^a$$ where $t^a$ satisfies the conditions $$t_a \eta^a=0 \quad and \quad t_at^a+2t_au^a=0$$ This freedom in our choice of an observer is essential for the covariant character of the theory.
In what follows we shall restrict attention to the observer moving with a 4-velocity $u^a$. Furthermore for the purpose of observation this observer erects a screen orthogonal to the spacelike curve $C$ at $P$. That is, the congruence of curves passes perpendicularly through the screen.
Except at the given point $P$, the motions of the observers employed along the curve $C$ have still to be specified. We require that the 4-velocities $u^a$ of the observers used along $C$ are related by: $${p^a}_c {u^c}^{\ast}={p^a}_c \dot{\eta}^c$$
$$(u_a u^a)^{\ast}=0, \quad (u_a \eta^a)^{\ast}=0$$
where an asterisk denotes derivation with respect to $\eta^a$, hence $${u_a}^{\ast}=u_{a;b}\eta^b$$
The above ensures that $u^a$ is always a unit vector orthogonal to $\eta^a$ along $C$. Equations (14) and (15) are equivalent to the single condition $${u^a}^{\ast}= \dot{\eta}^a+(\dot{\eta}^b u_b)u^a-({\eta_b}^{\ast} u^b) \eta^a$$ which we call Greenberg’s transport law for $\eta^a$ \[25\].
The decomposition with respect to the double congruence $(u,\eta)$ or \[1+1+2\] decomposition of the tangent space, and, in turn, of the algebra of spacetime tensor fields, is accomplished by the temporal projection operator $v(u)$ along $u$, the spatial projection operator $s(\eta)$ along $\eta$, and the screen projection operator $p(u,\eta)$ which projects normally to both $(u,\eta)$ onto an orthogonal two dimensional space, called the screen space.
All the above projection operators may be identified with mixed second rank tensors acting by contraction. $${\delta^a}_b={v(u)^a}_b+{s(\eta)^a}_b+{p(u,\eta)^a}_b$$ $${v(u)^a}_b=-u^a u_b$$ $${s(\eta)^a}_b=\eta^a \eta_b$$ $${p(u,\eta)^a}_b={\delta^a}_b+u^a u_b- \eta^a \eta_b={h^a}_b-\eta^a \eta_b$$
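As an illustration (not part of the original formalism), the algebraic properties of these projectors, namely completeness, idempotency and mutual orthogonality, can be checked numerically for a simple choice of $u^a$ and $\eta^a$. The following minimal Python sketch assumes flat Minkowski spacetime with signature $(-,+,+,+)$, a static observer and a unit spacelike vector along the $x$-axis, and is only meant as a sanity check of the identities above.

```python
import numpy as np

# Minkowski metric and its inverse, signature (-,+,+,+)
g = np.diag([-1.0, 1.0, 1.0, 1.0])          # g_{ab}

u_up = np.array([1.0, 0.0, 0.0, 0.0])        # u^a,   u_a u^a = -1
eta_up = np.array([0.0, 1.0, 0.0, 0.0])      # eta^a, eta_a eta^a = +1, u^a eta_a = 0

u_dn = g @ u_up                               # u_a
eta_dn = g @ eta_up                           # eta_a

delta = np.eye(4)                             # delta^a_b
v = -np.outer(u_up, u_dn)                     # v(u)^a_b     = -u^a u_b
s = np.outer(eta_up, eta_dn)                  # s(eta)^a_b   =  eta^a eta_b
p = delta + np.outer(u_up, u_dn) - np.outer(eta_up, eta_dn)   # p(u,eta)^a_b

# completeness: v + s + p = delta
assert np.allclose(v + s + p, delta)
# each projector is idempotent, and the screen projector annihilates the other two
for A in (v, s, p):
    assert np.allclose(A @ A, A)
assert np.allclose(p @ v, 0) and np.allclose(p @ s, 0)
# the screen space is two dimensional (trace of p equals 2)
assert np.isclose(np.trace(p), 2.0)
print("[1+1+2] projector identities verified for this example")
```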
The covariant derivative $\eta_{a;b}$ can be decomposed with respect to the double congruence $(u,\eta)$ as follows $$\eta_{a;b}=A_{ab}+{\eta_a}^* \eta_b+u_a[\eta^tu_{t;b}+(\eta^t \dot u_t)u_b-(\eta^t{u_t}^*)\eta_b]$$ where $$A_{ab}={p^c}_a {p^d}_b \eta_{c;d}$$ We decompose $A_{ab}$ further into its irreducible parts with respect to the orthogonal group: $$A_{ab}={\mathcal T}_{ab}+{\mathcal R}_{ab}+\frac{1}{2}{\mathcal E} p_{ab}(u,\eta)$$
where ${\mathcal T}_{ab}={\mathcal T}_{ba}$, ${{\mathcal T}^a}_a=0$, is the symmetric traceless part of $A_{ab}$, and ${\mathcal R}_{ab}$ is its antisymmetric (rotation) part. We have the relations $${\mathcal T}_{ab}={p^c}_a {p^d}_b \eta_{(c;d)}-\frac{1}{2}{\mathcal E} p_{ab}$$ $${\mathcal R}_{ab}={p^c}_a {p^d}_b \eta_{[c;d]}$$ $${\mathcal E}=p^{cd} \eta_{c;d}={\eta^a}_{;a} + {\dot \eta}^a u_a$$
The tensors ${\mathcal R}_{ab}$, ${\mathcal T}_{ab}$ and the scalar ${\mathcal E}$ are defined as the kinematical quantities of the spacelike congruence and have the following physical significance: ${\mathcal R}_{ab}$ represents the screen rotation, ${\mathcal T}_{ab}$ the screen shear and ${\mathcal E}$ the screen expansion.
It is easy to show that in (16) the $u^a$ term in parentheses can be written in a very useful form as follows: $$-N_b+2\omega_{tb}\eta^t +{p^t}_b {\dot \eta}_t$$ where $$N_b={p^a}_b({\dot \eta}_a-{u_a}^{\ast})$$
On using (22), equation (16) takes the form $$\eta_{a;b}=A_{ab}+{\eta_a}^* \eta_b+{p^c}_b {\dot \eta}_c u_a +
(2\omega_{tb}\eta^t -N_b) u_a$$
The vector $N^a$, which is called Greenberg’s vector, is of fundamental importance in the theory of double congruences. Geometrically, the condition $N^a=0$ means that the congruences $u^a$ and $\eta^a$ are 2-surface forming. Kinematically, it means that the field $\eta^a$ is “frozen in” along the observers $u^a$.
We show in this work that the role of Greenberg’s vector is more general, establishing a connection between the field equations and the symmetries at the kinematical level.
Using (16) we can also prove the following useful identities obeyed by the Lie derivatives of the projection tensors $p_{ab}$ and $h_{ab}$, which we shall use later: $$\frac{1}{\xi}L_{\xi}p_{ab}=2[{\mathcal T}_{ab}+\frac{1}{2}{\mathcal E}p_{ab}]-2u_{(a}N_{b)}$$ $$\frac{1}{\xi}L_{\xi}h_{ab}=2[{\mathcal
T}_{ab}+\frac{1}{2}{\mathcal E}p_{ab}]-2u_{(a} N_{b)}+2(\log {\xi})_{(,a}
\eta_{b)}+2\eta_{(a}^{\ast} \eta_{b)}$$
Geometrisation of Spacetime Symmetries
======================================
The types of symmetries we are going to deal with in what follows are those which arise from the existence of a Lie algebra of vector fields on the spacetime manifold which are invariant vector fields of certain geometrical objects on this manifold.
In Riemannian geometry the building block is the metric tensor $g_{ab}$ in the sense that all the important geometrical objects of this geometry are expressed in terms of $g_{ab}$.
Following the standard literature \[2,16,27\], we define the generic form of a symmetry to be $$L_{\xi}g_{ab}=2 \Psi g_{ab} + H_{ab}$$
where $\Psi$ is a scalar field and $H_{ab}$ is a symmetric traceless tensor field. Both of the fields satisfy a unique set of conditions specific to each particular symmetry and lead to a unique decomposition of $L_{\xi}g_{ab}$. As special examples we mention that the Killing symmetries are characterized by $$\Psi=H_{ab}=0$$ the Homothetic symmetries by $$\Psi=\phi=constant \quad and \quad
H_{ab}=0$$ the Conformal symmetries by $$\Psi=\omega(x^a) \quad and \quad
H_{ab}=0$$ whereas the Affine symmetries by $\Psi$, $H_{ab}$ such that $$\Psi_{;c}=0 \quad and \quad H_{ab;c}=0$$
The generic form of a spacetime symmetry permits us to treat all of them in a unifying manner and is essential to our approach.
The geometrisation of a spacetime symmetry is achieved by expressing it as a set of necessary and sufficient conditions on the geometry of the integral lines of the vector field which generates the symmetry. For our purposes we are going to study the geometrisation of a general spacetime symmetry generated by a spacelike vector field $ \xi^a= \xi \eta^a$ using the framework of the double congruence $(u, \eta)$ developed in section 2.
In the above framework the geometrisation of a spacetime symmetry is established through the following theorem:
[**Theorem**]{}: The vector field $ \xi^a= \xi \eta^a$ is a solution of $$L_{\xi}g_{ab}=2 \Psi g_{ab} + H_{ab}$$ if and only if
$${\mathcal T}_{ab}=\frac{1}{2 \xi}({p^c}_a {p^d}_b -\frac{1}{2}
p^{cd}p_{ab})H_{cd}$$
$${\dot \eta}^a u_a=\frac{1}{\xi}(- \Psi) +\frac{1}{2 \xi} H_{11}$$
$${\eta^a}^{\ast}=-u^a[- \dot {\log \xi}
+\frac{1}{\xi} H_{21}]+p^{ab}[- (\log \xi)_{,b} + \frac{1}{\xi} H_{b2}]$$
$$\xi^{\ast}= \Psi + \frac{1}{2} H_{22}$$
$${\mathcal E}=\frac{2 \Psi}{\xi} + \frac{1}{2 \xi}p^{ab} H_{ab}$$
$$N_a=-2 \omega_{ab} \eta^b +\frac{1}{\xi} {p^b}_a H_{b1}$$
where we use the notational convention $$Z \ldots _a \ldots u^a=Z \ldots _1 \ldots \quad and \quad Z \ldots _a \ldots \eta^a=Z \ldots _2 \ldots$$ for every tensor field $Z$.
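For instance, with this convention the projections of $H_{ab}$ that appear in the theorem read $$H_{11}=H_{ab}u^au^b,\qquad H_{22}=H_{ab}\eta^a\eta^b,\qquad
H_{21}=H_{ab}\eta^a u^b,\qquad H_{b1}=H_{ba}u^a,\qquad H_{b2}=H_{ba}\eta^a .$$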
[**Proof**]{}
The equation $$L_{\xi}g_{ab}=2 \Psi g_{ab} + H_{ab}$$ can be written equivalently in the form $$\xi (\eta_{a;b} + \eta_{b;a}) +\xi_{,a} \eta_b +\xi_{,b} \eta_a=2\Psi g_{ab}+H_{ab}$$ We decompose the above equation with respect to the double congruence $(u,\eta)$. This can be done by contracting (26) with $u^a u^b$, $u^a
\eta^b$, $u^a {p^b}_c$, $\eta^a \eta^b$, $\eta^a {p^b}_c$ and ${p^a}_c
{p^b}_d$ respectively.
We obtain in turn: $$u^a u^b:\quad {\dot \eta}^a u_a=\frac{1}{\xi}(- \Psi) +\frac{1}{2 \xi} H_{11}$$ $$u^a
\eta^b:\quad {\eta^a}^{\ast}u_a =- \dot {\log \xi}
+\frac{1}{\xi} H_{21}$$ $$u^a {p^b}_c:\quad \xi {p^a}_c ({\dot \eta}_a + \eta_{b;a} u^b)={p^a}_c H_{a1}$$ $$\eta^a \eta^b:\quad \xi^{\ast}=\Psi +\frac{1}{2} H_{22}$$ $$\eta^a {p^b}_c:\quad {p^b}_c \eta_b^\ast={p^b}_c[-(\log \xi)_{,b} +\frac{1}{\xi} H_{b2}]$$ $${p^a}_c{p^b}_d:\quad {\mathcal T}_{cd}+\frac{1}{2} {\mathcal E}
p_{cd}=\frac{1}{2 \xi} {p^a}_c {p^b}_d H_{ab}+\frac{1}{\xi}\Psi p_{cd}$$
In equation (36) the first term of the lhs can be written in the form $$\begin{aligned}
\xi {p^a}_c ({\dot \eta}_a+\eta_{b;a}u^b)&=& \xi {p^a}_c({\dot \eta}_a-{u_a}^{\ast}+u_{a;b} \eta^b-u_{b;a} \eta^b) \nonumber \\
&=& \xi N_c+ \xi {p^a}_c (u_{a;b}-u_{b;a}) \eta^b \nonumber \\
&=& \xi N_c + 2\xi {p^a}_c \omega_{ab} \eta ^b\end{aligned}$$
Substituting (40) in (36) and using $${p^b}_a \omega_{bc}
\eta^c=\omega_{ac} \eta^c$$ we obtain $$N_a=-2 \omega_{ab} \eta^b + \frac{1}{\xi} {p^b}_a H_{b1}$$
Equations (35) and (38) give us the components of ${\eta^a}^*$ along $u^a$ and on the screen space of $(u,\eta)$. It is possible to combine them into a single equation $${\eta^a}^{\ast}=-u^a[- \dot {\log \xi}
+\frac{1}{\xi} H_{21}]+p^{ab}[- (\log \xi)_{,b} + \frac{1}{\xi} H_{b2}]$$
Next we consider the trace and the traceless part of (39) and we obtain correspondingly $$Trace:\quad{\mathcal E}=\frac{2 \Psi}{\xi} + \frac{1}{2 \xi}p^{ab} H_{ab}$$ $$Traceless \quad part:\quad {\mathcal T}_{ab}=\frac{1}{2 \xi}({p^c}_a {p^d}_b -\frac{1}{2}
p^{cd}p_{ab})H_{cd}$$
The converse of the theorem is proved as follows:
We consider the tensor $$\xi_{(a;b)}-\Psi g_{ab} -\frac{1}{2}H_{ab}= \xi \eta_{(a;b)}+\xi_{(,a}
\eta_{b)} - \Psi g_{ab} -\frac{1}{2} H_{ab}$$ Contracting this with $u^a u^b$, $u^a
\eta^b$, $u^a {p^b}_c$, $\eta^a \eta^b$, $\eta^a {p^b}_c$ and ${p^a}_c
{p^b}_d$ respectively and applying relations (27)-(32), we prove that this tensor vanishes. The above completes the proof of the theorem.
The above theorem is of major importance to our approach because it expresses the symmetry in a form that can be incorporated into Einstein’s field equations in a direct and intrinsic way.
Besides, the theorem can be used in other ways depending on the information supplied. For example, if information is available on the vector field $\xi^a$ which generates the symmetry, one can investigate what type of symmetry the given vector field can generate. Conversely, if information is available on the scalar and tensor fields $\Psi$, $H_{ab}$, one can use the theorem to obtain information on the vector field $\xi^a$ which generates the symmetry.
The symmetries-incorporated Einstein’s equations
================================================
The Einstein field equations with a non-zero cosmological constant can be written as $$R_{ab}=T_{ab}+(\Lambda-\frac{T}{2})g_{ab}$$
where $\Lambda$ is the cosmological constant and $T$ is the trace of the energy-momentum tensor $T_{ab}$.
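For completeness, this form follows from the standard field equations $R_{ab}-\frac{1}{2}R\,g_{ab}+\Lambda g_{ab}=T_{ab}$, in geometrized units with the coupling constant set to unity (as implicitly assumed in (46)), by taking the trace and substituting back: $$R-2R+4\Lambda=T\;\Rightarrow\;R=4\Lambda-T\;\Rightarrow\;
R_{ab}=T_{ab}+\frac{1}{2}(4\Lambda-T)g_{ab}-\Lambda g_{ab}
=T_{ab}+\Big(\Lambda-\frac{T}{2}\Big)g_{ab}.$$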
We wish to incorporate a general spacetime symmetry into Einstein’s field equations. This is desirable because one can effectively eliminate the symmetry from the field equations and find solutions that will by construction comply with the symmetry.
The effects of the symmetries at the dynamical level are obtained by the Lie derivation of the Einstein field equations $$L_{\xi}R_{ab}=L_{\xi}[T_{ab}+(\Lambda-\frac{T}{2})g_{ab}]$$
Eventually we obtain the field equations as Lie derivatives along the symmetry vector. We assume that the symmetry vector $\xi^a$ is spacelike orthogonal to the 4-velocity $u^a$ of the observers.
First of all we express the energy-momentum tensor $T_{ab}$ in terms of its constituent dynamical variables by irreducibly decomposing it with respect to the double congruence $(u, \eta)$. Next we compute the rhs of (47) in terms of the Lie derivatives of the dynamical variables. The next step is to compute the $L_{\xi}R_{ab}$ directly from the generic form of the symmetry in terms of $\xi_{a;b}$. Employing the $(u, \eta)$ double congruence framework we use the expression of $\xi_{a;b}$ in terms of the kinematical quantities, namely we use the geometrisation of the symmetry applying the theorem proved in section 3. Thus we finally manage to write the field equations in a way that they fully incorporate the character of a spacetime symmetry.
$[1+1+2]$ Irreducible decomposition of $T_{ab}$
-----------------------------------------------
It is well known from the literature that the irreducible decomposition of the energy momentum tensor $T_{ab}$ with respect to the four velocity $u^a$ of observers, defines the dynamical variables of spacetime \[22,23\] $$T_{ab}= \mu u_a u_b+ph_{ab}+2q_{(a}u_{b)}+\pi_{ab}$$
where $\mu$ and $p$ denote the total energy density and the isotropic pressure, $q_a$ is the heat flux vector, and $\pi_{ab}$ is the traceless anisotropic stress tensor. These quantities include contributions by all sources, for example by an electromagnetic field. In particular $q_a$ represents processes such as heat conduction and diffusion as well as the electromagnetic flux.
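For reference, the dynamical variables in (48) are recovered from $T_{ab}$ by the standard projections (with $u_au^a=-1$ and ${h^a}_b={\delta^a}_b+u^au_b$): $$\mu=T_{ab}u^au^b,\qquad p=\frac{1}{3}h^{ab}T_{ab},\qquad
q_a=-{h_a}^{b}T_{bc}u^c,\qquad
\pi_{ab}=\Big({h_{(a}}^{c}{h_{b)}}^{d}-\frac{1}{3}h_{ab}h^{cd}\Big)T_{cd}.$$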
We can further decompose irreducibly the quantities $q_a$ and $\pi_{ab}$ using the double congruence framework since, besides the timelike congruence of observers, a spacelike congruence is also defined on spacetime, representing covariantly a physically observable field as well as the character of the symmetry vector. $$q^a=v \eta^a+Q^a$$ $$\pi_{ab}=\gamma(\eta_a \eta_b-\frac{1}{2}p_{ab})+2P_{(a} \eta_{b)} +D_{ab}$$
where $$v=q^a \eta_a,\quad Q^a={p^a}_b q^b, \quad
\gamma=\pi_{ab}\eta^a \eta^b, \quad P^a={p^a}_b {\pi^b}_c \eta^c, \quad
D_{ab}=({p^c}_a {p^d}_b-\frac{1}{2}p_{ab}p^{cd}) \pi_{cd}$$
The tensors $Q^a$, $P^a$, $D_{ab}$ are on the screen space of $(u,
\eta)$ and $D_{ab}$ is traceless.
The dynamical variables are constrained to obey the conservation equations $${T^{ab}}_{;b}=0$$ which result from the identity ${G^{ab}}_{;b}=0$, where $G_{ab}$ denotes the Einstein tensor. The above equations can be irreducibly decomposed in the double congruence $(u,
\eta)$ framework into the following system of equations, which constitute the conservation laws that the dynamical variables of spacetime have to satisfy. $$\dot \mu+(\mu +p) \theta +{q^a}_{;a}+q^a \dot u^a + \pi^{ab} \sigma_{ab}=0$$ $$\begin{aligned}
(\mu +p) (\dot u_a \eta^a)+(p_{;c}+\dot q_c+{\pi^b}_{c;b}) \eta^c
+\theta (q^a \eta_a) \nonumber \\
+q^b {u_b}^{\ast} -2({p^c}_b q^b) \omega_{ct} \eta^t =0\end{aligned}$$ $$\begin{aligned}
(\mu +p) \dot u^a p_{ac} +({p_{;}}^a +{\dot q}^a+ {\pi^{ba}}_{;b}) p_{ac}
\nonumber \\
+\theta q^a p_{ac} +q^b {u^a}_{;b} p_{ac} =0\end{aligned}$$
$[1+1+2]$ irreducible decomposition of the rhs of Einstein’s field equations
----------------------------------------------------------------------------
### Computation of $L_\xi R_{ab}$ using the field equations
We are going to compute the Lie derivative of the Ricci tensor using the field equations. The field equations for general matter (46) using (48) can be written in the form $$R_{ab}=(\mu +p) u_a u_b +\frac{1}{2} (\mu -p+2 \Lambda) g_{ab}
+2q_{(a} u_{b)} +\pi_{ab}$$
We study the case in which $\xi^a=\xi \eta^a$ corresponding to $\xi^a$ spacelike and orthogonal to the 4-velocity of the observers. We decompose the above equation with respect to the double congruence $(u,\eta)$. This is achieved by contracting the Lie derivative of the Ricci tensor with $u^a u^b$, $u^a
\eta^b$, $u^a {p^b}_c$, $\eta^a \eta^b$, $\eta^a {p^b}_c$ and ${p^a}_c
{p^d}_b$ respectively. If we denote by $\frac{1}{\xi} L_\xi
R_{ab}[uu]$, $\frac{1}{\xi} L_\xi R_{ab}[u \eta]$, $\frac{1}{\xi} L_\xi
R_{ab}[u p]$, $\frac{1}{\xi} L_\xi R_{ab}[\eta \eta]$, $\frac{1}{\xi}
L_\xi R_{ab}[\eta p]$, $\frac{1}{\xi} L_\xi R_{ab}[p p]$ the independent projections correspondingly we obtain: $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[uu]&=&[\frac{1}{2} (\mu +3p)^* +(\mu
+3p-2\Lambda)(\dot u^c \eta_c) \cr
&-& 2(q^c \eta_c) \dot {(\log \xi)} -2(q^c N_c) +2(q^c
\eta_c)({u^d}^{\ast} \eta_d)]u_a u_b\end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[u \eta]&=&- 2[-\frac{1}{2}(\mu -p
+2\Lambda)({u^c}^{\ast} \eta_c)
-{(q^c \eta_c)}^* \pi_{cd}
\eta^c({u^d}^{\ast} - \dot \eta^d) \cr
&+& (q^c \eta_c)(\dot \eta^d u_d)
+ \frac{1}{2}(\mu -p +2\Lambda +2 \pi_{cd} \eta^c \eta^d) \dot {(\log
\xi)} \cr
&-&(q^c \eta_c) {(\log \xi)}^*]u_{(a} \eta_{b)}\end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[u p]&=&- 2[\frac{1}{2}(\mu -p +2\Lambda)N_d + (\mu +3p-2\Lambda)
\omega_{cd} \eta^c \cr
&-&\pi_{ce} {p^c}_d ({u^e}^{\ast} - \dot \eta^e)-q_c A_{cd}-
{q^c}^{\ast} p_{cd} \cr
&+& {p^c}_d q_c (\dot \eta^e u_e)-(q^c \eta_c) {p^e}_d {(\log \xi)}_{;e}+
\eta^e {p^c}_d \pi_{ec} \dot {(\log \xi)}]u_{(a}{p^d}_{b)} \end{aligned}$$ $$\frac{1}{\xi} L_\xi R_{ab}[\eta \eta]= [\frac{1}{2}(\mu -p +2 \pi_{cd} \eta^c \eta^d)^* + (\mu -p
+2\Lambda +2 \pi_{cd} \eta^c \eta^d) (\log \xi)^*] \eta_a \eta_b$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[\eta p]&=& 2 [\frac{1}{2}(\mu -p +2\Lambda)p_{cd} \eta^{c*} +{p^c}_d
(\pi_{ce} \eta^e)^* \cr
&+&{\pi^c}_e \eta^e A_{cd} +2(q_t \eta^t)
\omega_{dc} \eta^c
+ \eta^c {p^e}_d \pi_{ce} (\log \xi)^* \cr
&+&\frac{1}{2}(\mu -p +2
\Lambda +2 \pi_{ce} \eta^c \eta^e){p^f}_d {(\log \xi)}_{,f}]\eta_{(a}
{p^d}_{b)} \end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[p p]&=&
[(\mu -p +2\Lambda) ({\mathcal T}_{cd}+\frac{1}{2} {\mathcal E}
p_{cd}) +\frac{1}{2}{(\mu-p)}^* p_{cd} \cr
&+&{\pi^e}_f(A_{ec}{p_d}^f
+A_{ed}{p_c}^f)
+ {\pi^{ef}}^* p_{ec} p_{fd} \cr
&+&4q_c{p^e}_{(c} \omega_{d)f} \eta^f +2
\eta^e \pi_{ef} {p_c}^{(f}{p^{t)}}_d {(\log \xi)}_{,t}]{p^c}_{(a}{p^d}_{b)} \end{aligned}$$
The above projections of the Lie derivative of the Ricci tensor have been obtained taking into account the irreducible decomposition of the symmetric $(0,2)$ tensor $T_{ab}$ with respect to $u^a$. However, since $\xi^a$ is spacelike, we have further irreducible decompositions of the tensor fields. Concretely, if we use equations (49) and (50) we can express the Lie derivative of the Ricci tensor in terms of the irreducible parts of the dynamical variables. Then the projections take the following irreducible form: $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[uu]&=&[\frac{1}{2} (\mu +3p)^* +(\mu
+3p-2\Lambda)(\dot u^c \eta_c) \cr
&-&2Q^c N_c -2v[\dot {(\log
\xi)}+\eta^{c*}u_c]]u_a u_b \end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[u \eta]&=&- 2[\frac{1}{2}(\mu -p +2\Lambda+2 \gamma)[\dot {(\log
\xi)}+\eta^{c*}u_c] \cr
&+& P^c N_c-v^*-v[ {(\log
\xi)}^{\ast}-\dot \eta^{c}u_c]]u_{(a} \eta_{b)}\end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[u p]&=&
-2[\frac{1}{2}(\mu -p +2\Lambda- \gamma)N_c +(\mu
+3p-2\Lambda) \omega_{dc} \eta^d \cr
&+& P_c[\dot {(\log
\xi)}+\eta^{c*}u_c]
+D_{dc}N^d - v{p^d}_c[{\eta_d}^* +{\log \xi}_{,d}] \cr
&+& Q_c(\dot \eta^d u_d)-Q_d
{A^d}_c -{p^d}_c Q_d^*]{p^c}_{(a}u_{b)} \end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[\eta \eta]=
\frac{1}{2} [(\mu -p +2 \gamma)^* +(\mu -p +2\Lambda+2 \gamma)
{(\log \xi)}^* ] \eta_a \eta_b \end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[p \eta]&=&
2[p_{dc} P^* +\frac{1}{2}(\mu -p +2\Lambda+2 \gamma){p^d}_c
[{\eta_d}^* +{(\log \xi)}_{,d}] \cr
&+& P^d [A_{dc}+{(\log \xi)}^* p_{cd}]-2v \omega_{dc} \eta^d]
{p^c}_{(a} \eta_{b)}\end{aligned}$$ $$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[p p]&=&
\frac{1}{2}[{(\mu -p - \gamma)}^* + (\mu -p
+2\Lambda-\gamma) {\mathcal E}]p_{ab} \cr
&+& {p^c}_a {p^d}_b {D_{cd}}^* +2A_{c(b}{D^c}_{a)}+(\mu -p
+2\Lambda-\gamma) {\mathcal T}_{ab} \cr
&+&4Q_{(a} \omega_{b)c} \eta^c
+2P_{(a} {p^c}_{b)} [{\eta_c}^* + {(\log \xi)}_{,c}]\end{aligned}$$
### Incorporation of symmetries
Using the theorem proved in section 3 we reexpress the Lie derivative of the Ricci tensor in terms of the quantities $\Psi$, $H_{ab}$ which characterise the generic form of a spacetime symmetry. Thus we finally obtain the independent projections of the Lie derivative of the Ricci tensor along the symmetry generating vector in terms of the fields characterising the symmetry, $\Psi$, $H_{ab}$, and the irreducible dynamical variables, in the following form:
$$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[uu]&=&[\frac{1}{2} (\mu +3p)^* +(\mu
+3p-2\Lambda) (\frac{\Psi}{\xi}
-\frac{1}{2 \xi} H_{11}) \cr
&-&\frac{2v}{\xi} H_{21}-2Q^c N^c]u_a u_b\end{aligned}$$
$$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[u \eta]&=&
-2[\frac{1}{2 \xi} (\mu -p+2\Lambda+2\gamma)H_{21} \cr
&+&P_c
N^c-v^*-v\frac{1}{2\xi}(4\Psi +H_{22}-H_{11})]u_{(a} \eta_{b)}\end{aligned}$$
$$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[\eta \eta]&=&
[\frac{1}{2}{[( \mu-p+2\Lambda)+2\gamma]}^* \cr
&+&[(
\mu-p+2\Lambda)+2\gamma]{(\log \xi)}^*] \eta_a \eta_b \end{aligned}$$
$$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[pu]&=&
-2[\frac{1}{2}[(\mu-p+2\Lambda)+2\gamma]N_c+(\mu+3p-2\Lambda)\omega_{tc}\eta^t-{Q_t}^*
{p^t}_c \cr
&-&\frac{v}{\xi} H_{t2}{p^t}_c-Q^t H_{tc}
+ D_{tc}N^t +\frac{1}{\xi} H_{21}P_c-Q^t {\mathcal R}_{tc} \cr
&-&Q_c(\frac{2\Psi}{\xi}
+\frac{1}{4\xi}p^{ef}
H_{ef}-\frac{1}{2\xi}H_{11})]{p^c}_{(a}u_{b)} \end{aligned}$$
$$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[p \eta]&=&
2[{p^d}_c {P_d}^* +P_c(\frac{2\Psi}{\xi}+\frac{1}{4\xi}p^{ef}
H_{ef}+\frac{1}{2\xi}H_{22})+P_d {H^d}_c +P_d {{\mathcal R}^d}_c
\nonumber \\
&+& \frac{1}{2\xi}[( \mu-p+2\Lambda)+2\gamma]{p^d}_c H_{d2}-2v \eta^d
\omega_{dc}]{p^c}_{(a} \eta_{b)} \end{aligned}$$
$$\begin{aligned}
\frac{1}{\xi} L_\xi R_{ab}[pp]&=&
\frac{1}{2}{(\mu-p-\gamma)}^*p_{ab}+[(
\mu-p+2\Lambda)+2\gamma][H_{ab} \cr
&+&\frac{1}{4\xi}p^{cd} H_{cd} p_{ab}
+\frac{\Psi}{\xi} p_{ab}]
+ 2D_{c(a} {{\mathcal R}^c}_{b)} +{\mathcal E} D_{ab} +2D_{c(a}
{H_{b)}}^c \cr
&+&{p^c}_{(a}{p^d}_{b)} {D_{cd}}^*+ \frac{2}{\xi} P_{(a}
{p^c}_{b)} H_{c2}+4Q_{(a} \omega_{b)t} \eta^t\end{aligned}$$
$[1+1+2]$ irreducible decomposition of the lhs of Einstein’s field equations
----------------------------------------------------------------------------
We consider the generic form of a spacetime symmetry $$L_\xi g_{ab}=2\Psi g_{ab}+H_{ab}$$ where $${H^a}_a=0 \quad and \quad H_{ab}=H_{ba}$$
The Lie derivative of the connection coefficients in terms of the Lie derivative of the metric tensor is expressed as follows \[4\]: $$L_\xi {\Gamma^a}_{bc}=\frac{1}{2} g^{ad}[(L_\xi g_{db})_{;c}+(L_\xi g_{dc})_{;b}-
(L_\xi g_{bc})_{;d}]$$
Combining equations (26) and (74) we obtain directly: $$L_\xi {\Gamma^a}_{bc}=\frac{1}{2} g^{ad}[2\Psi_{;c} g_{db} +2\Psi_{;b} g_{dc}-
2\Psi_{;d} g_{bc} +H_{db;c}+H_{dc;b}-H_{bc;d}]$$
Equation (75) is equivalent to the following: $$L_\xi {\Gamma^a}_{bc}=\Psi_{;c}{\delta^a}_b+\Psi_{;b}{\delta^a}_c-{\Psi_{;}}^a g_{bc}+\frac{1}{2}[{H^a}_{b;c}+{H^a}_{c;b}-{H_{bc;}}^a]$$
Furthermore it is easy to obtain $$L_\xi {\Gamma^a}_{ab}=4\Psi_{;b}$$
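This trace can be verified directly from (76): since ${\delta^a}_a=4$ and $H_{ab}$ is symmetric and trace-free, the terms involving $H_{ab}$ cancel, so that $$L_\xi {\Gamma^a}_{ab}
=\Psi_{;b}\,{\delta^a}_a+\Psi_{;a}\,{\delta^a}_b-{\Psi_{;}}^a g_{ab}
+\frac{1}{2}\big[{H^a}_{a;b}+{H^a}_{b;a}-{H_{ab;}}^a\big]
=4\Psi_{;b}+\Psi_{;b}-\Psi_{;b}+0=4\Psi_{;b}.$$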
In order to calculate the Lie derivative of the Ricci tensor we apply the relation $$L_\xi R_{ab}={(L_\xi{\Gamma^s}_{ab})}_{;s}-{(L_\xi{\Gamma^s}_{as})}_{;b}$$
Hence we obtain: $$\begin{aligned}
L_\xi R_{ab}&=&{[\Psi_{;a}{\delta^s}_b+\Psi_{;b}{\delta^s}_a
-{\Psi_{;}}^s
g_{ab}+\frac{1}{2}({H^s}_{a;b}+{H^s}_{b;a}-{H_{ab;}}^s)]}_{;s}-4 \Psi_{;ab}
\nonumber \\
&=& (\Psi_{;ab}+\Psi_{;ba}-{{\Psi_{;}}^s}_s g_{ab}-4\Psi_{;ab})+\frac{1}{2}({{H^s}_a}_{;bs}+{{H^s}_b}_{;as}-{{{H_{ab}}_{;}}^s}_s)\end{aligned}$$
Equivalently we obtain: $$L_\xi R_{ab}=-2\Psi_{;ab}-\Box \Psi g_{ab}+\frac{1}{2}({{H^s}_a}_{;bs}+{{H^s}_b}_{;as}-{{{H_{ab}}_{;}}^s}_s)$$
or $$L_\xi R_{ab}=-2\Psi_{;ab}-\Box \Psi g_{ab}+\frac{1}{2} \Lambda_{ab}$$ where $$\Lambda_{ab}={{H^s}_a}_{;bs}+{{H^s}_b}_{;as}-{{{H_{ab}}_{;}}^s}_s$$
Next we decompose $\Psi_{;ab}$ with respect to the double congruence $(u,\eta)$ framework into its irreducible parts $$\begin{aligned}
\Psi_{;ab}&=&\lambda _{\Psi} u_a u_b-2k_\Psi \eta_{(a} u_{b)}-2s_{\Psi(a} u_{b)}
\nonumber \\
&+& \gamma_\Psi \eta_a \eta_b +2p_{\Psi(a}\eta_{b)}+D_{\Psi
ab}+\frac{1}{2} a_\Psi p_{ab} \end{aligned}$$
where $$\lambda _{\Psi}=\Psi_{;ab} u^a u^b,\quad k_\Psi =\Psi_{;ab}\eta^{(a}
u^{b)},\quad s_{\Psi a}={p_a}^b \Psi_{;bc} u^c$$ $$\gamma_\Psi=\Psi_{;ab}\eta^a \eta^b,\quad p_{\Psi a}= {p_a}^b
\Psi_{;bc} \eta^c$$ $$D_{\Psi ab}=({p^c}_a
{p^d}_b-\frac{1}{2}p_{ab}p^{cd}) \Psi_{;cd},\quad a_\Psi=\Psi_{;ab} p^{ab}$$ Moreover we have $$g^{ab} \Psi_{;ab}=-\lambda _{\Psi}+\gamma_\Psi+ a_\Psi$$
Thus the sum $-2\Psi_{;ab}-\Box \Psi g_{ab}$ is decomposed irreducibly as $$\begin{aligned}
-2\Psi_{;ab}-\Box \Psi g_{ab}&=& (-3\lambda _{\Psi}+\gamma_\Psi+
a_\Psi)u_a u_b +4k_\Psi \eta_{(a} u_{b)} \nonumber \\
&+& 4s_{\Psi (a} u_{b)} +(\lambda _{\Psi}-3\gamma_\Psi-a_\Psi) \eta_a
\eta_b -4p_{\Psi(a}\eta_{b)} \nonumber \\
&-&2D_{\Psi ab} +(\lambda _{\Psi}-\gamma_\Psi-2a_\Psi)p_{ab} \end{aligned}$$
Similarly the tensor $\Lambda_{ab}$ defined by equation (82) is decomposed irreducibly as follows: $$\begin{aligned}
\frac{1}{2} \Lambda_{ab}&=&\lambda_\Lambda u_a u_b-2k_\Lambda \eta_{(a} u_{b)}-2
s_{\Lambda (a} u_{b)} \nonumber \\
&+& \gamma_\Lambda \eta_a \eta_b +2p_{\Lambda(a} \eta_{b)} +D_{\Lambda
ab} +\frac{1}{2} a_\Lambda p_{ab}\end{aligned}$$
Thus the lhs of Einstein’s equations is decomposed irreducibly as follows: $$\frac{1}{\xi} L_\xi R_{ab}[uu]=\frac{1}{\xi} (\lambda_\Lambda -3\lambda
_{\Psi}+\gamma_\Psi+a_\Psi)u_a u_b$$ $$\frac{1}{\xi} L_\xi R_{ab}[u \eta]=\frac{1}{\xi}
(-2(k_\Lambda-2k_\Psi))
\eta_{(a}u_{b)}$$ $$\frac{1}{\xi} L_\xi R_{ab}[up]=\frac{1}{\xi}
[-2(s_{\Lambda c}-2s_{\Psi c})] {p^c}_{(a}
u_{b)}$$ $$\frac{1}{\xi} L_\xi R_{ab}[\eta \eta]=\frac{1}{\xi}
[(\gamma_\Lambda-3\gamma_\Psi)+(\lambda_\Psi-a_\Psi)] \eta_a
\eta_b$$ $$\frac{1}{\xi} L_\xi R_{ab}[\eta p]=\frac{1}{\xi}
[2(p_{\Lambda c}-2p_{\Psi c})]{p^c}_{(a} \eta_{b)}$$ $$\frac{1}{\xi} L_\xi R_{ab}[p p]= \frac{1}{\xi}
[D_{\Lambda ab}
-2D_{\Psi ab}+ (\lambda_\Psi-\gamma_\Psi-2a_{\Psi}+\frac{1}{2}
a_\Lambda) p_{ab}]$$
The symmetries-incorporated field equations for the irreducible dynamical variables
-----------------------------------------------------------------------------------
Our purpose is to construct the field equations for general matter that the dynamical variables $\mu,p,\gamma,v,Q^a,P^a,D_{ab}$ of spacetime satisfy, in such a way that the information of any particular spacetime symmetry imposed is directly incorporated in the form of the equations. As a first step we equate the results we have obtained previously for the rhs and the lhs of Einstein’s equations: $$\begin{aligned}
&[\frac{1}{2} (\mu +3p)^* +(\mu
+3p-2\Lambda)(\frac{\Psi}{\xi}-\frac{1}{2 \xi} H_{11})
-\frac{2v}{\xi} H_{21}-2Q^c N^c]= \nonumber \\
&\frac{1}{\xi} (\lambda_\Lambda -3\lambda
_{\Psi}+\gamma_\Psi+a_\Psi)\end{aligned}$$
$$\begin{aligned}
&[\frac{1}{2}{[(
\mu-p+2\Lambda)+2\gamma]}^*+[(\mu-p+2\Lambda)+2\gamma](\frac{\Psi}{\xi}+\frac{1}{2\xi}H_{22})]=
\nonumber \\
&\frac{1}{\xi} ((\gamma_\Lambda-3\gamma_\Psi)+(\lambda_\Psi-a_\Psi)) \end{aligned}$$
$$\begin{aligned}
&[\frac{1}{2 \xi} (\mu -p+2\Lambda+2\gamma)H_{21} +P_c
N^c-v^*-v\frac{1}{2\xi}(4\Psi +H_{22}-H_{11})]= \nonumber \\
&\frac{1}{\xi} (k_\Lambda-2k_\Psi)\end{aligned}$$
$$\begin{aligned}
&[\frac{1}{2}[(\mu-p+2\Lambda)-\gamma]N_c+(\mu+3p-2\Lambda)\omega_{tc}\eta^t-{Q_t}^*
{p^t}_c -\frac{v}{\xi} H_{t2}{p^t}_c-Q^t H_{tc} \nonumber \\
&+ D_{tc}N^t +\frac{1}{\xi} H_{21}P_c-Q^t {\mathcal R}_{tc}
-Q_c(\frac{2\Psi}{\xi}+\frac{1}{4\xi}p^{ef}
H_{ef}-\frac{1}{2\xi}H_{11})] \nonumber \\
&=\frac{1}{\xi} (s_{\Lambda c}-2s_{\Psi c}) \end{aligned}$$
$$\begin{aligned}
&[{p^d}_c {P_d}^* +P_c(\frac{2\Psi}{\xi}+\frac{1}{4\xi}p^{ef}
H_{ef}+\frac{1}{2\xi}H_{22})+P_d {H^d}_c +P_d {{\mathcal R}^d}_c
\nonumber \\
&+ \frac{1}{2\xi}[( \mu-p+2\Lambda)+2\gamma]{p^d}_c H_{d2}-2v \eta^d
\omega_{dc}] \nonumber \\
&=\frac{1}{\xi}(p_{\Lambda c}-2p_{\Psi c}) \end{aligned}$$
$$\begin{aligned}
&{(\mu-p-\gamma)}^*+(\mu-p+2\Lambda-\gamma)(\frac{1}{2\xi}
p^{ef}H_{ef}+\frac{2\Psi}{\xi}) \nonumber \\
&+2D^{cd}H_{cd} +\frac{2}{\xi}P^c H_{c2}+4Q^c \omega_{ct} \eta^t
\nonumber \\
&=2(\lambda_\Psi-\gamma_\Psi-2a_{\Psi}+\frac{1}{2}
a_\Lambda) \end{aligned}$$
$$\begin{aligned}
&{p^c}_a{p^d}_b{D_{cd}}^* +{\mathcal E} D_{ab}
+(\mu-p+2\Lambda-\gamma)H_{ab}+2D_{c(a} {{\mathcal
R}^c}_{b)} \nonumber \\
&+2({p^c}_{(a}{p^d}_{b)}-\frac{1}{2} p_{ab}p^{cd})(D_{ec}
{H^e}_d+2Q_{(c} \omega_{d)t} \eta^t) \nonumber \\
&=D_{\Lambda ab}-2D_{\Psi ab}\end{aligned}$$
The above system of equations can give us the irreducible equations that each of the dynamical variables of spacetime obeys, if we manage to disentangle it through appropriate algebraic manipulations.
Equations (92), (93) and (97) can be written equivalently, correspondingly, as follows: $$\begin{aligned}
&\mu^*+(\mu+\Lambda)(\frac{2\Psi}{\xi}-\frac{1}{\xi}H_{11})+3[p^*+(p-\Lambda)(\frac{2\Psi}{\xi}-\frac{1}{\xi}H_{11})]
\nonumber \\
&=
\frac{2}{\xi}(\lambda_\Lambda-3\lambda_\Psi+\gamma_\Psi+a_\Psi)+\frac{4v}{\xi}H_{21}+4Q_c
N^c\end{aligned}$$
$$\begin{aligned}
&\mu^*+(\mu+\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{\xi}H_{22})-[p^*+(p-\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{\xi}H_{22})]
\nonumber \\
&
+2[\gamma^*+\gamma(\frac{2\Psi}{\xi}+\frac{1}{\xi}H_{22})]=\frac{2}{\xi}[\lambda_\Psi
-a_\Psi+(\gamma_\Lambda-3\gamma_\Psi)]\end{aligned}$$
$$\begin{aligned}
&\mu^*+(\mu+\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{2\xi}p^{ef}H_{ef})-[p^*+(p-\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{2\xi}p^{ef}H_{ef})]
\nonumber \\
&-[\gamma^*+\gamma(\frac{2\Psi}{\xi}+\frac{1}{2\xi}p^{ef}H_{ef})]=-4Q^c
\omega_{ct} \eta^t \nonumber \\
&+\frac{2}{\xi}(\lambda_\Psi-\gamma_\Psi-2a_\Psi+\frac{1}{2}a_\Lambda)-2D^{cd}H_{cd}-\frac{2}{\xi}P^c
H_{c2}\end{aligned}$$
Moreover we note that $$g^{ab}H_{ab}=0 \quad or \quad p^{ab}H_{ab}=H_{11}-H_{22}$$
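This follows at once from the completeness relation of the projectors, $g^{ab}=-u^au^b+\eta^a\eta^b+p^{ab}$, together with the index convention introduced with the theorem: $$0=g^{ab}H_{ab}=-H_{ab}u^au^b+H_{ab}\eta^a\eta^b+p^{ab}H_{ab}=-H_{11}+H_{22}+p^{ab}H_{ab}.$$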
Hence equation (101) can be written in the form $$\begin{aligned}
&\mu^*+(\mu+\Lambda)[\frac{2\Psi}{\xi}+\frac{1}{2\xi}(H_{11}-H_{22})]-[p^*+(p-\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{2\xi}(H_{11}-H_{22}))]
\nonumber \\
&-[\gamma^*+\gamma(\frac{2\Psi}{\xi}+\frac{1}{2\xi}(H_{11}-H_{22}))]=-4Q^c
\omega_{ct} \eta^t \nonumber \\
&+\frac{2}{\xi}(\lambda_\Psi-\gamma_\Psi-2a_\Psi+\frac{1}{2}a_\Lambda)-2D^{cd}H_{cd}-\frac{2}{\xi}P^c
H_{c2}\end{aligned}$$
Next, if (102) is multiplied by a factor of two and added to (100), we obtain $$\begin{aligned}
&\mu^*+(\mu+\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{3\xi}H_{11})-(p^*+(p-\Lambda)(\frac{2\Psi}{\xi}+\frac{1}{3\xi}H_{11}))
\nonumber \\
&+\gamma(\frac{1}{\xi}H_{22}-\frac{1}{3\xi}H_{11})=\frac{1}{3\xi}(6\lambda_\Psi-10a_\Psi+2\gamma_\Lambda-10\gamma_\Psi+2a_\Lambda)
\nonumber \\
& -\frac{8}{3}Q^c \omega_{ct} \eta^t-\frac{4}{3}
D^{cd}H_{cd}-\frac{4}{3\xi}P^c H_{c2}\end{aligned}$$
Furthermore we multiply (103) by a factor of three and add it to (99) $$\begin{aligned}
&\mu^*+(\mu+\Lambda)\frac{2\Psi}{\xi}-(p-\Lambda)
\frac{1}{\xi}H_{11}+\frac{1}{4\xi} \gamma(3H_{22}-H_{11}) \nonumber \\
&=\frac{1}{2\xi}[\lambda_\Lambda+\gamma_\Lambda+a_\Lambda-4\gamma_\Psi-4a_\Psi]+\frac{v}{\xi}H_{21}+Q_c
N^c \nonumber \\
&-2Q^c \omega_{ct} \eta^t -D^{cd} H_{cd}-\frac{1}{\xi}P^cH_{c2}\end{aligned}$$
Next we consider the difference of (99) and (103) $$\begin{aligned}
&p^*+(p-\Lambda)(\frac{2\Psi}{\xi}-\frac{2}{3\xi} H_{11})-\frac{1}{4\xi}
\gamma (H_{22}-\frac{1}{3}H_{11})-\frac{1}{3\xi}(\mu+\Lambda)H_{11}
\nonumber \\
&=\frac{1}{6\xi}(3\lambda_\Lambda-\gamma_\Lambda-a_\Lambda+8a_\Psi+8\gamma_\Psi-12
\lambda_\Psi)+Q_c N^c \nonumber \\
&+\frac{2}{3} Q^c \omega_{ct} \eta^t +\frac{1}{3}
D^{cd}H_{cd}+\frac{1}{3\xi}P^c H_{c2} +\frac{v}{\xi}H_{21}\end{aligned}$$
Finally we subtract (102) from (100) and we obtain: $$\begin{aligned}
&[\gamma^*+\gamma(\frac{2\Psi}{\xi}+\frac{1}{6\xi}H_{11}+\frac{1}{2\xi}H_{22})]+(\mu+\Lambda)
\frac{1}{2\xi}(-\frac{1}{\xi}H_{11}+H_{22}) \nonumber \\
&+(p-\Lambda)\frac{1}{2\xi}(\frac{1}{\xi}H_{11}-H_{22})=\frac{2}{3\xi}(\gamma_\Lambda-\frac{1}{2}a_\Lambda
-2\gamma_\Psi+a_\Psi) \nonumber \\
&+\frac{2}{3} D^{cd} H_{cd} +\frac{2}{3} \frac{1}{\xi} P^c H_{c2} +\frac{4}{\xi}Q^c \omega_{ct} \eta^t\end{aligned}$$
Thus the final irreducible form of the symmetries-incorporated Einstein field equations for the dynamical variables, resulting from the decomposition of the energy-momentum tensor for general matter, is described by the following system of equations:
[**Symmetries-incorporated field equations for $\mu, p, \gamma$**]{}
$$\begin{aligned}
&\mu^*+(\mu+\Lambda)\frac{2\Psi}{\xi}-(p-\Lambda)
\frac{1}{\xi}H_{11}+\frac{1}{4\xi} \gamma(3H_{22}-H_{11}) \nonumber \\
&=\frac{1}{2\xi}[\lambda_\Lambda+\gamma_\Lambda+a_\Lambda-4\gamma_\Psi-4a_\Psi]+\frac{v}{\xi}H_{21}+Q_c
N^c \nonumber \\
&-2Q^c \omega_{ct} \eta^t -D^{cd} H_{cd}-\frac{1}{\xi}P^cH_{c2}\end{aligned}$$
$$\begin{aligned}
&p^*+(p-\Lambda)(\frac{2\Psi}{\xi}-\frac{2}{3\xi} H_{11})-\frac{1}{4\xi}
\gamma (H_{22}-\frac{1}{3}H_{11})-\frac{1}{3\xi}(\mu+\Lambda)H_{11}
\nonumber \\
&=\frac{1}{6\xi}(3\lambda_\Lambda-\gamma_\Lambda-a_\Lambda+8a_\Psi+8\gamma_\Psi-12
\lambda_\Psi)+Q_c N^c \nonumber \\
&+\frac{2}{3} Q^c \omega_{ct} \eta^t +\frac{1}{3}
D^{cd}H_{cd}+\frac{1}{3\xi}P^c H_{c2} +\frac{v}{\xi}H_{21}\end{aligned}$$
$$\begin{aligned}
&[\gamma^*+\gamma(\frac{2\Psi}{\xi}+\frac{1}{6\xi}H_{11}+\frac{1}{2\xi}H_{22})]+(\mu+\Lambda)
\frac{1}{2\xi}(-\frac{1}{\xi}H_{11}+H_{22}) \nonumber \\
&+(p-\Lambda)\frac{1}{2\xi}(\frac{1}{\xi}H_{11}-H_{22})=\frac{2}{3\xi}(\gamma_\Lambda-\frac{1}{2}a_\Lambda
-2\gamma_\Psi+a_\Psi) \nonumber \\
&+\frac{2}{3} \frac{1}{\xi} P^c H_{c2} +\frac{4}{\xi}Q^c \omega_{ct}
\eta^t +\frac{2}{3} D^{cd} H_{cd}\end{aligned}$$
[**Symmetries-incorporated field equations for $v, Q^a, P^a, D_{ab}$**]{}
$$\begin{aligned}
&v^*+v\frac{1}{2\xi}(4\Psi +H_{22}-H_{11})
\nonumber \\
&=[-\frac{1}{2 \xi} (\mu -p+2\Lambda+2\gamma)]H_{21} +P_c
N^c
-\frac{1}{\xi} (k_\Lambda-2k_\Psi)\end{aligned}$$
$$\begin{aligned}
&{Q_t}^*
{p^t}_c +Q^t H_{tc}
+Q^t {\mathcal R}_{tc}
+Q_c[\frac{2\Psi}{\xi}-\frac{1}{4\xi}(H_{11}+H_{22})] \nonumber \\
&=-\frac{1}{\xi} (s_{\Lambda c}-2s_{\Psi c})-
[\frac{1}{2}[(\mu-p+2\Lambda)-\gamma]N_c+(\mu+3p-2\Lambda)\omega_{tc}\eta^t
\nonumber \\
&-\frac{v}{\xi} H_{t2}{p^t}_c+ D_{tc}N^t +\frac{1}{\xi} H_{21}P_c]\end{aligned}$$
$$\begin{aligned}
&{p^d}_c {P_d}^* +P_c[\frac{2\Psi}{\xi}+\frac{1}{4\xi}(H_{11}+H_{22})]+P_d {H^d}_c +P_d {{\mathcal R}^d}_c
\nonumber \\
&=- \frac{1}{2\xi}[( \mu-p+2\Lambda)+2\gamma]{p^d}_c H_{d2}+2v \eta^d
\omega_{dc} \nonumber \\
&+\frac{1}{\xi}(p_{\Lambda c}-2p_{\Psi c}) \end{aligned}$$
$$\begin{aligned}
&{p^c}_a{p^d}_b{D_{cd}}^* +{\mathcal E} D_{ab}
+2D_{c(a} {{\mathcal
R}^c}_{b)} \nonumber \\
&+2({p^c}_{(a}{p^d}_{b)}-\frac{1}{2} p_{ab}p^{cd})D_{ec}
{H^e}_d \nonumber \\
&=D_{\Lambda ab}-2D_{\Psi ab}-(\mu-p+2\Lambda-\gamma)H_{ab}-4Q_{(c} \omega_{d)t} \eta^t({p^c}_{(a}{p^d}_{b)}-\frac{1}{2} p_{ab}p^{cd})\end{aligned}$$
The system of equations (107)-(113) provides the desired irreducible decomposition of Einstein’s equations for general matter when an arbitrary symmetry has been introduced in spacetime, such that the information regarding the symmetry is explicitly contained in the field equations.
This system of equations provides the key formal results of the paper and generalises all the previous attempts to attack the problem in an elegant and unifying manner. Moreover, it greatly enlarges the scope of previous works since it can be applied to all types of symmetries as well as to all types of matter. Thus it constitutes the unified geometrical framework that we have to take into account when discussing Einstein’s equations in spacetimes with general symmetries. Aside from elegance and covariance properties, it provides the direct link among separate approaches discussing particular symmetries and matter fields.
It is evident that the exact physical significance of the various parameters involved in the equations is provided by the specific type of symmetry and matter that one considers when applying the formalism. It is also expected that the system of the symmetries-incorporated Einstein equations will reduce to familiar forms when applied to well-studied specific matter fields as well as symmetries. Even in this case there are some new insights to be gained from seeing these old calculations in the new general setting.
In the following section we demonstrate how these formal results are used in the case of a spacelike Conformal Killing vector symmetry, discussing the perfect fluid case. We expect to explore the applications of the symmetries-incorporated Einstein equations to Affine and Curvature collineations for various types of matter in later papers.
Application : The case of a spacelike conformal Killing vector symmetry
=======================================================================
Kinematical Level
-----------------
A spacelike conformal Killing vector $\xi^a=\xi \eta^a (\eta^a
u_a=0)$ satisfies the equation $$L_\xi g_{ab}=2\Psi g_{ab}$$ We note that in this case $H_{ab}=0$.
From the theorem proved in section 3 the geometrisation of this particular symmetry is described by the following set of conditions: $${\mathcal T}_{ab}=0$$ $${\eta_a}^*+{(\log \xi)}_{,a}=\frac{1}{2} {\mathcal E} \eta_a$$ $$\dot{\eta_a} u^a=-\frac{1}{2} {\mathcal E}$$ $${p^b}_a(\dot {\eta_b}+u^t \eta_{t;b})=0$$
The conformal factor $\Psi$ reads $$\Psi=\frac{1}{2} \xi {\mathcal E}={\xi}^*$$
Due to the condition $u^a \eta_a=0$ equation (118) can be written in the form $$N_a=-2 \omega_{ab} \eta^b$$
Moreover the decomposition of the covariant derivative of the spacelike vector field $\eta_a$ is written as $$\eta_{a;b}=A_{ab}+{\eta_a}^* \eta_b- \dot{\eta_a} u_b +{p^c}_b
\dot{\eta_c} u_a$$ from which we immediately obtain $$L_\xi \eta^a=-\Psi \eta^a$$
Moreover using (120) we easily find $$L_\xi u^a=-\Psi u^a-\xi N^a$$
From (24) and (25) we can also compute $$L_\xi p_{ab}=2\Psi p_{ab}-2\xi u_{(a} N_{b)}$$
$$L_\xi h_{ab}=2\Psi h_{ab}-2\xi u_{(a} N_{b)}$$
If $N^a=0$ then $(u, \eta)$ span a two dimensional surface in spacetime. The screen space is the orthogonal complement to this surface. The screen space admits $\xi^a$ as a spacelike conformal Killing vector with conformal factor $\Psi$ and its metric is defined by $p_{ab}(u,\eta)$. The rest space of the observers is the three dimensional space normal to $u^a$. The rest space admits $\xi^a$ as a spacelike conformal Killing vector with conformal factor $\Psi$ and its metric is defined by $h_{ab}(u)$.
Finally, for a conformal Killing vector we have from (80) the following further equations $$L_\xi R_{ab}=-2\Psi_{;ab}-g_{ab} \Box \Psi$$ $$L_\xi R=-2\Psi R-6 \Box \Psi$$
Dynamical Level
---------------
We use the set of equations (107)-(113) in the special case we study and the results from the kinematical level. It is straightforward to obtain the following field equations:
$$\mu^*+(\mu+\Lambda) {\mathcal E}=-\frac{2}{\xi}(a_\Psi+\gamma_\Psi)+2Q_aN^a$$
$$p^*+(p-\Lambda){\mathcal E}=\frac{2}{3\xi}(2a_\Psi+2\gamma_\Psi-3\lambda_\Psi)+\frac{2}{3}Q_aN^a$$
$$\gamma^*+\gamma{\mathcal E}=-\frac{4}{3\xi}(\gamma_\Psi-\frac{1}{2} a_\Psi)-\frac{2}{3}Q_aN^a$$
$$v^*+v{\mathcal E}=\frac{2}{\xi}k_\Psi +P_a N^a$$
$${p^d}_c {Q_d}^* +{\mathcal E}Q_c=\frac{2}{\xi}s_{\Psi
c}+(\mu+p-\frac{1}{2}\gamma)N_c+D_{dc}N^d-Q_d {{\mathcal R}^d}_c$$
$${p^d}_c {P_d}^* +{\mathcal E}P_c=-\frac{2}{\xi}p_{\Psi c}+v N_c
-P_d {{\mathcal R}^d}_c$$
$${p^c}_a {p^d}_b{D_{cd}}^*+{\mathcal E}D_{ab}=-\frac{2}{\xi}D_{\Psi
ab}-2{\mathcal R}_{c(a} {D^c}_{b)}+2({p^c}_a
{p^d}_b-\frac{1}{2}p^{cd} p_{ab}) Q_{(c} N_{d)}$$
The dynamics is fully specified if, in addition to the field equations, we consider the conservation equations (52)-(54), which, when expressed in terms of their irreducible parts, read
$$\begin{aligned}
& \dot \mu+(\mu+p-\frac{\gamma}{2}) \theta +v^*+2v {\mathcal E}
+p^{ab} Q_{a;b} +Q^a {(\log \xi)}_{;a} \nonumber \\
&+2Q^a \dot{u_a}+\frac{3}{2}\gamma \dot{(\log \xi)} +2 \sigma_{ab}P^a \eta^b+\sigma_{ab}D^{ab}=0\end{aligned}$$
$$\begin{aligned}
&(p+\gamma)^*+\frac{1}{2}(\mu+p+4\gamma){\mathcal E}+ \dot v \nonumber
\\
&+v[\theta+ \dot{(\log \xi)} ]+p^{ab}P_{a;b}+P_a(\dot {u^a}- {\eta^a}^*)=0\end{aligned}$$
$$\begin{aligned}
&{p^c}_a[(\mu+p-\gamma/2) \dot{u_c} +p_{;c}+ \dot{Q_c}+\frac{4}{3}
\theta Q_c+\sigma_{cd}Q^d \nonumber \\
&+2v\sigma_{cd} \eta^d +{P_c}^* +{\mathcal R}_{cd} P^d +{\mathcal E}
P_c +{{D_c}^d}_{;d}-\frac{3}{2}\gamma{(\log \xi)}_{;c}-\frac{1}{2}\gamma_{;c}]=0\end{aligned}$$
The system of equations (128)-(137) completely characterises the dynamics induced by the Einstein equations when a spacelike Conformal Killing vector symmetry exists in spacetime. Thus, if we specify a concrete type of matter field and introduce an appropriate system of coordinates, it is possible to obtain solutions which by construction comply with this symmetry.
Since all the dynamical information is contained in (128)-(137), all of the propositions or no-go theorems which can arise in the above setting, and which have gradually appeared in the literature using various methods, are contained implicitly in the system we have constructed. In order to clarify this point, in what follows we specialise our discussion to the case of a perfect fluid spacetime.
The Perfect fluid case
----------------------
In the case of a perfect fluid the following relations hold: $$\Psi_{;ab}=\lambda_\Psi u_a u_b +\gamma_\Psi h_{ab}+\xi (\mu+p)N_{(a} u_{b)}$$ $$\Box \Psi=3\gamma_\Psi-\lambda_\Psi$$
$$\begin{aligned}
&L_\xi R_{ab}=3(\gamma_\Psi-\lambda_\Psi) u_a u_b \nonumber \\
&+(-5\gamma_\Psi+\lambda_\Psi)h_{ab}-2\xi(\mu+p)N_{(a}u_{b)}\end{aligned}$$
$$L_\xi R=-2\Psi R-6(3\gamma_\Psi-\lambda_\Psi)$$
The general field equations in the case of a perfect fluid when a conformal Killing vector symmetry is introduced take the form: $$\mu^*+(\mu+\Lambda) {\mathcal E}=-\frac{6}{\xi} \gamma_\Psi$$ $$p^* +(p-\Lambda) {\mathcal E}=\frac{2}{\xi}(2\gamma_\Psi-\lambda_\Psi)$$ $$(\mu+p)N_a=-\frac{2}{\xi}s_{\Psi a}$$ $$0=k_\Psi=p_{\Psi a}=D_{\Psi ab}$$ $$a_\Psi=2\gamma_\Psi$$
The conservation equations become $$p^*+\frac{1}{2}(\mu+p) {\mathcal E}=0$$ $$(\mu-p+2\Lambda) \Psi=4\gamma_\Psi+2\lambda_\Psi$$ $$(\mu+p){p^b}_a \dot{u_b}+{p^c}_a p_{,c}=0$$
Using the above sets of equations we can quite easily recover all the theorems that have been proved in the literature, in a large series of publications, using other methods. Since the large majority of these theorems are well known we are not going to state all of them, but we will restrict ourselves to mentioning two examples of propositions of this kind, so as to demonstrate the increased flexibility of our formalism in familiar situations and, at the same time, to gain new insights by their embedding in a unified geometrical framework.
[*Proposition:* ]{}Let $\xi^a$ be a proper homothetic spacelike vector orthogonal to the 4-velocity of the observers of a perfect fluid spacetime ($\Psi_{;a}=0$, $\Psi\neq 0$). Then $p-\Lambda=\mu+\Lambda$, namely matter is stiff, and the current $j^a:={\xi^{[a;b]}}_{;b}$ vanishes.
[*Proof:* ]{} If $\xi^a$ is a conformal Killing vector we can easily obtain that the vector field $\Psi_{;a}$ obeys the following identity: $$\begin{aligned}
\Psi_{;a}&=&-\frac{1}{3}(\mu+p)(u_t \xi^t)u_a +
\frac{1}{6}(p-\mu-2\Lambda) \xi^a \cr
&-&\frac{2}{3}q_{(a}u_{b)} \xi^b- \frac{1}{3} \pi_{ab} \xi^b
-\frac{1}{3} j^a\end{aligned}$$
The above identity results if we apply the Ricci identity and use the expression of the Ricci tensor in terms of the dynamical variables from the field equations.
Since we refer to a perfect fluid spacetime the above equation reads $$(2\gamma_\Psi-\lambda_\Psi)\xi_a=\Psi(3\Psi_{;a}+j^a)$$
Next we use (143) and we immediately see that the condition $\Psi_{;a}=0$ implies that $p-\Lambda=\mu+\Lambda$, that is, matter is stiff, and that the current $j^a:={\xi^{[a;b]}}_{;b}$ vanishes.
[*Proposition:* ]{}Let $\xi^a$ be a spacelike conformal Killing vector orthogonal to the 4-velocity of observers of a perfect fluid spacetime. Then if $\Lambda=0$, $\mu+p=0$ and the dominant energy condition holds, spacetime is a de Sitter spacetime with constant positive curvature and $\xi^a$ reduces to a Killing vector.
[*Proof:* ]{}From the conservation equations we find that $\mu=const$, which from our assumptions is positive since $\mu+p=0$. Field equations (142) and (143) imply $\gamma_\Psi=-\lambda_\Psi$ and field equation (144) that $s_{\Psi a}=0$. From these results we have: $$\Psi_{;ab}=\gamma_{\Psi}g_{ab}$$ $$L_\xi R_{ab}=-\gamma_\Psi g_{ab}$$ $$L_\xi R=-2 \Psi R -24\gamma_\Psi$$ From Einstein’s field equations we find in this case: $$R_{ab}=\mu g_{ab}$$ $$R=4\mu=const$$ Thus spacetime is a de Sitter spacetime. Moreover equations (155) and (156) give $$\mu \Psi=-3\gamma_\Psi$$ Applying Ricci’s identity to the vector $\Psi_{;a}$ we find $R^{ab}\Psi_{;a}=0$, which, upon replacing $R_{ab}$ from (155), gives $\mu \Psi_{;a}=0$. But we have $\mu>0$, hence the above gives $\Psi_{;a}=0$ and hence $\gamma_\Psi=0$. Thus from (157) we conclude that $\Psi=0$, hence $\xi^a$ reduces to a Killing vector.
Summary and Discussion
======================
The basic tool to simplify Einstein’s field equations, in search for exact solutions, has been the introduction of spacetime symmetries. The latter form Lie algebras of vector fields on the spacetime manifold which are invariant vector fields of certain geometrical objects on this manifold.
Related to the initial motivation, it would be very desirable to have in hand a general framework permitting the incorporation of a symmetry of every possible kind in Einstein’s equations for general matter. The literature lacks a unified approach that applies to all the symmetries and also to general matter. In this work we have presented a general method to handle the field equations for general matter when a spacetime symmetry is introduced.
Our method has been expressed in a covariant formalism, using the framework of a double congruence $(u,\eta)$, permitting the introduction of arbitrary reference frames and eventually systems of coordinates. The basic notion on which it is based is that of the geometrisation of a general symmetry, namely the description of it as an equivalent set of conditions on the kinematical quantities characterising the congruence of the integral lines of the vector field that generates the symmetry.
Using the above notion we finally manage to write the field equations which the dynamical variables obey for any type of matter, in a way that they inherit the symmetry of the generating vector.
The method has been applied to the case of a spacelike Conformal Killing vector symmetry, completely recovering the existing literature. Further applications have been considered for a perfect fluid spacetime, recovering results obtained by other methods.
Thus we finally generalise and extend the results of the current literature in the form of the symmetries-incorporated Einstein’s field equations for general matter.
The usefulness of such a construction lies in the following facts:
Firstly, the field equations for general matter acquire an irreducible form with respect to the covariant congruence framework and completely specify the evolution of the spacetime dynamical variables, inheriting at the same time the symmetry of the generating vector.
Secondly, the system of the symmetries-incorporated Einstein equations gives us the opportunity to eliminate the symmetry from the field equations trivially and to find exact solutions that will by construction comply with the symmetry.
Thirdly, the set of the symmetries-incorporated field equations can also be used in another, equally significant, perspective. That is, the above set of equations can be considered as a system of integrability conditions for the existence of a spacetime symmetry of a particular kind when a concrete form of the energy-momentum tensor is specified. Conversely, it is possible, by imposing a spacetime symmetry, to examine what types of matter can be present in spacetime. Related to the above remark, the existing literature, using other methods, gives us the information that some of the discussed symmetries are either absent or tightly restricted if a specific form of the energy-momentum tensor is given. For example, affine and conformal vector fields cannot exist in vacuum spacetimes, affines are also forbidden when we have a perfect fluid with $0
\leq p \neq \rho >0$, and they are also severely restricted for Einstein-Maxwell spacetimes \[16,37\]. Curvature symmetries are also heavily restricted in a similar way \[38\]. Along these lines research is in progress, using the formalism developed in this paper.
We finally wish to mention a short remark, which shows another fruitful direction in the same framework.
The method presented in this paper can also be used to study the symmetry inheritance of a kinematical or dynamical variable. This concept has been defined in the literature \[29,31,39\] to mean that, in the presence of a spacetime symmetry, a kinematical or dynamical variable $X$ satisfies an equation of the form $$L_\xi X+k\Psi X=0$$ where $k$ is a constant depending on the tensorial character of $X$. It is an appealing idea because it relates the symmetry to all the variables, kinematical and dynamical, making full use of the field equations.
[**Acknowledgements**]{}
It is my great pleasure to thank M. Tsamparlis for many stimulating discussions, as well as G.S. Hall and C. Casares. I would also especially like to thank C. Isham and J.J. Halliwell for their comments. This work has been partially supported by the A.S. Onassis Public Benefit Foundation.
[100]{}
S.Kobayashi and K.Nomizu. “Foundations of Differential Geometry”. Interscience, Vol. I, New York (1963)
G.H.Katzin, J.Levine, and W.R.Davis, J.Math.Phys. [**10**]{}, 617 (1969)
S.W.Hawking and G.F.R.Ellis, “The Large Scale Structure of Space-Time”, Cambridge Univ.Press, Cambridge, UK, 1973.
K.Yano, “The Theory of Lie Derivatives”, North Holland, Amsterdam, 1955.
C.W.Misner, K.S.Thorne, and J.A.Wheeler, “Gravitation”, Freeman, San Francisco, 1973.
L.Defrise-Carter, Comm.Math.Phys, [**40**]{}, 273 (1975)
G.S.Hall, J.Math.Phys. [**31**]{}, 1198 (1990)
R.Maartens, J.Math.Phys. [**28**]{}, 2051 (1987); A.A.Coley and B.O.J.Tupper, ibid. [**31**]{}, 649 (1990); S.Hojman and D.Nunez, ibid. [**32**]{}, 234 (1991)
G.S.Hall and J.da Costa, J.Math.Phys. [**29**]{}, 2465 (1988)
G.S.Hall, J.Math.Phys. [**32**]{}, 181 (1991)
C.D.Collinson, J.Math.Phys. [**11**]{}, 818 (1970); P.C.Aichelburg, ibid [**11**]{}, 2458 (1970); R.A.Tello-Llanos, Gen.Rel.Grav. [**20**]{}, 765 (1988)
G.S.Hall, Gen.Rel.Grav. [**15**]{}, 581 (1983)
W.R.Davies, L.H.Green, and L.K.Norris, Nuovo Cimento B [**34**]{}, 256 (1976)
M.Tsamparlis and D.P.Mason, J.Math.Phys. [**31**]{}, 1707 (1990)
L.Nunez, U.Percoco, and V.M.Villalba, J.Math.Phys. [**31**]{}, 137 (1990)
G.S.Hall, CBPF-MO-002/93
J.Ehlers, P.Geren, and R.K.Sachs, J.Math.Phys. [**9**]{}, 1344 (1968)
D.R.Oliver and W.R.Davis, Gen.Rel.Grav. [**8**]{}, 905 (1977)
J.Ehlers, P.Jordan, W.Kundt, and R.Sachs, Akad.Wiss.Lit.Mainz Abh.Math.-Natur. Kl, [**11**]{}, (Mainz 4) 793 (1961)
G.F.R. Ellis, J.Math.Phys. [**8**]{}, 1171 (1967)
F.J.Greenberg, J.Math.Anal. and Applic. [**29**]{}, 645 (1970)
G.F.R.Ellis, in “General Relativity and Cosmology: Proceedings of Course 47 of the International School of Physics Enrico Fermi” (R.Sachs, Ed.), Academic Press, New York, (1971)
G.F.R.Ellis in “Cargese Lectures in Physics”, Vol.6 (E.Schatzman, Ed.), Gordon and Breach, New York, 1973.
D.P.Mason and M.Tsamparlis, J.Math.Phys. [**26**]{}, 2881 (1985)
F.J.Greenberg, J.Math.Anal. and Applic. [**30**]{}, 128 (1970)
D.P.Mason and M.Tsamparlis, J.Math.Phys. [**24**]{}, 1577 (1983)
M.Tsamparlis, J.Math.Phys. [**33**]{}, 1472 (1992)
D.P.Mason and M.Tsamparlis, J.Math.Phys. [**31**]{}, 1707 (1990)
A.A.Coley and B.O.Tupper, J.Math.Phys. [**30**]{}, 2616 (1989)
C.B.Collins, J.Math.Phys. [**25**]{}, 995 (1984)
E.Saridakis and M.Tsamparlis, J.Math.Phys. [**32**]{}, 1541 (1991)
Y.S.Vladimirov, N.Y.Mitskievich, and J.Horsky, “Space, Time, Gravitation”, (Moscow:Mir) (1987)
D.Kramer, H.Stephani, M.MacCallum, and E.Herlt “Exact Solutions of Einstein’s Field Equations” (Cambridge, UK: Cambridge Univ. Press) (1980)
R.Arnowitt, S.Deser, and C.W.Misner, in “Gravitation: An Introduction to Current Research” (L.Witten, Ed.), Wiley, New York, (1962)
A.L.Zelmanov, Sov.Phys.Dokl. [**18**]{}, 231 (1973)
A.E.Fischer and J.E.Marsden, in “General Relativity: An Einstein Centenary Survey” (S.W.Hawking and W.Israel, Eds.), Cambridge Univ.Press (1979)
G.S.Hall, Gen.Rel.Grav. [**20**]{} 399 (1988)
C.D.Collinson, and E.G.L.R.Vaz, Gen.Rel.Grav. [**14**]{}, 5 (1982)
B.Carter and R.N.Henriksen, J.Math.Phys. [**32**]{}, 2580 (1991)
---
abstract: 'Recently, a non-trivial $4D$ Einstein-Gauss-Bonnet (EGB) theory of gravity, obtained by rescaling the GB coupling parameter as $\alpha/(D-4)$, was formulated in \[D. Glavan and C. Lin, Phys. Rev. Lett. 124, 081301 (2020)\]; it bypasses Lovelock’s theorem and avoids the Ostrogradsky instability. The theory admits a static spherically symmetric black hole which, unlike its $5D$ EGB or general relativity counterparts, can have both Cauchy and event horizons and is also free from the singularity. We generalize previous work on gravitational lensing by a Schwarzschild black hole, in the strong and weak deflection limits, to the $4D$ EGB black holes and calculate the deflection coefficients $\bar{a}$ and $\bar{b}$; the former increases and the latter decreases with increasing $\alpha$. We also find that the deflection angle $\alpha_D$, the angular position $\theta_{\infty}$ and $u_{m}$ decrease, while the angular separation $s$ increases, with $\alpha$. The effect of the GB coupling parameter $\alpha$ on the positions and magnification of the relativistic images of the source is discussed in the context of the SgrA\* and M87\* black holes. A brief description of the weak gravitational lensing using the Gauss-Bonnet theorem is presented.'
author:
- 'Shafqat Ul Islam$^{a}$'
- 'Rahul Kumar$^{a}$'
- 'Sushant G. Ghosh$^{a,\;b}$'
title: 'Gravitational lensing by black holes in the $4D$ Einstein-Gauss-Bonnet gravity'
---
Introduction
============
String theories, in the low-energy limits, give rise to the effective field theories of gravity such that the Lagrangian for these theories contains terms of quadratic and higher orders in the curvature in addition to the usual scalar curvature term [@Gross]. Further, the gravitational action may be modified to include the quadratic curvature correction terms while keeping the equations of motion to second order, provided the quadratic terms appear in specific combinations corresponding to the Gauss-Bonnet (GB) invariants defined by [@Lanczos:1938sf] $$\begin{aligned}
\label{GB}
\mathscr{G} &=& R^2 -4 R_{\mu \nu}R^{\mu \nu}+R_{\mu \nu \rho \sigma}R^{\mu \nu \rho \sigma},\end{aligned}$$ and such a theory is termed the $D\geq 5$-dimensional Einstein-Gauss-Bonnet (EGB) gravity theory, with $D-4$ extra dimensions. It turns out that the four-dimensional GB term arises as the next-to-leading-order correction to the gravitational effective action in string theory [@Gross]. The EGB gravity is a special case of the Lovelock theory of gravitation [@Lovelock:1971yv]. Since its equations of motion have no more than two derivatives of the metric, it has been proven to be free of ghosts [@Boulware:1985wk]. Boulware and Deser [@Boulware:1985wk], and independently Wheeler [@Wheeler:1985nh], found exact black hole solutions in EGB gravity theories, which are generalizations of the Schwarzschild-Tangherlini black holes [@Tangherlini:1963bw]. However, the GB term (\[GB\]) is a topological invariant in $4D$, as its contribution to all components of Einstein’s equations is in fact proportional to $(D-4)$, i.e., it does not contribute to the equations of motion and one requires $D\geq 5$ for non-trivial gravitational dynamics.
However, it was shown by Glavan and Lin [@Glavan:2019inb] that by re-scaling the GB coupling constant, in the EGB gravity action, as $\alpha\to \alpha/(D-4)$, the GB invariant makes a non-trivial contribution to the gravitational dynamics even in $D=4$. This is evident by considering maximally symmetric spacetimes with curvature scale ${\cal K}$ $$\label{gbc}
\frac{g_{\mu\sigma}}{\sqrt{-g}} \frac{\delta \mathscr{G}}{\delta g_{\nu\sigma}} = \frac{\alpha (D-2) (D-3)}{2(D-1)} {\cal K}^2 \delta_{\mu}^{\nu}.$$ Obviously, the variation of the GB action does not vanish in $D=4$ because of the re-scaled coupling constant [@Glavan:2019inb]. This $4D$ EGB gravity has already attracted much attention and is being extensively studied [@Konoplya:2020juj]. The Boulware and Deser [@Boulware:1985wk] version of the spherically symmetric static black hole in the $4D$ EGB gravity was obtained in [@Glavan:2019inb], which was also extended to the charged case [@Fernandes:2020rpa]; this kind of solution for the Lovelock gravity has been obtained in Refs. [@Konoplya:2020qqh; @Casalino:2020kbt]. The corresponding black holes in a string cloud model were considered in [@Singh:2020nwo]. Also, nonstatic Vaidya-like spherical radiating black hole solutions have been obtained in $4D$ EGB gravity [@Ghosh1:2020vpc]. More black hole solutions can be found in Refs. [@Wei:2020ght; @Kumar:2020owy; @Doneva:2020ped; @Konoplya:2020ibi; @Kumar:2020uyz; @Singh:2020xju; @HosseiniMansoori:2020yfj]. A study of photon geodesics and of the effects of the GB coupling parameter on the shadow of the $4D$ EGB black hole is presented in Refs. [@Guo:2020zmf; @Konoplya:2020bxa; @Wei:2020ght; @Kumar:2020owy].
Gravitational lensing is one of the most powerful astrophysical tools for investigations of the strong-field features of gravity. Motivated by the pioneering work of Darwin [@Darwin], strong gravitational lensing by compact astrophysical objects with a photon sphere, such as black holes, has been extensively studied. The deflection of electromagnetic radiation in a gravitational field is commonly referred to as gravitational lensing, and an object causing a deflection is known as a gravitational lens. Virbhadra [@Virbhadra:1999nm] studied the strong-field situation to obtain the gravitational lens equation using an asymptotically flat background metric and also analyzed the gravitational lensing by a Schwarzschild black hole. The results have been applied to the supermassive black hole Sgr A\* using numerical techniques [@Virbhadra:1999nm]. Later on, using a different formulation, an exact lens equation and integral expressions for its solutions were obtained [@Frittelli:1999yf]. However, it was Bozza [*et al.*]{} [@Bozza:2001xd] who first defined the strong-field deflection limit to analytically investigate the Schwarzschild black hole lensing. This technique has been applied to static, spherically symmetric metrics which include Reissner$-$Nordstrom black holes [@Eiroa:2003jf], braneworld black holes [@Eiroa:2004gh; @Whisker:2004gq], and charged black holes of heterotic string theory [@Bhadra:2003zs], and was also generalized to an arbitrary static, spherically symmetric metric by Bozza [@Bozza:2002zj]. On the other hand, lensing in the strong gravitational field is a powerful tool to test the nature of compact objects; therefore, it continues to receive significant attention, and more recent works include lensing from other black holes [@Chen:2009eu; @Sarkar:2006ry]. However, the qualitative features in lensing by these black holes, in the presence of a photon sphere, are similar to the Schwarzschild case. Also, strong gravitational lensing from various modifications of the Schwarzschild geometry in modeling the galactic center has been studied, e.g., lensing from regular black holes was studied in [@Eiroa:2010wm].
The aim of this paper is to apply the prescription of Bozza [*et al.*]{} [@Bozza:2002zj] to investigate the gravitational lensing properties of the $4D$ EGB black hole. In particular, we calculate the strong lensing coefficients of the static spherically symmetric EGB black hole, from which we calculate the positions and magnification of the source’s images and contrast with the results for the Schwarzschild black hole. The effect of the GB coupling parameter on the weak gravitational lensing is also investigated. The paper is organized as follows: we begin in Sect. \[sec2\] with the discussion of the static spherically symmetric black hole of the novel $4D$ EGB gravity. Gravitational deflection of light in strong-field limits of these black holes is investigated in Sect. \[sec3\]. Corresponding lensing observables and numerical estimations of deflection angle, image positions, and magnifications in the context of supermassive black holes, namely Sgr A\* and M87\* are presented in Sect. \[sec4\]. Section \[sec5\] is devoted to the weak gravitational lensing. Finally, we summarize our main findings in Sect. \[sec6\].
The $4D$ EGB black hole {#sec2}
=======================
The EGB gravity action, with rescaled GB coupling $\alpha/(D-4)$ to restore the dimensional regularization, in the $D$-dimension spacetime yields [@Glavan:2019inb] $$\begin{aligned}
S &=& \frac{1}{16\pi G} \int d^{D}x\sqrt{-g}[R + \frac{\alpha}{D-4} \mathscr{G}],\end{aligned}$$ where $g$ is the determinant of metric tensor $g_{\mu\nu}$, $R$ is the Ricci scalar, $\alpha $ is the GB coupling constant considered to be positive and $\mathscr{G}$ is the GB quadratic invariant term solely constructed from the curvature tensors invariants [@Lanczos:1938sf]. The $4D$ EGB theory is defined in the limit of $D\rightarrow 4$ at the level of equations of motion rather than in action, thereby the GB term makes a non-trivial contribution in gravitational dynamic [@Glavan:2019inb]. The static spherically symmetric black hole solution in $4D$ EGB theory is given as [@Glavan:2019inb] $$\begin{aligned}
ds^2 &=& -f(r) dt^2 + \frac{1}{f(r)} dr^2 + r^2 (d\theta ^2 +\sin ^2\theta\,d\phi^2),\label{NR}\end{aligned}$$ with $$f_{\pm}(r)= 1+\frac{r^2}{32\pi \alpha G}\left( 1\pm \sqrt{1+\frac{128 \pi \alpha M G^2}{r^3}}\right).\label{fr}$$ Here $M$ is the black hole mass parameter and $G$ is the Newton’s gravitational constant which is set to unity hereafter. Two different branches of solutions correspond to the $``\pm"$ sign in Eq. (\[fr\]). At large distances, Eq. (\[fr\]) reduces to $$f_-(r)=1-\frac{2M}{r},\;\;\;
f_+(r)=1+\frac{2M}{r}+\frac{r^2}{16\pi\alpha },$$ we will be focusing only on the negative branch that asymptotically goes over to the Schwarzschild black hole with correct mass sign [@Glavan:2019inb]. In the limit of vanishing GB coupling constant, $\alpha\to 0$, the metric (\[NR\]) with $f_{-}(r)$ in Eq. (\[fr\]) retains to the Schwarzschild black hole metric. It is convenient to redefine the metric (\[NR\]) using the dimensionless variables as follow [@Bozza:2002zj] $$\begin{aligned}
d\tilde{s}^2 &=& (2M)^{-2} ds^2 = -A(x)dT^2+\frac{1}{A(x)}dx^2+C(x)(d\theta ^2 +\sin ^2\theta\,d\phi^2),\label{NR1}\end{aligned}$$ where we have defined $$\begin{aligned}
x = \frac{r}{2M}, \quad T=\frac{t}{2M}, \quad \tilde{\alpha} = \frac{\alpha}{M^2},\end{aligned}$$ and accordingly the metric (\[NR\]) modifies to $$A(x) = 1+\frac{x^2}{8\pi \tilde{\alpha} }\left( 1 - \sqrt{1+\frac{16 \pi \tilde{\alpha}}{x^3}} \right), \quad C(x)=x^2.$$
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![(Left panel) Plot showing the horizons for various values of GB coupling parameter. The black solid line corresponds to the extremal value of $\tilde{\alpha}$. (Right panel) The behavior of event horizon radii $x_+$ (solid black line), Cauchy horizon radii $x_-$ (dashed red line), and photon sphere radii $x_m$ (dashed blue line) with GB coupling parameter $\tilde{\alpha}$. []{data-label="plot"}](Horizon.eps "fig:") ![(Left panel) Plot showing the horizons for various values of GB coupling parameter. The black solid line corresponds to the extremal value of $\tilde{\alpha}$. (Right panel) The behavior of event horizon radii $x_+$ (solid black line), Cauchy horizon radii $x_-$ (dashed red line), and photon sphere radii $x_m$ (dashed blue line) with GB coupling parameter $\tilde{\alpha}$. []{data-label="plot"}](Horizon1.eps "fig:")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The event horizon is the largest positive root of $g^{rr}=0$, i.e., horizon radii can be determined by solving $$\label{Delta}
A(x)=0,$$ which admits solutions $$\begin{aligned}
x_{\pm} &=& \frac{1}{2}\Big(1 \pm \sqrt{1-16\pi \tilde{\alpha}} \Big).\end{aligned}$$ Depending on the values of $\tilde{\alpha}$ and $M$, Eq. (\[Delta\]) has upto two real positive roots corresponding to the inner Cauchy horizon ($x_{-}$) and outer event horizon ($x_{+}$). It turns out that $A(x)=0$ has no solution if $\tilde{\alpha}>\tilde{\alpha}_E$ i.e., no black hole exists. Whereas, it has one double zero if $\tilde{\alpha}=\tilde{\alpha}_E$, and two simple zeros if $\tilde{\alpha}<\tilde{\alpha}_E$ (cf. Fig. \[plot\]), corresponding, respectively, $4D$ EGB black hole with degenerate horizon ($x_{-}=x_{+}\equiv x_E$), and a non-extremal black hole with two horizons ($x_{-}\neq x_{+}$). The event horizon radii decrease whereas Cauchy horizon radii increase with increasing $\tilde{\alpha}$ (cf. Fig. \[plot\]).
Strong Gravitational lensing {#sec3}
============================
In this section, we focus on the gravitational deflection of light in the static spherically symmetric $4D$ EGB black hole spacetime (\[NR1\]) and consider the propagation of light on the equatorial plane $(\theta =\pi/2)$, the metric (\[NR1\]) reduces to $$\begin{aligned}
\label{metric2}
d\tilde{s}^2 &=& -A(x)dT^2 + A(x)^{-1}dx^2 + C(x)d\phi ^{2}.\end{aligned}$$ Since the spacetime is static and spherically symmetric, the projection of photon four-momentum along the Killing vectors of isotmetries are conserved quantities, namely the energy $\mathcal{E}=-p_{\mu}\xi^{\mu}_{(t)}$ and angular momentum $\mathcal{L}=p_{\mu}\xi^{\mu}_{(\phi)}$ are constant along the geodesics, where $\xi^{\mu}_{(t)}$ and $\xi^{\mu}_{(\phi)}$ are, respectively, the Killing vectors due to time-translational and rotational invariance [@Chandrasekhar:1992]. This yields $$\frac{dt}{d\tau}=-\frac{\mathcal{E}}{A(x)},\qquad \frac{d\phi}{d\tau}=\frac{\mathcal{L}}{C(x)},\label{const}$$ where $\tau$ is the afine parameter along the geodesics. Using Eq. (\[const\]) for the null geodesics equation $ds^2=0$, we obtain $$\left(\frac{dx}{d\tau}\right)^2\equiv \dot{x}^2={\cal E}^2-\frac{\mathcal{L}^2A(x)}{C(x)}.$$ The radial effective potential $V_{\text{eff}}(x)=\mathcal{L}^2A(x)/C(x)$ describes the different kinds of possible orbits. In particular, photons simply move on circular orbits of constant radius $x_m$, at points where the potential is flat $$\frac{dV_{\text{eff}}(x)}{dx}=0\quad \Rightarrow\quad \frac{A'(x)}{A(x)}= \frac{C'(x)}{C(x)},$$ solving which, gives the radius of photon circular orbits $x_m$; at $x=x_m$ potential has a unique maximum. These orbits are unstable against small radial perturbations, which would finally drive photons into the black hole or toward spatial infinity [@Chandrasekhar:1992]. Due to spherical symmetry, these orbits are planer and generate a photon sphere around the black hole. Photons, coming from the far distance source, approach the black hole with some impact parameter and get deflected symmetrically to infinity, meanwhile reaching a minimum distance near the black hole. The impact parameter $u$ is related to the closest approach distance $x_0$, which follows from intersecting the effective potential $V_{\text{eff}}(x)$ with the energy of photon $\mathcal{E}^2$, this reads as $$V_{\text{eff}}(x)=\mathcal{E}^2\quad \Rightarrow \quad u\equiv \frac{\cal L}{\cal E} =\sqrt{\frac{C(x_0)}{A(x_0)}}.$$ For $x_0=x_m$, the corresponding impact parameter is $u_m$. The gravitational deflection angle of light is described as the angle between the asymptotic incoming and outgoing trajectories, and reads as [@weinberg:1972; @Virbhadra:1999nm] $$\begin{aligned}
\alpha_D (x_0) &=& I(x_0)-\pi,\label{def}\end{aligned}$$ where $$\label{def3}
I(x_0)=2\int_{x_0}^\infty \frac{d\phi}{dx}dx,\qquad
\frac{d\phi}{dx}=\frac{1}{\sqrt{A(x)C(x)}\sqrt{\frac{C(x)A(x_0)}{C(x_0)A(x)}-1}}.$$ It is worthwhile to note here that the impact parameter coincides with the distance of closest approach only in the limit of vanishing deflection angle. Due to spacetime symmetries, the total change in $\phi$ as $x$ decreases from $\infty$ to its minimum value $x_0$ and then increases again to $\infty$ is just twice its change from $\infty$ to $x_0$. In the absence of a black hole, photons will follow the straight line trajectory and this change in $\phi$ is simply $\pi$ and therefore by Eq. (\[def\]) the deflection angle is identically zero . The deflection angle $\alpha_D (x_0)$ monotonically increases as the distance of minimum approach $x_0$ decreases, and becomes higher than $2\pi$ resulting in the complete loops of the light ray around the black hole before escaping to the observer for $x_0\simeq x_m$ (or $ u\simeq u_m$) and leads to the set of infinite or relativistic source’s images [@weinberg:1972; @Virbhadra:1999nm]. Only for the $x_0=x_m$ (or $u=u_m$), deflection angle diverges logarithmically, whereas, photons with impact parameters $u<u_m$ get captured by the black hole and fall into the horizon. We consider the sign convention such that for $\alpha_D (x_0)>0$, light bend toward the black hole, whereas for $\alpha_D(x_0)<0$, light bend away from it. We are interested in the deflection of light in the strong-field limit, viz., light rays passing close to the photon sphere. In the strong deflection limit, we can expand the deflection angle near the photon sphere, where it diverges [@Bozza:2002zj]. For this purpose, we define a new variable [@Bozza:2002zj] $$\begin{aligned}
z &=& \frac{A(x)-A(x_0)}{1-A(x_0)},\end{aligned}$$ the integral (\[def\]) can be re-written as $$\begin{aligned}
I(x_0) &=& \int_{0}^{1}R(z,x_0) f(z,x_0) dz,\label{def1}\end{aligned}$$ with $$\begin{aligned}
R(z,x_0) &=& \frac{2\sqrt{C(x_0)}(1-A(x_0))}{C(x)A'(x)},\\
f(z,x_0) &=& \frac{1}{\sqrt{A(x_0)-\frac{A(x)}{C(x)}C(x_0)}}\label{f(x)},\end{aligned}$$ where functions without subscript “$0$" are evaluated at $x=A^{-1}[(1-A(x_0))z+A(x_0)]$. Performing the Taylor series expansion of the function inside the square root of Eq. (\[f(x)\]), we get $$\begin{aligned}
f_0(z,x_0) &=& \frac{1}{\sqrt{\phi(x_0) z + \beta(x_0) z^2}},\end{aligned}$$ where $$\begin{aligned}
\phi(x_0) &=& \frac{1-A(x_0)}{A'(x_0) C(x_0)}\left[\left(C^\prime(x_0) A(x_0)-A^\prime(x_0) C(x_0)\right)\right],\\
\beta(x_0) &=& \frac{\left(1-A(x_0)\right)^2}{2 A^\prime(x_0){^3} C(x_0)^2}\Big[2C(x_0)C^\prime(x_0) A^\prime(x_0){^2} + A(x_0)A^\prime(x_0) C(x_0)C^{\prime\prime}(x_0) \nonumber\\
&&-C(x_0)C^\prime(x_0) A(x_0) A^{\prime\prime}(x_0) -2C^\prime(x_0){^2}A(x_0)A^\prime(x_0) \Big].\end{aligned}$$
------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------
![Plot showing the strong lensing coefficients $\bar{a}$ and $\bar{b}$ as funtion of $\tilde{\alpha}$.[]{data-label="plot2"}](abar.eps "fig:") ![Plot showing the strong lensing coefficients $\bar{a}$ and $\bar{b}$ as funtion of $\tilde{\alpha}$.[]{data-label="plot2"}](bbar.eps "fig:")
------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------
In Eq. (\[def1\]), $R(z,x_0)$ is regular for all values of $z$ and $x_0$, however, $f(z,x_0)$ diverges for $x=x_0$ or $z\to 0$. With the above given definitions, the integral in Eq. (\[def\]) can be split into two, diverging and regular, parts as follows [@Bozza:2002zj] $$\begin{aligned}
I(x_0) &=& I_D(x_0)+I_R (x_0),\end{aligned}$$ with $$\begin{aligned}
I_D(x_0) &=& \int_0 ^1 R(0,x_m)f_0(z,x_0)dz,\label{divergent}\\
I_R(x_0) &=& \int_0 ^1 \Big(R(z,x_0)f(z,x_0)-R(0,x_m)f_0(z,x_0)\Big)dz.\label{regular}\end{aligned}$$ The integral $I_D(x_0)$ converges for $x_0 \ne x_m$, as $f_0(x_0)=1/\sqrt{z}$. But when $x_0=x_m$ the integral $I_D(x_0)$ has logrithmic divergences as $\phi(x_0)=0 $ and $f_0(x_0)=1/z$. Integral $I_R(x_0)$ is the regularized term with the divergence subtracted. Solving the integrals in Eqs. (\[divergent\]) and (\[regular\]), the deflection angle can be simplified as [@Bozza:2002zj] $$\begin{aligned}
\label{defangle}
\alpha_D(u) &=& -\bar{a} \log\Big(\frac{u}{u_m}-1\Big)+\bar{b} + O(u-u_m),\end{aligned}$$ where $$\bar{a}=\frac{R(0,x_m)}{2\beta(x_m)}, \qquad
\bar{b}= -\pi + I_R(x_m) + \bar{a} \log \Big( \frac{2\beta(x_m)}{A(x_m)}\Big).$$ Here $\bar{a}$ and $\bar{b}$ are called the strong deflection limit coefficients, which depend on the metric functions evaluated at the $x_m$. We plotted $\bar{a}$ and $\bar{b}$ as a function of GB coupling constant $\tilde{\alpha}$ in Fig. \[plot2\]. It is evident that $\bar{a}$ increases whereas $\bar{b}$ decreases with the $\tilde{\alpha}$, such that in the limit $\tilde{\alpha}\to 0$, they smoothly retain the values for the Schwarzschild black hole, viz., $\bar{a}=1$ and $\bar{b}=-0.4002$. The deflection angle $\alpha_D(u)$ for the static spherically symmetric EGB black hole (\[NR\]) in the strong deflection limits is calculated and plotted as a function af impact parameter $u$ for various values of $\tilde{\alpha}$ in Fig. (\[plot3\]). The deflection angle $\alpha_D(u)$ diverges for $u=u_m$ and steeply falls with $u$ (cf. Fig. \[plot3\]). It is evident that for fixed values of $u$, deflection angle decrease with increasing GB coupling parameter $\tilde{\alpha}$. It is worthwhile to note that the presented results are valid only in the strong deflection limit $u\gtrsim u_m$, whereas for $u>>u_m$, the strong deflection limit is not a valid approximation. There exist a value of $x_0=x_z$ or $u=u_z$ for which deflection angle becomes zero.
![Plot showing the behavior of deflection angle $\alpha_D(u)$ for strong-gravitational lensing with impact parameter $u$ for different values of $\tilde{\alpha}$. The colored points on the horizontal axis correspond to the impact parameter $u=u_m$, for which deflection angle diverges. []{data-label="plot3"}](deflection.eps)
Lensing Observables {#sec4}
===================
Once we have known the deflection angle due to strong gravitational lensing Eq. (\[defangle\]), we can easily calculate the image positions using the lens equation. Lens equation, establishing a relation between the observational setup geometry, namely the positions of source $S$, observer $O$ and the black hole $L$ in a given coordinate system, and the position of the lensed images in the observer’s sky, is given by [@Bozza:2001xd] $$\begin{aligned}
\tan\beta &=& \tan \theta - \frac{D_{LS}}{D_{OS}}[\tan \theta + \tan(\alpha-\theta)],\label{lensEq}\end{aligned}$$ where $\beta$ and $\theta$ are the angular separations, respectively, of the source and the image from the black hole. The distance between the source and black hole is $D_{LS}$, whereas distance from the source to the observer is $D_{OS}$; all distances are expressed in terms of the Schwarzschild radius $x$. Photon emitted with a impact parameter $u$ from the source approaches the black hole and is received by an observer. The deflection angle $\alpha_D$ is identified as the angle between the tangents to the emission and detection directions. Since it diverges for $u=u_m$ and the light rays perform several loops around the black hole before escaping to the observer, therefore, we can replace $\alpha_D$ by $2n\pi + \Delta \alpha_n $, where $n$ is the positive integer number corresponding to the winding number of loops around black hole and $\Delta \alpha_n $ is the offset of the deflection angle. For the case of far distant observer and the source and their nearly perfect alignment with the black hole, the lens equation (\[lensEq\]) can be simplified as [@Bozza:2001xd; @Bozza:2008ev] $$\begin{aligned}
\label{lenseq}
\beta &=& \theta -\frac{D_{LS}}{D_{OS}}\Delta \alpha_n.\end{aligned}$$ However, the lens equation has also been defined in more general setup [@Frittelli:1998hr; @Perlick:2004zh; @Perlick:2003vg; @Frittelli:1999yf; @Eiroa:2003jf; @Eiroa:2004gh; @Bozza:2007gt]. One can notice that in Eq. (\[lenseq\]), only the offset angle $\Delta \alpha_n $ comes into the lens equation rather than the complete deflection angle. To get the offset deflection angle for $n^{th}$ relativistic image, $\Delta \alpha_n $, we first solve the $\alpha_D(\theta_n^0)=2n\pi$, where $\theta_n^0$ is the image position for the $\alpha_D=2n\pi$, this yields $$\label{theta0}
\theta_n ^0=\frac{u_m}{D_{OL}}(1+e_n),$$ with $$e_n= e^{\frac{\bar{b}-2n\pi}{\bar{a}}}.$$ Now, making a Taylor series expansion of the deflection angle about $(\theta_n ^0)$ to the first order, gives $$\begin{aligned}
\label{Taylor}
\alpha_D(\theta) &=& \alpha_D(\theta_n ^0) +\frac{\partial \alpha_D(\theta)}{\partial \theta } \Bigg |_{\theta_n ^0}(\theta-\theta_n ^0)+O(\theta-\theta_n ^0),\end{aligned}$$ using $\Delta \theta_n=\theta-\theta_n ^0 $ and the deflection angle Eq. (\[defangle\]), the Eq. (\[Taylor\]) becomes $$\begin{aligned}
\Delta\alpha_n &=& -\frac{\bar{a}D_{OL}}{u_m e_n}\Delta\theta_n.\end{aligned}$$ Neglecting the higher-order terms, the lens equation finally gives the position of $n^{th}$ image [@Bozza:2002zj] $$\theta_n\simeq \theta_n ^0 + \frac{D_{OS}}{D_{LS}}\frac{u_me_n}{D_{OL}\bar{a}}(\beta-\theta_n ^0),\label{defcorrection}$$ for $\beta=\theta_n ^0$, viz., the image position coincides with the source position, the correction to the $n^{th}$ image position identically vanishes. The Eq. (\[defcorrection\]) gives source’s images on the same side of source ($\theta >0$). However, we can replace $\beta$ by $-\beta$ in order to get the images on the other side. The brightness of the source’s images will be magnified due to the gravitational lensing, the magnification of $n^{th}$ loop image is defined as [@Bozza:2002zj] $$\begin{aligned}
\mu_n &=& \frac{1}{\Big(\frac{\beta}{\theta}\Big)\Big(\frac{d\beta}{d\theta}\Big)}\Bigg|_{\theta_n ^0}.\label{magn}\end{aligned}$$ The brightness magnification decreases steeply with the order $n$ of the images, such that unless the source is almost perfectly aligned with the black hole and the observer, these images will be very faint as a result of high demagnification. Also Eq. (\[magn\]) infers that for the perfect alignment of the source and the black hole, $\beta\to 0$, the brightness magnification diverges.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Plot showing the behavior of lensing observables, $\theta_\infty$, $r$, and $s$ as a function of $\tilde{\alpha}$ for Sgr A\* (left panel) and M87\* (right panel) black holes. []{data-label="plot4"}](theta.eps "fig:") ![Plot showing the behavior of lensing observables, $\theta_\infty$, $r$, and $s$ as a function of $\tilde{\alpha}$ for Sgr A\* (left panel) and M87\* (right panel) black holes. []{data-label="plot4"}](theta1.eps "fig:")
![Plot showing the behavior of lensing observables, $\theta_\infty$, $r$, and $s$ as a function of $\tilde{\alpha}$ for Sgr A\* (left panel) and M87\* (right panel) black holes. []{data-label="plot4"}](magr.eps "fig:") ![Plot showing the behavior of lensing observables, $\theta_\infty$, $r$, and $s$ as a function of $\tilde{\alpha}$ for Sgr A\* (left panel) and M87\* (right panel) black holes. []{data-label="plot4"}](magr.eps "fig:")
![Plot showing the behavior of lensing observables, $\theta_\infty$, $r$, and $s$ as a function of $\tilde{\alpha}$ for Sgr A\* (left panel) and M87\* (right panel) black holes. []{data-label="plot4"}](angs.eps "fig:") ![Plot showing the behavior of lensing observables, $\theta_\infty$, $r$, and $s$ as a function of $\tilde{\alpha}$ for Sgr A\* (left panel) and M87\* (right panel) black holes. []{data-label="plot4"}](angs1.eps "fig:")
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Considering if only the outermost one-loop image $\theta_1$ is distinguishable as a single image from the remaining inner packed images $\theta_{\infty}$, we can have three characteristic observables [@Bozza:2002zj] $$\begin{aligned}
\theta_\infty &=& \frac{u_m}{D_{OL}},\\
s &=& \theta _1-\theta _\infty = \theta_\infty (e^{\frac{\bar{b}-2\pi}{\bar{a}}}),\label{s}\\
r &=& \frac{\mu_1}{\sum{_{n=2}^\infty}\mu_n } = e^{\frac{2 \pi}{\bar{a}}},\qquad r_{\text{mag}}= 2.5 \log(r) = \frac{5\pi}{\bar{a}\ln 10}.\label{obs}\end{aligned}$$ Here, $\theta_{\infty}$ is the angular radius of the photon sphere, i.e., the position of the innermost packed images, $s$ is the angular separation between the one-loop image and the images at $\theta_\infty$, and $r$ is the ratio of the brightness flux from the outermost relativistic image at $\theta_1$ to those from the remaining relativistic images at $\theta_{\infty}$. It must be noted that, in contrary to $\theta_{\infty}$ and $s$, $r_{\text{mag}}$ is independent of the black hole distance from the observer.
Now, Considering the supermassive black hole candidates at the galactic center of the Milky-Way and the near-by galaxy Messier 87, respectively, Sgr A\* and M87\*, as the static spherically symmetric EGB black hole described by metric (\[NR\]), we calculate the corresponding lensing observables. Based on the latest observational data, we have taken their masses and distances from the Earth as, $M=3.98\times 10^6 M_{\odot}$ and $D_{OL}=7.97$ kpc for Sgr A\* [@Do:2019txf], and $M=(6.5\pm 0.7)\times 10^9 M_{\odot}$ and $D_{OL}=(16.8\pm 0.8)$ Mpc for M87\* [@Akiyama:2019cqa]. It is interesting to estimate the correction due to the GB coupling parameter $\tilde{\alpha}$ by comparing the results of EGB black hole with those for the Schwarzschild black hole (cf. Table \[table3\] and Table \[table4\]). The lensing observables for the Sgr A\* and M87\* black holes are shown in Fig. (\[plot4\]) as a function of $\tilde{\alpha}$. Figure (\[plot4\]) clearly infers that the relativistic images have largest angular separation for small values of $\tilde{\alpha}$. By measuring the lensing observables, namely $\theta_\infty$, $s$, and $r$, for the EGB black hole and inverting the Eqs. (\[s\]) and (\[obs\]), we can calculate the strong deflection coefficients $\bar{a}$ and $\bar{b}$. The theoretically predicted values can be compared with the values inferred from the observational data to find the black hole parameters.
[ Lensing observables]{} [$\tilde{\alpha}=0$]{} [$\tilde{\alpha}=0.005$]{} [$\tilde{\alpha}=0.01$ ]{} [ $\tilde{\alpha}=0.015$]{} [ $\tilde{\alpha}=0.019$]{}
---------------------------- ------------------------ ---------------------------- ---------------------------- ----------------------------- -----------------------------
$\theta_\infty $ ($\mu$as) $25.530$ $25.0328$ $24.4737$ $23.8293$ $23.218$
$s$ ($\mu$as) $0.0319879$ $0.0438235$ $0.0629012$ $0.096509$ $0.146437$
$r_m$ $6.82051$ $6.41293$ $5.9309$ $5.33045$ $4.69159$
[Lensing observables ]{} [$\tilde{\alpha}=0$]{} [$\tilde{\alpha}=0.005$]{} [$\tilde{\alpha}=0.01$ ]{} [ $\tilde{\alpha}=0.015$]{} [ $\tilde{\alpha}=0.019$]{}
---------------------------- ------------------------ ---------------------------- ---------------------------- ----------------------------- -----------------------------
$\theta_\infty $ ($\mu$as) $19.780$ $19.395$ $18.9618$ $18.4625$ $17.9889$
$s$ ($\mu$as) $0.031987$ $0.0438235$ $0.0629012$ $0.096509$ $0.146437$
$r_m$ $6.82051$ $6.41293$ $5.9309$ $5.33045$ $4.69159$
: Estimated values of strong lensing observables for the M87\* black hole for different values of $\tilde{\alpha}$. []{data-label="table4"}
Weak Gravitational lensing {#sec5}
==========================
Gibbon and Werner [@Gibbons:2008rj] invoked the Gauss-Bonnet theorem, in the context of optical geometry to calculate the deflection angle of light in the weak-field limits of the spherically symmetric black hole spacetime [@Carmo]. Later, considering the source and observers at finite distances from the black hole, corrections in the deflection angle due to static spherically symmetric black hole were calculated by Ishihara *et al.* [@Ishihara:2016vdc], which is further generalized by Ono *et al.* [@Ono:2017pie] for the stationary and axisymmetric black holes. Since then, their method have been extensively used for varieties of black hole spacetimes [@Crisnejo1:2018uyn]. We follow their approach to calculate the light deflection angle in the weak-field limit caused by the static and spherically symmetric EGB black hole.
Considering a coordinate system centered at the black hole ($L$), and assuming the observer ($O$) and the source ($S$) at the finite distances from the black hole (cf. Fig. \[lensing1\]), we can define the deflection angle at the equatorial plane as follow [@Ishihara:2016vdc; @Ono:2017pie] $$\alpha_D=\Psi_O-\Psi_S+\Phi_{OS},$$ where, $\Phi_{OS}$ is the angular separation between the observer and the source, $\Psi_S$ and $\Psi_O$, respectively, are the angle made by light rays at the source and observer. The quadrilateral ${}_O^{\infty}\Box_{S}^{\infty}$, which is made up of spatial light ray curves from source to the observer, a circular arc segment $C_r$ of coordinate radius $r_C$ $(r_C\to\infty)$, and two outgoing radial lines from $O$ and $S$, is embedded in the 3-dimensional Riemannian manifold $^{(3)}\mathcal{M}$ defined by optical metric $\gamma_{ij}$ (cf. Fig. \[lensing1\]). The surface integral of the Gaussian curvature of the two-dimensional surface of light propagation in this manifold gives the light deflection angle [@Ishihara:2016vdc; @Ono:2017pie] $$\alpha_D=-\int\int_{{}_O^{\infty}\Box_{S}^{\infty}} K dS.\label{deflectionangle}$$ Solving Eq. (\[NR\]) for the null geodesics $ds^2=0$, we get $$dt= \pm\sqrt{\gamma_{ij}dx^i dx^j},$$ with $$\gamma_{ij}dx^i dx^j=\frac{1}{f(r)^2}dr^2+\frac{r^2}{f(r)}\left(d\theta^2+\sin^2\theta\, d\phi^2\right). \label{metric3}$$
![Schematic figure for the quadrilateral ${}_O^{\infty}\Box_{S}^{\infty}$ embedded in the curved space. []{data-label="lensing1"}](Lens.eps)
Gaussian curvature of the surface of light propagation is defined as [@Werner:2012rc] $$\begin{aligned}
K&=&\frac{{}^{3}R_{r\phi r\phi}}{\gamma},\nonumber\\
&=&\frac{1}{\sqrt{\gamma}}\left(\frac{\partial}{\partial \phi}\left(\frac{\sqrt{\gamma}}{\gamma_{rr}}{}^{(3)}\Gamma^{\phi}_{rr}\right) - \frac{\partial}{\partial r}\left(\frac{\sqrt{\gamma}}{\gamma_{rr}}{}^{(3)}\Gamma^{\phi}_{r\phi}\right)\right),\label{gaussian}\end{aligned}$$ where $\gamma=\det(\gamma_{ij})$. For EGB black hole metric (\[NR\]), in the weak-field limits, Eq. (\[gaussian\]) simplifies to $$\begin{aligned}
K&=&-\frac{2M}{r^3}+\frac{3M^2}{r^4}+\frac{640M^2\pi\alpha}{r^6}-\frac{1152M^3\pi\alpha}{r^7}+\mathcal{O}\left(\frac{M^3\alpha^2}{r^9},\frac{M^4\alpha^2}{r^{10}}\right).\end{aligned}$$ The surface integral of Gaussian curvature over the closed quadrilateral ${}_O^{\infty}\Box_{S}^{\infty}$ reads [@Ono:2017pie] $$\int\int_{{}_O^{\infty}\Box_{S}^{\infty}} K dS= \int_{\phi_S}^{\phi_O}\int_{\infty}^{r_0} K \sqrt{\gamma}dr d\phi,\label{Gaussian}$$ where $r_0$ is the distance of closest approach to the black hole. In the weak-field approximation, the light orbits equation can be consider as [@Crisnejo:2019ril] $$b=\frac{\sin\phi}{u} + \frac{M(1-\cos\phi)^2}{u^2}-\frac{M^2(60\phi\,\cos\phi+3\sin3\phi-5\sin\phi)}{16u^3}+\mathcal{O}\left( \frac{M^2\alpha}{u^5}\right)
,\label{uorbit}$$ where $b=1/r$, and $u$ is the impact parameter. The integral Eq. (\[Gaussian\]) can be recast as $$\int\int_{{}_O^{\infty}\Box_{S}^{\infty}} K dS= \int_{\phi_S}^{\phi_O}\int_{0}^{b}-\frac{K\sqrt{\gamma}}{b^2}db d\phi,$$ which for the metric Eq. (\[metric3\]) reads as
$$\begin{aligned}
\int\int K dS&=&\left(\cos^{-1}ub_o+\cos^{-1}ub_s\right)\Big(\frac{15M^2}{4u^2}-\frac{60M^2\pi\alpha}{u^4}\Big)\nonumber\\
&+&\left(\sqrt{1-u^2b_o^2}+\sqrt{1-u^2b_s^2}\right)\Big(\frac{2M}{u}+\frac{128M^3}{6u^3}-\frac{14852 M^3\pi\alpha }{25u^5}\Big) \nonumber\\
&+& \left(b_o\sqrt{1-u^2b_o^2}+b_s\sqrt{1-u^2b_s^2} \right)\Big(-\frac{M^2}{4u} -\frac{60M^2\pi\alpha }{u^3}\Big)\nonumber\\
&+& \left(b_o^2\sqrt{1-u^2b_o^2} +b_s^2\sqrt{1-u^2b_s^2} \right)\Big(\frac{ M^3}{6u}-\frac{7426 M^3\pi\alpha}{25u^3}\Big)\nonumber\\
&+&\mathcal{O}\left(\frac{M^4}{u^4},\frac{M^4\alpha}{u^6}\right),\label{Gaussian1}
\end{aligned}$$
where, $b_o$ and $b_s$, respectively, are the reciprocal of the the distances of observer and source from the black hole, i.e., $b_o=1/r_o$ and $b_s=1/r_s$. We have used the approximation $\cos\phi_o=-\sqrt{1-u^2b_o^2},\;\; \cos\phi_s=\sqrt{1-u^2b_s^2}$, the relative negative sign is because the source and the observer are at the opposite sides to the black hole. In the limits of far distant observer and source, $b_o\to 0$ and $b_s\to 0$, the deflection angle for the EGB black hole in the weak-field limits reads as follows $$\begin{aligned}
\alpha_D&=&\frac{4M}{u}+\frac{15\pi M^2}{4u^2}-\frac{60M^2\pi^2\alpha}{u^4}+\frac{128M^3}{3u^3}-\frac{29704 M^3\pi\alpha}{25u^5}+\mathcal{O}\left(\frac{M^4}{u^4},\frac{M^4\alpha}{u^6}\right).\label{defAngle}\end{aligned}$$ Further, in the limiting case of $\alpha=0$, it reduces as $$\left.\alpha_D\right|_{\text{Schw}}=\frac{4M}{u}+\frac{15\pi M^2}{4u^2}+\frac{128M^3}{3u^3}+\mathcal{O}\left(\frac{M^4}{u^4} \right),$$ which corresponds to the value for the Schwarzschild black hole [@Virbhadra:1999nm; @Crisnejo:2019ril]. In terms of the normalized impact parameter and GB coupling parameter, $u\to u/M$ and $\alpha\to\tilde{\alpha}=\alpha/M^2$, the deflection angle Eq. (\[defAngle\]), reads as $$\begin{aligned}
\alpha_D&=&\frac{4}{u}+\frac{15\pi }{4u^2}-\frac{60\pi^2\tilde{\alpha}}{u^4}+\frac{128}{3u^3}-\frac{29704\pi\tilde{\alpha}}{25u^5}+\mathcal{O}\left(\frac{1}{u^4},\frac{\tilde{\alpha}}{u^6}\right).\label{defAngle1}\end{aligned}$$ It is clear from Eq. (\[defAngle1\]), that the GB coupling parameter $\tilde{\alpha}$ decrease the deflection angle, i.e., in the weak-field limits the EGB black hole leads to smaller deflection angle than the Schwarzschild black hole. We presented the correction in the deflection angle $\delta \alpha_D=\alpha_D-\left.\alpha_D\right|_{\text{Schw}}$ due to $\tilde{\alpha}$ in Table \[T2\] for various values of impact parameter $u$. Table \[T2\] infers that, for fixed value of impact parameter $u$, the correction $\delta\alpha_D$ increases with the $\tilde{\alpha}$ and is of the order of micro-arc-second. However, for a fixed value of $\tilde{\alpha}$, $\delta\alpha_D$ decreases with $u$.
$ $ $\tilde{\alpha}=0.001$ $\tilde{\alpha}=0.003$ $\tilde{\alpha}=0.005$ $ \tilde{\alpha}=0.01 $ $ \tilde{\alpha}=0.019 $
------------ ------------------------ ------------------------ ------------------------ ------------------------- --------------------------
$u=1*10^3$ 0.122428 0.367283 0.612139 1.22428 2.32613
$u=2*10^3$ 0.00762716 0.0228815 0.0381359 0.0762718 0.144916
$u=3*10^3$ 0.00150446 0.0045134 0.0075223 0.0150446 0.0285848
$u=4*10^3$ 0.000475089 0.0014253 0.00237546 0.00475093 0.00902681
: The corrections in the deflection angle $\delta\alpha_D =\left.\alpha_D\right|_{\text{Schw}}-\alpha_D$ for weak-gravitational lensing around the EGB black hole with source at $b_s=10^{-4}$ and observer at $b_o=0$; $\delta\alpha_D$ is in units of $\mu$as. []{data-label="T2"}
![The corrections in the deflection angle $\delta\alpha_D =\left.\alpha_D\right|_{\text{Schw}}-\alpha_D$ for weak-gravitational lensing around EGB black hole with $b_s=10^{-4}$, $b_o=0$ and varying $u$; $\delta\alpha_D$ is in units of $\mu$as.[]{data-label="DefAng"}](deflection1.eps)
Conclusions {#sec6}
===========
The GB correction to the Einstein-Hilbert action provides a natural extension to the Einstein’s theory of general relativity in $D>4$ dimensional spacetime, however, it is a topological invariant quantity in $D\to 4$, and therefore does not contribute to the gravitational field equations. Recently formulated $4D$ novel EGB gravity, in which GB term in the gravitational action makes a non-trivial contribution to the field equations in $4D$ and contrary to the Schwarzschild black hole solution of GR, a black hole in this theory is free from the singularity pathology and can possess two horizons.
Gravitational lensing is unequivocally a potentially powerful tool for the analysis of strong fields and for testing general relativity. The strong field limit provides a useful framework for comparing lensing by different gravities, the $4D$ novel EGB gravity is a very interesting model to consider and discuss the observational signatures this quadratic curvature corrected gravity has than the Schwarzschild black hole of general relativity. In view of this, we investigated the gravitational lensing of light around the $4D$ EGB black hole in the strong deflection limits. Depending on the value of impact parameter $u$, photons get deflected from their straight path and leads to the multiple images of source, and for the particular value of $u=u_m$, photons follow the circular orbits around the black hole and deflection angle diverges. It is found that the static spherically symmetric EGB black holes lead to smaller deflection angle as compared to the Schwarzschild black hole value, and the deflection angle decreases with the GB coupling parameter $\tilde{\alpha}$. The effect of $\tilde{\alpha}$ on the deflection angle immediately reflect on the relativistic images. Considering the observer and the light source at far distances from the black hole, we calculate the observables for the strong lensing, namely the angular separation between the relativistic source’s images and the relative brightness magnitude of the first image. It is noted that, with the increasing GB coupling parameter, the photon circular orbits radii decrease, the angular separation between the set of source images increase, whereas the brightness magnitude decrease.
We modeled the supermassive black hole candidates, Sgr A\* and M87\*, as the $4D$ EGB black hole and calculate the lensing observables. The corrections in the angular separation of the images due to the GB coupling parameter is of the order of micro-arc-seconds, which is within the limit of current observational outreaches. The weak gravitational lensing around EGB black holes is also discussed and corrections in the deflections angle due to $\tilde{\alpha}$ are calculated, which increase with $\tilde{\alpha}$. Investigation of the gravitational lensing around the rotating EGB black holes in the strong and weak field limits is the topic of our future investigation.
S.G.G. would like to thank DST INDO-SA bilateral project DST/INT/South Africa/P-06/2016, SERB-DST for the ASEAN project IMRC/AISTDF/CRD/2018/000042 and also IUCAA, Pune for the hospitality while this work was being done. R.K. would like to thank UGC for providing SRF. S.U.I would like to thank SERB-DST for the ASEAN project IMRC/AISTDF/CRD/2018/000042.
[99]{}
B. Zwiebach, Phys. Lett. B [**156**]{}, 315 (1985); M. J. Duff, B. E. W. Nilsson and C. N. Pope, Phys. Lett. B [**173**]{}, 69 (1986); D. J. Gross and J. H. Sloan, [ Nucl. Phys. B]{} [**291**]{}, 41 (1987); M. C. Bento and O. Bertolami, [Phys. Lett. B]{} [**368**]{}, 198 (1996).
C. Lanczos, Annals Math. [**39**]{} 842 (1938).
D. Lovelock, J. Math. Phys. [**12**]{} 498 (1971).
D. G. Boulware and S. Deser, Phys. Rev. Lett. [**55**]{}, 2656 (1985).
J. T. Wheeler, Nucl. Phys. B [**268**]{}, 737 (1986).
F. R. Tangherlini, Nuovo Cim. [**27**]{}, 636 (1963).
D. Glavan and C. Lin, Phys. Rev. Lett. [**124**]{}, 081301 (2020).
R. A. Konoplya and A. F. Zinhailo, arXiv:2003.12492 \[gr-qc\]; Y. P. Zhang, S. W. Wei and Y. X. Liu, arXiv:2003.10960 \[gr-qc\]; K. Hegde, A. N. Kumara, C. L. A. Rizwan, A. K. M. and M. S. Ali, arXiv:2003.08778 \[gr-qc\]; C. Y. Zhang, P. C. Li and M. Guo, arXiv:2003.13068 \[hep-th\]; H. Lu and Y. Pang, arXiv:2003.11552 \[gr-qc\]. M. S. Churilova, arXiv:2004.00513 \[gr-qc\].
P. G. S. Fernandes, arXiv:2003.05491 \[gr-qc\].
R. A. Konoplya and A. Zhidenko, arXiv:2003.07788 \[gr-qc\].
A. Casalino, A. Colleaux, M. Rinaldi and S. Vicentini, arXiv:2003.07068 \[gr-qc\].
D. V. Singh, S. G. Ghosh and S. D. Maharaj, arXiv:2003.14136 \[gr-qc\].
S. G. Ghosh and S. D. Maharaj, arXiv:2003.09841 \[gr-qc\]; S. G. Ghosh and R. Kumar, arXiv:2003.12291 \[gr-qc\].
D. D. Doneva and S. S. Yazadjiev, arXiv:2003.10284 \[gr-qc\].
R. A. Konoplya and A. Zhidenko, arXiv:2003.12171 \[gr-qc\].
D. V. Singh and S. Siwach, arXiv:2003.11754 \[gr-qc\].
A. Kumar and R. Kumar, arXiv:2003.13104 \[gr-qc\].
S. W. Wei and Y. X. Liu, arXiv:2003.07769 \[gr-qc\].
R. Kumar and S. G. Ghosh, arXiv:2003.08927 \[gr-qc\].
S. A. Hosseini Mansoori, arXiv:2003.13382 \[gr-qc\].
M. Guo and P. C. Li, arXiv:2003.02523 \[gr-qc\].
R. A. Konoplya and A. F. Zinhailo, arXiv:2003.01188 \[gr-qc\].
C. Darwin, Proc. R. Soc. A **249**, 180 (1959).
K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D [**62**]{}, 084003 (2000).
S. Frittelli, T. P. Kling and E. T. Newman, Phys. Rev. D [**61**]{}, 064021 (2000).
V. Bozza, S. Capozziello, G. Iovane and G. Scarpetta, Gen. Rel. Grav. [**33**]{}, 1535 (2001).
E. F. Eiroa and D. F. Torres, Phys. Rev. D [**69**]{}, 063004 (2004).
R. Whisker, Phys. Rev. D [**71**]{}, 064004 (2005).
E. F. Eiroa, Phys. Rev. D [**71**]{}, 083010 (2005); Braz. J. Phys. [**35**]{}, 1113 (2005); G. Li, B. Cao, Z. Feng and X. Zu, Int. J. Theor. Phys. [**54**]{}, 3103 (2015).
A. Bhadra, Phys. Rev. D [**67**]{}, 103009 (2003).
V. Bozza, Phys. Rev. D [**66**]{}, 103001 (2002).
S. b. Chen and J. l. Jing, Phys. Rev. D [**80**]{}, 024036 (2009).
K. Sarkar and A. Bhadra, Class. Quant. Grav. [**23**]{}, 6101 (2006).
E. F. Eiroa and C. M. Sendra, Class. Quant. Grav. [**28**]{}, 085008 (2011).
S. Chandrasekhar, [*The Mathematical Theory of Black Holes*]{} (Oxford University Press, New York, 1992).
S. Weinberg, *Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity* (New York: Wiley, 1972).
V. Bozza, Phys. Rev. D [**78**]{}, 103005 (2008).
S. Frittelli and E. T. Newman, Phys. Rev. D [**59**]{}, 124001 (1999).
V. Perlick, Living Rev. Relativ. 7, 9 (2004). V. Perlick, Phys. Rev. D [**69**]{}, 064017 (2004).
V. Bozza and G. Scarpetta, Phys. Rev. D [**76**]{}, 083008 (2007).
T. Do [*et al.*]{}, Science [**365**]{}, 664 (2019).
K. Akiyama [*et al.*]{}, Astrophys. J. [**875**]{}, L1 (2019).
G. W. Gibbons and M. C. Werner, Class. Quant. Grav. [**25**]{}, 235009 (2008).
M. P. Do Carmo, *Differential Geometry of Curves and Surfaces*, (Prentice-Hall, New Jersey, 1976).
A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura and H. Asada, Phys. Rev. D [**94**]{}, 084015 (2016); A. Ishihara, Y. Suzuki, T. Ono and H. Asada, Phys. Rev. D [**95**]{}, 044017 (2017).
T. Ono, A. Ishihara and H. Asada, Phys. Rev. D [**96**]{}, 104037 (2017).
G. Crisnejo and E. Gallo, Phys. Rev. D [**97**]{}, 124016 (2018); A. Övgün, Phys. Rev. D [**98**]{}, 044033 (2018); A. Övgün, I. Sakalli and J. Saavedra, JCAP [**1810**]{}, 041 (2018); A. Övgün, Phys. Rev. D [**99**]{}, 104075 (2019); W. Javed, j. Abbas and A. Övgün, Phys. Rev. D [**100**]{}, 044052 (2019); W. Javed, R. Babar and A. Övgün, Phys. Rev. D [**100**]{}, 104032 (2019); W. Javed, J. Abbas and A. Övgün, Eur. Phys. J. C [**79**]{}, 694 (2019); G. Crisnejo, E. Gallo and J. R. Villanueva, Phys. Rev. D [**100**]{}, 044006 (2019); G. Crisnejo, E. Gallo and A. Rogers, Phys. Rev. D [**99**]{}, 124001 (2019); T. Zhu, Q. Wu, M. Jamil and K. Jusufi, Phys. Rev. D [**100**]{}, 044055 (2019); R. Kumar, S. G. Ghosh and A. Wang, Phys. Rev. D [**100**]{}, 124024 (2019).
M. C. Werner, Gen. Rel. Grav. [**44**]{}, 3047 (2012).
G. Crisnejo, E. Gallo and K. Jusufi, Phys. Rev. D [**100**]{},104045 (2019).
H. Asada and M. Kasai, Prog. Theor. Phys. [**104**]{}, 95 (2000).
|
---
abstract: 'We generalise a construction of mixed Beauville groups first given by Bauer, Catanese and Grunewald. We go on to give several examples of infinite families of characteristically simple groups that satisfy the hypotheses of our theorem and thus provide a wealth of new examples of mixed Beauville groups.'
address: |
Department of Economics, Mathematics and Statistics, Birkbeck, University of London, Malet St, London WC1E 7HX\
`[email protected], [email protected]`
author:
- 'Ben Fairbairn, Emilio Pierro'
title: New examples of mixed Beauville groups
---
Introduction
============
We recall the following definition first made by Bauer, Catanese and Grunewald in [@BCG Definition 4.1].
\[mixed\] Let $G$ be a finite group and for $x,y\in G$ define $$\Sigma(x,y)=\bigcup_{i=1}^{|G|}\bigcup_{g\in G}\{(x^i)^g,(y^i)^g,((xy)^i)^g\}.$$ A **mixed Beauville quadruple** for $G$ is a quadruple $(G^0;a,c;g)$ consisting of a subgroup $G^0$ of index $2$ in $G$; of elements $a,c\in G^0$ and of an element $g\in G$ such that
1. $G^0$ is generated by $a$ and $c$;
2. $g\not\in G^0$;
3. for every $\gamma\in G^0$ we have that $(g\gamma)^2\not\in\Sigma(a,c)$ and
4. $\Sigma(a,c)\cap\Sigma(a^g,c^g)=\{e\}$.
If the $g$ is clear then we will omit this simply writing $(G^0;a,c)$ instead. If $G$ has a mixed Beauville quadruple we say that $G$ is a **mixed Beauville group** and call $(G^0;a,c)$ a **mixed Beauville structure** on $G$. Finally, the **type** of this structure is $(o(a_1),o(c_1),o(a_1c_1);o(a_2),o(c_2),o(a_2c_2))$.
We will not discuss the corresponding ‘unmixed’ Beauville groups here. Beauville groups were originally introduced in connection with a class of complex surfaces of general type, known as Beauville surfaces. These surfaces possess many useful geometric properties: their automorphism groups [@Jonesauts] and fundamental groups [@Catanese] are relatively easy to compute and these surfaces are rigid in the sense of admitting no non-trivial deformations [@BCG2] and thus correspond to isolated points in the moduli space of surfaces of general type. Early motivation came from providing cheap counterexamples to the so-called ‘Friedman-Morgan speculation’ [@FM] but they also provide a wide class of surfaces that are unusually easy to deal with to test conjectures and provide counterexamples. A number of excellent surveys on these and closely related matters have appeared in recent years - see any of [@7; @11; @12; @F; @gaj; @CombSurv] and the references therein.
To construct examples of mixed Beauville groups, Bauer, Catanese and Grunewald proved the following in [@BCG Lemma 4.5].
\[original\] Let $H$ be a perfect finite group. Let $a_1,c_1,a_2,c_2\in H$ and define $\nu(a_i,c_i)=o(a_i)o(c_i)o(a_ic_i)$ for $i=1,2$. Assume that
1. $o(a_1)$ and $o(c_1)$ are even;
2. $\langle a_1,c_1\rangle=H$;
3. $\langle a_2,c_2\rangle=H$;
4. $\nu(a_1,c_1)$ is coprime to $\nu(a_2,c_2)$.
Set $G:=(H\times H):\langle g\rangle$ where $g$ is an element of order $4$ that acts by interchanging the two factors; $G^0=H\times H\times\langle g^2\rangle$; $a:=(a_1,a_2,g^2)$ and $c:=(c_1,c_2,g^2)$. Then $(G^0;a,c)$ is a mixed Beauville structure on $G$.
The only examples of groups satisfying the hypotheses of Theorem \[original\] given by Bauer, Catanese and Grunewald were the alternating groups $A_n$ ‘for large $n$’ (proved using the heavy duty machinery first employed to verify Higman’s conjecture concerning alternating groups as images of triangle groups) and the groups $SL_2(p)$ with $p\not=2,3,5,17$ prime (though their argument also does not apply to the case $p=7$).
More generally, mixed Beauville groups have proved extremely difficult to construct. Bauer, Catanese and Grunewald show [@pq0] that there are two groups of order $2^8$ which admit a mixed Beauville structure but no group of smaller order, however, the method they use is that of checking every group computationally. Indeed, any $p$-group admitting a mixed Beauville structure must be a 2-group. Barker, Bosten, Peyerimhoff and Vdovina construct five new examples of mixed Beauville 2-groups in [@bbpv1] and an infinite family in [@bbpv2] and the aforementioned examples account for all presently known mixed Beauville groups. Non-examples of mixed Beauville groups, however, are in abundance. The following result is due to Fuertes and González-Diez [@fgd Lemma 5].
Let $(C \times C)/G$ be a Beauville surface of mixed type and $G^0$ the subgroup of $G$ consisting of the elements which preserve each of the factors, then the order of any element $f \in G \setminus G^0$ is divisible by $4$.
Fuertes and González-Diez used the above to prove that no symmetric group is a mixed Beauville group. It is easy to see, however that this result actually rules out most almost simple groups [@exc] (though as the groups $P\Sigma L_2(p^2)$ with $p$ prime show not quite all of them). Bauer, Catanese and Grunewald also show in [@BCG Theorem 4.3] that $G^0$ must be non-abelian.
In light of the above we make the following definition.
\[mixable\] Let $G$ be a group satisfying the hypotheses of Theorem \[original\]. Then we say that $G$ is a **mixable Beauville group**. More specifically, $G$ is a mixable Beauville group if $G$ is a perfect group and there exist $x_1,y_1,x_2,y_2\in G$ such that $o(x_1)$ and $o(y_1)$ are even; $\langle x_1,y_1\rangle=\langle x_2,y_2\rangle=G$ and $\nu(x_1,y_1)$ is coprime to $\nu(x_2,y_2)$. We call $(x_1,y_1,x_2,y_2)$ a **mixable Beauville structure** for $G$ of **type** $(o(x_1),o(y_1),o(x_1y_1);o(x_2),o(y_2),o(x_2y_2))$.
Thus any mixable Beauville group automatically gives us a mixed Beauville group by Theorem \[original\]. In Section 2 we will prove a generalisation of Theorem \[original\] that shows a given mixable Beaville group actually provides us with a wealth of mixed Beauville groups.
We remark that finding $2$-generated non-mixable Beauville groups is not difficult. For example, no symmetric group is mixable (every generating set must contain elements of even order, so we cannot satisfy the conditions in Definition \[mixable\]); $p$-groups are not mixable (to have an index $2$ subgroup we must have $p=2$ and then again the conditions in Definition \[mixable\] cannot be satisfied) and even among the finite simple groups, $PSL_2(2^n)$ for $n\geq2$ are not mixable (the only elements of even order have order $2$ and thus condition (2) of Theorem \[original\] cannot be satisfied). Despite this, we prove the following.
\[main\] If $G$ belongs to any of the following families of simple groups
- the alternating groups $A_n$, $n\geq6$;
- the linear groups $PSL_2(q)$ with $q\geq7$ odd;
- the unitary groups $U_3(q)$ with $q\geq3$;
- the Suzuki groups $^2B_2(2^{2n+1})$ with $n\geq1$;
- the small Ree groups $^2G_2(3^{2n+1})$ with $n\geq1$;
- the large Ree groups $^2F_4(2^{2n+1})$ with $n\geq1$;
- the Steinberg triality group $^3D_4(q)$ with $q\geq2$, or;
- the sporadic groups (including the Tits group $^2F_4(2)'$),
then $G$ is a mixable Beauville group. Furthermore, if $G$ is any of the above or $PSL_2(2^n)$ with $n\geq3$ then $G\times G$ is also a mixable Beauville group.
In light of the above Theorem we make the following conjecture.
If $G$ is a finite simple group not isomorphic to $PSL_2(2^n)$ for $n\geq2$ then $G$ is a mixable Beauville group.
Groups of the form $G^n$ where $G$ is a finite simple group are called characteristically simple groups. The study of characteristically simple Beauville groups has recently been initiated by Jones in [@JonesChar1; @JonesChar2]. It is easy to show that the characteristically simple group $PSL_2(7)\times PSL_2(7)\times PSL_2(7)$ is not a mixable Beauville group. We thus ask the following natural question.
If $G$ is a simple group then for which $n$ is $G^n$ a mixable Beauville group?
Throughout we use the standard ‘Atlas’ notation for finite groups and related concepts as described in [@ATLAS]. In Section 2 we discuss a generalization of Theorem \[original\] which enables mixable Beauville groups to define several mixed Beauville groups before turning our attention in the remaining Sections to the proof of Theorem \[main\].
Generalisation and auxiliary results
====================================
We begin with the following definition which will be key to the generating structures we demonstrate.
Let $G$ be a finite group and $x, y, z \in G$. A **hyperbolic generating triple** for $G$ is a triple $(x,y,z) \in G \times G \times G$ such that
1. $\frac{1}{o(x)} + \frac{1}{o(y)} + \frac{1}{o(z)} < 1$,
2. $\langle x, y, z \rangle = G$, and;
3. $xyz=1$.
The **type** of a hyperbolic generating triple $(x,y,z)$ is the triple $(o(x), o(y), o(z))$. If at least two of $o(x), o(y)$ and $o(z)$ are even then we call $(x,y,z)$ an **even triple**. If $o(x), o(y)$ and $o(z)$ are all odd then we call $(x,y,z)$ an $\textbf{odd triple}$. It is clear that since $z=(xy)^{-1}$, being generated by $x,y$ and $z$ is equivalent to being generated by $x$ and $y$. If the context is clear we may refer to a hyperbolic generating triple simply as a ‘triple’ for brevity and write $(x,y,xy)$ or even just $(x,y)$.
For a positive integer $k$ let $Q_{4k}$ be the dicyclic group of order $4k$ with presentation $$Q_{4k} = \langle p,q \mid p^{2k}=q^4=1,p^q=p^{-1},p^k=q^2 \rangle.$$ Let $G = (H \times H):Q_{4k}$ with the action of $Q_{4k}$ defined as follows. For $(g_1,g_2) \in H \times H$ let $p(g_1,g_2)=(g_1,g_2)$ and $q(g_1,g_2)=(g_2,g_1)$. Then $G^0=H \times H \times \langle p \rangle$ is a subgroup of index $2$ inside $G$.
\[genq\] Let $H$ be a perfect finite group. Let $a_1,c_1,a_2,c_2 \in H$ and, as before, define $\nu(a_i,c_i)=o(a_i)o(c_i)o(a_ic_i)$ for $i=1,2$. Assume that
1. the orders of $a_1,c_1$ are even,
2. $\langle a_1, c_1 \rangle = H$,
3. $\langle a_2, c_2 \rangle = H$,
4. $\nu(a_1,c_1)$ is coprime to $\nu(a_2,c_2)$.
Set $k>1$ to be any integer that divides gcd$(o(a_1),o(c_1)), G:=(H \times H):Q_{4k}$, $G^0:=H \times H \times \langle p \rangle$, a:=$(a_1,a_2,p)$ and $c:=(c_1,c_2,p^{-1})$. Then $(G^0;a,c)$ is a mixed Beauville structure on $G$.
We verify that the conditions of Definition \[mixed\] are satisfied. Since $k$ divides $\nu(a_1,c_1)$ it is coprime to $\nu(a_2,c_2)$ so we can clearly generate the elements $(1,a_2,1)$ and $(1,c_2,1)$ giving us the second factor. This also shows that we can generate the elements $a'=(a_1,1,p)$ and $c'=(c_1,1,p^{-1})$. Since $H$ is perfect we can then generate the first factor. Finally, since we can generate $H \times H$ we can clearly generate $\langle p \rangle$, hence we satisfy condition $M1$.
Now let $g \in G \setminus G^0$ and $\gamma \in G^0$. Then $g\gamma$ is of the form $(h_1,h_2,q^ip^j)$ for some $h_1,h_2 \in H$, $i=1,3$ and $1 \leq j \leq 2k$. Then $$(g\gamma)^2 = (h_1h_2,h_2h_1,(q^ip^j)^2)=(h_1h_2,h_2h_1,p^k).$$ For a contradiction, suppose that $(g\gamma)^2 \in \Sigma(a,c)$. Then since $h_1h_2$ has the same order as $h_2h_1$ condition 4 implies that $(g\gamma)^2=(1,1,p^k) \in \Sigma(a,c)$ if and only if $k$ does not divide $o(a)$ or $k$ does not divide $o(c)$. Note that if $(g\gamma)^2$ were a power of $ac$ by construction it would be $1_G$. Since by hypothesis $k$ divides gcd$(o(a_1),o(c_1))$ we satisfy conditions $M2$ and $M3$.
Finally, to show that condition $M4$ is satisfied, suppose $g' \in \Sigma(a,c) \cap \Sigma(a^g,c^g)$ for $g \in G \setminus G^0$. Since conjugation by such an element $g$ interchanges the first two factors of any element, we again have from condition $4$ that $g'$ is of the form $(1,1,p^i)$ for some power of $p$, but from our previous remarks it is clear that $p^i=1_H$ and so $g'=1_G$.
In the proof of Theorem \[genq\] we chose $a:=(a_1,a_2,p)$ and $c:=(c_1,c_2,p^{-1})$ but in principle we could have chosen the third factor of $a$ or $c$ to be $1$ and the third factor in their product $ac$ to be $p$ or $p^{-1}$ as appropriate. If we then required $k$ to divide gcd$(o(a),o(ac))$ or gcd$(o(ac),o(c))$ as necessary this gives rise to further examples of mixed Beauville groups.
We conclude this section with a number of results regarding hyperbolic generating triples for $G \times G$. The number of prime divisors of the order of a group will pose an obvious restriction on finding a mixable Beauville structure and so naturally we would like to know for how large an $n$ we can generate $G^n$ with a triple of a given type. Hall treats this question in much more generality in [@hall], here we simply mention that for a hyperbolic generating triple $(x,y,z)$ of a given type this depends on how many orbits there are of triples of the same type under the action of Aut$(G)$.
Let $G$ be a finite group and $(a_1,b_1,c_1)$, $(a_2,b_2,c_2)$ be hyperbolic generating triples for $G$. We call these two triples **equivalent** if there exists $g \in$ Aut$(G)$ such that $\{a_1^g,b_1^g,c_1^g\} = \{a_2,b_2,c_2\}$.
Since conjugate elements must have the same order the following is immediate.
\[inequi\] Let $G$ be a finite group and $(a_1,b_1,c_1)$ a hyperbolic generating triple of $G$ of type $(l_1,m_1,n_1)$ and $(a_2,b_2,c_2)$ a hyperbolic generating triple of $G$ of type $(l_2,m_2,n_2)$. If $\{l_1,m_1,n_1\} \neq \{l_2,m_2,n_2\}$ then $(a_1,b_1,c_1)$ and $(a_2,b_2,c_2)$ are inequivalent triples.
\[equi\] Let $G$ be a nonabelian finite group and let $(a_1,b_1,c_1)$ and $(a_2,b_2,c_2)$ be equivalent hyperbolic generating triples for $G$ for some $g \in$ Aut$(G)$. Then
1. if $a_1^g=a_2$ then $b_1^g=b_2$ and $c_1^g=c_2$,
2. if $a_1^g=b_2$ then $b_1^g=c_2$ and $c_1^g=a_2$, and;
3. if $a_1^g=c_2$ then $b_1^g=a_2$ and $c_1^g=b_2$.
Note that in this instance we take $a_ib_ic_i=1_G$ for $i=1,2$. Let $a_1^g=a_2$ and suppose $b_1^g=c_2=(a_2b_2)^{-1}$ and $c_1^g=b_2$. Then $$b_2 = c_1^g=(b_1^{-1}a_1^{-1})^g=c_2^{-1}a_2^{-1}=a_2b_2a_2^{-1}$$ which implies that $G$ is generated by elements that commute, a contradiction since $G$ is nonabelian. The proof is analogous for the remaining two statements.
\[cop\] Let $G$ be a finite group, $(x,y,z)$ a hyperbolic generating triple for $G$ and gcd$(o(x),o(y))=1$. Then $((x,y),(y,x^y),(z,z))$ is a hyperbolic generating triple for $G \times G$.
If $(x,y,z)$ is a hyperbolic generating triple of $G$ then so is $(y,x^y,z)$ since $x^y=y^{-1}xy$. Then, since the orders of $x$ and $y$ are coprime we can can produce the elements $(x,1_G), (y,1_G), (1_G,y)$ and $(1_G,x^y)$ which generate $G \times G$.
The proof of the preceding Lemma naturally generalises to any pair, or indeed triple, of elements in a hyperbolic generating triple whose orders are coprime.
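The construction in Lemma \[cop\] is easy to experiment with in GAP. The following sketch is purely illustrative: the choice of $A_5$, the brute-force search and the variable names are our own and are not taken from the construction above.

```gap
# Illustrative check of the construction of Lemma [cop] in GAP.
# The choice of A_5 and the naive search are our own, purely for demonstration.
G := AlternatingGroup(5);;
t := First(Cartesian(Elements(G), Elements(G)),
           p -> Order(p[1]) = 2 and Order(p[2]) = 5 and Order(p[1]*p[2]) = 5
                and Group(p[1], p[2]) = G);;
x := t[1];;  y := t[2];;  z := (x*y)^-1;;        # hyperbolic generating triple with x*y*z = 1
D := DirectProduct(G, G);;
e1 := Embedding(D, 1);;  e2 := Embedding(D, 2);;
X := Image(e1, x) * Image(e2, y);;               # the pair (x, y)
Y := Image(e1, y) * Image(e2, x^y);;             # the pair (y, x^y)
Z := Image(e1, z) * Image(e2, z);;               # the pair (z, z)
X*Y*Z = One(D);  Group(X, Y, Z) = D;             # expect: true, true
```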
The Alternating groups
======================
We make heavy use of the following Theorem due to Jordan.
Let $G$ be a primitive permutation group of finite degree $n$, containing a cycle of prime length fixing at least three points. Then $G \geqslant A_n$.
\[a6\] The alternating group $A_6$ and $A_6 \times A_6$ are mixable.
For our even triples we take the following elements in the natural representation of $A_6$ $$x_1=(1,2)(3,4,5,6), \; y_1=(1,5,6,4)(2,3), \; \; x_1'=(1,2)(3,4,5,6), \; y_1'=(1,5,6).$$ It can easily be checked in GAP [@gap] that $(x_1, y_1, x_1y_1)$ is an even triple for $A_6$ of type $(4,4,4)$ and that $((x_1,x_1'), (y_1,y_1'), (x_1y_1,x_1'y_1'))$ is an even triple for $A_6 \times A_6$ of type $(4,12,12)$.
For our odd triples, let $x_2=(1,2,3,4,5)$, $y_2= x_2^{(1,3,6)}$ and $y_2'=x_2^{(1,2,3,4,6)}$. Then it can be checked that $(x_2,y_2,x_2y_2)$ is an odd triple of type $(5,5,5)$ for $A_6$ and $((x_2,x_2),(y_2,y_2'),(x_2y_2,x_2y_2'))$ is an odd triple for $A_6 \times A_6$ also of type $(5,5,5)$. Therefore we have a mixable Beauville structure of type $(4,4,4;5,5,5)$ for $A_6$ and type $(4,12,12;5,5,5)$ for $A_6 \times A_6$.
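For completeness, a minimal GAP session carrying out the verification referred to above might read as follows; this is our own transcription of the elements listed in the proof, and the expected outputs in the comments are the types asserted there.

```gap
# Verification sketch for the A_6 triples above (transcribed from the proof).
x1 := (1,2)(3,4,5,6);;  y1 := (1,5,6,4)(2,3);;  x1p := (1,2)(3,4,5,6);;  y1p := (1,5,6);;
x2 := (1,2,3,4,5);;  y2 := x2^(1,3,6);;  y2p := x2^(1,2,3,4,6);;
A6 := AlternatingGroup(6);;
Group(x1, y1) = A6;  List([x1, y1, x1*y1], Order);   # expect: true, [ 4, 4, 4 ]
Group(x2, y2) = A6;  List([x2, y2, x2*y2], Order);   # expect: true, [ 5, 5, 5 ]
D := DirectProduct(A6, A6);;
e1 := Embedding(D, 1);;  e2 := Embedding(D, 2);;
Group(Image(e1,x1)*Image(e2,x1p), Image(e1,y1)*Image(e2,y1p)) = D;   # expect: true
Group(Image(e1,x2)*Image(e2,x2), Image(e1,y2)*Image(e2,y2p)) = D;    # expect: true
```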
The alternating group $A_7$ and $A_7 \times A_7$ are mixable.
For our even triples we take the following elements in the natural representation of $A_7$ $$x_1=(1,2)(3,4)(5,6,7), \; y_1=(1,2,3)(4,5)(6,7), \; \; x_1'=(1,6)(2,4,5)(3,7), \; y_1'=(1,6,2)(3,7,4).$$ It can easily be checked in GAP that $(x_1,y_1,x_1y_1)$ is an even triple for $A_7$ of type $(6,6,5)$ and that $((x_1,x_1'),(y_1,y_1'),(x_1y_1,x_1'y_1'))$ is an even triple for $A_7 \times A_7$ of type $(6,6,5)$.
For our odd triples let $x_2=(1,2,3,4,5,6,7)$, $y_2=x_2^{(1,3,2)}$ and $y_2'=x_2^{(1,3,2)}$. Again, it can be checked that $(x_2,y_2,x_2y_2)$ is an odd triple of type $(7,7,7)$ for $A_7$ and that $((x_2,x_2),(y_2,y_2'),(x_2y_2,x_2y_2'))$ is an odd triple also of type $(7,7,7)$ for $A_7 \times A_7$. Therefore both $A_7$ and $A_7 \times A_7$ admit a mixable Beauville structure of type $(6,6,5;7,7,7)$.
\[alt\] The alternating group $A_{2m}$ and $A_{2m} \times A_{2m}$ are mixable for $m \geq 4$.
For $m \geq 4$ let $G=A_{2m}$ under its natural representation and consider the elements $$\begin{aligned}
a_1 &=(1,2)(3,\dots,2m),\\
b_1 &= a_1^{(1,3,4)}=(1,5,6,\dots,2m,4)(2,3),\\
a_1b_1 &=(1,3)(2,5,7,\dots,2m-3,2m-1,4,6,8,\dots,2m). \end{aligned}$$ The subgroup $H_1 = \langle a_1,b_1 \rangle$ is clearly transitive and the elements $$a_1^2 = (3,5,\dots,2m-1)(4,6,\dots,2m), \; \; b_1^2 = (1,6,8,\dots,2m)(5,7,\dots,2m-1,4),$$ fix the point 2 and act transitively on the remaining points, so that $H_1$ is $2$-transitive and hence primitive. Finally, $a_1^2 b_1^{-2}=(1,2m,2m-1,3,4)$ is a prime cycle fixing at least 3 points for all $m$ and so by Jordan’s Theorem $H_1 = G$. This gives us our first hyperbolic generating triple of type $(2m-2,2m-2,2m-2)$ for $G$. For our second, we show that there is a similar triple which is inequivalent to the first under the action of Aut$(G) = S_{2m}$. Consider the elements $$\begin{aligned}
a_1' &=(1,2)(3,\dots,2m),\\
b_1' &= a_1'^{(1,4,3)}=(1,3,5,6,\dots,2m)(2,4),\\
a_1'b_1' &=(1,4,6,\dots,2m,5,7,\dots,2m-1)(2,3) \end{aligned}$$ and note that $a_1'=a_1$. By the same argument as before we have that $\langle a_1',b_1' \rangle = G$. Now suppose that $(a_1,b_1,a_1b_1)$ is equivalent to $(a_1',b_1',a_1'b_1')$ for some $g \in$ Aut$(G)$. If $a_1^g = a_1'$ then by Lemma \[equi\] we have that $b_1^g=b_1'$ and $(a_1b_1)^g=a_1'b_1'$. Then $(1,2)^g=(1,2)$ and $(2,3)^g=(2,4)$, but these are incompatible with $(1,3)^g=(2,3)$ since for the former conditions to hold $3$ must map to $4$, which is incompatible with the latter. Now suppose $a_1^g=b_1'$, implying $b_1^g=a_1'b_1'$ and $(a_1b_1)^g=a_1'$. Then similarly we have $(1,2)^g=(2,4)$ and $(2,3)^g=(2,3)$, forcing $g$ to map $1$ to $4$, which is incompatible with requiring that $(1,3)^g=(1,2)$. Finally, if $a_1^g=a_1'b_1'$, forcing $b_1^g=a_1'$ and $(a_1b_1)^g=b_1'$, we get that $(1,2)^g=(2,3)$ and $(2,3)^g=(1,2)$, and we find this is incompatible with $(1,3)^g=(2,4)$. Hence these two hyperbolic generating triples are inequivalent under the action of the automorphism group of $G$ and so $((a_1,a_1'),(b_1,b_1'),(a_1b_1,a_1'b_1'))$ is a hyperbolic generating triple for $G \times G$ of type $(2m-2,2m-2,2m-2)$.
For our first odd triple consider the elements $$\begin{aligned}
a_2 &= (1,2,\dots,2m-1),\\
b_2 &= a_2^{(1,2m,3)} = (1,4,5,\dots,2m-1,2m,2), \\
a_2b_2 &=(2,3,5,7,\dots,2m-1,4,6,8,\dots,2m-2,2m) \end{aligned}$$ and let $H_2 = \langle a_2,b_2 \rangle$. We clearly have transitivity and 2-transitivity, hence $H_2$ is primitive. Since $a_2b_2^{-1}=(1,2m,2m-1,2,3)$ is a prime cycle fixing at least 3 points for all $m$, again we can apply Jordan’s Theorem and we have that $H_2 = G$. Then $(a_1,b_1,a_2,b_2)$ is a mixable Beauville structure on $G$ of type $$(2m-2,2m-2,2m-2;2m-1,2m-1,2m-1).$$ For our second odd triple consider the elements $$\begin{aligned}
a_2' &= (1,2,\dots,2m-1),\\
b_2' &= a_2'^{(1,3,2m)} = (2,2m,4,\dots,2m-1,3), \\
a_2'b_2' &=(1,2m,4,6,\dots,2m-2,3,5,\dots,2m-1)\end{aligned}$$ and note that $a_2'=a_2$. It follows that $\langle a_2',b_2' \rangle = G$ by a similar argument to before, and so it remains to show that $\langle (a_2,a_2'),(b_2,b_2')\rangle = G \times G$. Let $g \in$ Aut$(G)$ and suppose that $a_2^g=a_2'$. Then by Lemma \[equi\] and inspection of the fixed points of these triples we have that $g$ fixes $2m$ and maps $3 \to 1$ and $1 \to 2$, which is incompatible with $a_2^g=a_2'$. Similarly, if $a_2^g=b_2'$ then $2m \to 1$, $3 \to 2$ and $1 \to 2m$, but from $b_2^g=a_2'b_2'$ $g$ must then map $3 \to 5$, a contradiction. Finally, if $a_2^g=a_2'b_2'$ we get the mappings $2m \to 2$, $3 \to 2m$ and $1 \to 1$. But since $a_2^g=a_2'b_2'$, $g$ must then also map $3 \to 4$, a final contradiction. Then $(a_2,b_2,a_2b_2)$ and $(a_2',b_2',a_2'b_2')$ are inequivalent hyperbolic generating triples both of type $(2m-1,2m-1,2m-1)$ on $G$ and so we get a mixable Beauville structure on $G \times G$ of type $$(2m-2,2m-2,2m-2;2m-1,2m-1,2m-1).$$
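Although the proof above is uniform in $m$, it is reassuring to check one instance by machine. The following GAP sketch does this for $m=4$, that is $G=A_8$; the choice of $m$ is ours, and the expected orders in the comments are those given in the proof.

```gap
# Sanity check of the A_{2m} construction for m = 4, i.e. G = A_8 (illustrative instance).
a1 := (1,2)(3,4,5,6,7,8);;  b1 := a1^(1,3,4);;   # the even triple of the proof
a2 := (1,2,3,4,5,6,7);;     b2 := a2^(1,8,3);;   # the odd triple of the proof
List([a1, b1, a1*b1], Order);   # expect: [ 6, 6, 6 ]
List([a2, b2, a2*b2], Order);   # expect: [ 7, 7, 7 ]
Group(a1, b1) = AlternatingGroup(8);   # expect: true
Group(a2, b2) = AlternatingGroup(8);   # expect: true
```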
The alternating group $A_{2m+1}$ and $A_{2m+1} \times A_{2m+1}$ are mixable for $m \geq 4$.
For $m \geq 4$ let $G=A_{2m+1}$ under its natural representation and consider the elements $$\begin{aligned}
a_1 &= (1,2)(3,4)(5,\dots,2m+1),\\
b_1 &=(1,2,\dots,2m-3)(2m-2,2m-1)(2m,2m+1),\\
a_1b_1 &= (1,3,5,\dots,2m-1,2m+1,6,8,\dots,2m-4).\end{aligned}$$ The subgroup $G_1 = \langle a_1,b_1 \rangle$ is clearly transitive and the elements $$\begin{aligned}
b_1a_1^2b_1^{-1} &= (4,6,8,\dots,2m,5,7,\dots,2m-5,2m-1,2m+1),\\
a_1b_1^2a_1^{-1} &= (2,4,2m+1,6,8,\dots,2m-4,1,3,\dots,2m-5)\end{aligned}$$ both fix the point $2m-3$ and act transitively on the remaining points; hence $G_1$ acts primitively. Finally, the element $a_1b_1^{-1} = (2,2m-3,2m-1,2m+1,4)$ is a prime cycle fixing at least 3 points for all $m \geq 4$ and so by Jordan’s Theorem $G_1 =G$. This gives our first hyperbolic generating triple of type $(2(2m-3),2(2m-3),2m-3)$ for $G$. For our second even triple, we manipulate the first in the following way. Let $$\begin{aligned}
a_1' &=(1,2m-4,\dots,6,2m+1,2m-1,\dots,3),\\
b_1' &=(1,2)(3,4)(5,\dots,2m+1),\\
a_1'b_1' &=(1,2m-3,\dots,2)(2m-2,2m-1)(2m,2m+1).\end{aligned}$$ Since $b_1'=a_1$ and $a_1'b_1'=b_1^{-1}$ it is clear that $\langle a_1',b_1' \rangle=G$. Note also that $a_1'=b_1^{-1}a_1^{-1}$. To see that $(a_1,b_1,a_1b_1)$ and $(a_1',b_1',a_1'b_1')$ are inequivalent, suppose for a contradiction there exists $g \in$ Aut$(G)$ for which these triples are equivalent. Since conjugation preserves cycle type it must be that $(a_1b_1)^g=a_1'$ which, by Lemma \[equi\], implies that $a_1^g=b_1'$ and $b_1^g=a_1'b_1'$. This gives the equality $$a_1b_1^{-1}=b_1'(a_1'b_1')=a_1^gb_1^g=(a_1b_1)^g=a_1'=b_1^{-1}a_1,$$ a contradiction since otherwise $G$ would be abelian. Then, $((a_1,a_1'),(b_1,b_1'),(a_1b_1,a_1'b_1'))$ is an even triple for $G \times G$ of type $(2(2m-3),2(2m-3),2(2m-3))$.
For our first odd triple consider the elements $$\begin{aligned}
a_2 &= (1,2, \dots, 2m+1),\\
b_2 &= a_2^{(1,2,3)} = (1,4,5,\dots,2m,2m+1,2,3),\\
a_2b_2 &= (1,3,5,\dots,2m-1,2m+1,4,6,\dots,2m-2,2m,2).\end{aligned}$$ The subgroup $G_2 = \langle a_2,b_2 \rangle$ is clearly transitive while the elements $b_2^{-1}a_2^2=(1,5,6,\dots,2m+1)(3,4)$ and $a_2b_2^{-1}=(1,2m+1,3)$ fix the point 2 and act transitively on the remaining points. Hence $G_2$ is primitive, with a prime cycle fixing at least 3 points, then by Jordan’s Theorem $G_2=A_{2m+1}$. This gives us an odd triple of type $(2m+1,2m+1,2m+1)$ for $G$ and so since we have gcd$(2(2m-3),2m+1)=1$ it follows that $(a_1,b_1,a_2,b_2)$ is a mixable Beauville structure for $A_{2m+1}$ of type $$(2(2m-3),2(2m-3),2m-3;2m+1,2m+1,2m+1).$$ For our second odd triple consider the cycles $$\begin{aligned}
x_2 &= (1,2, \dots, 2m-1),\\
y_2 &= x_2^{(1,2m,2,2m+1,3)} = (1,4,5,\dots,2m-1,2m,2m+1),\\
x_2y_2 &= (1,2,3,5,\dots,2m-1,4,6,\dots,2m,2m+1)\end{aligned}$$ and let $H_2 = \langle x_2,y_2 \rangle$. We clearly have transitivity, while the elements $[x_2,y_2] = (1,2m,4,5,2)$ and $x_2[x_2,y_2] = (2,3,5,6,\dots,2m-1,2m,4)$ show that the stabiliser in $H_2$ of the point $2m+1$ acts transitively on the remaining points, so that $H_2$ is primitive, and that $H_2$ contains a prime cycle fixing at least 3 points for all $m$. Then by Jordan’s Theorem $H_2 = G$ and this gives us a second odd triple of type $(2m-1,2m-1,2m-1)$. Since it is clear that $2(2m-3)$ is coprime to both $2m-1$ and $2m+1$ we then have a mixable Beauville structure on $G \times G$ of type $$(2(2m-3),2(2m-3),2(2m-3);4m^2-1,4m^2-1,4m^2-1).$$
The groups of Lie type
======================
We make use of theorems due to Zsigmondy, generalising a theorem of Bang, and Gow which we include here for reference. Throughout this section $q=p^e$ will denote a prime power for a natural number $e \geq 1$.
\[zsig\] For any positive integer $a > 1$ and $n > 1$ there is a prime number that divides $a^n-1$ and does not divide $a^k-1$ for any positive integer $k < n$, with the following exceptions:
1. $a = 2$ and $n=6$; and
2. $a+1$ is a power of $2$, and $n = 2$.
We denote a prime with this property by $\Phi_n(a)$.
The case where $a=2$, $n > 1$ and not equal to $6$ was proven by Bang in [@bang]. The general case was proven by Zsigmondy in [@zs]. Hereafter we shall refer to this as Zsigmondy’s Theorem. A more recent account of a proof is given by Lüneburg in [@Lub].
Let $G$ be a group of Lie type defined over a field of characteristic $p>0$, prime. A **semisimple** element is one whose order is coprime to $p$. A semisimple element is **regular** if $p$ does not divide the order of its centraliser in $G$.
\[gow\] Let $G$ be a finite simple group of Lie type of characteristic $p$, and let $g$ be a non-identity semisimple element in $G$. Let $L_1$ and $L_2$ be any conjugacy classes of $G$ consisting of regular semisimple elements. Then $g$ is expressible as a product $xy$, where $x \in L_1$ and $y \in L_2$.
A slight generalisation of this result to quasisimple groups appears in [@fmp Theorem 2.6].
Projective Special Linear groups $PSL_2(q) \cong A_1(q)$
--------------------------------------------------------
The projective special linear groups $PSL_2(q)$ are defined over fields of order $q$ and have order $q(q+1)(q-1)/k$ where $k=$gcd$(2,q+1)$. Their maximal subgroups are listed in [@fj].
\[l27\] Let $G=PSL_2(7)$. Then $G^n$ admits a mixable Beauville structure if and only if $n=1,2$.
The maximal subgroups of $G$ are known [@ATLAS p.2] and these are subgroups isomorphic to $S_4$ or point stabilisers in the natural representation of $G$ on 8 points. Hyperbolic generating triples cannot have type $(3,3,3)$, since $\frac{1}{3} + \frac{1}{3} + \frac{1}{3} \nless 1$, and similarly for types $(2,2,2)$, $(2,2,4)$ or $(2,4,4)$. The number of hyperbolic generating triples of type $(7,7,7)$ can be computed using GAP, but since it is equal to the order of Aut$(G)$ we see from [@hall] that there is no triple of type $(7,7,7)$ for $G^n$ when $n>1$. Triples of type $(4,4,4)$ exist and any such triple generates $G$ since elements of order 4 are not contained in point stabilisers and inside a subgroup isomorphic to $S_4$ the product of three elements of order $4$ cannot be equal to the identity. We can compute the number of such triples from the structure constants and since this is twice the order of Aut$(G)$ we have that there exists a hyperbolic generating triple of type $(4,4,4)$ on $G$ and on $G \times G$. We then see that this is the maximum number of direct copies of $G$ for which there exists a mixable Beauville structure.
For our odd triple we then take a triple of type $(7,7,3)$, which can be shown to exist by computing the structure constants; such a triple generates $G$ since if it were contained in a maximal subgroup then the product of two elements of order $7$ would again have order $7$. This gives a mixable Beauville structure of type $(4,4,4;7,7,3)$ on $G$. Finally, we then have mixable Beauville structures on $G \times G$ of type $(4,4,4;7,7,21)$ by Lemma \[inequi\] or alternatively of type $(4,4,4;7,21,21)$ by Lemma \[cop\].
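The existence claims in the proof above can also be confirmed by a direct, if naive, GAP search; the brute-force approach below is our own and replaces the structure constant computation.

```gap
# Naive GAP confirmation that PSL_2(7) has generating triples of types (4,4,4) and (7,7,3).
G := PSL(2, 7);;  els := Elements(G);;
even := First(Cartesian(els, els),
              p -> Order(p[1]) = 4 and Order(p[2]) = 4 and Order(p[1]*p[2]) = 4
                   and Group(p[1], p[2]) = G);;
odd := First(Cartesian(els, els),
             p -> Order(p[1]) = 7 and Order(p[2]) = 7 and Order(p[1]*p[2]) = 3
                  and Group(p[1], p[2]) = G);;
even <> fail and odd <> fail;   # expect: true
```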
\[l28\] Let $G=PSL_2(8)$. Then $G \times G$ admits a mixable Beauville structure.
It can easily be checked in GAP that for $G$ there exist hyperbolic generating triples of types $(2,7,7)$, $(3,3,9)$ and $(3,9,9)$. The two odd triples are inequivalent by Lemma \[inequi\] and by Lemma \[cop\] we have that there exists a mixable Beauville structure of type $(14,14,7;3,9,9)$ on $G \times G$.
\[l29\] Let $G=PSL_2(9)$. Then both $G$ and $G \times G$ admit a mixable Beauville structure.
This follows directly from the exceptional isomorphism $PSL_2(9) \cong A_6$ and Lemma \[a6\].
We make use of the following Lemmas:
\[phil\] Let $G=PSL_2(q)$ for $q=7, 8$ or $q \geq 11$. Let $k=$ gcd$(2,q+1)$ and $\phi(n)$ be Euler’s totient function. Then, under the action of Aut$(G) = P \Gamma L_2(q)$ the number of conjugacy classes of elements of order $\frac{q+1}{k}$ in $G$ is $\phi(\frac{q+1}{k})/2e$.
Elements of order $\frac{q+1}{k}$ are conjugate to their inverse so there are $\phi(\frac{q+1}{k})/2$ conjugacy classes of elements of order $\frac{q+1}{k}$ in $PSL_2(q)$. The only outer automorphisms of $G$ come from the diagonal automorphisms and the field automorphisms, but since diagonal automorphisms do not fuse conjugacy classes of semisimple elements we examine the field automorphisms. These come from the action of the Frobenius automorphism on the elements of the field ${\mathbb{F}}_q$ sending each entry of the matrix to its $p$-th power. The only fixed points of this action are the elements of the prime subfield ${\mathbb{F}}_p$ and so, since the entries on the diagonal of the elements of order $\frac{q+1}{k}$ are not both contained in the prime subfield we have that the orbit under this action has length $e$, the order of the Frobenius automorphism. We then get $e$ conjugacy classes of elements of order $\frac{q+1}{k}$ inside of $G$ fusing under this action. Hence under the action of the full automorphism group there are $\phi(\frac{q+1}{k})/2e$ conjugacy classes of elements of order $\frac{q+1}{k}$.
\[2e+1\] For a prime power, $q=p^e \geq 13$, $q \neq 27$, let $q^+=\frac{q+1}{k}$ where $k=$ gcd$(2,q+1)$. Then $\frac{\phi(q^+)}{2e} > 1$ where $\phi(n)$ is Euler’s totient function.
Let $S=\{p^i,q^+-p^i \mid 0 \leq i \leq e-1 \}$ be a set of $2e$ positive integers less than and coprime to $q^+$ and whose elements are distinct when $q \geq 13$. To this set we add one further integer $c$, which will depend on $q$. When $p=2$ we let $c=7$ since for all $e>3$, $7 \notin S$ and gcd$(7,q^+)=1$. When $p=3$ we let $c=11$; then for $e>3$, $11 \notin S$ and gcd$(11,q^+)=1$. Now consider the cases $q \equiv \pm 1$ mod $4$ for $p \neq 2,3$. When $q \equiv 1$ mod $4$, let $c=q^+-2$. Since $p \neq 2$ we have $c \notin S$ and since $q^+$ is odd when $q \equiv 1$ mod $4$ we have gcd$(c,q^+)=1$. Finally, when $q \equiv 3$ mod $4$ then $e$ must be odd. When $e > 2$ then $c=\frac{p-1}{2} \notin S$ and is coprime to $q^+$. When $e=1$, $q^+=2^im$ where $i>0$ and $m$ is odd. Then $\phi(q^+)=\phi(2^i)\phi(m)=2^{i-1}\phi(m)>2$ since $p>11$. In the other cases the set $S\cup\{c\}$ consists of $2e+1$ distinct positive integers less than and coprime to $q^+$, so that $\phi(q^+)>2e$. This completes the proof.
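A quick numerical spot-check of the Lemma is easy to perform in GAP; the test values below are our own selection of admissible prime powers.

```gap
# Spot-check that phi(q^+) > 2e for a few admissible prime powers q = p^e
# (q >= 13, q <> 27); the chosen values are illustrative only.
check := function(p, e)
  local q, k, qplus;
  q := p^e;  k := Gcd(2, q+1);  qplus := (q+1)/k;
  return Phi(qplus) > 2*e;
end;;
ForAll([[13,1], [17,1], [5,2], [7,2], [2,4], [3,4]], v -> check(v[1], v[2]));   # expect: true
```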
\[psl22\] Let $G$ be the projective special linear group $PSL_2(q)$ where $q \geq 7$. Then,
1. there is a mixable Beauville structure for $G \times G$, and;
2. when $p \neq 2$ there is also a mixable Beauville structure for $G$.
In light of Lemmas \[l27\]–\[l29\] we can assume that $q \geq 11$. Define $q^+= \frac{q+1}{k}$, where $k=$ gcd$(2,q+1)$, and similarly for $q^-$. Jones proves in [@JonesChar2] that hyperbolic generating triples of type $(p,q^-,q^-)$ exist for $G$ when $q \geq 11$ and since gcd$(p,q^-)=1$ we immediately have, by Lemma \[cop\], a hyperbolic generating triple for $G \times G$. We proceed to show that there exists a hyperbolic generating triple $(x,y,z)$ for $G$ of type $(q^+,q^+,q^+)$ and note that both $p$ and $q^-$ are coprime to $q^+$. The only maximal subgroups containing elements of order $q^+$ are the dihedral groups of order $2q^+$ which we denote by $D_{q^+}$. By Gow’s Theorem, for a conjugacy class, $C$, of elements of order $q^+$ there exist $x,y,z \in C$ such that $xyz=1$. Since inside $D_{q^+}$ any conjugacy class of elements of order $q^+$ contains only two elements, the elements $x$, $y$ and $z$ cannot all be contained in the same maximal subgroup of $G$. Hence $(x,y,z)$ is a hyperbolic generating triple for $G$ of type $(q^+,q^+,q^+)$. When the number of conjugacy classes of elements of order $q^+$ in $G$ under the action of Aut$(G)$ is strictly greater than $1$ we can apply Gow’s Theorem a second time to give a hyperbolic generating triple of type $(q^+,q^+,q^+)$ for $G \times G$. This follows from Lemmas \[phil\] and \[2e+1\] with the exceptions of $q=11$ or $27$. For $G=PSL_2(11)$ we have that a triple of type $(p,q^-,q^-)$ exists by [@JonesChar2] or alternatively the words $ab$ and $[a,b]$ in the standard generators for $G$ [@brauer] give an odd triple of type $(11,5,5)$. In both cases we have, by Lemma \[cop\], an odd triple of type $(55,55,5)$ for $G \times G$. For our even triple, the number of triples of type $(6,6,6)$ can be computed from the structure constants and is seen to be twice the order of Aut$(G)$, and so we have an even triple for $G$ and $G \times G$. For $G=PSL_2(27)$ we take the words in the standard generators [@brauer] $(ab)^2(abb)^2$, $a^{b^2}$ which give an even triple of type $(2,14,7)$ and the words $b^2,b^a$ which give an odd triple of type $(3,3,13)$. Again, by Lemma \[cop\], these give a mixable Beauville structure on $G \times G$. Finally, we remark that when $q \equiv \pm 1$ mod $4$ we have that $q^-$ and $q^+$ have opposite parity and this determines the parity of our triples. When $q \equiv 1$ mod $4$, $(p,q^-,q^-)$ becomes our even triple, $(q^+,q^+,q^+)$ our odd triple and vice versa when $q \equiv 3$ mod $4$.
Projective Special Unitary groups $PSU_3(q) \cong$ $^2A_2(q)$
-------------------------------------------------------------
The projective special unitary groups $PSU_3(q)$ are defined over fields of order $q^2$ and have order $q^3(q^3+1)(q^2-1)/d$ where $d=(3,q+1)$. Their maximal subgroups can be found in [@psu3] and we refer to the character table and notation in [@ssf].
\[psut\] \[t\] Let $G=PSU_3(q)$ for $q = 4$ or $q \geq 7$. Let $d=gcd(3,q+1)$ and $t' = \frac{q^2-q+1}{d}$. Then there exists a hyperbolic generating triple of type $(t', t', t')$ for $G$.
Let $G, d$ and $t'$ be as in the hypothesis. Let $C$ be a conjugacy class of elements of order $t'$ in $G$ and for $g \in C$, let $T = \langle g \rangle$. When $q \geq 7$ the unique maximal subgroup of $G$ containing $g$ is $N_G(T)$ of order $3t'$. Since gcd$(6,t')=1$ and since $T$ is by definition normal in $N_G(T)$, for any prime $\ell \vert t'$ the Sylow $\ell$-subgroup of $N_G(T)$ is contained in $T$ and is therefore unique. Hence for all $x \in N_G(T) \setminus T$ the order of $x$ is 3. Then for $g \in C$ since $g$ is conjugate to $g^{-q}$ and $g^{q^2}$ [@ssf] we have $C \cap N_G(T) = \{g, g^{-q}, g^{q^2} \}$. This partitions $C$ into $\vert C \vert /3$ disjoint triples. Using the structure constants obtained from the character table of $G$ we show that it is possible to find a hyperbolic generating triple of $G$ entirely contained within $C$. Our method is to count the total number of triples $(x,y,z) \in C \times C \times C$ such that $xyz=1$, which we denote $n(C,C,C)$, and show that there exists at least one such triple where $x,y,z$ come from distinct maximal subgroups. Let $S$ be a triple of the form $\{g,g^{-q},g^{q^2}\} \subset C$; then for $s_1, s_2, s_3 \in S$, $s_1s_2s_3=1$ if and only if all three elements are distinct. Therefore the contribution to $n(C,C,C)$ from triples contained within a single maximal subgroup of $G$ is $2 \lvert C \rvert$. Using the formula for structure constants as found in [@frob] we have that $$n(C,C,C) = \frac{\lvert C \rvert^3}{\lvert G \rvert} \left( 1- \frac{1}{q(q-1)} -\frac{1}{q^3} -\sum_{u=1}^{t'-1} \frac{(\zeta_{t'}^u + \zeta_{t'}^{-uq} + \zeta_{t'}^{uq^2})^3}{3(q+1)^2(q-1)} \right)$$ where $\zeta_{t'}$ is a primitive $t'$-th root of unity. From the triangle inequality we have $\lvert (\zeta_{t'}^u + \zeta_{t'}^{-uq} + \zeta_{t'}^{uq^2})\rvert^3 \leq 27$ so we can bound $n(C,C,C)$ from below and for $q \geq 8$ the following inequality holds $$n(C,C,C) \geq \frac{\lvert C \rvert^3}{\lvert G \rvert} \left( 1- \frac{1}{q(q-1)} -\frac{1}{q^3} - \frac{9(t'-1)}{(q+1)^2(q-1)} \right) > 2\lvert C \rvert.$$ For $q = 4$ or $7$ direct computation of the structure constants shows that we can indeed find a hyperbolic generating triple of the desired type.
\[psu3\] Let $G=PSU_3(q)$ for $q = 4$ or $ q\geq 7$. Let $c=gcd(3,q^2-1)$, $d=gcd(3,q+1)$ and $t'=(q^2-q+1)/d$. Then
1. for $p=2$ there exists a mixable Beauville structure on $G$ of type $$\left(2,4,\frac{q^2-1}{c};t', t', t'\right),$$
2. for $p \neq 2$ there exists a mixable Beauville structure on $G$ of type $$\left(p,q+1,\frac{q^2-1}{d};t',t',t' \right).$$
The existence of hyperbolic generating triples of type $(t',t',t')$ in both even and odd characteristic is given in Lemma \[psut\] and so we turn to the even triples. When $p=2$ we have the existence of our even triples from [@fmp Lemma 4.20 and Theorem 4.22] and so we now assume that $G=PSU_3(q)$ where $p$ is odd, $q \geq 7$, $r=q+1$ and $s=q-1$.
From the list of maximal subgroups of $G$, elements of order $rs/d$ exist and can belong to subgroups corresponding to stabilisers of isotropic points, stabilisers of non-isotropic points, and possibly one of the maximal subgroups of fixed order which occur in $PSU_3(q)$ for certain values of $q$. Stabilisers of isotropic points have order $q^3r/d$ whereas stabilisers of non-isotropic points have order $qr^2s/d$. There exist $1+d$ conjugacy classes of elements of order $p$ in $G$ which are as follows. The unique conjugacy class, $C_2$, of elements whose centralisers have order $q^3r/d$; and $d$ conjugacy classes, $C_3^{(l)}$ for $0 \leq l \leq d-1$, of elements whose centralisers have order $q^2$. Since an element of order $p$ which stabilises a non-isotropic point belongs to a subgroup of $G$ isomorphic to $SL_2(q)$, the order of its centraliser in $G$ must be a multiple of $2p$, hence such an element must belong to $C_2$. In particular, elements of $C_3^{(0)}$ do not belong to stabilisers of non-isotropic points. There exists a conjugacy class, $C_6$, of elements of order $r$ whose centralisers have order $r^2/d$. An element of order $r$ contained in the stabiliser of an isotropic point must be contained in a cyclic subgroup of order $rs/d$, hence $rs/d$ must divide the order of its centraliser and so elements of $C_6$ are not contained in the stabilisers of isotropic points. Let $C_7$ be a conjugacy class of elements of order $rs/d$; then any triple of elements $(x,y,z) \in C_3^{(0)} \times C_6 \times C_7$, such that $xyz=1$, will not be entirely contained within the stabiliser of an isotropic or non-isotropic point. Let $n(C_3^{(0)},C_6,C_7)$ be the number of such triples, then using the structure constant formula from [@frob] and the character table for $G$ we have $$n(C_3^{(0)},C_6,C_7) = \frac{\vert C_3^{(0)} \vert \vert C_6 \vert \vert C_7\vert}{\vert G \vert}\left( 1 + \sum_{u=1}^{\frac{r}{d}-1} \frac{\epsilon^{3u}(\epsilon^{3u} + \epsilon^{6u} + \epsilon^{(r-3)u})}{t}\right)$$ where $\epsilon$ is a primitive $r$-th root of unity. Using the triangle inequality we can bound the absolute value of the summation by $\frac{3q}{q^2-q+1}$ which, for $q \geq 7$, is strictly less than 1. In order to show that such an $(x,y,z)$ is not contained in any of the possible maximal subgroups of order 36, 72, 168, 216, 360, 720 or 2520, notice that the subgroup generated by $(x,y,z)$ has order divisible by $n=p(q^2-1)/d$, hence this can only occur when $p=$ 3, 5 or 7. The only cases where $n \leq 2520$ and divides one of the possible subgroup orders are the cases $q=7$ or $9$, but none of these subgroups contain elements of order 48 or 80 so we see that this is indeed an even triple for $G$. Finally, we must show that gcd$(\frac{4rs}{c},t')=1$ when $p$ is even and gcd$(\frac{prs}{d},t')=1$ when $p$ is odd. For all $p$ it is clear that gcd$(p,t')=1$ and so it suffices to show that gcd$(rs,t')=1$. We have that $t'd+s=q^2$ so $t'$ is coprime to $s$ and since $r^2-t'd=3q$ and $t'$ is coprime to 3 we have that $t'$ is coprime to $r$.
In order to extend this to a mixable Beauville structure on $G \times G$ where $G=PSU_3(q)$ we will need the following Lemma:
\[phiu\] Let $G=PSU_3(q)$ for $q \geq 3$ and $d=(3,q+1)$. Then the number of conjugacy classes of elements of order $t'=(q^2-q+1)/d$ in $G$ under the action of Aut$(G)$ is $\phi(t')/6e$ where $\phi(n)$ is Euler’s totient function.
Let $x \in G$ have order $t'$; then $x$ is conjugate to $x^{-q}$ and $x^{q^2}$ and so the number of conjugacy classes of elements of order $t'$ in $G$ is $\phi(t')/3$. Then since the field automorphism has order $2e$ and the diagonal entries of an element of order $t'$ are not all contained in the prime subfield, its orbit has length $2e$. This gives the desired result.
\[u3u3\] Let $G$ be the projective special unitary group $PSU_3(q)$ for $q=7$ or $q \geq 9$. Let $c=$ gcd$(3,q^2-1)$, $d=$ gcd$(3,q+1)$ and $t'=(q^2-q+1)/d$. Then
1. for $p=2$ there exists a mixable Beauville structure on $G \times G$ of type $$\left(2\frac{q^2-1}{c},2\frac{q^2-1}{c},4;t',t',t' \right),$$ and;
2. for $p \neq 2$ there exists a mixable Beauville structure on $G \times G$ of type $$\left(p(q+1),p(q+1),p\frac{q^2-1}{d};t',t',t' \right).$$
Let the conditions of the hypothesis be satisfied with $p=2$. Then as in the proof of Lemma \[psu3\] there exists a hyperbolic generating triple of type $(2,4,\frac{q^2-1}{c})$ on $G$ and by Lemma \[cop\] this yields an even triple of type $(2\frac{q^2-1}{c},2\frac{q^2-1}{c},4)$ on $G \times G$. Similarly, for $p \neq 2$ by Lemma \[psu3\] there exists a hyperbolic generating triple of type $(p,q+1,\frac{q^2-1}{d})$ on $G$, which by Lemma \[cop\] yields an even triple of type $(p(q+1),p(q+1),\frac{q^2-1}{d})$ on $ G\times G$.
By Lemma \[psut\] there exist hyperbolic generating triples of type $(t',t',t')$ for all $p$ and by Lemma \[phiu\] we need only show that $\phi(t') > 6e$. For $q=7,9$ it can be verified directly that $\phi(t')>6e$ so we can assume $q \geq 11$. In the case $d=1$ we have $2^3 < p^e-1$ so $2^f < p^{2f}-p^f+1$ for all $0 \leq f \leq 6e$ and we have our inequality. In the case $d=3$, since $2^3\cdot 3 < p(p-1)$ we have the terms $2^f$ for $1 \leq f \leq 3e$. Similarly, since $3^2 < p-1$ we have the terms $3^{f-1}$ for $1 \leq f \leq 4e$, giving us $7e$ terms in total, as was to be shown.
\[U33458\] The projective special unitary group $PSU_3(q)$ and $PSU_3(q) \times PSU_3(q)$ admit a mixable Beauville structure for $q \geq 3$.
In light of the preceding Lemmas in this section it remains only to check the cases $PSU_3(3)$, $PSU_3(5)$ and $PSU_3(q) \times PSU_3(q)$ for $q=3,4,5$ and $8$. We present words in the standard generators [@sg; @brauer] that can be easily checked to give suitable triples for $G$ which, by Lemma \[cop\], give mixable Beauville structures for these cases. For $G=PSU_3(3)$ let $$a_1=[a,b^2], \; b_1=[a,b^2]^b, \; a_2=(babab^2)^3, \; b_2=[a,b^2]^b \in G.$$ It can be checked that $G = \langle a_1,b_1 \rangle = \langle a_2,b_2 \rangle$ where both hyperbolic generating triples have type $(4,4,8)$ but in the former triple both elements of order $4$ come from the conjugacy class $4C$, whereas in the latter, $a_2 \in 4AB, b_2 \in 4C$. Since these two triples are then inequivalent under the action of Aut$(G)$ we have that $(a_1,a_2), (b_1,b_2) \in G \times G$ yields a hyperbolic generating triple of type $(4,4,8)$. For the remaining cases we present in Table \[u3table\] words in the standard generators for $G$ which, by Lemma \[cop\], give a mixable Beauville structure on $G \times G$ and $G$ where necessary.
$q$ $x_1$ $y_1$ $x_2$ $y_2$ Type
----- ----------- ----------- --------- ----------------- -------------------
$3$ $a_1,a_2$ $b_1,b_2$ $ab$ $ba$ $(4,4,8;7,7,3)$
$4$ $a$ $(ab)^2$ $b$ $[b,a]$ $(2,13,13;3,5,3)$
$5$ $a$ $ab^2$ $ab$ $b^3ab^3$ $(3,8,8;7,7,5)$
$8$ $a$ $(ab)^2$ $[a,b]$ $[a,b]^{babab}$ $(2,19,19;9,9,7)$
: Words in the standard generators [@brauer] $a$ and $b$ or as otherwise specified in Lemma \[U33458\] for $G = PSU_3(q)$.[]{data-label="u3table"}
The Suzuki groups $^2B_2(2^{2n+1}) \cong Sz(2^{2n+1})$
------------------------------------------------------
The Suzuki groups $^2B_2(q)$ are defined over fields of order $q=2^{2n+1}$ for $n \geq 0$ and have order $q^2(q-1)(q^2+1)$. They are simple for $q>2$ and their maximal subgroups can be found in [@raw].
\[sz\] Let $G$ be the Suzuki group $^2B_2(q)$ for $q>2$. Then
1. $G$ admits a mixable Beauville structure of type $(2,4,5;q-1,n,n)$, and;
2. $G \times G$ admits a mixable Beauville structure of type $(4,10,10;n(q-1),n(q-1),n)$
where $n = q \pm \sqrt{2q} +1$, whichever is coprime to 5.
In the proof of [@fj Theorem 6.2] Fuertes and Jones prove that there exist hyperbolic generating triples for $G$ of types $(2,4,5)$ and $(q-1,n,n)$. It is clear that gcd$(10,n)=$ gcd$(10,q-1)=1$. Then, by Lemma \[cop\], we need only show that gcd$(q-1,n)=1$. If $q-1$ and $n$ share a common factor, then so do $q^2-1$ and $q^2+1$, and hence so does their difference. Hence gcd$(q-1,n)$ divides 2, but since $q-1$ is odd we have gcd$(q-1,n)=1$ as was to be shown.
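The coprimality facts used above are easily spot-checked in GAP; the values $q=8$ and $q=32$ below are our own illustrative choices.

```gap
# Spot-check of the coprimality claims in the proof for q = 8 and q = 32.
ForAll([8, 32], function(q)
  local n;
  n := First([q + RootInt(2*q) + 1, q - RootInt(2*q) + 1], m -> Gcd(m, 5) = 1);
  return Gcd(q - 1, n) = 1 and Gcd(10, n) = 1 and Gcd(10, q - 1) = 1;
end);   # expect: true
```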
The small Ree groups $^2G_2(3^{2n+1}) \cong R(3^{2n+1})$
--------------------------------------------------------
The small Ree groups are defined over fields of order $q=3^{2n+1}$ for $n \geq 0$ and have order $q^3(q^3+1)(q-1)$. They are simple for $q>3$ and their maximal subgroups can be found in [@2g2] or [@raw].
\[ree\] The small Ree groups $^2G_2(q)$ for $q>3$ admit a mixable Beauville structure of type $$\left(\frac{q+1}{2},\frac{q+1}{2},q+\sqrt{3q}+1;\frac{q-1}{2},\frac{q-1}{2},q-\sqrt{3q}+1\right).$$
Let $G= {}^2G_2(q)$ for $q>3$ and for convenience we let $n^+ =q+\sqrt{3q}+1$ and $n^- =q-\sqrt{3q}+1$. Ward’s analysis of the small Ree groups [@ward] shows that elements of orders $\frac{q+1}{2}$, $\frac{q-1}{2}$, $n^+$ and $n^-$ exist and that the orders of their centralisers are $q+1$, $q-1$, $n^+$ and $n^-$, respectively; hence these are all regular semisimple elements of $G$. Then, by Gow’s Theorem, we can find elements $x_1, y_1 \in G$, both of order $\frac{q+1}{2}$, whose product has order $n^+$, and elements $x_2,y_2 \in G$, both of order $\frac{q-1}{2}$, whose product has order $n^-$.
The only maximal subgroups of $G$ containing elements of order $n^+$ are normalisers of the cyclic subgroup which they generate of order $6n^+$. Similarly for elements of order $n^-$, they are contained in maximal subgroups of order $6n^-$. Since $n^+ - (q+1) = \sqrt{3q}$ we have gcd$(\frac{q+1}{2},n^+)=1$ since neither is divisible by 3. Then, for $q>3$ we have $\frac{q+1}{2} > 6$ so $(x_1,y_1,x_1y_1)$ is indeed an even triple for $G$ of type $(\frac{q+1}{2},\frac{q+1}{2},q+\sqrt{3q}+1)$. Similarly, for $q>3$ we have $\frac{q-1}{2}>6$ and $n^+n^- + (q-1) = q^2$, hence gcd$(\frac{q-1}{2},n^-)=1$ since again neither is divisible by 3. Note this also implies that gcd$(n^+,\frac{q-1}{2})=1$. This gives us an odd triple for $G$ of type $(\frac{q-1}{2},\frac{q-1}{2},q-\sqrt{3q}+1)$.
It is clear that gcd$(\frac{q+1}{2},\frac{q-1}{2})=1$ since their difference is $q$ and neither is divisible by 3. Similarly, gcd$(n^+,n^-)=1$ since their difference is $2\sqrt{3q}$ and both are clearly coprime to 6. Since we have already shown that gcd$(n^+,\frac{q-1}{2})=1$ it remains to show that gcd$(n^-,\frac{q+1}{2})=1$. We have $(q+1) - n^- = \sqrt{3q}$ and since neither is divisible by 3 we are done.
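Again the gcd computations above can be spot-checked numerically; the instance $q=27$ in the GAP lines below is our own illustrative choice.

```gap
# Spot-check of the gcd computations above for q = 27, where n^+ = 37 and n^- = 19.
q := 27;;  np := q + RootInt(3*q) + 1;;  nm := q - RootInt(3*q) + 1;;
pairs := [ [(q+1)/2, np], [(q-1)/2, nm], [(q+1)/2, (q-1)/2],
           [np, nm], [np, (q-1)/2], [nm, (q+1)/2] ];;
ForAll(pairs, p -> Gcd(p[1], p[2]) = 1);   # expect: true
```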
The groups $^2G_2(q) \times {}^2G_2(q)$ for $q>3$ admit a mixable Beauville structure of type $$\left(\frac{q+1}{2},\frac{q+1}{2}n^+,\frac{q+1}{2}n^+;\frac{q-1}{2},\frac{q-1}{2}n^-,\frac{q-1}{2}n^-\right)$$ where $n^+ =q+\sqrt{3q}+1$ and $n^- =q-\sqrt{3q}+1$.
By Lemmas \[cop\] and \[ree\] we need only show that gcd$(\frac{q+1}{2},n^+)=1$ and gcd$(\frac{q-1}{2},n^-)=1$, both of which were demonstrated in the proof of Lemma \[ree\].
The large Ree groups $^2F_4(2^{2n+1})$
--------------------------------------
The large Ree groups $^2F_4(q)$ are defined over fields of order $q = 2^{2n+1}$ for $n \geq 0$ and have order $q^{12}(q^6+1)(q^4-1)(q^3+1)(q-1)$. They are simple except for the case $q=2$ which has simple derived subgroup $^2F_4(2)'$, known as the Tits group, which we consider along with the sporadic groups in the next section. The maximal subgroups of the large Ree groups can be found in [@2f4] or [@raw].
\[Ree\] The large Ree groups $^2F_4(q)$ for $q >2$ admit a mixable Beauville structure of type $$\left(10,10,n^+;\frac{q^2-1}{3}, n^-,n^-\right)$$ where $n^+ = q^2+q+1+\sqrt{2q}(q+1)$ and $n^- =q^2+q+1-\sqrt{2q}(q+1)$.
Let $G = {}^2F_4(q)$ for $q > 2$. Elements of order $10$ exist since $G$ contains maximal subgroups of the form $^2B_2(q) \wr 2$, as do elements of order $\frac{q^2-1}{3}$ since $G$ contains maximal subgroups of the form $SU_3(q):2$ and $PGU_3(q):2$ and since gcd$(3,q+1) = 3$. The only maximal subgroup containing an element of order $n^+$ is the normaliser of the cyclic subgroup which it generates, of order $12n^+$. Similarly, elements of order $n^-$ are only contained in the normalisers of the cyclic subgroup they generate, of order $12n^-$.
Using the computer program [CHEVIE]{} it is possible to determine the structure constant for a pair of elements of order 10 whose product has order $n^+$ and we see this is nonzero. Since $n^+n^- = q^4-q^2+1$ and $q \equiv \pm 2$ mod $5$ we have that gcd$(10,n^-)=$ gcd$(10,n^+)=1$. Then, since no maximal subgroup contains elements of orders $10$ and $n^+$ this is indeed an even triple for $G$. Elements of order $\frac{q^2-1}{3}$ are semisimple and the order of the centraliser of an element of order $n^-$ is $n^-$ [@sh] so these are regular semisimple elements. Then by Gow’s Theorem there exists a pair of elements of order $n^-$ whose product has order $\frac{q^2-1}{3}$. To show that such a triple will generate $G$ we show that no maximal subgroup contains elements of orders $n^-$ and $\frac{q^2-1}{3}$. Since $(n^+n^-) + (q^2 -1) = q^4$ any common factor of $n^-$ and $\frac{q^2-1}{3}$ must be a power of 2, but since $n^-$ and $q^2-1$ are both odd we have gcd$(\frac{q^2-1}{3},n^-)=1$. Note that this also implies gcd$(n^+,\frac{q^2-1}{3})=1$. Then, since $\frac{q^2-1}{3} > 12$ for $q>2$ we have that an odd triple of type $(\frac{q^2-1}{3}, n^-,n^-)$ exists for $G$.
We have already shown that gcd$(10,n^-)=1$, gcd$(n^+,q^2-1)=1$ and it is clear that gcd$(10,\frac{q^2-1}{3})=1$. Finally, let $c =$ gcd$(n^+,n^-)$ and note that $c$ is odd. Since $n^+ - n^- = 2 \sqrt{2q}(q+1)$, $c$ must divide $q+1$. Also, since $n^+ + n^- = 2(q^2 + q +1)$, $c$ must also divide $q^2 + q + 1$. Therefore $c$ must divide their difference $q^2$; since $c$ also divides $q+1$, which is coprime to $q$, we conclude that $c=1$ and so we have our desired mixable Beauville structure.
The groups $^2F_4(q) \times {}^2F_4(q)$ for $q>2$ admit a mixable Beauville structure of type $$\left(10,10n^+,10n^+;\frac{q^2-1}{3}n^-,\frac{q^2-1}{3}n^-,n^-\right)$$ where $n^+ = q^2+q+1+\sqrt{2q}(q+1)$ and $n^- =q^2+q+1-\sqrt{2q}(q+1)$.
By Lemmas \[cop\] and \[Ree\] we need only show that gcd$(10,n^+)=1$ and gcd$(\frac{q^2-1}{3},n^-)=1$, both of which can be found in the proof of Lemma \[Ree\].
The Steinberg triality groups of type $^3D_4(q)$
------------------------------------------------
The Steinberg triality groups $^3D_4(q)$ are defined over fields of order $q$ and have order $q^{12}(q^8+q^4+1)(q^6-1)(q^2-1)$. They are simple for all prime powers, $q$, and their maximal subgroups can be found in [@3d4] or [@raw].
Let $G$ be the Steinberg triality group $^3D_4(2)$. Then both $G$ and $G \times G$ admit a mixable Beauville structure.
It can be verified using GAP that $(a,(ab)^3b^2;ab,b^{ab^2})$, where $a$ and $b$ are the standard generators as found in [@brauer], is a mixable Beauville structure of type $(2,7,28;13,9,13)$ and by Lemma \[cop\] this yields a mixable Beauville structure of type $(14,14,28;117,117,13)$ on $G \times G$.
\[tri\] Let $G$ be the Steinberg triality group of type $^3D_4(q)$ for $q \geq 3$ and $d=$ gcd$(3,q+1)$. Then
1. for $p=2$ there exists a mixable Beauville structure on $G$ of type $$(6,6,\Phi_{12}(q);\Phi_3(q),\Phi_3(q),\Phi_6(q)),$$
2. for $p \neq 2$ there exists a mixable Beauville structure on $G$ of type $$\left(\frac{q^2-1}{d},\frac{q^2-1}{d},\Phi_{12}(q);\Phi_3(q),\Phi_3(q),\Phi_6(q)\right).$$
Let $G$ be as in the hypothesis. By [@fmp Lemma 5.24] for $q > 2$ there exists a hyperbolic generating triple of type $(\Phi_3(q),\Phi_3(q),\Phi_6(q))$.
For $p = 2$ one can verify using [CHEVIE]{} to compute the structure constants that there exist pairs of elements of order $6$ whose product has order $\Phi_{12}(q)$ and it is clear from the list of maximal subgroups that this is indeed an even triple for $G$.
For $p \neq 2$ elements of order $\frac{q^2-1}{d}$ exist since $G$ contains subgroups isomorphic to $SU(3,q)$. Using [CHEVIE]{} and the list of maximal subgroups it can be shown that an odd triple of type $(\frac{q^2-1}{d},\frac{q^2-1}{d},\Phi_{12}(q))$ exists for $G$. By Zsigmondy’s theorem, Theorem \[zsig\], it is also clear that we have coprimeness for both Beauville structures.
Let $G$ be the Steinberg triality group of type $^3D_4(q)$ for $q \geq 3$ and $d=$ gcd$(3,q+1)$. Then
1. for $p=2$ there exists a mixable Beauville structure on $G \times G$ of type $$(6,6\Phi_{12}(q),6\Phi_{12}(q);\Phi_3(q),\Phi_3(q)\Phi_6(q),\Phi_3(q)\Phi_6(q));$$
2. for $p \neq 2$ there exists a mixable Beauville structure on $G \times G$ of type $$\left(\frac{q^2-1}{d},\Phi_{12}(q)\frac{q^2-1}{d},\Phi_{12}(q)\frac{q^2-1}{d};\Phi_3(q),\Phi_3(q)\Phi_6(q),\Phi_3(q)\Phi_6(q)\right).$$
By Lemmas \[cop\] and \[tri\] we need only verify that gcd$(6,\Phi_{12}(q))=1$ for $p=2$ as the rest follows by construction. This is clear since $\Phi_{12}(q)$ is both odd and coprime to $q^2-1$ which is divisible by 3.
The Sporadic groups
===================
$G$ $x_1$ $y_1$ $x_2$ $y_2$
------------- ----------------------- ------------------------------- --------------------- ----------------------------- --
$M_{11}$ $ab(ab^2)^2$ $(ab(ab^2)^2)^{b}$ $(ab)^5$ $[a,b]^2$
$M_{12}$ $(ab)^4ba(bab)^2b$ $x_1^{b^2aba}$ $ab$ $ba$
$J_1$ $(ab)^2(ba)^3b^2ab^2$ $b^{aba}$ $ab$ $[a,b]$
$M_{22}$ $(ab)^3b^2ab^2$ $ba(bab^2)^2a$ $ab$ $(ab)^4b^2ab^2$
$J_2$ $ab(abab^2)^2ab^2$ $x_1^{ab^2}, x_1^{(ba)^2b^2}$ $ab$ $ba$
$M_{23}$ $abab^2$ $babab$ $ab$ $(abab)^{ba}$
$^2F_4(2)'$ $(ab)^3bab$ $(x_1^7)^{baba}$ $ab$ $ba$
$HS$ $abab^3$ $b^3aba$ $(ab)^3b$ $((ab)^3b)^b$
$J_3$ $(ab)^2(ba)^3b^2$ $x_1^b$ $ab$ $(ab)^b$
$M_{24}$ $(ab)^4b$ $(x_1^3)^b$ $ab$ $ba$
$McL$ $ab^2$ $bab$ $ab$ $(ab)^b$
$He$ $ab^3$ $ab^4$ $ab$ $ba$
$Ru$ $b$ $b^{(ab)^5}$ $(ab)^2$ $(ba)^2$
$Suz$ $(ab(abab^2)^2)$ $x_1^{abab^2}$ $ab$ $ba$
$O'N$ $[a,b]$ $([a,b]^2)^{bab}$ $ab^2$ $(ab^2)^{abab}$
$Co_3$ $ab$ $(ab)^{b^2}$ $a(ab)^2b(a^2b)^2b$ $(x_2^3)^{baba}$
$Co_2$ $a$ $b$ $ab(ab^2)^2b$ $(x_2^2)^{bab^2}$
$Fi_{22}$ $(ab)^3b^3$ $((ab)^3b^3)^a$ $b$ $b^a$
$HN$ $a$ $b$ $[a,b]^a$ $(ab)^2b((ab)^5(ab^2)^2)^2$
$Ly$ $a$ $b$ $abab^3$ $x_2^{(ab)^7}$
$Th$ $[a,b]$ $[a,b]^{(ba)^4b^2}$ $ababa$ $(x_2^5)^{bab}$
$Fi_{23}$ $a$ $b$ $((ab)^{11}b)^3$ $x_2^a$
$Co_1$ $a$ $b$ $[(ab)^3,aba]$ $[(ab)^{23},ab^2]^{babab}$
$J_4$ $a$ $b$ $a(bab)^3(ab)^2b$ $x_2^a$
$Fi_{24}'$ $ab$ $((ab)^6b)^{15} $ $b(ba)^3$ $(x_2^3)^{bab}$
: Words in the standard generators of $G$ [@brauer] where $(x_1,y_1,x_2,y_2)$ is a mixable Beauville structure for $G$ of the specified type.[]{data-label="table"}
$G$ Type of $G$ Type of $G \times G$
---------------- ----------------------- ------------------------------- -- -- --
$M_{11}$ $(8,8,5;11,3,11)$ $(8,40,40;11,33,33)$
$M_{12}$ $(8,8,5;11,11,3)$ $(8,40,40;11,33,33)$
$J_1$ $(10,3,10;7,19,19)$ $(10,30,30;133,133,19)$
$M_{22}$ $(8,8,5;11,7,7)$ $(8,40,40;77,77,7)$
$J_2$ $(10,10,10;7,7,3)$ $(10,10,80;7,21,21)$
$M_{23}$ $(8,8,11;23,23,7)$ $(8,88,88;23,161,161)$
$^2F_4(2)'$ $(8,8,5;13,13,3)$ $(8,40,40;13,39,39)$
$HS$ $(8,8,15;7,7,11)$ $(8,120,120;7,77,77)$
$J_3$ $(8,8,5;19,19,3)$ $(8,40,40;19,57,57)$
$M_{24}$ $(8,8,5;23,23,3)$ $(8,40,40;23,69,69)$
$McL$ $(12,12,7;11,11,5)$ $(84,84,12;55,55,11)$
$He$ $(8,8,5;17,17,7)$ $(8,40,40;17,119,119)$
$Ru$ $(4,4,29;13,13,7)$ $(4,116,116;13,91,91)$
$Suz$ $(8,8,7;13,13,3)$ $(8,56,56;13,39,39)$
$O'N$ $(12,6,31;19,19,11)$ $(12,186,186;209,209,19)$
$Co_3$ $(14,14,5;23,23,9)$ $(14,70,70;23,207,207)$
$Co_2$ $(2,5,28;23,23,9)$ $(10,10,28;23,207,207)$
$Fi_{22}$ $(16,16,9;13,13,11)$ $(144,144,16;143,143,13)$
$HN$ $(2,3,22;5,19,19)$ $(6,6,22;95,95,19)$
$Ly$ $(2,5,14;67,67,37)$ $(10,10,14;2479,2479,67)$
$Th$ $(10,10,13;19,19,31)$ $(130,130,10;589,589,19)$
$Fi_{23}$ $(2,3,28;13,13,23)$ $(6,6,28;299,299,13)$
$Co_1$ $(2,3,40;11,13,23)$ $(6,6,40;143,143,23)$
$J_4$ $(2,4,37;43,43,23)$ $(4,74,74;989,989,43)$
$Fi_{24}'$ $(29,4,20;33,33,23)$ $(116,116,20;759,759,33)$
${\mathbb{B}}$ $(2,3,8;47,47,55)$ $(6,6,8;47,2585,2585)$
${\mathbb{M}}$ $(94,94,71;21,39,55)$ $(94,6674,6674;21,2145,2145)$
: Types of the mixable Beauville structures for $G$ and $G \times G$ from the words in Table \[table\] and Lemmas \[baby\] and \[monster\].
In this section we exhibit explicit words in the standard generators [@brauer] of mixable Beauville structures for the sporadic groups. With the exception of the even triple for Janko’s group, $J_2$, at least two elements of every triple have coprime order which, by Lemma \[cop\], automatically gives us a corresponding triple for $G \times G$. In the case of $J_2$ we have two even triples, $(x_1,x_1^{ab^2})$ and $(x_1,x_1^{(ab)^2b^2})$ of types $(10,10,10)$ and $(10,10,8)$, which are inequivalent by Lemma \[inequi\]. For groups of reasonable order it can be explicitly computed in GAP that these elements generate. For groups of larger order we appeal to the list of maximal subgroups as found in [@ATLAS] and [@raw] to ensure generation. For the Monster ${\mathbb{M}}$ and the Baby Monster ${\mathbb{B}}$ we use less constructive methods to prove the existence of these structures.
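As an indication of how one of these rows can be verified mechanically, the following GAP sketch checks the words given for $M_{11}$; it assumes the AtlasRep package is available (so that the standard generators $a,b$ of [@brauer] can be downloaded), and the expected orders in the comments are those of the type $(8,8,5;11,3,11)$ recorded above.

```gap
# Sketch of a machine check of the M_11 row, assuming the AtlasRep package is available.
LoadPackage("atlasrep");;
gens := AtlasGenerators("M11", 1).generators;;   # standard generators a, b
a := gens[1];;  b := gens[2];;
x1 := a*b*(a*b^2)^2;;  y1 := x1^b;;              # the even pair of the M_11 row
x2 := (a*b)^5;;        y2 := Comm(a, b)^2;;      # the odd pair of the M_11 row
List([x1, y1, x1*y1], Order);   # expect: [ 8, 8, 5 ]
List([x2, y2, x2*y2], Order);   # expect: [ 11, 3, 11 ]
Group(x1, y1) = Group(a, b);  Group(x2, y2) = Group(a, b);   # expect: true, true
```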
\[baby\] There exists a mixable Beauville structure on the Baby Monster, ${\mathbb{B}}$, and on ${\mathbb{B}}\times {\mathbb{B}}$.
From [@sgbm] we know there exists a hyperbolic generating triple of type $(2,3,8)$ for ${\mathbb{B}}$. Let $$x=(ab)^3(ba)^4b(ba)^2b^2, \; \; y=x^{ab^2}$$ be words in the standard generators [@brauer]. They both have order $47$ and their product has order 55; from the list of maximal subgroups [@raw] it follows that they generate ${\mathbb{B}}$. This gives a mixable Beauville structure of type $(2,3,8;47,47,55)$ on ${\mathbb{B}}$ and by Lemma \[cop\] we have a mixable Beauville structure of type $(6,6,8;47,2585,2585)$ on ${\mathbb{B}}\times {\mathbb{B}}$.
\[monster\] There exists a mixable Beauville structure on the Monster, ${\mathbb{M}}$, and on ${\mathbb{M}}\times {\mathbb{M}}$.
Norton and Wilson [@norwil Theorem 21] show that the only maximal subgroups of the Monster which contain elements of order $94$ are copies of $2 \cdot {\mathbb{B}}$, which does not contain elements of order $71$. Then from a computation of the structure constants an even triple of type $(94,94,71)$ can be shown to exist. Finally, in [@exc] it is shown that there exists a hyperbolic generating triple of type $(21,39,55)$ on ${\mathbb{M}}$. Therefore we have a mixable Beauville structure of type $(94,94,71;21,39,55)$ on ${\mathbb{M}}$ and of type $(94,6674,6674;21,2145,2145)$ on ${\mathbb{M}}\times {\mathbb{M}}$. This completes the proof.
Finally we have the following Lemma which completes the proof of Theorem \[main\].
Let $G$ be one of the 26 sporadic groups or the Tits group $^2F_4(2)'$. Then there exists a mixable Beauville structure on $G$ and $G \times G$.
[99]{}
<span style="font-variant:small-caps;">A. S. Bang</span>, ‘Talteoretiske undersølgesler’, *Tidskrift Math.* (5) 4 (1886) 130–137.
<span style="font-variant:small-caps;">N. Barker, N. Boston, N. Peyerimhoff [and]{.nodecor} A. Vdovina</span>, ‘New examples of Beauville surfaces’, *Monatsh. Math.* 166 (2012) 319–327.
<span style="font-variant:small-caps;">N. Barker, N. Boston, N. Peyerimhoff [and]{.nodecor} A. Vdovina</span>, ‘An infinite family of $2$–groups with mixed Beauville structures’, *Int. Math. Res. Not.*, arXiv:math.AG/13044480.
<span style="font-variant:small-caps;">I. C. Bauer</span>, ‘Product–Quotient Suraces: Result and Problems’, Preprint, 2012, arXiv:math.AG/12043409.
<span style="font-variant:small-caps;">I. C. Bauer, F. Catanese [and]{.nodecor} F. Grunewald</span>, ‘Beauville surfaces without real structures I’, *Geometric Methods in Algebra and Number Theory* Progr. Math. 235 (Birkhäuser Boston, Boston, 2005) 1–42.
<span style="font-variant:small-caps;">I. C. Bauer, F. Catanese [and]{.nodecor} F. Grunewald</span>, ‘Chebycheff and Belyi polynomials, dessins d’enfants, Beauville surfaces and group theory’, *Mediterr. J. Math.* (2) 3 (2006) 121–146.
<span style="font-variant:small-caps;">I. C. Bauer, F. Catanese [and]{.nodecor} F. Grunewald</span>, ‘The classification of surfaces with $p_0=q=0$ isogenous to a product of curves’, *Pure Appl. Math. Q.* 4 (2008) 547–586.
<span style="font-variant:small-caps;">I. C. Bauer, F. Catanese [and]{.nodecor} R. Pignatelli</span>, ‘Surfaces of General Type with Geometric Genus Zero: A Survey’, *Complex and differential geometry* Springer Proc. Math. 8 (Springer, Heidelberg, 2011) 1–48.
<span style="font-variant:small-caps;">I. C. Bauer, F. Catanese [and]{.nodecor} R. Pignatelli</span>, ‘Complex surfaces of general type: some recent progress’, *Global aspects of complex geometry* (Springer, Berlin, 2006) 1–58.
<span style="font-variant:small-caps;">F. Catanese</span>, ‘Fibered surfaces, varieties isogenous to a product and related moduli spaces’, *Am. J. Math.* (1) 122 (2000) 1–44.
<span style="font-variant:small-caps;">J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker [and]{} R. A. Wilson</span>, *Atlas of Finite Groups* (Clarendon Press, Oxford, 1985).
<span style="font-variant:small-caps;">B. T. Fairbairn</span>, ‘Some Exceptional Beauville Structures’, *J. Group Theory*, (5) 15 (2012) 631–639.
<span style="font-variant:small-caps;">B. T. Fairbairn</span>, ‘Recent work on Beauville surfaces, structures and groups’, to appear in *Proceedings of Groups St Andrews 2013* (eds C.M. Campbell, M.R. Quick, E.F. Robertson, and C.M. Roney-Dougal).
<span style="font-variant:small-caps;">B. T. Fairbairn, K. Magaard [and]{.nodecor} C. W. Parker</span>, ‘Generation of finite quasisimple groups with an application to groups acting on Beauville surfaces’, *Proc. London Math. Soc.* (4) 107 (2013) 744–798.
<span style="font-variant:small-caps;">R. Friedman [and]{.nodecor} J. W. Morgan</span> ‘Algebraic surfaces and four-manifolds: some conjectures and speculations’, *Bull. Amer. Math. Soc.* (1) 18 (1988) 1–19.
<span style="font-variant:small-caps;">F. G. Frobenius</span>, ‘[Ü]{}ber Gruppencharaktere’, *Sitzungsber. K[ö]{}n. Preuss. Akad. Wiss. Berlin* (1896) 985–1021. <span style="font-variant:small-caps;">Y. Fuertes [and]{.nodecor} G. González-Diez</span>, ‘On Beauville Structures on the Groups $S_n$ and $A_n$’, *Math. Z.* (4) 264 (2010) 959–968.
<span style="font-variant:small-caps;">Y. Fuertes [and]{.nodecor} G. A. Jones</span>, ‘Beauville surfaces and finite groups’, *J. Algebra* (1) 340 (2011) 13–27.
The GAP Group, ‘GAP – groups, algorithms and programming, version 4.7.4’, 2014, http://www.gap-system.org.
<span style="font-variant:small-caps;">M. Geck, G. Hiss, F. L[ü]{}beck, G. Malle, [and]{.nodecor} G. Pfeiffer</span>, ‘[CHEVIE]{} – A system for computing and processing generic character tables for finite groups of Lie type, Weyl groups and Hecke algebras’, [*Appl. Algebra Engrg. Comm. Comput.*]{}, 7 (1996) 175–210.
<span style="font-variant:small-caps;">R. Gow</span>, ‘Commutators in finite simple groups of Lie type’, *Bull. London Math. Soc.* (3) 32 (2000) 311–315.
<span style="font-variant:small-caps;">P. Hall</span>, ‘The Eulerian functions of a group’, *Q. J. Math.* (1) 7 (1936) 134–151.
<span style="font-variant:small-caps;">G. A. Jones</span>, ‘Automorphism groups of Beauville surfaces’, *J. Group Theory* (3) 16 (2013) 353–381.
<span style="font-variant:small-caps;">G. A. Jones</span>, ‘Beauville surfaces and groups: a survey’, *Fields Inst. Comm.* to appear.
<span style="font-variant:small-caps;">G. A. Jones</span>, ‘Characteristically simple Beauville groups, I: cartesian powers of alternating groups’, Preprint, 2013, arXiv:math.GR/13045444.
<span style="font-variant:small-caps;">G. A. Jones</span>, ‘Characteristically simple Beauville groups, II: low rank and sporadic groups’, Preprint, 2013, arXiv:math.GR/13045450.
<span style="font-variant:small-caps;">P. B. Kleidman</span>, ‘The Maximal Subgroups of the Chevalley Groups $G_2(q)$ with $q$ Odd, the Ree Groups $^2G_2(q)$, and Their Automorphism Groups’, *J. Algebra* (1) 117 (1988) 30–71.
<span style="font-variant:small-caps;">P. B. Kleidman</span>, ‘The Maximal Subgroups of the Steinberg Triality Groups $^3D_4(q)$ and of Their Automorphism Groups’, *J. Algebra* (1) 115 (1988) 182–199.
<span style="font-variant:small-caps;">H. Lünburg</span>, ‘Ein einfacher Beweis für den Satz von Zsigmondy über primitive Primteiler von $a^N-1$’, *Geometries and Groups* Lecture Notes in Mathematics 893 (eds Martin Aigner and Dieter Jungnickel; Springer, New York, 1981) 219–222.
<span style="font-variant:small-caps;">G. Malle</span>, ‘The maximal subgroups of $^2F_4(q^2)$’, *J. Algebra* (1) 139 (1991) 52–69.
<span style="font-variant:small-caps;">H. Mitchell</span>, ‘Determination of the Ordinary and Modular Ternary Linear Groups’, *Trans. Amer. Math. Soc.* (2) 12 (1911) 207–242.
<span style="font-variant:small-caps;">S. P. Norton [and]{.nodecor} R. A. Wilson</span>, ‘Anatomy of the Monster. II’, *Proc. London Math. Soc.* (3) 84 (2002) 581–598.
<span style="font-variant:small-caps;">K. Shinoda</span>, ‘The conjugacy classes of the finite Ree groups of type $(F_4)$’, *J. Fac. Sci. Univ. Tokyo* Sect. 1 A, Mathematics, (1) 22 (1975) 1–15.
<span style="font-variant:small-caps;">W. A. Simpson [and]{.nodecor} J. Sutherland Frame</span>, ‘The character tables for $SL(3,q)$, $SU(3,q^2)$, $PSL(3,q)$, $PSU(3,q^2)$’, *Can. J. Math.* (3) XXV (1973) 486–494.
<span style="font-variant:small-caps;">J. Širáň</span>, ‘How symmetric can maps on surfaces be?’, *Surveys in Combinatorics 2013* London Math. Soc. Lecture Note Ser. 409 (eds Simon R. Blackburn, Stefanie Gerke and Mark Wildon; CUP, Cambridge, 2013) 161–238.
<span style="font-variant:small-caps;">H. N. Ward</span> ‘On Ree’s Series of Simple Groups’ *Trans. Amer. Math. Soc.* (1) 121 (1966) 62–89.
<span style="font-variant:small-caps;">R. A. Wilson</span>, ‘The symmetric genus of the Baby Monster’, *Q. J. Math.* (2) 44 (1993) 513–516.
<span style="font-variant:small-caps;">R. A. Wilson</span>, ‘Standard Generators for Sporadic Simple Groups’, *J. Algebra* (2) 184 (1996) 505–515.
<span style="font-variant:small-caps;">R. A. Wilson</span>, *The finite simple groups* Graduate Texts in Mathematics 251 (Springer-Verlag London Ltd., London, 2009).
<span style="font-variant:small-caps;">R. A. Wilson</span>, *et al.*, <span style="font-variant:small-caps;">Atlas</span> of Finite Group Representations v3, http://brauer.maths.qmul.ac.uk/Atlas/v3.
<span style="font-variant:small-caps;">K. Zsigmondy</span>, ‘Zur Theorie der Potenzreste’, *Monat. Math. Phys.* (1) 3 (1892) 265–284.
---
abstract: 'We introduce an explicit combinatorial characterization of the minimal model ${\EuScript O}_{\infty}$ of the coloured operad ${\EuScript O}$ encoding non-symmetric operads. In our description of ${\EuScript O}_{\infty}$, the spaces of operations are defined in terms of hypergraph polytopes and the composition structure generalizes the one of the $A_{\infty}$-operad. As further generalizations of this construction, we present a combinatorial description of the $W$-construction applied on ${\EuScript O}$, as well as of the minimal model of the coloured operad ${\EuScript C}$ encoding non-symmetric cyclic operads.'
address: 'Institute of Mathematics CAS, Žitná 25, 115 67 Prague, Czech Republic'
author:
- 'Jovana Obradović'
title: Combinatorial homotopy theory for operads
---
Introduction {#introduction .unnumbered}
============
Sullivan’s classical construction of minimal models of rational homotopy theory has been made available to operad theory by Markl, in his paper [@m2], together with the subsequent papers of Hinich [@Hinich], Spitzweck [@Spitzweck], Vogt [@Vogt], Berger-Moerdijk [@BM0], Cisinski-Moerdijk [@CM] and Robertson [@Marcy], in which model structures of various categories of operads have been investigated. In [@m2 Theorem 3.1], Markl introduced the notion of a minimal model of a monochrome dg operad and he proved that any such operad ${\EuScript P}$, with ${\EuScript P}(0)=0$ and ${\EuScript P}(1)={\Bbbk}$, with ${\Bbbk}$ being a field of characteristic zero, admits a minimal model, which is unique up to isomorphism. In [@m1 Definition 2], Markl generalized the notion of a minimal model to coloured dg operads. We recall his definition below.
\[minimalmodel\] Let ${\EuScript P}=({\EuScript P},d_{\EuScript P})$ be a $C$-coloured dg operad. A minimal model of ${\EuScript P}$ is a $C$-coloured dg operad $\mathfrak{M}_{\EuScript P}=({{\mathcal{T}}}_{C}(E),d_{\mathfrak M})$, where ${{\mathcal{T}}}_C(E)$ is the free $C$-coloured operad on a $C$-coloured collection $E$, together with a map $\alpha_{\EuScript P}:{\mathfrak M}_{\EuScript P}\rightarrow {\EuScript P}$ of dg coloured operads, such that $\alpha_{\EuScript P}:{\mathfrak M}_{\EuScript P}\rightarrow {\EuScript P}$ is a quasi-isomorphism, and $d_{\mathfrak M}(E)$ consists of decomposable elements of ${{\mathcal{T}}}_C(E)$, i.e. $d_{\mathfrak M}(E)\subseteq {{\mathcal{T}}}_C(E)^{(\geq 2)}$, where ${{\mathcal{T}}}_C(E)^{(\geq 2)}\subseteq {{\mathcal{T}}}_C(E)$ is determined by trees with at least two vertices.
For Koszul operads, Markl’s notion of minimal model coincides with the cobar construction on the Koszul dual of an operad, given by Ginzburg-Kapranov [@GK94] and Getzler-Jones [@GJ94], and, in particular, provides us with the structure encoding higher operations of most classical strongly homotopy algebras, such as $A_{\infty}$-, $L_{\infty}$- and $C_{\infty}$-algebras. A detailed description of these algebras can be found in [@mss Section 3.10]. In recent applications of homotopy theory of algebras over operads, especially in theoretical physics, an explicit description of the structure maps of minimal models remains essential; see [@jurco] for an up to date review on how higher homotopy structures naturally govern field theories. Such a description is often obtained by a direct calculation of a particular model, which tends to be a rather involved task and calls for new methods and conceptual approaches for understanding the homotopy properties of algebraic structures.
In this paper, we introduce an explicit combinatorial characterization of the minimal model of the coloured operad ${\EuScript O}$ encoding non-symmetric operads, introduced by Van der Laan in his work [@VdL] on extending Koszul duality theory of Ginzburg-Kapranov and Getzler–Jones to coloured operads. The novelty of our characterization is its interpretation in terms of hypergraph polytopes, introduced by Došen and Petrić in [@DP-HP] and further developed by Curien, Ivanović and the author in [@CIO], whose hypergraphs arise in a certain way from rooted trees – we refer to them as operadic polytopes. In particular, each operadic polytope is a truncated simplex displaying the homotopy replacing the “pre-Lie” relations for the partial composition operations pertinent to the corresponding rooted tree. In this way, our operad structure generalizes the structure of Stasheff’s topological $A_{\infty}$-operad [@Sta1]: the family of (combinatorial) associahedra corresponds to the suboperad determined by linear rooted trees. We then introduce a combinatorial description of the cubical subdivision of operadic polytopes, obtaining in this way the Boardman-Vogt-Berger-Moerdijk resolution of ${\EuScript O}$, i.e. its $W$-construction, introduced in [@BM2]. Finally, by modifying the underlying formalism of trees, we obtain the minimal model of the coloured operad ${\EuScript C}$ encoding non-symmetric cyclic operads, whose algebras yield a notion of strongly homotopy cyclic operads for which the relations for the partial composition operations are coherently relaxed up to homotopy, while the relations involving the action of cyclic permutations are kept strict.
We hope that our explicit construction of operadic polytopes, together with the fact that they admit the structure of a strict infinity operad, will be of interest in the context of recent developments around Koszulity in operadic categories of Batanin and Markl [@BatMar]. From a different, but closely related point of view, we believe that it provides a valuable addition to Ward’s recent work [@Ward], proving that the operad encoding modular operads is Koszul and indicating that such a proof can be given in terms of cellular chains on a family of polytopes that generalizes graph associahedra.
[*Acknowledgements.*]{} I wish to express my gratitude to M. Livernet, M. Markl, F. Wierstra, and R. Kaufmann for many useful discussions. I am especially indebted to P.-L. Curien and B. Vallette for detailed comments that greatly improved the final version of this paper. I gratefully acknowledge the financial support of the Praemium Academiae of M. Markl and RVO:67985840.
Notation and conventions {#notation-and-conventions .unnumbered}
------------------------
### Operads {#operads .unnumbered}
We work with ${\mathbb N}$-coloured reduced operads in the symmetric monoidal category ${\mathsf{dgVect}}$ of dg vector spaces over a field ${\Bbbk}$ of characteristic 0. In ${\mathsf{dgVect}}$, the monoidal structure is given by the classical tensor product $\otimes$, and the switching map $\tau: V\otimes W\rightarrow W\otimes V$ is defined by $\tau(v\otimes w):=(-1)^{|v||w|} w\otimes v$, where $v$ and $w$ are homogeneous elements of degrees $|v|$ and $|w|$, respectively. We use the classical Koszul sign convention. We work with homological grading; the differential is a map of degree $-1$. We denote with ${{\mathcal{T}}}_{C}(K)$ the free $C$-coloured (symmetric) operad on a $C$-coloured (symmetric) collection $K$. A detailed construction of ${{\mathcal{T}}}_{C}(K)$ is given in [@BM2 Section 3]. Our main references for the general theory of operads and related notions are [@mss] and [@LV].
### Ordinals {#ordinals .unnumbered}
We denote by $[n]$ the set $\{1,\dots,n\}$, and by $\Sigma_n$ the symmetric group on $[n]$.
### Trees {#trees .unnumbered}
A rooted tree ${{\mathcal{T}}}$ is a finite connected contractible graph on a non-empty vertex set, together with a distinguished external edge $r({{\mathcal{T}}})$, called the root of ${{\mathcal{T}}}$. We shall denote by ${\it v}({{\mathcal{T}}})$, ${\it e}({{\mathcal{T}}})$ and $l({{\mathcal{T}}})$ the sets of vertices, (internal) edges and external edges (or leaves) of ${{\mathcal{T}}}$, respectively. We shall write $E({{\mathcal{T}}})$ for the union $e({{\mathcal{T}}})\cup l({{\mathcal{T}}})$ of all the edges of ${{\mathcal{T}}}$. We shall write $i({{\mathcal{T}}})$ for the set ${\it l}({{\mathcal{T}}})\backslash \{{r}({{\mathcal{T}}})\}$, and we shall refer to the elements of $i({{\mathcal{T}}})$ as the inputs (or the input leaves) of ${{\mathcal{T}}}$. The set of inputs $i(v)$ and the root $r(v)$ of a vertex $v\in v({{\mathcal{T}}})$ are defined in the standard way through the source and target maps obtained by reading ${{\mathcal{T}}}$ from the input leaves to the root. The notation for all these various sets defining a rooted tree will often also be used for their respective cardinalities. We shall denote by $\rho({{\mathcal{T}}})$ the unique vertex of ${{\mathcal{T}}}$ whose root is $r({{\mathcal{T}}})$, and we shall refer to it as the root vertex of ${{\mathcal{T}}}$. Throughout the paper, [*edge*]{} will always mean an [*internal edge*]{}.
A rooted tree ${{\mathcal{T}}}$ is called planar if each vertex of ${{\mathcal{T}}}$ comes equipped with an ordering of its inputs. In this case, the inputs of ${{\mathcal{T}}}$ admit a canonical labeling from left to right, relative to the planar embedding of ${{\mathcal{T}}}$, by $1$ through $n$, for $n=|{\it i}({{\mathcal{T}}})|$. Planar rooted trees are isomorphic if there exists an isomorphism of the corresponding graphs that preserves the root and the planar structure. We denote by ${\tt Tree}(n)$ the set of planar rooted trees with $n$ inputs.
There are two principal constructions on planar rooted trees: grafting and substitution. For trees ${{\mathcal{T}}}_1\in {\tt Tree}(n)$ and ${{\mathcal{T}}}_2\in {\tt Tree}(m)$ and an index $1\leq i\leq n$, the grafting of ${{\mathcal{T}}}_2$ to ${{\mathcal{T}}}_1$ along the input $i$ is the tree ${{\mathcal{T}}}_1\circ_i {{\mathcal{T}}}_2$, obtained by identifying the root of ${{\mathcal{T}}}_2$ with the $i$-th input of ${{\mathcal{T}}}_1$. If $v\in v({{\mathcal{T}}}_1)$ is such that $|{\it i}(v)|=m$, the substitution of the vertex $v$ of ${{\mathcal{T}}}_1$ by ${{\mathcal{T}}}_2$ is the tree ${{\mathcal{T}}}_1\bullet_v {{\mathcal{T}}}_2$, obtained by replacing the vertex $v$ by the tree ${{\mathcal{T}}}_2$, identifying the $m$ inputs of $v$ with the $m$ inputs of ${{\mathcal{T}}}_2$, using the respective planar structures. The trees ${{\mathcal{T}}}_1\circ_i {{\mathcal{T}}}_2$ and ${{\mathcal{T}}}_1\bullet_v {{\mathcal{T}}}_2$ can be rigorously defined either in terms of disjoint unions of sets of vertices, edges and leaves of ${{\mathcal{T}}}_1$ and ${{\mathcal{T}}}_2$, or by assuming in advance the appropriate disjointness of sets and taking the ordinary union instead; we take the latter convention. Moreover, we shall assume that all the edges and leaves that need to be identified in these two constructions are a priori the same.
A corolla is a rooted tree with only one vertex. Each planar rooted tree ${{\mathcal{T}}}$ is either a planar corolla, or there exist planar rooted trees ${{\mathcal{T}}}_1,\dots,{{\mathcal{T}}}_p$, a corolla $t_n$ with $n$ inputs, for $n\geq p$, and a monotone injection $(p,n):[p]\rightarrow [n]$, such that ${{\mathcal{T}}}$ is obtained by identifying the roots of ${{\mathcal{T}}}_i$’s with the inputs $(p,n)(i)$ of $t_n$. In the latter case, we write ${{\mathcal{T}}}=t_n({{\mathcal{T}}}_1,\dots,{{\mathcal{T}}}_p)$, implicitly bookkeeping the data of the correspondence $(p,n)$. Note that this recursive definition allows for inductive reasoning.
A subtree of a planar rooted tree ${{\mathcal{T}}}$ is a connected subgraph ${{\mathcal{S}}}$ of ${{\mathcal{T}}}$ which is itself a planar rooted tree, such that ${r}({{\mathcal{S}}})={r}(v)$, for some $v\in v({{\mathcal{T}}})$, and such that, if a vertex $v$ of ${{\mathcal{T}}}$ is present in ${{\mathcal{S}}}$, then all the inputs and the root of $v$ in ${{\mathcal{T}}}$ must also be present in ${{\mathcal{S}}}$; it is assumed that the source and the target maps of ${{\mathcal{S}}}$ are the appropriate restrictions of the ones of ${{\mathcal{T}}}$, and that the planar structure of ${{\mathcal{S}}}$ is inherited from ${{\mathcal{T}}}$. In this way, each subtree of ${{\mathcal{T}}}$ is completely determined by a subset of vertices of ${{\mathcal{T}}}$, and therefore also by a subset of internal edges of ${{\mathcal{T}}}$ (by taking all the vertices adjacent with those edges). We can, therefore, speak about the subtree ${{\mathcal{T}}}(X)$ of ${{\mathcal{T}}}$ determined by a subset $X$ of vertices (resp. of edges) of ${{\mathcal{T}}}$. For an edge $e\in e({{\mathcal{T}}})\cup \{r({{\mathcal{T}}})\}$, the subtree of ${{\mathcal{T}}}$ rooted at $e$ is the subtree of ${{\mathcal{T}}}$ determined by all the vertices of ${{\mathcal{T}}}$ that are descendants of the vertex $v$ whose root is $e$, including $v$ itself.
In this paper, we shall work with three different kinds of rooted trees. In order to help the reader navigate between them, in the following table we briefly summarize their characterizations and the corresponding notational conventions.
[Table: characterizations and notational conventions for the three kinds of rooted trees used in the paper.]
A planar unrooted tree is a finite connected contractible graph on a non-empty vertex set, each of whose vertices comes equipped with a cyclic ordering of all the adjacent edges. Planar unrooted trees are isomorphic if there exists an isomorphism of the corresponding graphs that preserves the cyclic orderings of the sets of edges adjacent to vertices. By forgetting the data of the root of a planar rooted tree ${{\mathcal{T}}}$, one canonically obtains a planar unrooted tree that we shall denote by ${{\mathcal{U}}}({{\mathcal{T}}})$.
Hypergraph polytopes
====================
A hypergraph polytope is a polytope that may be characterized as a truncated simplex, whereby the truncations are only performed on the faces of the original simplex and not on the faces already obtained as a result of a truncation. In particular, in each dimension, the family of hypergraph polytopes consists of an interval of simple polytopes starting with a simplex and ending with a permutohedron. As an illustration, here is a sequence of truncations of the 3-dimensional simplex that leads to a polytope called [*hemiassociahedron*]{}:
[Figure: a sequence of truncations of the $3$-dimensional simplex producing the hemiassociahedron.]
The hemiassociahedron is not as well-known as certain other notable members of the family of hypergraph polytopes, like simplices, hypercubes, associahedra, cyclohedra and permutohedra, but, like all those polytopes, it also has a role in characterizing infinity structures: it displays a particular homotopy of strongly homotopy operads. The hemiassociahedron will be [*our favourite polytope*]{} in this article.
The attribute [*hypergraph*]{} in the designation [*hypergraph polytopes*]{} is meant to indicate the particular style of combinatorial description of the polytopes from this family: the face lattice of each hypergraph polytope can be derived from the data of a hypergraph whose hyperedges encode the truncations of the simplex that lead to the polytope in question. This particular characterization of truncated simplices has been introduced by Došen and Petrić in [@DP-HP] and further developed by Curien, Ivanović and the author in [@CIO]. Truncated simplices were originally investigated by Feichtner and Sturmfels in [@FS05] and by Postnikov in [@Pos], by means of different – and predating – combinatorial tools: [*nested sets*]{} and [*tubings*]{}, while, in [@Pos2], they first appeared under the name of [*nestohedra*]{}. The family of [*graph-associahedra*]{}, introduced by Carr and Devadoss in [@CD], is the subfamily of the family of hypergraph polytopes determined by polytopes whose face lattices can be encoded by the data of a genuine graph.
This section is a recollection of the combinatorial description of the family of hypergraph polytopes and is entirely based on [@DP-HP] and [@CIO]. In particular, we shall consider hypergraph polytopes as abstract polytopes only, disregarding their geometric characterization as a bounded intersection of a finite set of half-spaces, which is also given in the two references. We refer to [@abs-pol] for the definition of an abstract polytope and related notions.
Hypergraph terminology
----------------------
A hypergraph is a generalization of a graph for which an edge can relate an arbitrary number of vertices. Formally, a hypergraph ${\bf H}$ is given by a set $H$ of [*vertices*]{} and a subset ${\bf H}\subseteq {{\mathcal{P}}}(H)\backslash\emptyset$ of [*hyperedges*]{}, such that $\bigcup {\bf H}=H$. Note the abuse of notation here: we used the bold letter ${\bf H}$ to denote both the hypergraph itself and its set of hyperedges. We justify this identification by requiring all our hypergraphs to be [*atomic*]{}, meaning that $\{x\}\in {\bf H}$ for all $x\in H$. Additionally, we shall assume that all our hypergraphs are [*non-empty*]{}, meaning that $H\neq \emptyset$, [*finite*]{}, meaning that $H$ is finite and [*connected*]{}, meaning that there are no non-trivial partitions $H=H_1\cup H_2$, such that ${\bf H}=\{X\in {\bf H}\,|\, X\subseteq H_1\}\cup\{Y\in {\bf H}\,|\, Y\subseteq H_2\}$. There is one more property of hypergraphs that we shall encounter (but not a priori ask for) in the construction of hypergraph polytopes: the property of being [*saturated*]{}. We say that a hypergraph ${\bf H}$ is saturated when, for every $X,Y\in{\bf H}$ such that $X\cap Y\neq\emptyset$, we have that $X\cup Y\in {\bf H}$. Every hypergraph can be saturated by adding the missing (unions of) hyperedges. Let us introduce the notation $${\bf{H}}_X:=\{Z\in {\bf{H}}\,|\, Z\subseteq X\},$$ for a hypergraph ${\bf H}$ and $X\subseteq H$. The [*saturation*]{} of ${\bf{H}}$ is then formally defined as the hypergraph $${\it Sat}({\bf{H}}):=\{ X\,|\, {\emptyset\subsetneq X\subseteq H\;\mbox{and}\;{\bf{H}}_X\;\mbox{is connected}}\}.$$
\[hypergraphex1\] The hypergraph $${\bf H}=\{\{x\},\{y\},\{u\},\{v\},\{x,y\},\{x,u\},\{x,v\},\{u,v\},\{x,u,v\}\}$$ can be represented pictorially as follows:
[Figure: the hypergraph ${\bf H}$, drawn with its vertices $x$, $y$, $u$, $v$, its two-element hyperedges as edges, and the hyperedge $\{x,u,v\}$ as a circled-out area.]
Here, the hyperedge $\{x,u,v\}$ is represented by the circled-out area around the vertices $x$, $u$ and $v$. The hypergraph ${\bf H}$ is not saturated. The saturation of ${\bf H}$ is the hypergraph $${\it Sat}({\bf H})=\{\{x\},\{y\},\{u\},\{v\},\{x,y\},\{x,u\},\{x,v\},\{u,v\},\{x,y,u\},\{x,y,v\},\{x,u,v\},\{x,y,u,v\}\}.$$
We additionally import the following notational conventions and terminology from [@CIO]. For a hypergraph ${\bf H}$ and $X\subseteq H$, we set $${\bf H}\backslash X:={\bf{H}}_{H\backslash X}.$$ Observe that for each (not necessarily connected) finite hypergraph there exists a partition $H=H_1\cup\ldots\cup H_m$, such that each hypergraph ${\bf{H}}_{H_i}$ is connected and ${\bf{H}}=\bigcup({\bf{H}}_{H_i})$. The ${\bf{H}}_{H_i}$’s are called the [*connected components*]{} of ${\bf{H}}$. We shall write ${\bf{H}}_i$ for ${\bf{H}}_{H_i}$. We shall use the notation
${\bf{H}}\backslash X \leadsto {\bf H}_1,\ldots,{\bf H}_n$ (resp. ${\bf{H}}\backslash X \leadsto \{{\bf H}_i \,|\,1\leq i\leq n\}$)
to indicate that ${\bf H}_1,\ldots,{\bf H}_n$ are the (resp. $ \{{\bf H}_i \,|\, 1\leq i\leq n\}$ is the set of) connected components of ${\bf H}\backslash X$.
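Since all of the notions above are finite and entirely combinatorial, they are straightforward to experiment with on a computer. The following Python sketch is only an illustrative aid and is not part of the formal development; the encoding of a hypergraph as a set of `frozenset`s and all function names are our own choices. It implements the restriction ${\bf H}_X$, the decomposition ${\bf H}\backslash X\leadsto {\bf H}_1,\ldots,{\bf H}_n$ and the saturation ${\it Sat}({\bf H})$, using the hypergraph ${\bf H}$ of Example \[hypergraphex1\] as a test case.

```python
from itertools import chain, combinations

def vertices(H):
    """The vertex set of a hypergraph, i.e. the union of its hyperedges."""
    return frozenset(chain.from_iterable(H))

def restrict(H, X):
    """H_X: the hyperedges of H contained in X."""
    return frozenset(e for e in H if e <= X)

def components(H):
    """Connected components of a (possibly disconnected, possibly empty) hypergraph."""
    todo, comps = set(vertices(H)), []
    while todo:
        block = {next(iter(todo))}
        changed = True
        while changed:
            changed = False
            for e in H:
                if e & block and not e <= block:
                    block |= e
                    changed = True
        comps.append(restrict(H, frozenset(block)))
        todo -= block
    return comps

def minus(H, X):
    """The connected components of H \\ X, i.e. of the restriction of H to vertices(H) - X."""
    return components(restrict(H, vertices(H) - frozenset(X)))

def saturation(H):
    """Sat(H): all non-empty subsets X of vertices with H_X connected."""
    V = sorted(vertices(H))
    subsets = chain.from_iterable(combinations(V, r) for r in range(1, len(V) + 1))
    return {frozenset(X) for X in subsets
            if len(components(restrict(H, frozenset(X)))) == 1}

# The hypergraph of Example [hypergraphex1]:
H = {frozenset(s) for s in ["x", "y", "u", "v", "xy", "xu", "xv", "uv", "xuv"]}
assert len(saturation(H)) == 12      # the twelve hyperedges of Sat(H) listed above
assert len(minus(H, {"x"})) == 2     # H \ {x} has two connected components
```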
The abstract polytope of a hypergraph {#abspol}
-------------------------------------
We next define the abstract polytope $${{\mathcal{A}}}({\bf H})=(A({\bf H})\cup\{\emptyset\},\leq_{\bf H})$$ associated to a hypergraph ${\bf H}$. We shall recall the representation of ${{\mathcal{A}}}({\bf H})$ given in [@CIO], which coincides, up to isomorphism, with the one of [@DP-HP]. The advantage of the representation of ${{\mathcal{A}}}({\bf H})$ given in [@CIO] lies in the [*tree notation*]{} for all the faces of hypergraph polytopes that encodes face inclusion as edge contraction – this combinatorial description reveals the operad structure on a particular subfamily of the family of hypergraph polytopes and was essential for the main purpose of this paper.
The elements of the set $A({\bf H})$, to which we refer as the [*constructs*]{} of ${\bf H}$, are the non-planar, vertex-decorated rooted trees defined recursively as follows. Let $\emptyset\neq Y\subseteq H$ be a non-empty subset of vertices of ${\bf H}$.\
- If $Y = H$, then the tree with a single vertex decorated with $H$ and without any inputs, is a construct of ${\bf{H}}$; we denote it by $H$.\
- If $Y\subsetneq H$, if ${\bf H}\backslash Y \leadsto {\bf H}_1,\ldots,{\bf H}_n$, and if $C_1,\ldots,C_n$ are constructs of ${\bf H}_1,\ldots,{\bf H}_n$, respectively, then the tree whose root vertex is decorated by $Y$ and that has $n$ inputs, on which the respective $C_i\,$’s are grafted, is a construct of ${\bf H}$; we denote it by $Y\{C_1,\ldots,C_n\}$.\
We write $C:{\bf H}$ to indicate that $C$ is a construct of ${\bf H}$. A [*construction*]{} is a construct whose vertices are all decorated with singletons.
Let us go through the recursive definition of constructs by unwinding the construction of the following three constructs of the hypergraph ${\bf H}$ from Example \[hypergraphex1\]: $$C_1=\{x,y,u,v\},\quad C_2=\{x\}\{\{u,v\},\{y\}\}, \quad \mbox{ and }\quad C_3=\{x\}\{\{u\}\{\{v\}\},\{y\}\}.$$ The construct $C_1$ is obtained by the first rule in the above definition. The constructs $C_2$ and $C_3$ are both obtained by choosing the set $\{x\}$ to be the decoration of the root vertex. The connected components of ${\bf H}\backslash\{x\}$ are ${\bf H}_1=\{\{u\},\{v\},\{u,v\}\}$ and ${\bf H}_2=\{\{y\}\}$. The construct $C_2$ is then obtained by grafting to the root vertex $\{x\}$ the constructs $\{u,v\}:{\bf H}_1$ and $\{y\}:{\bf H}_2$ (both obtained by the first rule in the above definition), and $C_3$ is obtained by choosing $\{u\}\{\{v\}\}$ instead of $\{u,v\}$ as a construct of ${\bf H}_1$. The construct $C_3$ is a construction.
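Continuing the Python sketch from the previous subsection (and reusing its helper functions and the test hypergraph ${\bf H}$), the recursive definition of constructs translates directly into an enumeration procedure. Again, this is only an illustration under our own encoding: a construct is represented as a pair consisting of the decoration of its root vertex and the unordered set of constructs grafted onto it.

```python
from itertools import chain, combinations, product

def constructs(H):
    """All constructs of a connected hypergraph H.  A construct is encoded as a
    pair (Y, kids): the decoration Y of its root vertex and the frozenset of the
    constructs grafted onto it (a frozenset, since constructs are non-planar)."""
    V = vertices(H)
    subsets = chain.from_iterable(combinations(sorted(V), r)
                                  for r in range(1, len(V) + 1))
    for Y in map(frozenset, subsets):
        if Y == V:
            yield (Y, frozenset())                  # the one-vertex construct H
        else:
            comps = minus(H, Y)                     # H \ Y ~> H_1, ..., H_n
            for kids in product(*(list(constructs(K)) for K in comps)):
                yield (Y, frozenset(kids))

# The constructs C_1, C_2 and C_3 of H discussed above:
C1 = (frozenset("xyuv"), frozenset())
C2 = (frozenset("x"),
      frozenset({(frozenset("uv"), frozenset()), (frozenset("y"), frozenset())}))
C3 = (frozenset("x"),
      frozenset({(frozenset("u"), frozenset({(frozenset("v"), frozenset())})),
                 (frozenset("y"), frozenset())}))
assert {C1, C2, C3} <= set(constructs(H))
```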
\[condr\] In order to facilitate the notation for constructs, we shall represent their singleton vertices without the braces. For example, instead of $\{x\}\{\{u,v\},\{y\}\}$ and $\{x\}\{\{u\}\{\{v\}\},\{y\}\}$, we shall write $x\{\{u,v\},y\}$ and $x\{u\{v\},y\}$. Also, we shall freely confuse the vertices of constructs with the sets decorating them, since they are a fortiori all distinct. In particular, we shall denote the vertices of constructs with capital letters specifying those sets. Finally, in order to provide more intuition for the partial order on constructs that we are about to define in terms of edge contraction, and later also for the composition of constructs underlying our infinity operad structure, we shall use the graphical representation for constructs indicated in the Introduction. For example,
the constructs $x\{y,\{u,v\}\}$ and $x\{u\{v\},y\}$ will be drawn as and ,
respectively. Notice that for constructs we do not draw the root.
The partial order $\leq_{\bf H}$ on $A({\bf H})\cup\{\emptyset\}$ is defined by the following three rules.\
- For all $C:{\bf H}$, $\emptyset \leq_{\bf{H}} C$.\
- If ${\bf H}\backslash Y\leadsto {\bf H}_1,\dots,{\bf H}_n$, ${\bf H}_1\backslash X\leadsto {\bf H}_{11},\dots,{\bf H}_{1m}$, $C_{1j}:{\bf H}_{1j}$ for $1\leq j\leq m$, and ${C}_i:{\bf H}_i$ for $2\leq i\leq n$, then $$Y\{X\{C_{11},\ldots,C_{1m}\},C_2,\ldots,C_n\}\leq_{\bf H}(Y\cup X)\{C_{11},\ldots,C_{1m},C_2,\ldots,C_n\}.$$
- If ${\bf H}\backslash Y\leadsto {\bf H}_1,\ldots,{\bf H}_n$, $C_i:{\bf H}_i$ for $2\leq i\leq n$, and $C_1\leq_{{\bf H}_1}C'_1$, then $$Y\{C_1,C_2,\ldots,C_n\} \leq_{\bf{H}} Y\{C'_1,C_2,\ldots,C_n\}.$$
Therefore, given a construct $C:{\bf H}$, one can obtain a larger construct by contracting an edge of $C$ and merging the decorations of the vertices adjacent with that edge. Note that the partial order $\leq_{\bf H}$ is well-defined, in the sense that, if $C_1:{\bf H}$ and if $C_1\leq_{\bf H} C_2$ is inferred, then $C_2:{\bf H}$ can be inferred.
The faces of ${{\mathcal{A}}}({\bf H})$ are ranked by integers ranging from $-1$ to $|H|-1$. The face $\emptyset$ is the unique face of rank $-1$, whereas the rank of a construct $C:{\bf H}$ is $|H|-|v(C)|$. In particular, constructions are faces of rank $0$, whereas the construct $H:{\bf H}$ is the unique face of rank $|H|-1$. We take the usual convention to name the faces of rank $0$ [*vertices*]{}, the faces of rank $1$ [*edges*]{} and the faces of rank $|H|-2$ [*facets*]{}. The ranks of the faces of ${{\mathcal{A}}}({\bf H})$ correspond to their actual dimension when realized in Euclidean space. We refer to [@DP-HP Section 9] and [@CIO Section 3] for a geometric realization of ${{\mathcal{A}}}({\bf H})$. In the next section, in conformity with this geometric realization, we shall provide examples of hypergraph polytopes.
The fact that the poset ${{\mathcal{A}}}({\bf H})$ is indeed an abstract polytope of rank $|H|-1$ follows by translating the definition of ${{\mathcal{A}}}({\bf H})$, using the order isomorphism [@CIO Proposition 2], to the formalism of hypergraph polytopes presented in [@DP-HP], as a consequence of [@DP-HP Section 8], where the axioms of abstract polytopes are verified for the latter presentation of ${{\mathcal{A}}}({\bf H})$.
Examples
--------
This section contains examples of various hypergraph polytopes; in [@DP-HP Appendix B] and [@CIO Section 2.4, Section 2.6], the reader can find more of them. Given that our hypergraph vocabulary is now settled, before we give the individual examples, let us first provide the intuition on the very first characterization of hypergraph polytopes that we have mentioned: the geometric description in terms of truncated simplices.
If ${\bf H}$ is a hypergraph with the vertex set $H=\{x_1,\dots,x_{n+1}\}$, then ${\bf H}$ encodes the truncation instructions to be applied to the $(|H|-1)$-dimensional simplex, as follows. Start by labeling the facets of the $(|H|-1)$-dimensional simplex by the vertices of ${\bf H}$. Then, for each hyperedge $X\in{\it Sat}({\bf H})\backslash (\{H\}\cup \{\{x_i\}\,|\, x_i\in H\})$, truncate the face of the simplex defined as the intersection of the facets contained in $X$. This intuition is formalized as the geometric realization of hypergraph polytopes in [@DP-HP Section 8] and [@CIO Section 3].
### Simplex
The hypergraph encoding the $n$-dimensional simplex is the hypergraph with $n+1$ vertices and no non-trivial hyperedges: $${\bf S}_{n+1}=\{\{x_1\},\dots,\{x_{n+1}\},\{x_1,\dots,x_{n+1}\}\}.$$ In dimension $2$, the poset of constructs of the hypergraph ${\bf S}_3=\{\{x\},\{y\},\{z\},\{x,y,z\}\}$ can be realized as a triangle:
Let us now illustrate how truncations arise by adding non-trivial hyperedges to the “bare” simplex hypergraph ${\bf S}_3$. Consider the hypergraph ${\bf S}_3\cup \{\{y,z\}\}$. For this hypergraph, the vertex $x\{y,z\}$ is no longer well-defined, since $({\bf S}_3\cup \{\{y,z\}\})\backslash\{x\}$ no longer contains two connected components, but only one. In the polytope associated to ${\bf S}_3\cup \{\{y,z\}\}$, the vertex $x\{y,z\}$ gets replaced by two new vertices: $x\{y\{z\}\}$ and $x\{z\{y\}\}$, and the edge $x\{\{y,z\}\}$ between them, which can be realized by truncating $x\{y,z\}$ in the above realization of the triangle:
By additionally adding the hyperedge $\{x,y\}$ to ${\bf S}_3\cup \{\{y,z\}\}$, the vertex $z\{x,y\}$ will also be truncated. This leads us to our second example.
### Associahedron {#assoc}
The hypergraph encoding the $n$-dimensional associahedron is the linear graph with $n+1$ vertices: $${\bf A}_{n+1}=\{\{x_1\},\dots,\{x_{n+1}\},\{x_1,x_2\},\dots,\{x_n,x_{n+1}\}\}.$$ In dimension $2$, the poset of constructs of the hypergraph ${\bf A}_3=\{\{x\},\{y\},\{z\},\{x,y\},\{y,z\}\}$ (i.e. of the hypergraph ${\bf S}_3\cup \{\{x,y\},\{y,z\}\}$) encodes the face lattice of a pentagon as follows:
Starting from the construct representation of the $n$-dimensional associahedron, one can retrieve Stasheff’s original representation in terms of (partial) parenthesisations of a word $a_1\cdots a_{n+2}$ on $n+2$ letters, or, equivalently, of planar rooted trees with $n+2$ leaves, as follows. The idea is to consider each vertex $x_i$ of ${\bf A}_{n+1}$ as the multiplication of letters $a_{i}$ and $a_{i+1}$, as suggested in the following expression: $$a_1 \cdot_{x_1} a_2 \cdot_{x_2} a_3 \cdot_{x_3} \,\,\cdots\,\, \cdot_{x_{n+1}} a_{n+2}.$$ A given construct should then be read from the leaves to the root, interpreting each vertex as an instruction for inserting a pair of parentheses around the group of (possibly already partially parenthesised) letters spanned by all the multiplications determined by the vertex. For example, in dimension $2$, and taking $$a \cdot_{x} b\cdot_{y} c \cdot_z d$$ as the layout for building the parentheses, the constructs $$x\{y\{z\}\},\enspace \{x,y\}\{z\},\enspace\mbox{ and }\enspace y\{x,z\}$$ correspond to parenthesised words $$(a(b(cd))),\enspace (ab(cd)),\enspace\mbox{ and }\enspace ((ab)(cd)),$$ respectively. In the other direction, the construct corresponding to a planar rooted tree with $n+2$ leaves is recovered as follows. First, label the $n+1$ intervals between the leaves of a given tree by $x_1,\dots,x_{n+1}$. Then, considering $x_i$’s as balls, let them fall, and decorate each vertex of the tree by the set of balls which end up falling to that vertex. Finally, remove the input leaves of the starting tree. For example, in dimension 2 again, and writing $x,y,z$ for $x_1,x_2,x_3$, respectively, the planar rooted trees
correspond to constructs
$x\{z\{y\}\}$ and $\{x,z\}\{y\}$,
respectively.
### Permutohedron
The hypergraph encoding the $n$-dimensional permutohedron is the complete graph with $n+1$ vertices: $${\bf P}_{n+1}=\{\{x_1\},\dots,\{x_{n+1}\}\}\cup\{\{x_i,x_j\}\,|\, i,j\in\{1,\dots,n+1\} \mbox{ and } i\neq j\}.$$ In dimension $2$, the hypergraph ${\bf P}_3=\{\{x\},\{y\},\{z\},\{x,y\},\{y,z\},\{z,x\}\}$ (i.e. the hypergraph ${\bf A}_3\cup \{\{z,x\}\}$) encodes the following set of constructs:
The corresponding realization is obtained by truncating the top vertex of the 2-dimensional associahedron from §\[assoc\].
### Hemiassociahedron {#sec.hemiassociahedron}
We finish this section with the description of the $3$-dimensional hemiassociahedron, whose construction in terms of simplex truncation we illustrated in the introduction to this section. The hypergraph encoding the $3$-dimensional hemiassociahedron is the hypergraph ${\bf H}$ from Example \[hypergraphex1\]: $${\bf H}=\{\{x\},\{y\},\{u\},\{v\},\{x,y\},\{x,u\},\{x,v\},\{u,v\},\{x,u,v\}\}.$$ As an exercise, the reader may now label the facets of the $3$-dimensional simplex in such a way that the sequence of truncations illustrated there can be read in terms of non-trivial hyperedges of ${\it Sat}({\bf H})$. The following table, listing the constructs of rank 2 of ${\bf H}$, might come in handy:
together with the following realization, in which we labeled the vertices of the hemiassociahedron:
Notice that we did not specify the hypergraph for the general case of an $n$-dimensional hemiassociahedron. Indeed, the question of the generalization of the $3$-dimensional hemiassociahedron to an arbitrary finite dimension has more than one possible answer. For example, we might define the hypergraph encoding the $n$-dimensional hemiassociahedron by ${\bf H}_{n+1}={\bf P}_{n}\cup \{\{y\},\{x_i,y\}\}$, where $y$ is different from all the vertices of the permutohedron hypergraph ${\bf P}_{n}$ and $x_i$ is one of the vertices of ${\bf P}_{n}$, but other possibilities exist as well. In the framework of strongly homotopy structures, an appropriate generalization should be such that the resulting family of hemiassociahedra is closed under the composition product of the structure in question. Finding such a generalization seems like an interesting task.
The combinatorial homotopy theory for operads {#operads}
=============================================
This section contains the combinatorial description of the minimal model ${\EuScript O}_{\infty}$ of the coloured operad ${\EuScript O}$ encoding non-symmetric non-unital reduced operads. The operad ${\EuScript O}$ is the quadratic coloured operad whose generators and relations presentation is given by the “non-symmetric portion” of [@DV Definition 5], where the coloured operad encoding symmetric operads is defined. This definition describes ${\EuScript O}$ in terms of [*composite trees*]{} (i.e. binary trees whose vertices encode the $\circ_i$ operations) and grafting. Under the name $\mbox{PsOpd}$, and by specifying its spaces of operations, the operad ${\EuScript O}$ is defined earlier in [@VdL Definition 4.1], where it is proven to be self-dual Koszul, and where ${\EuScript O}_{\infty}$ is subsequently defined as the cobar construction $\Omega\,\mbox{PsOpd}^{{{\scriptstyle \text{\rm !`}}}}$ of the cooperad $\mbox{PsOpd}^{{{\scriptstyle \text{\rm !`}}}}$. This alternative characterization describes ${\EuScript O}$ in terms of [*operadic trees*]{} (i.e., trees with vertices indexed bijectively by $[k]$) and substitution. The same style of the definition of ${\EuScript O}$ can also be found in [@BM2 Example 1.5.6], where arity $0$ and units are additionally allowed.
We start this section by recalling and relating in §\[theoperad\] the two equivalent definitions of the operad ${\EuScript O}$. The combinatorial description of the minimal model ${\EuScript O}_{\infty}$ that we construct in §\[oinf\] will be directly tied to the characterization of ${\EuScript O}$ given by [@VdL Definition 4.1]. To each operadic tree ${{\mathcal{T}}}$ with $k$ vertices, we shall associate in a particular way a hypergraph ${\bf H}_{{\mathcal{T}}}$ with $k-1$ vertices, in such a way that the faces of the abstract polytope ${{\mathcal{A}}}({\bf H}_{{\mathcal{T}}})$ become the operations of ${\EuScript O}_{\infty}$ that replace (or split) ${{\mathcal{T}}}$, and that the order relation of ${{\mathcal{A}}}({\bf H}_{{\mathcal{T}}})$ determines the differential of ${\EuScript O}_{\infty}$. As further generalizations of our construction, in §\[Wconstruction\], we introduce a cubical subdivision of operadic polytopes, obtaining in this way precisely the combinatorial Boardman-Vogt-Berger-Moerdijk resolution of ${\EuScript O}$, i.e. the $W$-construction for coloured operads, introduced in [@BM2], applied on ${\EuScript O}$. Finally, in §\[cyc\], by switching from operadic trees to [*cyclic operadic trees*]{}, we obtain the combinatorial description of the minimal model of the coloured operad ${\EuScript C}$ encoding non-symmetric non-unital reduced cyclic operads.
Operads as algebras over the coloured operad ${\EuScript O}$ {#theoperad}
-----------------------------------------------------------
In this section, we recall from [@DV Definition 5] and [@VdL Definition 4.1] the two definitions of the coloured operad ${\EuScript O}$ encoding non-symmetric operads. We relate these two characterizations through a correspondence between the underlying formalisms of composite and operadic trees.
### The operad ${\EuScript O}$ in terms of composite trees
Below is the “non-symmetric portion” of [@DV Definition 5], obtained from [@DV Definition 5] by leaving out the generators encoding the action of the symmetric groups. Note that the resulting operad ${\EuScript O}$ (called ${\EuScript O}_{\it ns}$ in [@DV]) itself remains a symmetric coloured operad. As a final remark before the definition, we note that ${\EuScript O}$ will initially be defined as a coloured operad in the category of sets, and that it is turned into a dg operad by the strong symmetric monoidal functor sending a set $X$ to the direct sum $\bigoplus_{x\in X} {\Bbbk}$.
\[1\] The coloured operad ${\EuScript O}$ is the ${\mathbb N}$-coloured operad defined by ${\EuScript O}={{\mathcal{T}}}_{\mathbb N}(E)/(R)$, where the set of generators $E$ is given by binary operations
for $n,k\geq 1$, equipped with the action of the transposition $(21)$ that sends to , and the set $R$ of relations by relations
[displayed as two families of equalities of composite trees built from the generators]
where, in , $j'=j+i-1$, and, in , it is assumed that $i<j$ and $j'=j+k-1$.
The operad ${\EuScript O}$ is a dg coloured operad: for each $k\geq 2$, the vector space ${{\mathcal{T}}}_{\mathbb N}(E)(n_1,\dots,n_k;n)/R(n_1,\dots,n_k;n)$ is concentrated in degree zero, and the differential is trivial. Therefore, the homology of $${\EuScript O}(n_1,\dots,n_k;n)=\bigoplus_{m\geq 0} ({{\mathcal{T}}}_{\mathbb N}(E)(n_1,\dots,n_k;n)/R(n_1,\dots,n_k;n))_m$$ is trivial for $m\neq 0$, while, for $m=0$, we have $H_0({\EuScript O}(n_1,\dots,n_k;n),0)\cong {\EuScript O}(n_1,\dots,n_k;n)$.
Let $({\it End}(A),d_{\it End})$ be the dg coloured endomorphism operad on a dg ${\mathbb N}$-module $A=\{(A(n),d_{A(n)})\}_{n\geq 1}$, i.e. the ${\mathbb N}$-coloured dg operad defined by $${\it End}(A)(n_1,\dots,n_k;n):= {\mbox{Hom}}(A(n_1)\otimes\cdots\otimes A(n_k);A(n)),\quad\quad k\geq 1,$$ where ${\mbox{Hom}}_p(A(n_1)\otimes\cdots\otimes A(n_k);A(n))$ is the vector space of homogeneous degree $p$ linear maps $f:A(n_1)\otimes\cdots\otimes A(n_k)\rightarrow A(n)$, with the partial composition operations (resp. the action of the symmetric groups) induced by substitution (resp. permutation respecting the Koszul sign rule) of the tensor factors, and with the differential $d_{\it End}$ defined on $f\in {\mbox{Hom}}_p(A(n_1)\otimes\cdots\otimes A(n_k);A(n))$ by $$d_{\it End}(f):=d_{A(n)}\circ f-(-1)^p \sum_{i=1}^k f\circ_i d_{A(n_i)}.$$ (Note that when the formula defining $d_{\it End}$ is applied to elements, additional signs appear due to the Koszul sign rule.)
\[algoperad\] Algebras over the coloured operad ${\EuScript O}$ are non-unital, non-symmetric, reduced dg operads.
By definition, an ${{\EuScript O}}$-algebra is a degree 0 homomorphism of ${\mathbb N}$-coloured dg operads $\chi:({{\EuScript O}},0)\rightarrow ({\it End}(A),d_{\it End})$. Therefore, an ${{\EuScript O}}$-algebra is a dg ${\mathbb N}$-module $(A,d)=\{(A(n),d_{A(n)})\}_{n\geq 1}$ endowed with partial composition operations $$\circ_i :\, A(n)\otimes A(k)\rightarrow A(n+k-1), \qquad n,k\geq 1,\; 1\leq i\leq n,$$ one for each generator of ${\EuScript O}$, satisfying the obvious associativity axioms, whereby the equality $\chi\circ 0=d_{{\it End}}\circ \chi$ satisfied by $\chi$ guarantees that those operations are compatible with $d$.
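As a toy illustration of the theorem (ours, and only a sketch): the collection of planar rooted trees, with $\circ_i$ given by grafting, carries exactly this kind of structure in the category of sets; linearizing over ${\Bbbk}$, with zero differentials, gives an ${\EuScript O}$-algebra. In the following Python fragment a planar rooted tree is encoded by nesting tuples, with `None` standing for a leaf; the encoding and the function names are our own.

```python
def arity(t):
    """Number of leaves of a planar rooted tree; a leaf is encoded as None,
    a vertex as the tuple of its subtrees in planar order."""
    return 1 if t is None else sum(arity(s) for s in t)

def graft(t1, i, t2):
    """t1 o_i t2: substitute t2 for the i-th leaf of t1 (counted from the left,
    1-indexed), as in the grafting of planar rooted trees."""
    if t1 is None:
        return t2                                   # here i is necessarily 1
    new, k = [], i
    for s in t1:
        a = arity(s)
        new.append(graft(s, k, t2) if 1 <= k <= a else s)
        k -= a
    return tuple(new)

# One instance of the sequential axiom (f o_i g) o_{i+j-1} h = f o_i (g o_j h):
f, g, h = (None, None, None), (None, None), (None, None)
i, j = 2, 1
assert graft(graft(f, i, g), i + j - 1, h) == graft(f, i, graft(g, j, h))
```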
### The operad ${\EuScript O}$ in terms of operadic trees {#operadictrees}
We next recall from [@VdL Definition 4.1] and [@BM2 Example 1.5.6] the characterization of ${\EuScript O}$ in terms of operadic trees.
Denote, for $k\geq 2$, $n_1,\dots,n_k\geq 1$, and $n=\big(\sum^k_{i=1} n_i\big)-k+1$, with ${\tt Tree}(n_1,\dots,n_k;n)$ the set of equivalence classes of pairs $({\mathcal{T}},\sigma)$, where ${{\mathcal{T}}}\in{\tt Tree}(n)$ has $k$ vertices and $\sigma :[k]\rightarrow v({{\mathcal{T}}})$ is a bijection such that the vertex $\sigma(i)$ has $n_i$ inputs, under the equivalence relation defined by:
$({{\mathcal{T}}}_1,\sigma_{1})\sim ({{\mathcal{T}}}_2,\sigma_{2})$ if there exists an isomorphism $\varphi: {{\mathcal{T}}}_1\rightarrow {{\mathcal{T}}}_2$, such that $\varphi\circ\sigma_{1}=\sigma_{2}$.
(In the equality $\varphi\circ\sigma_{1}=\sigma_{2}$ above, we abuse the notation by writing $\varphi$ for what is actually the vertex component of $\varphi$. We shall continue with this practice whenever specifying compatibilities involving tree isomorphisms.) We refer to pairs $({{\mathcal{T}}},\sigma)$ as [*operadic trees*]{}.
\[representation\] A ${\Bbbk}$-linear basis of the vector space ${\EuScript O}(n_1,\dots,n_k;n)$ is given by the equivalence classes of operadic trees from ${\tt Tree}(n_1,\dots,n_k;n)$.
By [@DV Proposition 3], the coloured operad ${\EuScript O}$ is spanned by binary planar ${\mathbb N}$-coloured left combs whose vertex decorations, read from top to bottom, are nondecreasing, together with a labeling of the leaves with a permutation on the number of them. Formally, this basis is the set of normal forms of the confluent and terminating rewriting system obtained by orienting the relations and from left to right. To each left comb
${T}$, with vertex decorations
such that $i_1\leq \cdots\leq i_{k-1}$, we associate an operadic tree $\omega(T)$, as follows: denote with $t_i$ the planar corolla with $n_i$ inputs, decorated with $\sigma(i)$, and define $$\omega(T)=(\cdots((t_1\circ_{i_1} t_2)\circ_{i_2} t_3)\cdots)\circ_{i_{k-1}} t_k ,$$ where $\circ_{i_j}$, $1\leq j\leq k-1$, denotes the grafting operation on rooted trees (that preserves vertex decorations). In particular, the correspondence between the generators of ${\EuScript O}$ and operadic trees with two vertices is given by
[Figure: the two generators of ${\EuScript O}$ together with the corresponding operadic trees with two vertices, indexed by $1$ and $2$, one vertex being grafted along the $i$-th input of the other.]
Notice that the fact that the vertex decorations of ${T}$ are nondecreasing means that $\omega(T)$ is defined by grafting the corollas in the [*left-recursive way*]{}, i.e. from bottom to top and from left to right. This property is used for the definition of the inverse of $\omega$: a composite tree is recovered by traversing an operadic tree in the left-recursive manner, as we illustrate in Example \[compositionexample\] that follows.
The partial composition operations $\circ_i$ of the coloured operad ${\EuScript O}$ translate to the basis given by Lemma \[representation\] as follows: for $({{\mathcal{T}}}_1,\sigma_{1})\in {\EuScript O}(n_1,\dots,n_k;n)$ and $({{\mathcal{T}}}_2,\sigma_{2})\in {\EuScript O}(m_1,\dots,m_l;n_i)$, we have $$({{\mathcal{T}}}_1,\sigma_{1}) \circ_i ({{\mathcal{T}}}_2,\sigma_{2}) = ({{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2, \sigma_1\bullet_i\sigma_2),$$ where ${{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2$ is the planar rooted tree obtained by replacing the vertex $\sigma_{1}(i)$ (i.e. the vertex indexed by $i$) of ${{\mathcal{T}}}_1$ by the tree ${{\mathcal{T}}}_2$, identifying the $n_i$ inputs of $\sigma_{1}(i)$ in ${{\mathcal{T}}}_1$ with the $n_i$ input edges of ${{\mathcal{T}}}_2$ using the respective planar structures, and $\sigma_1\bullet_i\sigma_2$ is defined by $$(\sigma_1\bullet_i\sigma_2)(j)=\begin{cases} \sigma_{1}(j), \enspace j<i \\ \sigma_{2}(j-i+1),\enspace i\leq j\leq i+l-1 \\ \sigma_{1}(j-l+1), \enspace i+l\leq j\leq k+l-1.\end{cases}$$ Indeed, it can be shown that $$({{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2, \sigma_1\bullet_i\sigma_2)=\omega({\it nf}(\omega^{-1}({{\mathcal{T}}}_1,\sigma_1)\circ_i \omega^{-1}({{\mathcal{T}}}_2,\sigma_2))),$$ where $\omega$ is the bijection from the proof of Lemma \[representation\], and ${\it nf}$ is the normal form function of the rewriting system generated by orienting the relations and from left to right. The action of the symmetric group is defined by $({{\mathcal{T}}},\sigma)^{\kappa}=({{\mathcal{T}}}, \sigma\circ\kappa)$.
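The index bookkeeping in the formula for $\sigma_1\bullet_i\sigma_2$ amounts to splicing one list into another. Here is a minimal sketch (ours, purely illustrative), with a bijection $\sigma:[k]\rightarrow v({{\mathcal{T}}})$ represented as the list $[\sigma(1),\dots,\sigma(k)]$ of vertex names:

```python
def compose_indexing(sigma1, i, sigma2):
    """sigma1 bullet_i sigma2: the i-th entry of sigma1 (1-indexed) is replaced
    by the whole of sigma2, and the later entries of sigma1 shift to the right."""
    return sigma1[:i - 1] + sigma2 + sigma1[i:]

# Hypothetical vertex names, following the pattern of the example below:
# substituting a three-vertex tree into the vertex indexed by 2 of a three-vertex
# tree makes the vertex previously indexed by 3 receive the index 5.
sigma1, sigma2 = ["a", "b", "c"], ["p", "q", "r"]
assert compose_indexing(sigma1, 2, sigma2) == ["a", "p", "q", "r", "c"]
```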
The following example illustrates the correspondence between the partial composition operation of ${\EuScript O}$ in terms of composite trees and grafting, and operadic trees and substitution.
\[compositionexample\] For operadic trees
[Figure: the operadic tree $({{\mathcal{T}}}_1,\sigma_1)$ with three vertices: the root vertex, indexed by $2$, carries the vertex indexed by $1$, which in turn carries the vertex indexed by $3$.]
we have
The normalizing sequence for $\omega^{-1}({{\mathcal{T}}}_1,\sigma_1)\circ_2 \omega^{-1}({{\mathcal{T}}}_2,\sigma_2)$ is given by
The operadic tree corresponding to the last composite tree in the sequence is
[Figure: the operadic tree $({{\mathcal{T}}},\sigma)$ with five vertices: the root vertex, indexed by $4$, carries the vertices indexed by $1$, $2$ and $3$, and the vertex indexed by $1$ carries the vertex indexed by $5$.]
and we indeed have that $({{\mathcal{T}}}_1,\sigma_1)\circ_2 ({{\mathcal{T}}}_2,\sigma_2)=({{\mathcal{T}}},\sigma)$.
Observe that, although ${\EuScript O}$ is an operad with the free action of the symmetric group, the relation contains a non-trivial permutation of the inputs, making it a [*non-regular*]{} operad. This means that ${\EuScript O}$ cannot be characterized starting from a non-symmetric operad, by tensoring the space of operations with the regular representation of ${\Bbbk}[\Sigma_n]$, and by tensoring the partial composition operation with the composition map of the symmetric operad ${\it Ass}$.
Indeed, such a characterization would require that the restriction of the structure of ${\EuScript O}$ to [*left-recursive*]{} operadic trees, i.e. operadic trees with a canonical order of vertices that we define below, is closed under the operadic composition of ${\EuScript O}$, which fails to be true.
A left-recursive operadic tree is an operadic tree $({{\mathcal{T}}},\sigma)$, for which $\sigma:[k]\rightarrow v({{\mathcal{T}}})$ is the following canonical indexing of the vertices of ${{\mathcal{T}}}$:\
- if ${{\mathcal{T}}}$ is a corolla $t_n$, then $\sigma:\{1\}\rightarrow v(t_n)$ is trivially defined by $\sigma(1)=\rho(t_n)$;\
- if ${{\mathcal{T}}}=t_m({{\mathcal{T}}}_1,\dots,{{\mathcal{T}}}_p)$ and if $\leq_i$ is the linear order on $v({{\mathcal{T}}}_i)$ determined by the left-recursive structure of ${{{\mathcal{T}}}_i}$, then $\sigma$ is derived from the following linear order on $v({{\mathcal{T}}})$: $$u\leq v \quad \Longleftrightarrow \quad \begin{cases} u=\rho(t_m) \\ u,v\in v({{\mathcal{T}}}_i) \enspace\mbox{ and }\enspace u\leq_i v \\ u\in v({{\mathcal{T}}}_i),\, v\in v({{\mathcal{T}}}_j) \enspace\mbox{ and }\enspace {{\mathcal{T}}}_i < {{\mathcal{T}}}_j, \end{cases}$$ where ${{\mathcal{T}}}_i<{{\mathcal{T}}}_j$ means that ${{\mathcal{T}}}_i$ comes before ${{\mathcal{T}}}_j$ with respect to the order of inputs of $\rho(t_m)$.\
Hence, in a left-recursive operadic tree, the vertices are indexed from bottom to top and from left to right by $1$ through $k$. Observe that this indexing is invariant under planar isomorphisms. In what follows, when refering to a left-recursive operadic tree $({{\mathcal{T}}},\sigma)$, given that $\sigma$ is canonically determined, we shall write simply ${{\mathcal{T}}}$.
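Concretely, the left-recursive indexing is the preorder traversal of the tree: the root vertex first, followed by the vertices of the subtrees, taken from left to right. A small illustrative sketch (the encoding of a planar rooted tree as a pair of a vertex name and the list of its subtrees is our own):

```python
def left_recursive_order(tree):
    """Preorder listing (root first, then the subtrees from left to right) of the
    vertices of a planar rooted tree encoded as (vertex name, [subtrees])."""
    name, subtrees = tree
    order = [name]
    for sub in subtrees:
        order += left_recursive_order(sub)
    return order

# The tree underlying (T, sigma) from Example [compositionexample], with vertices
# named by the indices they carry there; its left-recursive indexing is different.
T = ("4", [("1", [("5", [])]), ("2", []), ("3", [])])
assert left_recursive_order(T) == ["4", "1", "5", "2", "3"]
```

In particular, in the left-recursive indexing of this tree, the vertex carrying the index $4$ in the example would be indexed by $1$.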
The reader may now want to compose the operadic trees ${{\mathcal{T}}}_1$ and ${{\mathcal{T}}}_2$ from Example \[compositionexample\], considered as left-recursive trees, to see that the result will not be a left-recursive operadic tree. Nevertheless, note that the composition $({{\mathcal{T}}}_1,\sigma_1)\circ_2 ({{\mathcal{T}}}_2,\sigma_2)$ from that example can be calculated by the substitution operation on ${{\mathcal{T}}}_1$ and ${{\mathcal{T}}}_2$ considered as left-recursive operadic trees, followed by the reindexing of the vertices of the resulting (non-left-recursive) tree in a uniquely determined way.
\[namesnumbers\] The data of an operadic tree ${{\mathcal{T}}}$ involves [*non-skeletal*]{} and [*skeletal*]{} identifications of its edges and vertices: the non-skeletal data is given by the [*names*]{} of edges and vertices as elements of $e({{\mathcal{T}}})\cup i({{\mathcal{T}}})$ and $v({{\mathcal{T}}})$, respectively, and the skeletal data is the index of an edge (resp. vertex) given by the planar structure (resp. by the left-recursive indexing). We shall freely mix these two ways of specifying edges and vertices and use whatever is more suitable for the purpose at hand. In particular, note that the non-skeletal description of edges eases the portrayal of the operadic composition operation, as it bypasses the reindexing involved in the skeletal setting.
The combinatorial ${\EuScript O}_{\infty}$ operad {#oinf}
-------------------------------------------------
In this section, we define the combinatorial ${\EuScript O}_{\infty}$ operad as the dg operad defined on the faces of [*operadic polytopes*]{}, i.e. hypergraph polytopes whose hypergraphs are the edge-graphs of operadic trees, with the differential determined by the partial order on those faces. We start by formalizing the latter type of hypergraphs.
### The edge-graph of a planar rooted tree {#edgegraph1}
The [*edge-graph*]{} of a planar rooted tree ${{\mathcal{T}}}$ is the hypergraph ${\bf H}_{{\mathcal{T}}}$ defined as follows: the vertices of ${\bf H}_{{\mathcal{T}}}$ are the (internal) edges of ${{\mathcal{T}}}$ (identified in the non-skeletal manner) and two vertices are connected by an edge in ${\bf H}_{{\mathcal{T}}}$ whenever, as edges of ${{\mathcal{T}}}$, they share a common vertex. Notice that the names (and possible indexing) of the vertices of ${{\mathcal{T}}}$, as well as the leaves of ${{\mathcal{T}}}$, play no role in the definition of the edge-graph of ${{\mathcal{T}}}$.
\[edgegraph\] With the non-skeletal identification of the edges of operadic trees $({{\mathcal{T}}}_1,\sigma_1)$, $({{\mathcal{T}}}_2,\sigma_2)$ and $({{\mathcal{T}}},\sigma)$ from Example \[compositionexample\] given by
[Figure: the tree $({{\mathcal{T}}}_1,\sigma_1)$ with its internal edges named $x$ (joining the vertices indexed by $2$ and $1$) and $y$ (joining the vertices indexed by $1$ and $3$), and the tree $({{\mathcal{T}}},\sigma)$ with its internal edges named $x$, $u$ and $v$ (joining the root vertex, indexed by $4$, to the vertices indexed by $1$, $2$ and $3$, respectively) and $y$ (joining the vertices indexed by $1$ and $5$).]
respectively, the corresponding edge-graphs are the linear graph on $\{x,y\}$, the linear graph on $\{u,v\}$ and the hypergraph ${\bf H}$ from Example \[hypergraphex1\] on $\{x,y,u,v\}$, respectively. Observe that the edge-graph ${\bf H}_{{\mathcal{T}}}$ of the tree ${{\mathcal{T}}}$ is thus precisely the hypergraph of the hemiassociahedron (cf. §\[sec.hemiassociahedron\]), making ${{\mathcal{T}}}$ [*our favourite operadic tree*]{}.
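The edge-graph is equally easy to compute. The following fragment (again only an illustration, reusing the helper functions and the test hypergraph ${\bf H}$ from the earlier Python sketches) follows the definition above literally: two edges of ${{\mathcal{T}}}$ are joined whenever they share a vertex. For the tree ${{\mathcal{T}}}$ of Example \[edgegraph\] one checks that the resulting graph has the same saturation as the hemiassociahedron hypergraph of Example \[hypergraphex1\], so the two encode the same abstract polytope.

```python
def edge_graph(edges):
    """The edge-graph of a rooted tree whose internal edges are given as a dict
    {edge name: pair of endpoint vertices}: two edges are joined whenever they
    share a tree vertex, and singletons are added so that the result is atomic."""
    H = {frozenset([e]) for e in edges}
    for a in edges:
        for b in edges:
            if a != b and set(edges[a]) & set(edges[b]):
                H.add(frozenset([a, b]))
    return frozenset(H)

# The internal edges of the tree T of Example [edgegraph]: x joins the vertices
# indexed by 4 and 1, y joins 1 and 5, u joins 4 and 2, and v joins 4 and 3.
T_edges = {"x": (4, 1), "y": (1, 5), "u": (4, 2), "v": (4, 3)}
assert saturation(edge_graph(T_edges)) == saturation(H)
```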
Observe, in Example \[edgegraph\], the additional data given by the relative position of vertices of ${\bf H}_{{{\mathcal{T}}}_1}$ (one above the other) and ${\bf H}_{{{\mathcal{T}}}_2}$ (one next to the other). This data is implicitly present in the edge-graphs of planar rooted trees: since edge-graphs inherit their structure from planar rooted trees, their vertices can naturally be arranged in levels, both vertically (from bottom to top) and horizontally (from left to right). This observation is essential for the interpretation of the edges of operadic polytopes in terms of homotopies replacing the relations and defining the operad ${\EuScript O}$. The latter interpretation has been defined in detail in [@CIO Section 4]. Let us recall here the idea.
Recall from §\[abspol\] that the vertices of a hypergraph polytope are encoded by the constructions of the corresponding hypergraph, i.e. by the constructs whose vertices are decorated by singletons only. In addition, the edges of a hypergraph polytope are encoded by the constructs whose vertices are all singletons, except one, which is a two-element set. Let ${{\mathcal{T}}}$ be an operadic tree and let $C$ be a construct encoding an edge of the operadic polytope ${\bf H}_{{\mathcal{T}}}$; suppose that $\{x,y\}$ is the unique two-element set vertex of $C$. We show how ${\bf H}_{{\mathcal{T}}}$, together with its bipartition of vertical and horizontal edges, determines the type of $C$ in terms of homotopies for the relations and , as well as the direction of the corresponding edge corresponding to the orientation of and from left to right. (Strictly speaking, in [@CIO], the authors worked in the non-skeletal operadic setting and with the opposite orientation of . In the non-skeletal environment, the colours of ${\EuScript O}$ are arbitrary finite sets and the vertices of composite trees are decorated by the elements of those sets. This in particular means that the non-skeletal variant of the relation does not admit a natural orientation, as opposed to the skeletal one.) In order to state the criterion, we shall use the fact that, among all the paths between two vertices of ${\bf H}_{{\mathcal{T}}}$, there exists a unique one of minimal length; this fact is proven in [@CIO Lemma 11]. The criterion is the following:
> If the shortest path between $x$ and $y$ in ${\bf H}_{{\mathcal{T}}}$ is made up of vertical edges only, then the edge encoded by $C$ corresponds to the homotopy for the relation , and is oriented towards the vertex encoded by the construction in which the vertex $x$ appears above the vertex $y$ if and only if the vertical level of $x$ is inferior to the vertical level of $y$ in ${\bf H}_{{\mathcal{T}}}$. Otherwise, the edge encoded by $C$ corresponds to the homotopy for the relation , and is oriented towards the vertex encoded by the construction in which the vertex $x$ appears above the vertex $y$ if and only if the horizontal level of $x$ is inferior to the horizontal level of $y$ in ${\bf H}_{{\mathcal{T}}}$.
Let us derive the edge information for the facet of the hemiassociahedron given by the marked square in the realization below:
According to the criterion, the edges $\{u,v\}\{y\{x\}\}$ and $\{u,v\}\{x\{y\}\}$ encode the homotopies for , whereas the edges $v\{u\{\{x,y\}\}\}$ and $u\{v\{\{x,y\}\}\}$ encode the homotopies for . From the point of view of [*categorified operads*]{} [@DP-HP-o], corresponding to strongly homotopy operads for which the operations given by operadic trees with more than three vertices vanish, the construct $\{u,v\}\{\{x,y\}\}$, encoding the entire square, is the homotopy identity for the naturality relation
$$\begin{array}{ccc}
(((f {\circ_{x}} g)\circ_{y}h)\circ_u k)\circ_v l & \longleftrightarrow & (((f {\circ_{x}} g)\circ_{y}h)\circ_v l)\circ_u k\\[0.2cm]
\updownarrow & & \updownarrow\\[0.2cm]
((f {\circ_{x}} (g\circ_{y}h))\circ_u k)\circ_v l & \longleftrightarrow & ((f {\circ_{x}} (g\circ_{y}h))\circ_v l)\circ_u k
\end{array}$$
pertaining to the operation
[Figure: the operadic tree with vertices decorated by $f$, $g$, $h$, $k$ and $l$, in which $g$ is attached to the root vertex $f$ along the edge $x$, $h$ to $g$ along $y$, and $k$ and $l$ to $f$ along $u$ and $v$, respectively.]
The following two lemmas are straightforward consequences of the definition of the edge-graph of an operadic tree.
\[connectededges\] The subtrees of an operadic tree $({{\mathcal{T}}},\sigma)$ that have at least two vertices, considered as left-recursive operadic trees, are in a one-to-one correspondence with the connected subsets of ${H}_{{\mathcal{T}}}$, i.e. non-empty subsets $X$ of vertices of ${\bf H}_{{\mathcal{T}}}$ such that the hypergraph $({\bf H}_{{\mathcal{T}}})_{X}$ is connected.
Thanks to Lemma \[connectededges\], for an operadic tree $({{\mathcal{T}}},\sigma)$ and $\emptyset\neq X\subseteq e({{\mathcal{T}}})$, we can index the connected components of ${\bf H}_{{\mathcal{T}}}\backslash X$ by the corresponding left-recursive subtrees of ${{\mathcal{T}}}$, by writing $${\bf H}_{{\mathcal{T}}}\backslash X\leadsto {\bf H}_{{{\mathcal{T}}}_1},\dots,{\bf H}_{{{\mathcal{T}}}_n}.$$ However, one must be careful with the induced decomposition on the level of trees! Observe that the subtrees ${{\mathcal{T}}}_1,\dots,{{\mathcal{T}}}_n$ of ${{\mathcal{T}}}$ do not in general make a decomposition of ${{\mathcal{T}}}$, in the sense that the removal of the edges from the set $X$ may result in a number of subtrees of ${{\mathcal{T}}}$ reduced to a corolla.
\[wd0\] Suppose that $({{\mathcal{T}}},\sigma)=({{\mathcal{T}}}_1,\sigma_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2)$, and that, for a subset $\emptyset\neq X\subseteq e({{\mathcal{T}}}_1)$ of edges of ${{\mathcal{T}}}_1$, we have ${\bf H}_{{{\mathcal{T}}}_1}\backslash X\leadsto {\bf H}_{({{\mathcal{T}}}_1)_1},\dots,{\bf H}_{({{\mathcal{T}}}_1)_p}$. If there exists an index $1\leq j\leq p$, such that the subtree $({{\mathcal{T}}}_{1})_j$ of ${{\mathcal{T}}}_1$ contains the vertex $v$ indexed by $i$ in ${{\mathcal{T}}}_1$, and if $l$ is the index that the vertex $v$ gets in the left-recursive ordering of the vertices of $({{\mathcal{T}}}_{1})_j$, then $${\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}\backslash X\leadsto\{{\bf H}_{({{\mathcal{T}}}_{1})_k}\,|\, 1\leq k\leq p , k\neq j\}\cup\{{\bf H}_{({{\mathcal{T}}}_{1})_j\,\bullet_l\, {{\mathcal{T}}}_2}\}.$$ Otherwise, we have that $${\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}\backslash X\leadsto\{{\bf H}_{({{\mathcal{T}}}_{1})_k}\,|\, 1\leq k\leq p \}\cup\{{\bf H}_{{{\mathcal{T}}}_2}\}.$$
An isomorphism of planar rooted trees induces an isomorphism on the corresponding hypergraphs and their constructs in the natural way: the form of the hypergraph matters, not the names of the hyperedges. For an isomorphism $\varphi:{{\mathcal{T}}}_1\rightarrow {{\mathcal{T}}}_2$ of planar rooted trees and constructs $C_1:{\bf H}_{{{\mathcal{T}}}_1}$ and $C_2:{\bf H}_{{{\mathcal{T}}}_2}$, we shall write $C_1\sim_{\varphi} C_2$ to denote that $C_1$ and $C_2$ are isomorphic via $\varphi$. In addition, for a hypergraph ${\bf H}$, the polytope ${{\mathcal{A}}}({\bf H})$ will be considered modulo renaming of the vertices of ${\bf H}$.
### The operad ${\EuScript O}_{\infty}$ as an operad of vector spaces {#Oinf}
Define, for $k\geq 2$, $n_1,\dots,n_k\geq 1$, and $n=\big(\sum^k_{i=1} n_i\big)-k+1$, the vector space ${\EuScript O}_{\infty}\displaystyle(n_1,n_2,\dots,n_k;n)$ to be the ${\Bbbk}$-linear span of the set of triples $({{\mathcal{T}}},\sigma,C)$, such that $({{\mathcal{T}}},\sigma)\in {\tt Tree}(n_1,\dots,n_k;n)$ and $C:{\bf H}_{{\mathcal{T}}}$, subject to the equivalence relation generated by:
> $({{\mathcal{T}}}_1,\sigma_1,C_1)\sim ({{\mathcal{T}}}_2,\sigma_2,C_2)$ if there exists an isomorphism $\varphi:{{\mathcal{T}}}_1\rightarrow {{\mathcal{T}}}_2$, such that $\varphi\circ\sigma_1=\sigma_2$ and $ C_1 \sim_{\varphi}C_2$.
Hence, for a fixed operadic tree $({{\mathcal{T}}},\sigma)\in {\tt Tree}(n_1,\dots,n_k;n)$, the subspace of ${\EuScript O}_{\infty}\displaystyle(n_1,n_2,\dots,n_k;n)$ determined by $({{\mathcal{T}}},\sigma)$ is spanned by all the (isomorphism classes of) constructs of the hypergraph ${\bf H}_{{\mathcal{T}}}$:
$${\EuScript O}_{\infty}(n_1,n_2,\dots,n_k;n):=\displaystyle {\textsf{Span}}_{{\Bbbk}}\bigg(\bigoplus_{({{\mathcal{T}}},\sigma)\in \,{\tt Tree}(n_1,\dots,n_k;n)} {A}({\bf H}_{{\mathcal{T}}})\bigg).$$
Note that for $n\neq \big(\sum^k_{i=1} n_i\big)-k+1$, we set ${\EuScript O}_{\infty}(n_1,n_2,\dots,n_k;n)$ to be the zero vector space. The ${\mathbb N}$-coloured collection $$\{{\EuScript O}_{\infty}(\displaystyle n_1,n_2,\dots,n_k;n) \,\,|\,\, n_1,\dots,n_k\geq 1\}$$ admits the following operad structure. The composition operation $$\circ_i : {\EuScript O}_{\infty}\displaystyle(n_1,\dots,n_k;n) \otimes {\EuScript O}_{\infty}\displaystyle(m_1,\dots,m_l;n_i)\rightarrow {\EuScript O}_{\infty}\displaystyle(n_1,\dots,n_{i-1},m_1,\dots,m_l,n_{i+1},\dots n_k;n)$$ is defined by $$({{\mathcal{T}}}_1,\sigma_1,C_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2,C_2)=({{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2,\sigma_1\bullet_i\sigma_2,C_1\bullet_i C_2),$$ where the composition on the level of operadic trees is determined by the composition product of the operad ${\EuScript O}$, and the construct $C_1\bullet_i C_2:{\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}$ is defined as follows.\
- If $C_1=H_{{{\mathcal{T}}}_1}$, then $$C_1\bullet_i C_2:=H_{{{\mathcal{T}}}_1}\{C_2\}.$$
- Suppose that $C_1=X\{C_{11},\dots,C_{1p}\}$, where ${\bf H}_{{{\mathcal{T}}}_1}\backslash X\leadsto ({\bf H}_{{{\mathcal{T}}}_1})_1,\dots, ({\bf H}_{{{\mathcal{T}}}_1})_p$ and $C_{1q}:({\bf H}_{{{\mathcal{T}}}_1})_q$. If there exists an index $1\leq j\leq p$, such that the subtree $({{\mathcal{T}}}_1)_j$ of ${{\mathcal{T}}}_1$ contains the vertex $v$ indexed by $i$ in ${{\mathcal{T}}}_1$, we define $$C_1\bullet_i C_2:=X\{C_{11},\dots,C_{1j}\bullet_l C_2,\dots,C_{1p}\},$$ where $l$ is the left-recursive index of the vertex $v$ in $({{\mathcal{T}}}_1)_j$. Otherwise, we define $$C_1\bullet_i C_2:=X\{C_{11},\dots,C_{1p},C_2\}.$$
The action of the symmetric group is defined by $({{\mathcal{T}}},\sigma,C)^{\kappa}=({{\mathcal{T}}}, \sigma\circ\kappa,C)$.
\[wd\] The composition operation of ${\EuScript O}_{\infty}$ is well-defined.
We prove that $C_1\bullet_i C_2$ is indeed a construct of ${\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}$. As for the first case defining $C_1\bullet_i C_2$, by Lemma \[connectededges\], since ${{\mathcal{T}}}_2$ is a subtree of ${{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2$, we have that ${\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}\backslash H_{{{\mathcal{T}}}_1}\leadsto {\bf H}_{{{\mathcal{T}}}_2}$. Therefore, since $C_2:{\bf H}_{{{\mathcal{T}}}_2}$, we indeed have that $H_{{{\mathcal{T}}}_1}\{C_2\}:{\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}$. The legitimacy of the second case defining $C_1\bullet_i C_2$ is a direct consequence of Lemma \[wd0\].
The following lemma provides a non-inductive characterization of $C_1\bullet_i C_2$.
The construct $C_1\bullet_i C_2$ is the unique construct of the hypergraph ${\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2}$, such that $v(C_1\bullet_i C_2)=v(C_1)\cup v(C_2)$, $\rho(C_1\bullet_i C_2)=\rho(C_1)$, and such that there exists an edge of $C_1\bullet_i C_2$ whose removal results precisely in $C_1$ and $C_2$.
Note that, if $({{\mathcal{T}}}_1,\sigma_1,C_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2,C_2)=({{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2,\sigma_1\bullet_i\sigma_2,C_1\bullet_i C_2)$, then $C_1\bullet_i C_2\leq H_{{{\mathcal{T}}}_1}\{H_{{{\mathcal{T}}}_2}\}$ in ${{\mathcal{A}}}({\bf H}_{{{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2})$.
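The recursion defining $C_1\bullet_i C_2$ is easy to mechanize. The following Python sketch is only an illustration, under a hypothetical encoding: a construct is a pair `(label, children)`, where `label` is the set of names of the edges of the tree decorating a vertex and `children` is the list of its child constructs; `edge_map` sends each edge name of ${{\mathcal{T}}}_1$ to its pair of endpoint vertices, and the vertex of ${{\mathcal{T}}}_1$ at which the substitution happens is passed by its name, so as to sidestep the reindexing by left-recursive positions. Only the construct component of the composition is computed; the operadic-tree component and the signs of the graded setting below are ignored.

```python
def edge_vertices(edges, edge_map):
    """Vertices of the tree touched by the given set of edge names."""
    return {w for e in edges for w in edge_map[e]}

def all_edges(construct):
    """Union of the edge sets decorating the vertices of a construct."""
    label, children = construct
    return set(label).union(*(all_edges(c) for c in children))

def compose(c1, v, c2, edge_map):
    """Construct component of the composition: substitute c2 at the vertex v.

    Descend into the unique child of c1 whose subtree contains v, if any;
    otherwise graft c2 onto the current root, as in the inductive definition."""
    label, children = c1
    for idx, child in enumerate(children):
        if v in edge_vertices(all_edges(child), edge_map):
            new_children = list(children)
            new_children[idx] = compose(child, v, c2, edge_map)
            return (label, new_children)
    return (label, list(children) + [c2])

# Hypothetical illustration: T1 is the linear tree with edges x = {a,b} and
# y = {b,c}, and a tree T2 with edges u, v is substituted at the vertex c.
edge_map = {'x': ('a', 'b'), 'y': ('b', 'c')}
c1 = ({'x'}, [({'y'}, [])])    # the construct x{y} of H_{T1}
c2 = ({'u', 'v'}, [])          # the maximal construct of H_{T2}
print(compose(c1, 'c', c2, edge_map))    # x{y{{u,v}}}
```

In this illustration the vertex $c$ lies in the subtree cut out by the edge $y$, so the recursion descends once and then grafts the second construct onto the root of $\{y\}$, in accordance with the inductive clauses above.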
\[compositionconstructs\] The picture below displays all the 9 instances of the partial composition $$\circ_2:{\EuScript O}_{\infty}(3,7,2;10)\otimes {\EuScript O}_{\infty}(2,2,5;7)\rightarrow {\EuScript O}_{\infty}(3,2,2,5,2;10)$$ determined by operadic trees $({{\mathcal{T}}}_1,\sigma_1)$ and $({{\mathcal{T}}}_2,\sigma_2)$ from Example \[compositionexample\]. The resulting 9 constructs are the faces of the square $\{x,y\}\{\{u,v\}\}$ of the 3-dimensional hemiassociahedron.
(picture of the nine compositions omitted)
Observe that the rank of the composition is the sum of the corresponding ranks.
Let us provide the details of the construction of the composition
By definition, we consider the left-recursive subtrees of ${{\mathcal{T}}}_1$ obtained by removing the edge $x$ and we search for the one containing the vertex that used to be indexed by $1$ in ${{\mathcal{T}}}_1$. Since this subtree is reduced to a corolla, the resulting construct will have grafted to the root vertex of .
The proof that the operad ${\EuScript O}_{\infty}$ is free as an operad of vector spaces will rely on the operation of collapsing an edge in a rooted tree. We recall the relevant definitions and results below.
\[collapse\] Let ${{\mathcal{T}}}\in {\tt Tree}(n)$ and let $e$ be an (internal) edge of ${{\mathcal{T}}}$. We define ${{\mathcal{T}}}\backslash e\in {\tt Tree}(n)$ to be the rooted tree obtained by collapsing the edge $e$ [*downwards*]{}, i.e. in such a way that the vertex that remains after $e$ is collapsed is the target vertex of $e$, i.e. the root vertex $\rho({{\mathcal{T}}}(\{e\}))$ of the subtree of ${{\mathcal{T}}}$ determined by $e$; after the collapse, the inputs of $\rho({{\mathcal{T}}}(\{e\}))$ will be all the inputs of ${{\mathcal{T}}}(\{e\})$ and they will be ordered as in ${{\mathcal{T}}}(\{e\})$. The remaining of the structure of ${{\mathcal{T}}}$ remains the same in ${{\mathcal{T}}}\backslash e$.
As for the edge collapses of operadic trees $({{\mathcal{T}}},\sigma)$, we take the convention to consider both ${{\mathcal{T}}}\backslash e$ and ${{\mathcal{T}}}(\{e\})$ as left-recursive operadic trees.
The following lemma is a straightforward consequence of Definition \[collapse\].
\[elementarydecomposition\] For an operadic tree $({{\mathcal{T}}},\sigma)$ and $e\in {\it e}({{\mathcal{T}}})$, there exists a unique permutation $\sigma_e\in\Sigma_k$, such that the equality $({{\mathcal{T}}},\sigma)=({{\mathcal{T}}}\backslash e\circ_{\rho({{\mathcal{T}}}(\{e\}))} {{\mathcal{T}}}(\{e\}))^{\sigma_e}$ holds in the operad ${\EuScript O}$.
\[edgecoherence\] Note that, if $e_1,e_2\in {\it e}({{\mathcal{T}}})$, then $({{\mathcal{T}}}\backslash e_1)\backslash e_2=({{\mathcal{T}}}\backslash e_2)\backslash e_1$. This equality ensures that, having fixed a set of edges of a tree, the order of collapsing the edges from that set has no effect on the resulting tree.
Note that, if a fixed set of edges determines a subtree ${{\mathcal{S}}}$ of ${{\mathcal{T}}}$, then the root vertex $\rho({{\mathcal{S}}})$ of ${{\mathcal{S}}}$ remains a vertex in the tree ${{\mathcal{T}}}\backslash{{\mathcal{S}}}$, obtained by collapsing all the edges of ${{\mathcal{S}}}$.
The following result is a consequence of Lemma \[elementarydecomposition\] and Remark \[edgecoherence\].
\[subtreedecomposition\] For an operadic tree $({{\mathcal{T}}},\sigma)$ and a subtree ${{\mathcal{S}}}$ of ${{\mathcal{T}}}$, considered as a left-recursive operadic tree, there exists a unique permutation $\sigma_{{\mathcal{S}}}\in\Sigma_k$, such that the equality $({{\mathcal{T}}},\sigma)=({{\mathcal{T}}}\backslash {{\mathcal{S}}}\circ_{\rho({{\mathcal{S}}})} {{\mathcal{S}}})^{\sigma_{{\mathcal{S}}}}$ holds in the operad ${\EuScript O}$.
We now have all the prerequisites for proving that the ${\EuScript O}_{\infty}$ operad is free. The idea is simple and it has already been indicated in §\[edgegraph1\]: constructs of arbitrary hypergraphs are non-planar trees, but if a hypergraph is the edge-graph of some operadic tree, then the hypergraph itself, as well as its constructs, inherit a canonical planar embedding from that tree. This observation gives us a way to represent each triple $({{\mathcal{T}}},\sigma,C)$, where $({{\mathcal{T}}},\sigma)$ is an operadic tree and $C:{\bf H}_{{\mathcal{T}}}$, as a planar tree of a free operad.
\[free\] As a coloured operad of vector spaces, ${\EuScript O}_{\infty}$ is the free ${\mathbb N}$-coloured operad generated by the equivalence classes of left-recursive operadic trees: $${\EuScript O}_{\infty}\simeq {{\mathcal{T}}}_{\mathbb N}\Bigg(\, \displaystyle\bigoplus_{k\geq 2}\,\,\bigoplus_{n_1,\dots,n_k\geq 1}\,\,\bigoplus_{{{\mathcal{T}}}\in \,{\tt Tree}(n_1,\dots,n_k;n)}\,{\Bbbk}\,\Bigg).$$
We define an isomorphism $\alpha$ between ${\EuScript O}_{\infty}$ and the operad of ${\mathbb N}$-coloured planar rooted trees whose vertices are decorated by left-recursive operadic trees and whose leaves are labeled by a permutation on the number of them, by induction on the number of vertices of the construct $C$ of a given operation $({{\mathcal{T}}},\sigma,C)\in {\EuScript O}_{\infty}(n_1,\dots,n_k;n)$. Denote, for $1\leq i\leq k$, with $\sigma_i$ the index given by $\sigma$ to the $i$-th vertex in the left-recursive ordering of ${{\mathcal{T}}}$.
- If $C$ is the maximal construct $e({{\mathcal{T}}})$, then
$\alpha({{\mathcal{T}}},\sigma,e({{\mathcal{T}}}))$ is the corolla whose unique vertex is decorated by ${{\mathcal{T}}}$, whose root edge is coloured by $n$, and whose $i$-th input leaf, for $1\leq i\leq k$, is coloured by $n_{\sigma_i}$ and labeled by the index $\sigma_i$.
Note that each input of $\alpha({{\mathcal{T}}},\sigma,e({{\mathcal{T}}}))$ is uniquely determined by either of the following two data: its position in the planar structure (the number), or the name of the vertex indexed by ${i}$ in ${{\mathcal{T}}}$ (the name).\
- Suppose that $C=X\{C_1,\dots,C_p\}$, where ${\bf H}_{{\mathcal{T}}}\backslash X\leadsto {\bf H}_{{{\mathcal{T}}}_1},\dots, {\bf H}_{{{\mathcal{T}}}_p}$ and $C_i:{\bf H}_{{{\mathcal{T}}}_i}$. Let ${{\mathcal{T}}}_X$ be the left-recursive operadic tree obtained from $({{\mathcal{T}}},\sigma)$ by collapsing all the edges from $e({{\mathcal{T}}})\backslash X$ (see Remark \[edgecoherence\]). Observe that the collapse of the edges ${\it e}({{\mathcal{T}}})\backslash X$ that defines ${{\mathcal{T}}}_X$ is, in fact, the collapse of the subtrees ${{\mathcal{T}}}_i$, $1\leq i\leq p$, of ${{\mathcal{T}}}$. By the definition of a collapse, each ${{\mathcal{T}}}_i$ will collapse to the vertex $\rho({{\mathcal{T}}}_i)$ of ${{\mathcal{T}}}_X$; in particular, since the ${{\mathcal{T}}}_i$’s are mutually disjoint subtrees of ${{\mathcal{T}}}$, all the vertices $\rho({{\mathcal{T}}}_i)$ will be mutually distinct. We define $$\quad\quad\quad\quad\alpha({{\mathcal{T}}},\sigma,C)=\bigg((\cdots((\alpha({{\mathcal{T}}}_X,X)\circ_{\rho({{\mathcal{T}}}_{1})} \alpha({{\mathcal{T}}}_{1},C_{1}) )\circ_{\rho({{\mathcal{T}}}_{2})} \alpha({{\mathcal{T}}}_{2},C_{2}))\cdots)\circ_{{\rho({{\mathcal{T}}}_{p})}}\alpha({{\mathcal{T}}}_{p},C_{p})\bigg)^{\sigma_C},$$ where the inputs of $\alpha({{\mathcal{T}}}_X,X)$ involved in grafting are represented by their names, in order to avoid the reindexing, and where $\sigma_C$ is the permutation determined uniquely thanks to (iterated application of) Lemma \[subtreedecomposition\]. Note that each tree ${{\mathcal{T}}}_i$ above is considered as left-recursive.
For example, for the operadic tree
$({{\mathcal{T}}},\sigma)$ whose root vertex is indexed by $3$ and carries the vertices indexed by $2$ and $5$ along the edges $x$ and $y$, respectively, whose vertex $2$ carries the vertices indexed by $6$ and $1$ along the edges $z$ and $u$, respectively, and whose vertex $1$ carries the vertex indexed by $4$ along the edge $v$,
and constructs
of the hypergraph ${\bf H}_{{\mathcal{T}}}$ associated to ${{\mathcal{T}}}$, we have that
(picture of the planar tree $\alpha({{\mathcal{T}}},\sigma,C)$ omitted)
where the edge and leaf colours given by natural numbers are represented using regular font, and the indexing of the leaves is represented using bold font. Observe that, modulo leaves, $\alpha({{\mathcal{T}}},C)$ has the same shape as $C$.
The inverse of $\alpha$ is defined by composing the left-recursive operadic trees that decorate the nodes of an element $(T,\sigma)$ of the free operad, in the way dictated by the edges of that element, followed by reindexing the vertices of the resulting tree as specified by $\sigma$, and by extracting the corresponding construct in the following way: first, remove all the leaves of $T$, and then, for each vertex of $T$, replace the operadic tree that decorates that vertex by the maximal construct of its associated hypergraph. Lemma \[subtreedecomposition\] ensures that this correspondence is indeed an isomorphism.
Having in mind the free operad description of ${\EuScript O}_{\infty}$, we adopt the following notational convention about constructs.
If we wish to incorporate the specification of the planar embedding of a construct $C:{\bf H}_{{\mathcal{T}}}$ into the notation for $C$, we shall write $C=X(C_1,\dots,C_p)$ instead of $C=X\{C_1,\dots,C_p\}$, if $C_1,\dots,C_p$ appear in that order in the tree $\alpha({{\mathcal{T}}},\sigma,C)$.
### The operad ${\EuScript O}_{\infty}$ as a differential graded operad {#odg}
In order to equip the ${\EuScript O}_{\infty}$ operad with a grading and a differential, we shall use the free operad structure of ${\EuScript O}_{\infty}$ and count the edges and leaves that lie in a particular position relative to some other edge or a leaf, in the way formalized by the following definition.
\[orient\] Let ${{\mathcal{T}}}$ be a planar rooted tree, and let $e\in e({{\mathcal{T}}})$ and $l\in {\it i}({{\mathcal{T}}})$ be an internal edge and an input leaf of ${{\mathcal{T}}}$, respectively.\
- The internal edges [*below*]{} $e$ (resp. $l$) in ${{\mathcal{T}}}$ are the internal edges of ${{\mathcal{T}}}$ that lie on the unique path from the vertex $\rho({{\mathcal{T}}}(\{e\}))$ (resp. $\rho({{\mathcal{T}}}(\{l\}))$ ) to $\rho({{\mathcal{T}}})$.\
- The edges and leaves [*on the left*]{} (resp. [*on the right*]{}) from $e$ are the edges and leaves of ${{\mathcal{T}}}$ which are strictly on the left (resp. right) from the unique path from the first (resp. last) leaf of the subtree of ${{\mathcal{T}}}$ rooted at $e$, to ${r}({{\mathcal{T}}})$. The edges and leaves on the left (resp. on the right) from $l$ are the edges and leaves of ${{\mathcal{T}}}$ which are strictly on the left (resp. right) from the unique path from $l$ to ${r}({{\mathcal{T}}})$.
For $e\in e({{\mathcal{T}}})$ and $l\in {\it i}({{\mathcal{T}}})$, denote with $E_{\leq e}({{\mathcal{T}}})$ the sum of the number of all edges and leaves on the left from $e$ and the number of all edges below $e$ in ${{\mathcal{T}}}$, and with $E_{l>}({{\mathcal{T}}})$ the number of all edges and leaves on the right from $l$ in ${{\mathcal{T}}}$.
We grade the vector space ${\EuScript O}_{\infty}(n_1,n_2,\dots,n_k;n)$ by setting $$|({{\mathcal{T}}},\sigma,C)|= {\it e}({{\mathcal{T}}}) -v(C)=k-1-v(C).$$ Note that the grading agrees with the rank of the construct $C:{\bf H}_{{\mathcal{T}}}$ in ${{\mathcal{A}}}({\bf H}_{{\mathcal{T}}})$; in particular, $0\leq |({{\mathcal{T}}},C)|\leq k-2$. Observe also that the partial composition operation of ${\EuScript O}_{\infty}$ respects the grading, in the sense that $$|({{\mathcal{T}}}_1,\sigma_1,C_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2,C_2)|=|({{\mathcal{T}}}_1,\sigma_1,C_1)| +|({{\mathcal{T}}}_2,\sigma_2,C_2)|.$$ If $({{\mathcal{T}}},\sigma)$ is clear from the context, we shall often write $|C|$ for what is actually $|({{\mathcal{T}}},\sigma,C)|$.
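For instance, the square $\{x,y\}\{\{u,v\}\}$ of Example \[compositionconstructs\] lies in ${\EuScript O}_{\infty}(3,2,2,5,2;10)$, so that $k=5$ and $v(C)=2$, giving $|({{\mathcal{T}}},\sigma,\{x,y\}\{\{u,v\}\})|=5-1-2=2$, in accordance with the fact that this construct encodes a $2$-dimensional face of the $3$-dimensional hemiassociahedron.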
In the graded version of the ${\EuScript O}_{\infty}$ operad, signs show up in the definition of the partial composition: we adapt the definition of $\circ_i$ by setting $$({{\mathcal{T}}}_1,\sigma_1,C_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2,C_2)=(-1)^{\varepsilon}({{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2,\sigma_1\bullet_i\sigma_2,C_1\bullet_i C_2),$$ where $\varepsilon$ is the number of edges and leaves of $\alpha({{\mathcal{T}}}_1,\sigma_1,C_1)$ on the right of the leaf indexed by $i$, multiplied by the number of all edges and leaves of $\alpha({{\mathcal{T}}}_2,\sigma_2,C_2)$, minus the root: $$\varepsilon= E_{i>}(\alpha({{\mathcal{T}}}_1,\sigma_1,C_1))\cdot (E(\alpha({{\mathcal{T}}}_2,\sigma_2,C_2))-1).$$
In the graded setting, the composition
from Example \[compositionconstructs\] gets multiplied by $+$. Indeed,
(pictures of the planar trees representing the two operands omitted)
and, therefore, $\varepsilon=3\cdot 4=12$. On the other hand, we have
since $\alpha({{\mathcal{T}}}_2,\sigma_2,\{u,v\})$ has one edge less than $\alpha({{\mathcal{T}}}_2,\sigma_2,\{u\})$, which gives $\varepsilon=3\cdot 3=9$.
In the following lemma, we prove that Theorem \[free\] extends to the graded context, in which signs show up in the computation of composition in a free operad (see [@LV Section 5.8.7]).
With the composition product adapted to the graded context, ${\EuScript O}_{\infty}$ is the free ${\mathbb N}$-coloured graded operad with respect to the set of generators given in Theorem \[free\].
By Theorem \[free\], it remains to be shown that, for an operation $({{\mathcal{T}}},\sigma, C)\in {\EuScript O}_{\infty}(n_1,\dots,n_k;n)$ and two distinct vertices $v_1,v_2\in v({{\mathcal{T}}})$, such that the index of $v_1$ in the left-recursive decoration of ${{\mathcal{T}}}$ is less than the index of $v_2$, we have that $$(({{\mathcal{T}}},\sigma,C)\circ_{v_1} ({{\mathcal{T}}}_1,\sigma_1,C_1))\circ_{v_2} ({{\mathcal{T}}}_2,\sigma_2,C_2)=(-1)^{|C_1|\cdot|C_2|}(({{\mathcal{T}}},\sigma,C)\circ_{v_2} ({{\mathcal{T}}}_2,\sigma_2,C_2))\circ_{v_1} ({{\mathcal{T}}}_1,\sigma_1,C_1).$$ In the free operad description of ${\EuScript O}_{\infty}$, the two compositions of the above equality are represented by the planar tree
where, for $i=1,2$, $\alpha({{\mathcal{T}}}_i,\sigma_i,C_i)$ has $l_i$ inputs and $e_i$ edges, and where $k_0$ (resp. $k_1$, $k_2$) is the number of leaves and edges of $\alpha({{\mathcal{T}}},\sigma,C)$ on the left from $v_1$ (resp. between $v_1$ and $v_2$, on the right from $v_2$). The sign of the composition on the left-hand side is then determined by $\varepsilon_1=(k_1+k_2+1)(l_1+e_1)+k_2(l_2+e_2)$, while, for the right-hand side, we have $\varepsilon_2=k_2(l_2+e_2)+(k_1+k_2+l_2+e_2+1)(l_1+e_1)$. Additionally, for $i=1,2$, we have that $|C_i|=l_i-e_i-2$. A straightforward calculation shows that $\varepsilon_1=_{\mbox{mod}2} \varepsilon_2+|C_1|\cdot |C_2|$, which proves the claim.
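Explicitly, $\varepsilon_1-\varepsilon_2=-(l_1+e_1)(l_2+e_2)$, while $l_i+e_i=_{\mbox{mod}2}l_i-e_i-2=|C_i|$, which gives exactly the required congruence.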
In order to equip the graded operad ${\EuScript O}_{\infty}$ with a differential, we now formalize the action of splitting the vertices of constructs of edge-graphs of operadic trees. Let $({{\mathcal{T}}},\sigma,C)\in {\EuScript O}_{\infty}(n_1,\dots,n_k;n)$ and let $V\in v(C)$ be such that $|V|\geq 2$. Let ${{\mathcal{T}}}_{V}$ be the left-recursive operadic tree obtained from ${{\mathcal{T}}}$ by contracting all the edges of ${{\mathcal{T}}}$ except the ones contained in $V$. Let $X$ and $Y$ be non-empty disjoint sets such that $X\cup Y=V$ and such that $X\{Y\}:{\bf H}_{{{\mathcal{T}}}_V}$. We define the construct $C[X\{Y\}/V]$ of ${\bf H}_{{\mathcal{T}}}$ by induction on the number of vertices of $C$, as follows.\
- If $C=e({{\mathcal{T}}})$, we set $$C[X\{Y\}/V]:=X\{Y\}.$$
- Suppose that $C=Z\{C_1,\dots,C_p\}$, where ${\bf H}_{{\mathcal{T}}}\backslash Z\leadsto {\bf H}_{{{\mathcal{T}}}_1},\dots, {\bf H}_{{{\mathcal{T}}}_p}$ and $C_i:{\bf H}_{{{\mathcal{T}}}_i}$.\
- If there exists an index $1\leq i\leq p$, such that $V\in v(C_i)$, we define $$C[X\{Y\}/V]:=Z\{C_1,\dots,C_{i-1},C_i[X\{Y\}/V] ,C_{i+1},\dots,C_p\}.$$
- Suppose that $V=Z$ and let $\{i_1,\dots,i_q\}\cup \{j_1,\dots,j_r\}$ be the partition of the set $\{1,\dots,p\}$ such that the trees ${{\mathcal{T}}}_{i_s}$, for $1\leq s\leq q$, contain an edge sharing a vertex with some edge of $Y$, while the trees ${{\mathcal{T}}}_{j_t}$, for $1\leq t\leq r$, have no edges sharing a vertex with the edges of $Y$. We define $$C[X\{Y\}/V]:=X\{Y\{C_{i_1},\dots, C_{i_q}\},C_{j_1},\dots,C_{j_r}\}.$$ If, exceptionally, $\{i_1,\dots,i_q\}=\emptyset$ (resp. $\{j_1,\dots,j_r\}=\emptyset$), we have that $C[X\{Y\}/V]:=X\{Y,C_1,\dots,C_p\}$ (resp. $C[X\{Y\}/V]:=X\{Y\{C_1,\dots,C_p\}\}$).\
Therefore, intuitively, $C[X\{Y\}/V]$ is obtained from $C$ by splitting the vertex $V$ into the edge $X\{Y\}$.
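The splittings occurring in this definition can be enumerated mechanically as well. In the following Python sketch (again only an illustration, with edges referred to by their names and with the children of $V$ given by their sets of edges), a subset $Y\subseteq V$ yields a splitting $X\{Y\}$ of $V$, with $X=V\backslash Y$, when the edges in $Y$ span a connected subgraph of ${{\mathcal{T}}}$ (cf. Lemma \[connectededges\]), and a child of $V$ is regrafted onto $Y$ exactly when its subtree shares a vertex with an edge of $Y$; signs are not computed.

```python
from itertools import combinations

def edge_vertices(edges, edge_map):
    """Vertices of the tree touched by the given set of edge names."""
    return {w for e in edges for w in edge_map[e]}

def is_connected(edges, edge_map):
    """Whether the given tree edges span a connected subgraph of the tree."""
    edges = set(edges)
    seen, frontier = set(), [next(iter(edges))]
    while frontier:
        e = frontier.pop()
        seen.add(e)
        frontier.extend(f for f in edges - seen
                        if set(edge_map[e]) & set(edge_map[f]))
    return seen == edges

def splittings(V, children_edges, edge_map):
    """All splittings X{Y} of a construct vertex V, together with the children
    (each given by its set of edges) that are regrafted onto Y."""
    V = set(V)
    for r in range(1, len(V)):
        for Y in map(set, combinations(sorted(V), r)):
            if not is_connected(Y, edge_map):
                continue              # X{Y} must be a construct of H_{T_V}
            X = V - Y
            under_Y = [c for c in children_edges
                       if edge_vertices(c, edge_map) & edge_vertices(Y, edge_map)]
            yield X, Y, under_Y

# Hypothetical illustration: splitting the vertex {x,y} of the construct
# {x,y}{{u,v}}, for a tree with edges x = {f,g}, y = {g,h}, u = {f,k},
# v = {f,l}, yields X = {y}, Y = {x} with the child {u,v} regrafted onto Y
# (giving y{x{{u,v}}}), and X = {x}, Y = {y} with the child kept below X
# (giving x{y,{u,v}}).
edge_map = {'x': ('f', 'g'), 'y': ('g', 'h'), 'u': ('f', 'k'), 'v': ('f', 'l')}
for X, Y, under_Y in splittings({'x', 'y'}, [{'u', 'v'}], edge_map):
    print(X, Y, under_Y)
```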
\[splitwelldefined\] The non-planar rooted tree $C[X\{Y\}/V]$ is indeed a construct of ${\bf H}_{{\mathcal{T}}}$.
By induction on the number of vertices of $C$. If $C$ has a single vertex $V=e({{\mathcal{T}}})$, then ${\bf H}_{{{\mathcal{T}}}_V}={\bf H}_{{\mathcal{T}}}$, and, therefore, for any decomposition $X\cup Y=e({{\mathcal{T}}})$, such that $X\{Y\}:{\bf H}_{{{\mathcal{T}}}_{V}}$, we trivially also have that $X\{Y\}:{\bf H}_{{\mathcal{T}}}$.
Suppose that $C=Z(C_1,\dots,C_p)$, where ${\bf H}_{{\mathcal{T}}}\backslash Z\leadsto {\bf H}_{{{\mathcal{T}}}_1},\dots, {\bf H}_{{{\mathcal{T}}}_p}$ and $C_i:{\bf H}_{{{\mathcal{T}}}_i}$.\
- If there exists $1\leq i\leq p$, such that $V\in v(C_i)$, we conclude by the induction hypothesis for $C_i$.\
- If $V=Z$ and if $\{i_1,\dots,i_q\}\cup \{j_1,\dots,j_r\}$ is the partition as above, then, since $X\{Y\}:{\bf H}_{{{\mathcal{T}}}_V}$, it must be the case that the set of edges $Y\cup \bigcup_{i\in \{i_1,\dots,i_q\}}e({{\mathcal{T}}}_i)$ determines a single subtree ${{\mathcal{S}}}$ of ${{\mathcal{T}}}$. Therefore, $${\bf H}_{{\mathcal{T}}}\backslash X\leadsto \{{\bf H}_{{{\mathcal{S}}}}\}\cup \{{\bf H}_{{{\mathcal{T}}}_j}\,|\, j\in \{j_1,\dots,j_r\}\}.$$ Then, if $\{i_1,\dots,i_q\}\neq \emptyset$, the conclusion follows since $Y\{C_{i_1},\dots, C_{i_q}\}:{\bf H}_{{{\mathcal{S}}}}$. If $\{i_1,\dots,i_q\}=\emptyset$, the conclusion follows since $Y$ is trivially the maximal construct of ${\bf H}_{{{\mathcal{S}}}}$.
The differential $d_{{\EuScript O}_{\infty}}$ of ${\EuScript O}_{\infty}$ is defined in terms of splitting the vertices of constructs, as follows: for $({{\mathcal{T}}},\sigma,C)\in {\EuScript O}_{\infty}(n_1,\dots,n_k;n)$, we set $$d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},\sigma,C):=\displaystyle\sum_{\substack{V\in v(C) \\ |V|\geq 2}}\sum_{\substack{(X,Y) \\ X\cup Y=V \\ X\{Y\}:{\bf H}_{{{\mathcal{T}}}_V}}} (-1)^{\delta} ({{\mathcal{T}}},\sigma,C[X\{Y\}/V]),$$ where $\delta$ is the number of edges and leaves in $\alpha({{\mathcal{T}}},\sigma,C[X\{Y\}/V])$ which are on the left from or below the edge determined by $X\{Y\}$: $$\delta=E_{\leq X\{Y\}}(\alpha({{\mathcal{T}}},\sigma,C[X\{Y\}/V])).$$
Observe that, for $({{\mathcal{T}}},C)\in {\EuScript O}_{\infty}(n_1,\dots,n_k;n)$, we have $|({{\mathcal{T}}},C)|=k-1-v(C)$ and $|d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},C)|=k-1-(v(C)+1)$, which shows that $d_{{\EuScript O}_{\infty}}$ is indeed a map of degree $-1$. In particular, for generators $({{\mathcal{T}}},e({{\mathcal{T}}}))$ we have: $$d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},e({{\mathcal{T}}}))=\displaystyle\sum_{\substack{(X,Y) \\ X\cup Y=e({{\mathcal{T}}})\\ X\{Y\}:{\bf H}_{{\mathcal{T}}}}} (-1)^{\delta} ({{\mathcal{T}}},X\{Y\})=\displaystyle\sum_{\substack{(X,Y) \\ X\cup Y=e({{\mathcal{T}}})\\ X\{Y\}:{\bf H}_{{\mathcal{T}}}}} (-1)^{\delta} ({{\mathcal{T}}}_X,X)\circ_{\rho({{\mathcal{T}}}_Y)} ({{\mathcal{T}}}_Y,Y),$$ which shows that $d_{{\EuScript O}_{\infty}}$ is decomposable.
The differential $d_{{\EuScript O}_{\infty}}$ of ${\EuScript O}_{\infty}$ acts on an operation $({{\mathcal{T}}},\sigma,C)$ by splitting the vertices of $C$, whereas the underlying operadic tree $({{\mathcal{T}}},\sigma)$ remains unchanged. When expressing this action, assuming that $({{\mathcal{T}}},\sigma)$ is clear from the context, we shall often write simply $d_{{\EuScript O}_{\infty}}(C)$.
We calculate the differential of $({{\mathcal{T}}},\sigma,\{x,y\}\{\{u,v\}\})$, where $({{\mathcal{T}}},\sigma)$ is our favourite operadic tree (see Example \[edgegraph\] and Example \[compositionconstructs\]). By splitting the vertices $\{x,y\}$ and $\{u,v\}$, we get four planar trees in the free operad representation of ${\EuScript O}_{\infty}$, each containing one new edge arising from the splitting. By counting the edges and the leaves on the left from and below those new edges, we get $4$, $0$, $1$ and $1$, respectively. Therefore, $$d_{{\EuScript O}_{\infty}}(\{x,y\}\{\{u,v\}\})= x\{y,\{u,v\}\} \,+\, y\{x\{\{u,v\}\}\} \,-\, \{x,y\}\{v\{u\}\} \,-\, \{x,y\}\{u\{v\}\}.$$ Observe that the geometric interpretation of this differential is the boundary of the square of the 3-dimensional hemiassociahedron encoded by $\{x,y\}\{\{u,v\}\}$ (see Example \[compositionconstructs\]).
In the following technical lemma, we prove that $({\EuScript O}_{\infty},d_{{\EuScript O}_{\infty}})$ is indeed a dg operad. For the sake of readability, we shall write $({{\mathcal{T}}},C)$ for what is actually $({{\mathcal{T}}},\sigma,C)$.
\[dok\] The map $d_{{\EuScript O}_{\infty}}$ has the following properties:
- $(d_{{\EuScript O}_{\infty}})^2=0$, and
- the composition structure of ${\EuScript O}_{\infty}$ is compatible with $d_{{\EuScript O}_{\infty}}$, i.e. $$\enspace d_{{\EuScript O}_{\infty}}(({{\mathcal{T}}}_1,C_1)\circ_i ({{\mathcal{T}}}_2,C_2))=d_{{\EuScript O}_{\infty}}({{\mathcal{T}}}_1,C_1)\circ_i ({{\mathcal{T}}}_2,C_2)+(-1)^{|C_1|}({{\mathcal{T}}}_1,C_1)\circ_i d_{{\EuScript O}_{\infty}}({{\mathcal{T}}}_2,C_2).$$
1\. The proof that $d_{{\EuScript O}_{\infty}}$ squares to zero goes by case analysis relative to the configuration of vertices of $C$ that got split in constructing the two occurrences of the same summand $({{\mathcal{T}}},C')$ in $d_{{\EuScript O}_{\infty}}^2({{\mathcal{T}}},C)$, by showing that the corresponding summands have the opposite sign. This is an easy analysis of the relative edge positions: in all the cases, there will exist exactly one edge that is counted in calculating the sign of one of the two instances of $({{\mathcal{T}}},C')$, but not in calculating the sign of the other one.
For example, suppose that $C'$ is obtained by splitting two different vertices of $C$, i.e. that $$C'=C[X_1\{Y_1\}/V_1][X_2\{Y_2\}/V_2]=C[X_2\{Y_2\}/V_2][X_1\{Y_1\}/V_1]$$ for some $V_1,V_2\in v(C)$. Suppose, moreover, that $V_1$ is above $V_2$ in $C$, and let $V_2\{U\}$ be the first edge on the path from $V_2$ to $V_1$. Suppose finally that, after splitting $V_2$, the edge $V_2\{U\}$ splits into $X_2\{Y_2\{U\}\}$, i.e. that the vertex $Y_2$ stays on the path from $X_2$ to $X_1$. Under these assumptions on the shape of $C'$, let $p_1$ (resp. $p_2$) be the number of internal edges between $V_1$ and $V_2$ (resp. below $V_2$) in $\alpha({{\mathcal{T}}},C)$, and let $l_{i}$, for $i=1,2$, be the number of edges and leaves on the left from the edge $X_i\{Y_i\}$ in $\alpha({{\mathcal{T}}},C[X_i\{Y_i\}/V_i])$. Finally, let $l$ be the number of edges and leaves on the left from the edge $V_2\{U\}$ in $\alpha({{\mathcal{T}}},C[X_1\{Y_1\}/V_1])$. The signs of the operations $C[X_1\{Y_1\}/V_1][X_2\{Y_2\}/V_2]$ and $C[X_2\{Y_2\}/V_2][X_1\{Y_1\}/V_1]$ are then induced from the sums $$\underbrace{l_1+p_1+p_2+l}_\text{$X_1\{Y_1\}/V_1$} +\underbrace{l_2+p_2}_{X_2\{Y_2\}/V_2} \quad\mbox{ and }\quad \underbrace{l_2+p_2}_{X_2\{Y_2\}/V_2}+\underbrace{l_1+p_1+p_2+l+1}_{X_1\{Y_1\}/V_1},$$ respectively, and the conclusion follows since they differ by $1$.
2\. Denote ${{\mathcal{T}}}={{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2$ and $C=C_1\bullet_i C_2$. For the left-hand side of the equality, we have $$\begin{array}{rcl}
d_{{\EuScript O}_{\infty}}(({{\mathcal{T}}}_1,C_1)\circ_i ({{\mathcal{T}}}_2,C_2))&=& \displaystyle\sum_{\substack{V\in v(C) \\ |V|\geq 2}}\sum_{\substack{(X,Y) \\ X\cup Y=V \\ X\{Y\}:{\bf H}_{{{\mathcal{T}}}_V}}} (-1)^{\delta+\varepsilon} ({{\mathcal{T}}},C[X\{Y\}/V])
\end{array}$$ where $\delta=E_{\leq X\{Y\}}(\alpha({{\mathcal{T}}},C[X\{Y\}/V]))$ and $\varepsilon= E_{i>}(\alpha({{\mathcal{T}}}_1,C_1))\cdot (E(\alpha({{\mathcal{T}}}_2,C_2))-1)$. We prove the stated equality by case analysis with respect to the origin of the vertex $V$ relative to $v(C_1)$ and $v(C_2)$, and, if $V\in v(C_1)$, the position of the edge $X\{Y\}$ relative to the leaf $i$ of $\alpha({{\mathcal{T}}}_1,C_1)$.
Suppose that $V\in v(C_1)$.
- If $X\{Y\}\in E_{\leq i}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))$, then\
- $E_{\leq X\{Y\}}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))=\delta$, and\
- $E_{i>}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))\cdot (E(\alpha({{\mathcal{T}}}_2,C_2))-1)=\varepsilon$.\
- If $X\{Y\}\in E_{i>}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))$, then\
- $E_{\leq X\{Y\}}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))=\delta-(E(\alpha({{\mathcal{T}}}_2,C_2))-1)$, and\
- $E_{i>}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))\cdot (E(\alpha({{\mathcal{T}}}_2,C_2))-1)=\varepsilon+E(\alpha({{\mathcal{T}}}_2,C_2))-1$.\
In both cases, we have that $$E_{\leq X\{Y\}}(\alpha({{\mathcal{T}}}_1,C_1[X\{Y\}/V]))+E_{i>}(\alpha( {{\mathcal{T}}}_1,C_1[X\{Y\}/V]))\cdot (E(\alpha({{\mathcal{T}}}_2,C_2))-1)=\delta+\varepsilon,$$ which means that $(-1)^{\delta+\varepsilon} ({{\mathcal{T}}},C[X\{Y\}/V])$ appears as a summand in $d_{{\EuScript O}_{\infty}}({{\mathcal{T}}}_1,C_1)\circ_i ({{\mathcal{T}}}_2,C_2)$.
If $V\in v(C_2)$, then\
- $E_{\leq X\{Y\}}(\alpha({{\mathcal{T}}}_2,C_2[X\{Y\}/V]))=\delta-E_{\leq i}(\alpha({{\mathcal{T}}}_1,C_1))-1$, and\
- $E_{i>}(\alpha({{\mathcal{T}}}_1,C_1))\cdot (E(\alpha({{\mathcal{T}}}_2,C_2[X\{Y\}/V]))-1)=\varepsilon-E_{i>}(\alpha({{\mathcal{T}}}_1,C_1)).$\
Observe that $$\label{set1}E_{{ i>}}(\alpha({{\mathcal{T}}}_1,C_1))\cup E_{\leq i}(\alpha({{\mathcal{T}}}_1,C_1))\cup \{i\}={\it E}(\alpha({{\mathcal{T}}}_1,C_1))\backslash\{\rho({{\mathcal{T}}}_1)\}.$$ Therefore, if $|({{\mathcal{T}}}_1,C_1)|$ is even (resp. odd), then the cardinality of is even (resp. odd), and, hence, $$\delta-E_{\leq i}(\alpha({{\mathcal{T}}}_1,C_1))-1+\varepsilon-E_{{ i>}}(\alpha({{\mathcal{T}}}_1,C_1))=_{\mbox{mod}\,2}\delta+\varepsilon$$ $$\mbox{(resp. } \delta-E_{\leq i}(\alpha({{\mathcal{T}}}_1,C_1))-1+\varepsilon-E_{{ i>}}(\alpha({{\mathcal{T}}}_1,C_1))=_{\mbox{mod}\,2}\delta+\varepsilon+1 \mbox{ ),}$$ meaning that $(-1)^{\delta+\varepsilon} ({{\mathcal{T}}},C[X\{Y\}/V])$ appears as a summand in $(-1)^{|({{\mathcal{T}}}_1,C_1)|}({{\mathcal{T}}}_1,C_1)\circ_i d_{{\EuScript O}_{\infty}}({{\mathcal{T}}}_2,C_2)$.
The opposite direction is treated by an analogous analysis.
### The homology of $({\EuScript O}_{\infty},d_{{\EuScript O}_{\infty}})$ {#Ohomology}
We first prove that $${\EuScript O}^{k-2}_{\infty}(n_1,\dots,n_k;n)\xrightarrow{\,d^{k-2}_{{\EuScript O}_{\infty}}(n_1,\dots,n_k;n)\,}\enspace\cdots\enspace\xrightarrow{\,d^{1}_{{\EuScript O}_{\infty}}(n_1,\dots,n_k;n)\,}{\EuScript O}^{0}_{\infty}(n_1,\dots,n_k;n)$$ is an exact sequence.
For $0<m\leq k-2$, we have that ${\mbox{\em Ker}}\, d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)= {\mbox{\em Im}}\, d_{{\EuScript O}_{\infty}}^{m+1}(n_1,\dots,n_k;n)$.
Since $d_{{\EuScript O}_{\infty}}$ squares to zero, we have that ${\mbox{ Im}}\, d_{{\EuScript O}_{\infty}}^{m+1}(n_1,\dots,n_k;n)\subseteq {\mbox{ Ker}}\, d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)$. For the proof of the opposite inclusion, let $L\in {\mbox{ Ker}}\, d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)$. Since $d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)(L)=0$, it must be the case that, in the expansion of $d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)(L)$, the summands cancel each other out pairwise. Therefore, we may assume, without loss of generality, that the linear combination $L$ is made of triples whose first two components are all given by the same operadic tree ${{\mathcal{T}}}=({{\mathcal{T}}},\sigma)$, i.e. that $$L=k_1({{\mathcal{T}}},C_1)+\cdots +k_p ({{\mathcal{T}}},C_p),$$ where $k_i\in\{+1,-1\}$. Then, for each summand $(-1)^{\delta}({{\mathcal{T}}},C_i[X\{Y\}/V])$ in $d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)(L)$, there exists a summand $(-1)^{\delta+1}({{\mathcal{T}}},C_j[U\{V\}/W])$ in $d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)(L)$, such that $$C_i[X\{Y\}/V]=C_j[U\{V\}/W].$$ The latter equality means that $k_i({{\mathcal{T}}},C_i)$ and $k_j({{\mathcal{T}}},C_j)$ appear as summands in $$d_{{\EuScript O}_{\infty}}^{m+1}(n_1,\dots,n_k;n)(k_{ij}({{\mathcal{T}}},C)),$$ where $C$ is obtained by collapsing the edge $U\{V\}$ of $C_i$ (or, equivalently, by collapsing the edge $X\{Y\}$ of $C_j$), for some $k_{ij}\in\{+1,-1\}$. It is easy to see that, if $d_{{\EuScript O}_{\infty}}^{m+1}(n_1,\dots,n_k;n)({{\mathcal{T}}},C)$ contains a summand $({{\mathcal{T}}},C[P\{Q\}/R])$ different from $k_i({{\mathcal{T}}},C_i)$ and $k_j({{\mathcal{T}}},C_j)$, then $({{\mathcal{T}}},C[P\{Q\}/R])$ also appears in $L$. For example, if $$({{\mathcal{T}}},C[P\{Q\}/R][P_1\{P_2\}/P])$$ is a summand of $d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)({{\mathcal{T}}},C[P\{Q\}/R])$, then the construct $C[P\{Q\}/R][P_1\{P_2\}/P]$ can also be obtained as a result of splitting first the vertex $R$ of $C$ either into $P_1$ and $P_2\cup Q$, or into $P_2$ and $P_1\cup Q$ by $d_{{\EuScript O}_{\infty}}^{m+1}(n_1,\dots,n_k;n)$, followed by splitting the vertex decorated by the union of the appropriate sets by $d_{{\EuScript O}_{\infty}}^{m}(n_1,\dots,n_k;n)$.
The preimage of $L$ with respect to $d_{{\EuScript O}_{\infty}}^{m+1}(n_1,\dots,n_k;n)$ is obtained by reconstructing, for each cancellable pair in $d_{{\EuScript O}_{\infty}}^m(n_1,\dots,n_k;n)(L)$, the appropriate construct and its coefficient in ${\EuScript O}_{\infty}^{m+1}(n_1,\dots,n_k;n)$ in this way (throwing away the duplicates), and by taking the sum of the obtained constructs.
For $m=0$, we have that ${\mbox{Ker}}\, d^{0}_{{\EuScript O}_{\infty}}(n_1,\dots,n_k;n)={\EuScript O}^{0}_{\infty}(n_1,\dots,n_k;n)$, since constructions have no vertices that could be split. As for the image of $d_{{\EuScript O}_{\infty}}^{1}(n_1,\dots,n_k;n)$, we have $${\mbox{Im}}\, d_{{\EuScript O}_{\infty}}^{1}(n_1,\dots,n_k;n)={\textsf{Span}}_{{\Bbbk}}\Bigg(\displaystyle \bigoplus_{({{\mathcal{T}}},\sigma)\in {\tt {Tree}}(n_1,\dots,n_k;n)}\, \bigoplus_{C:{\bf H}_{{\mathcal{T}}}, |C|=1} \pm C[x\{y\}/\{x,y\}] \pm C[y\{x\}/\{x,y\}] \Bigg),$$ where $\{x,y\}$ is the unique two-element vertex of $C$ and the signs are determined by the position of the vertices $x$ and $y$ in ${\bf H}_{{\mathcal{T}}}$ (considered as the edge-graph with levels), using the criterion from \[edgegraph1\]: if the shortest path between $x$ and $y$ is made of vertical edges only, then one of the vertices $x$ and $y$ is above the other and the construction which respects this position gets multiplied by $-$, and the other one by $+$; otherwise, both constructions get the sign $+$. By collapsing ${\mbox{Im}}\, d_{1}$ to zero, for each ${{\mathcal{T}}}\in {\EuScript O}(n_1,\dots,n_k;n)$, all the constructions of ${\bf H}_{{\mathcal{T}}}$ get glued into a single equivalence class; in particular, different trees from ${\EuScript O}(n_1,\dots,n_k;n)$ give rise to different classes. Therefore, $${\mbox{Ker}}\, d^m_{{\EuScript O}_{\infty}}(n_1,\dots,n_k;n)/{\mbox{Im}}\, d^{m+1}_{{\EuScript O}_{\infty}}(n_1,\dots,n_k;n)=\begin{cases}
\{0\}, \enspace m\neq 0\\
{\EuScript O}(n_1,\dots,n_k;n), \enspace m= 0.
\end{cases}$$ which entails that $H({\EuScript O}_{\infty},d_{{\EuScript O}_{\infty}})\cong H({\EuScript O},0)$. The witnessing quasi-isomorphism $\alpha_{\EuScript O}:{\EuScript O}_{\infty}\rightarrow {\EuScript O}$ is simply the first projection on degree zero, and the zero map elsewhere.
Having established the freeness of ${\EuScript O}_{\infty}$, the decomposability of $d_{{\EuScript O}_{\infty}}$ and the quasi-isomorphism with ${\EuScript O}$, we can now finally conclude.
\[Omin\] The operad ${\EuScript O}_{\infty}$ is the minimal model for the operad ${\EuScript O}$.
By employing the elements of Koszul duality for coloured operads, Theorem \[Omin\] can be established in a more economical way. Recall that the ${\EuScript O}_{\infty}$ operad is originally defined by ${\EuScript O}_{\infty}:=\Omega{\EuScript O}^{{{\scriptstyle \text{\rm !`}}}}$, i.e., by $${\EuScript O}_{\infty}:=({{\mathcal{T}}}_{\mathbb N}(s^{-1} \overline{{\EuScript O}^{{{\scriptstyle \text{\rm !`}}}}}),d),$$ where $s^{-1} \overline{{\EuScript O}^{{{\scriptstyle \text{\rm !`}}}}}$ is the desuspension of the coaugmentation coideal of ${\EuScript O}^{{{\scriptstyle \text{\rm !`}}}}:={{\mathcal{T}}}^c_{\mathbb N}(sE)/ (s^2 R)$, where $E$ and $R$ are as in Definition \[1\] and $s$ denotes the degree $+1$ suspension shift, and $d$ is the unique derivation extending the cooperad structure on ${\EuScript O}^{{{\scriptstyle \text{\rm !`}}}}$. Since ${\EuScript O}$ is self dual (cf. [@VdL Theorem 4.3]), the operations of ${\EuScript O}_{\infty}$ are given by “composite trees of operadic trees”, and the differential of ${\EuScript O}_{\infty}$ is given by the “degrafting of composite trees of operadic trees” (cf. Theorem \[free\]). As explained in [@DCV Section 5.1] for homotopy cooperads, the spaces of such operations can be realized as lattices of [*nested left-recursive operadic trees*]{} under the partial order given by [*refinement of nestings*]{}, i.e. by a subfamily of the family of graph associahedra of Carr-Devadoss introduced in [@CD]. Therefore, Theorem \[Omin\] follows by showing that, for a fixed left-recursive operadic tree ${{\mathcal{T}}}$, the lattice ${{\mathcal{A}}}({\bf H}_{{\mathcal{T}}})$ is isomorphic with the lattice of nestings of ${{\mathcal{T}}}$. This, in turn, is a direct consequence of [@CIO Proposition 2].
### Stasheff’s associahedra as a suboperad of ${\EuScript O}_{\infty}$
The earliest example of an explicit description of the minimal model of a dg operad, predating even the notions of operad and minimal model themselves, is given by the dg $A_{\infty}$-operad, the minimal model of the non-symmetric operad ${\it As}$ for associative algebras, often described in terms of Stasheff’s associahedra [@Sta1].
Recall that the non-symmetric associative operad ${\it As}$, encoding the category of non-unital associative algebras, is defined by ${\it As}(n)={\textsf{Span}}_{{\Bbbk}}(\{t_n\})$, for $n\geq 2$, where $t_n$ is the isomorphism class of a planar corolla with $n$ inputs. The one-dimensional space ${\it As}(n)$ is concentrated in degree zero and the differential is trivial.
In the standard dg framework, the $A_{\infty}$-operad is the quasi-free dg operad $A_{\infty}={{\mathcal{T}}}(\bigoplus_{n\geq 2}{\textsf{Span}}_{{\Bbbk}}(\{t_n\}))$, where $|t_n|=n-2$, with the differential given by $$d(t_n)=\displaystyle\sum_{\substack{n=p+q+r\\ k=p+r+1\\ k,q\geq 2}} (-1)^{p} \,\, t_k \circ_{p+1} t_q.$$
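For instance, the formula gives $$d(t_3)=t_2\circ_1 t_2 - t_2\circ_2 t_2 \quad\mbox{ and }\quad d(t_4)=t_2\circ_1 t_3 - t_2\circ_2 t_3 + t_3\circ_1 t_2 - t_3\circ_2 t_2 + t_3\circ_3 t_2,$$ the five summands of $d(t_4)$ corresponding to the five edges of the pentagon $K_4$.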
The dg $A_{\infty}$-operad is the minimal model for ${\it As}$. Indeed, the map $\alpha_{\it As}:A_{\infty}\rightarrow {\it As}$, defined as the identity on $t_2$ and as the zero map elsewhere, induces a homology isomorphism $H_{\bullet}(A_{\infty},d)\cong {\it As}$, whereas $d$ is clearly defined in terms of decomposable elements of $A_{\infty}$.
Let ${{\mathcal{A}}}_{\infty}$ be the suboperad of the ${\EuScript O}_{\infty}$ operad determined by linear operadic trees, i.e. operadic trees ${{\mathcal{T}}}$ with univalent vertices such that a vertex $i$ is always adjacent to the vertex $i-1$ (and not to a vertex $i-j$, for some $j>1$). Observe that the univalency requirement ensures that linear operadic trees are closed under the operation of substitution of trees.
The operad ${{\mathcal{A}}}_{\infty}$ is isomorphic to the $A_{\infty}$-operad, and, therefore, it is the minimal model for the operad ${\it As}$.
The restriction to linear operadic trees that defines ${{\mathcal{A}}}_{\infty}$ collapses the set of colours ${\mathbb N}$ of the operad ${\EuScript O}$ to the singleton set $\{1\}$, making therefore ${{\mathcal{A}}}_{\infty}$ a monochrome operad. The conclusion follows from the correspondence between the construct description of associahedra and the standard description underlying the definition of the $A_{\infty}$-operad, established in §\[assoc\].
Let $K_n$, for $n\geq 2$, denote the $(n-2)$-dimensional associahedron, i.e. a CW complex whose cells of dimension $k$ are in bijection with rooted planar trees having $n$ leaves and $n-k-1$ vertices. The sequence ${{\mathcal{K}}}=\{K_n\}_{n\geq 2}$ is naturally endowed with the structure of a non-symmetric topological operad: the composition $$\circ_i: K_r \times K_s\rightarrow K_{r+s-1}$$ is defined as follows: for faces $k_1\in K_{r}$ and $k_2\in K_s$, $\circ_i(k_1,k_2)$ is the face of $K_{r+s-1}$ obtained by grafting the tree encoding the face $k_2$ to the leaf $i$ of the tree encoding the face $k_1$. The topological operad ${{\mathcal{K}}}$ is turned into the dg $A_{\infty}$-operad by taking cellular chain complexes on ${{\mathcal{K}}}$; we refer to [@LV Proposition 9.2.4.] for the details of this transition.
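The grafting underlying this composition is elementary to implement. In the following Python sketch (an illustration only), a face of $K_n$ is hypothetically encoded by its planar rooted tree, written as nested tuples in which the symbol `'*'` stands for a leaf, and `graft(t1, i, t2)` returns the tree encoding $\circ_i(k_1,k_2)$.

```python
def leaves(t):
    """Number of leaves of a planar rooted tree given as nested tuples,
    with the symbol '*' standing for a leaf."""
    return 1 if t == '*' else sum(leaves(s) for s in t)

def graft(t1, i, t2):
    """Graft t2 onto the i-th leaf (counted from 1, left to right) of t1."""
    if t1 == '*':
        return t2
    out, pos = [], 0
    for s in t1:
        n = leaves(s)
        if pos < i <= pos + n:
            out.append(graft(s, i - pos, t2))
        else:
            out.append(s)
        pos += n
    return tuple(out)

# The corolla t_n is the tuple of n leaves; e.g. grafting t_2 onto the second
# leaf of t_3 produces the tree encoding the face t_3 o_2 t_2 of K_4.
t2, t3 = ('*', '*'), ('*', '*', '*')
print(graft(t3, 2, t2))   # ('*', ('*', '*'), '*')
```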
As far as geometric realizations of the associahedra are concerned, Stasheff initially considered the CW complexes $K_n$ as curvilinear polytopes. They were later given coordinates as convex polytopes in Euclidean spaces in different ways: as convex hulls of points, as intersections of half-spaces, and by truncations of standard simplices. However, it was only in the recent paper [@mttv] of Masuda, Thomas, Tonks and Vallette that a non-symmetric operad structure on a family of convex polytopal realizations of the associahedra was introduced. This raises the question of an appropriate “geometric realization” of the ${\EuScript O}_{\infty}$ operad, which we leave for future work.
The combinatorial Boardman-Vogt-Berger-Moerdijk resolution of ${\EuScript O}$ {#Wconstruction}
-----------------------------------------------------------------------------
In [@BM2], Berger and Moerdijk constructed a cofibrant resolution for coloured operads in arbitrary monoidal model categories, by generalizing the Boardman-Vogt $W$-construction for topological operads [@BV]. In this section, by introducing a cubical subdivision of the faces of operadic polytopes, we define the operad ${\EuScript O}^{\circ}_{\infty}$, which is precisely the $W$-construction applied to ${\EuScript O}$.
In order to provide the combinatorial description of the $W$-construction of ${\EuScript O}$, we are going to generalize the notion of a construct of the edge-graph ${\bf H}_{{\mathcal{T}}}$ of an operadic tree ${{\mathcal{T}}}$, to the notion of a circled construct of ${\bf H}_{{\mathcal{T}}}$. A circled construct should be thought of as a two-level construct, i.e. a construct whose vertices are constructs themselves; the idea is that circles determine those “higher” vertices, in the same way as in the definition of the monad of trees. The circles that we add to a construct of ${\bf H}_{{\mathcal{T}}}$ arise from decompositions of ${{\mathcal{T}}}$, and themselves determine a decomposition of that construct – this construction is dual to the one defining the isomorphism $\alpha$ in the proof of Theorem \[free\].
\[dec\] Let $({{\mathcal{T}}},\sigma)\in {\EuScript O}(n_1,\dots,n_k;n)$. The set of [*circled constructs*]{} of the hypergraph ${\bf H}_{{\mathcal{T}}}$ is generated by the following two rules.
- For each (ordinary) construct $C:{\bf H}_{{\mathcal{T}}}$, the construct $C$ together with a single circle that entirely surrounds it, is a circled construct of ${\bf H}_{{\mathcal{T}}}$.
- If $({{\mathcal{T}}},\sigma)=({{\mathcal{T}}}_1,\sigma_1)\bullet_i ({{\mathcal{T}}}_2,\sigma_2)$ and if $C_1$ and $C_2$ are circled constructs of ${\bf H}_{{{\mathcal{T}}}_1}$ and ${\bf H}_{{{\mathcal{T}}}_2}$, respectively, then the construct $C_1\bullet_i C_2:{\bf H}_{{\mathcal{T}}}$, determined uniquely by the composition $\circ_i$ of ${\EuScript O}_{\infty}$, together with all the circles of $C_1$ and $C_2$ (and no other circle), is a circled construct of ${\bf H}_{{\mathcal{T}}}$.
In what follows, we shall write $C^{\circ}:{\bf H}_{{\mathcal{T}}}$ to denote that $C^{\circ}$ is a circled construct of ${\bf H}_{{\mathcal{T}}}$; if $C^{\circ}:{\bf H}_{{\mathcal{T}}}$, we shall denote with $C$ the ordinary construct of ${\bf H}_{{\mathcal{T}}}$ obtained by forgetting the circles of $C^{\circ }$. We shall write $C_p^{\circ }$ to indicate that the circled construct $C^{\circ}$ has $p$ circles. Denote with ${A}^{\circ}({\bf H}_{{\mathcal{T}}})$ the set of all circled constructs of ${\bf H}_{{\mathcal{T}}}$.
Note that the set of edges of a circled construct $C^{\circ}:{\bf H}_{{\mathcal{T}}}$ can be decomposed into two disjoint subsets: the subset of [*circled edges*]{}, i.e. of the edges that lie within a circle, and the subset of [*connecting edges*]{}, i.e. of the edges that connect two adjacent circles. The disjointness is ensured by the fact that Definition \[dec\] disallows nested circles.
Circled constructs are generalizations of [*circled trees*]{}, in the terminology of [@LV Appendix C.2.3.].
Let, for $k\geq 2$, $n_1,\dots,n_k\geq 1$, and $n=\bigl(\sum^k_{i=1} n_i\bigr)-k+1$, ${\EuScript O}^{\circ}_{\infty}(n_1,\dots,n_k;n)$ be the vector space spanned by triples $({{\mathcal{T}}},\sigma,C^{\circ})$, where $({{\mathcal{T}}},\sigma)\in {\tt{Tree}}(n_1,\dots,n_k;n)$ and $C^{\circ}:{\bf H}_{{\mathcal{T}}}$, subject to the equivalence relation generated by:
> $({{\mathcal{T}}}_{1},\sigma_1,C^{\circ}_1)\sim ({{\mathcal{T}}}_{2},\sigma_2,C^{\circ}_2)$ if there exists an isomorphism $\varphi:{{\mathcal{T}}}_1\rightarrow {{\mathcal{T}}}_2$, such that $\varphi\circ\sigma_1=\sigma_2$, $C_{1}\sim_{\varphi} C_{2}$, and such that, modulo the renaming $\varphi$, the circles of $C^{\circ}_1$ are exactly the circles of $C^{\circ}_2$.
Therefore,
$${\EuScript O}^{\circ}_{\infty}(n_1,\dots,n_k;n)={\textsf{Span}}_{{\Bbbk}} \Bigg(\bigoplus_{({{\mathcal{T}}},\sigma)\in {\tt{Tree}}(n_1,\dots,n_k;n)} {{A}^{\circ}({\bf H}_{{\mathcal{T}}})} \Bigg).$$
The space ${\EuScript O}^{\circ}_{\infty}(n_1,\dots,n_k;n)$ is graded by $|({{\mathcal{T}}},C_p^{\circ})|=|v(C)|-p$.
The ${\mathbb N}$-coloured graded collection $$\{{\EuScript O}^{\circ}_{\infty}(n_1,\dots,n_k;n)\,|\, n_1,\dots,n_k\geq 1\}$$ admits the following operad structure: the composition operation $$\circ_i : {\EuScript O}^{\circ}_{\infty}\displaystyle(n_1,\dots,n_k;n) \otimes {\EuScript O}^{\circ}_{\infty}\displaystyle(m_1,\dots,m_l;n_i)\rightarrow {\EuScript O}^{\circ}_{\infty}\displaystyle(n_1,\dots,n_{i-1},m_1,\dots,m_l,n_{i+1},\dots ,n_k;n)$$ is defined by $$({{\mathcal{T}}}_1,\sigma_1,C_1^{\circ})\circ_i({{\mathcal{T}}}_2,\sigma_2,C_2^{\circ})=(-1)^{\varepsilon}({{\mathcal{T}}}_1\bullet_i {{\mathcal{T}}}_2,{\sigma}_1\bullet_i {\sigma}_2,C^{\circ}_1\bullet_i C^{\circ}_2),$$ where $C^{\circ}_1\bullet_i C^{\circ}_2$ is defined by the second rule of Definition \[dec\], and the sign $(-1)^{\varepsilon}$ is determined as follows. Observe that, for an operation $({{\mathcal{T}}},\sigma,C^{\circ})$, the circles of $C^{\circ}$ carry over to $\alpha({{\mathcal{T}}},\sigma,C)$, where $\alpha$ is the isomorphism from the proof of Theorem $\ref{free}$. This decomposes the set of edges of $\alpha({{\mathcal{T}}},\sigma,C)$ into the set of circled edges and the set of connecting edges. Then, $\varepsilon$ is the number of connecting edges and leaves of $\alpha({{\mathcal{T}}}_1,\sigma_1,C_1)$ on the right from the leaf indexed by $i$, multiplied by the number of all connecting edges and leaves of $\alpha({{\mathcal{T}}}_2,\sigma_2,C_2)$, minus the root. Therefore, the sign is calculated in the analogous way as for the partial composition of ${\EuScript O}_{\infty}$, save that the vertices of operations of ${\EuScript O}_{\infty}$ are identified with the circles of operations of ${\EuScript O}^{\circ}_{\infty}$.
Finally, the derivative $d^{\circ}$ of ${\EuScript O}^{\circ}_{\infty}$ will be the difference $d^{\circ}_1-d^{\circ}_0$ of two derivatives. The derivative $d^{\circ}_1$ acts on $({{\mathcal{T}}},\sigma,C^{\circ})$ by turning circled edges of $C^{\circ}$ into connecting edges, by splitting the circles in two. The associated signs are determined as follows. Let us fix a summand in $d^{\circ}_1({{\mathcal{T}}},\sigma,C^{\circ})$. Suppose that $C'$ is the construct surrounded by the circle of $C^{\circ}$ that we split in two, let $C'_1$ and $C'_2$ be the constructs surrounded by the two resulting circles in the summand, and let $X$ and $Y$ be the unions of the sets decorating the vertices of $C'_1$ and $C'_2$ (i.e. $X$ and $Y$ are the sets obtained by collapsing all the edges of $C'_1$ and $C'_2$), respectively. Suppose, without loss of generality, that the circle surrounding $C'_1$ is below the circle surrounding $C'_2$. The sign of the resulting summand is given by $(-1)^{\delta+e(C'_1)+e(C'_2)}$, where $(-1)^{\delta}$ is the sign of the summand of $d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},\sigma,C[(X\cup Y)/X\{Y\}])$ whose new edge is determined by $X\{Y\}$. Here, $C[(X\cup Y)/X\{Y\}]$ denotes the construct obtained from $C$ by collapsing the edge $X\{Y\}$. Observe that the component $d^{\circ}_1$ does not affect the overall configuration of edges of $C$ as a plain construct. The component $d^{\circ}_0$ collapses circled edges; each resulting summand $({{\mathcal{T}}},\sigma,C[(U\cup V)/U\{V\}]^{\circ})$ will be multiplied by the sign that $({{\mathcal{T}}},\sigma,C)$ gets as a summand of $d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},\sigma,C[(U\cup V)/U\{V\}])$. The component $d^{\circ}_0$ does not affect circles. One might think of $d^{\circ}_1$ as a coercion of the derivative $d_{{\EuScript O}_{\infty}}$ to ${\EuScript O}^{\circ}_{\infty}$, by the identification of the vertices of the operations of ${\EuScript O}$ with the circles of the operations of ${\EuScript O}^{\circ}_{\infty}$, since both $d^{\circ}_1$ and $d_{{\EuScript O}_{\infty}}$ split the corresponding entity. On the other hand, $d^{\circ}_0$ can be seen as the inverse of $d_{{\EuScript O}_{\infty}}$: $d^{\circ}_0$ collapses the edges, whereas $d_{{\EuScript O}_{\infty}}$ creates new edges by splitting the vertices. Therefore, intuitively, $d^{\circ}_1$ acts [*globally*]{}, and $d^{\circ}_0$ [*locally*]{}. The sum $e(C'_1)+e(C'_2)$ in the sign $(-1)^{\delta+e(C'_1)+e(C'_2)}$ pertaining to $d_1^{\circ}$ has to be added in order to ensure that $d^{\circ}$ squares to zero. More precisely, it is needed for the pairs of summands in $(d^{\circ})^2({{\mathcal{T}}},\sigma,C^{\circ})$ that should be cancelled out and which arise by applying $d^{\circ}_0$ and $d^{\circ}_1$ in the opposite order. As a consequence of Lemma \[dok\], $d^{\circ}$ agrees with the composition of ${\EuScript O}^{\circ}_{\infty}$.
In the following two examples, we describe two cubical decompositions of operadic polytopes, given by the appropriate posets of circled constructs, where, as was the case for ordinary constructs, the partial order is induced by the action of the differential $d^{\circ}$.
For the linear tree
($({{\mathcal{T}}},\sigma)$ has four vertices, indexed by $1$, $2$, $3$ and $4$ from bottom to top, and internal edges $x$, $y$ and $z$)
by taking all the circled constructs of ${\bf H}_{{\mathcal{T}}}$, we recover a familiar picture of the cubical decomposition of the pentagon from [@LV Appendix C.2.3.], in which we fully labeled only the faces of the shaded square:
By calculating the derivative of the circled construct $C^{\circ}=$, we get the boundary of the square:
.
Here is how we calculated some of the signs above. The first summand on the right-hand side is obtained by applying $d_1^{\circ}$ on $C^{\circ}$, by turning the circled edge $y\{z\}$ into a connecting edge. The sign rule says that this summand will be multiplied by $(-1)^{\delta+1+0}$, where $(-1)^{\delta}$ is the sign that the ordinary construct gets as a summand of $d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},\sigma,\{x,y,z\})$. Since $(-1)^{\delta}=1$, the final sign is $-$. The third summand on the right-hand side is obtained by applying $d^{\circ}_0$ on $C^{\circ}$, by collapsing the circled edge $x\{y\}$. By the sign rule, it will be multiplied by the sign that $C$ gets as a summand of $d_{{\EuScript O}_{\infty}}({{\mathcal{T}}},\sigma,\{x,y\}\{z\})$, which is $-$; together with the $-$ in front of $d_0^{\circ}$, this results in $+$. The reader may readily check that applying the differential $d^{\circ}$ on the sum on the right-hand side of the above equality results in $0$.
Since the basis of ${\EuScript O}^{\circ}_{\infty}(n_1,\dots,n_k;n)$ is given by the faces of cubical subdivisions of arbitrary operadic polytopes, and not just the associahedra, we give below an example of the cubical subdivision of one of the hexagonal faces of the hemiassociahedron (see \[sec.hemiassociahedron\]).
The cubical subdivision of the shaded hexagonal face
of the hemiassociahedron is given by
where, once again, we fully labeled only the shaded square.
The coloured operad ${\EuScript O}^{\circ}_{\infty}$ is the $W$-construction ${W}(H,{\EuScript O})$ of the dg coloured operad ${\EuScript O}$ and the interval $H=N_{\ast}(\Delta^1)$ of normalized chains on the 1-simplex $\Delta^1$.
Recall from [@BM1 Section 8.5.2.] that the operad ${W}(H,{\EuScript O})$ is constructed like the free ${\mathbb N}$-coloured operad over ${\EuScript O}$, but with the additional assignment of a “length in $H$” to each edge of the corresponding trees. More precisely, since $H=N_{\ast}(\Delta^1)$ is the complex concentrated in degrees $0$ and $1$ defined by: $$H_0={\textsf{Span}}_{{\Bbbk}} (\{\lambda_0\})\oplus {\textsf{Span}}_{{\Bbbk}} (\{\lambda_1\}),\quad H_1={\textsf{Span}}_{{\Bbbk}}(\{\lambda\}), \quad \mbox{ and } d_1(\lambda)=\lambda_1-\lambda_0,$$ and since the lengths of the edges of a planar rooted tree $T$ are formally defined by the tensor product $
H(T):=\bigotimes_{e\in e(T)} H$, the space ${W}(H,{\EuScript O})(n_1,\dots,n_k;n)$ is spanned by the equivalence classes of quadruples $({{\mathcal{T}}},\sigma,C,h)$, where $({{\mathcal{T}}},\sigma,C)\in {\EuScript O}_{\infty}(n_1,\dots,n_k;n)$ and $h\in H(\alpha({{\mathcal{T}}},C))$, where $\alpha$ is the isomorphism from Theorem \[free\], under the equivalence relation generated by $$({{\mathcal{T}}},\sigma,C[X\{Y\}/V],s_{X\{Y\}}(h))\sim ({{\mathcal{T}}},\sigma,C,h),$$ for $X\{Y\}:{{\mathcal{T}}}_V$ and $s_e:H(\alpha({{\mathcal{T}}},C))\rightarrow H(\alpha({{\mathcal{T}}},C[X\{Y\}/V]))$ defined by setting the length of $X\{Y\}$ to be $\lambda_0$. The partial composition operation of ${W}(H,{\EuScript O})$ is defined by grafting trees and assigning the length $\lambda_1$ to the new edge. Finally, the differential $\partial_W$ of ${W}(H,{\EuScript O})$ is the sum $\partial_W=\partial_{\EuScript O}+\partial_{H}$ of the [*internal differential*]{} $\partial_{\EuScript O}$, which, being induced from the differential of the operad ${\EuScript O}$, is trivially $0$, and the [*external differential*]{} $\partial_H$, which is itself a sum of the operators $\partial_H^1$ and $\partial_H^2$, induced by the degree zero elements of $H$, which act by assigning the corresponding length to the edges.
The isomorphism between $W(H,{\EuScript O})$ and ${\EuScript O}^{\circ}_{\infty}$ is reflected by the fact that the elements of the basis of ${W}(H,{\EuScript O})(n_1,\dots,n_k;n)$ determined by an operadic tree ${{\mathcal{T}}}$ are in one-to-one correspondence with the circled constructs of ${\bf H}_{{{\mathcal{T}}}}$. Indeed, a circled construct $C^{\circ}:{\bf H}_{{{\mathcal{T}}}}$ determines $h\in H(\alpha({{\mathcal{T}}},\sigma,C))$ as follows: if $X$ is a vertex of the underlying construct $C:{\bf H}_{{\mathcal{T}}}$, then, in $\alpha({{\mathcal{T}}},\sigma,C)$, the edges determined by the set $X$ are “already collapsed” and hence their length is taken to be $\lambda_0$; the connecting edges of $C^{\circ}$ will get the length $\lambda_1$ and the circled edges will get the length $\lambda$. With this identification, it becomes obvious that the external differential $\partial_H$ is precisely $d^{\circ}_1-d^{\circ}_0$.
Combinatorial homotopy theory for cyclic operads {#cyc}
------------------------------------------------
In [@luc Section 1.6.3], Lukács defined the ${\mathbb N}$-coloured operad ${\EuScript C}$ governing non-symmetric cyclic operads. His definition is based on [*cyclic operadic trees*]{}, i.e. operadic trees additionally equipped with bijections labeling clockwise all their leaves in a cyclic way with respect to the planar embedding, and the appropriate modification of the substitution operation. In this section, we show that, by switching from operadic to cyclic operadic trees, while preserving the faces of operadic polytopes as components of operations, one obtains the minimal model ${\EuScript C}_{\infty}$ for the operad ${\EuScript C}$.
### Cyclic operads as algebras over the coloured operad ${\EuScript C}$
Consider the equivalence classes of [*cyclic operadic trees*]{}, i.e. of triples $({{\mathcal{T}}},\sigma,{\tau})$, where $({{\mathcal{T}}},\sigma)$ is an operadic tree and $\tau:\{0,1,\dots,n\}\rightarrow l({{\mathcal{T}}})$ is a bijection labeling clockwise all the leaves of ${{\mathcal{T}}}$ in a cyclic way with respect to the planar embedding of ${{\mathcal{T}}}$, under the equivalence relation generated by:
> $({{\mathcal{T}}}_1,\sigma_1,\tau_1)\sim ({{\mathcal{T}}}_2,\sigma_2,\tau_2)$ if there exists an isomorphism $\varphi: {{\mathcal{U}}}({{\mathcal{T}}}_1)\rightarrow {{\mathcal{U}}}({{\mathcal{T}}}_2)$ of planar unrooted trees ${{\mathcal{U}}}({{\mathcal{T}}}_1)$ and ${{\mathcal{U}}}({{\mathcal{T}}}_2)$, obtained from ${{\mathcal{T}}}_1$ and ${{\mathcal{T}}}_2$ by forgetting the respective roots, such that $\sigma_1=\varphi\circ\sigma_2$ and $\tau_1=\varphi\circ\tau_2$.
Let ${\tt{CTree}}(n_1,\dots,n_k;n)$ denote the set of equivalence classes of cyclic operadic trees $({{\mathcal{T}}},\sigma,{\tau})$, such that $({{\mathcal{T}}},\sigma)\in {\tt{Tree}}(n_1,\dots,n_k;n)$.
\[classesunrooted\] The following three cyclic operadic trees, in which the marked leaf is the $0$-th in the cyclic order, are equivalent:
(three planar drawings of the same underlying unrooted tree, with vertices $1$, $2$ and $3$, differing only in the choice of the root; in the middle drawing the marked $0$-th leaf coincides with the root)
Observe that each equivalence class of a cyclic operadic tree $({{\mathcal{T}}},\sigma,\tau)$ can be represented by a triple $(\underline{{\mathcal{T}}},\underline{\sigma},\underline{\tau})$ , for which $\underline{\tau}(0)$ is the root of $\underline{{\mathcal{T}}}$. In Example \[classesunrooted\], this canonical representative is the tree in the middle.
Let, for $k\geq 1$, $n_1,\dots,n_k\geq 1$ and $n=\big(\sum^k_{i=1} n_i\big)-k+1$, ${\EuScript C}(n_1,\dots,n_k;n)$ be the vector space spanned by the equivalence classes of cyclic operadic trees $({{\mathcal{T}}},\sigma,{\tau})\in{\tt{CTree}}(n_1,\dots,n_k;n)$. Note that this definition allows cyclic operadic trees with only one vertex (and no internal edges).
The operad structure on the collection ${\EuScript C}$ is defined in terms of substitution of trees that takes into account the cyclic orderings of leaves: when identifying the leaves of a vertex of the first tree with the leaves of the second tree, the root of that vertex is considered as $0$-th in the planar order. More precisely, for $({{\mathcal{T}}}_1,\sigma_1,{\tau}_1)\in {\EuScript C}(n_1,\dots,n_k;n)$ and $({{\mathcal{T}}}_2,\sigma_2,{\tau_2})\in {\EuScript C}(m_1,\dots,m_l;n_i)$, we have $$({{\mathcal{T}}}_1,\sigma_1,\tau_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2,\tau_2)=({{\mathcal{T}}}_1\bullet_i \underline{{{\mathcal{T}}}_2}, \sigma_1\bullet_i \underline{\sigma_2},\tau_1),$$ where the $\bullet_i$ operation is the one defined for operadic trees in §\[operadictrees\], and where $(\underline{{{\mathcal{T}}}_2},\underline{\sigma_2},\underline{\tau_2})$ is the canonical representative of the class determined by $({{\mathcal{T}}}_2,\sigma_2,{\tau_2})$. The action of the symmetric group is defined by $({{\mathcal{T}}},\sigma,\tau)^{\kappa}=({{\mathcal{T}}},\sigma\circ\kappa,\tau)$.
Algebras over ${\EuScript C}$ are (non-unital, non-symmetric, reduced) dg cyclic operads.
This has been proven in [@luc Proposition 1.6.4]. An alternative proof can be obtained by translating Lukács’ definition of ${\EuScript C}$ into an equivalent definition that describes ${\EuScript C}$ in terms of generators (i.e. composite trees) and relations. Such a definition is obtained by extending the set of generators $E$ and the set of relations $R$ from Definition \[1\] by adding the unary generators encoding cyclic permutations and the relations governing them, as follows.
Indeed, relying on Lemma \[representation\], it can be shown that ${\EuScript C}\simeq{{\mathcal{T}}}_{\mathbb N}(\hat{E})/(\hat{R})$, where the set of generators $\hat{E}$ is given by binary operations
for $n,k\geq 1$, equipped with the action of the transposition $(21)$ that sends to , and unary operations
for $n\geq 1$, realized as cyclic permutations (i.e. bijections preserving the cyclic order) of the cyclically ordered set $(0,1,\dots,n)$, and the set of relations $\hat{R}$ is given by the relation from Definition \[1\], together with the following equalities, concerning the action of cyclic permutations:
(the corresponding equalities are given by diagrams, not reproduced here)
where, in , it is assumed that $j\leq\tau(0)\leq k+j-1$, i.e. that $\tau(0)=k-i+j$ for some $1\leq i\leq k$, and $\tau_1$ is determined by $\tau_1(i)=0$ and $\tau_2$ by $\tau_2(0)=j$, and, in , it is assumed that $1\leq\tau(0)\leq j-1$ (resp. $k+j\leq\tau(0)\leq n+k-1$), and $\tau_1$ is determined by $\tau_1(0)=\tau(0)$ (resp. $\tau_1(0)=\tau(0)-k+1$). Note that the relation from Definition \[1\] does not appear in the above description of ${\EuScript C}$, since it can be proven by a sequence of equalities witnessed by the relations , , , and .
A ${\EuScript C}$-algebra is then a dg ${\mathbb N}$-module $(A,d)=\{(A(n),d_{A(n)})\}_{n\geq 1}$, equipped with the obvious unary and binary operations. Observe that the unary generators determine a right ${\mathbb Z}_{n+1}$ action on each $A(n)$, making $\{A(n)\}_{n\geq 1}$ a cyclic ${\mathbb N}$-module, the relations and ensure the associativity of the operadic composition, while the relations and establish the compatibility of the action of cyclic permutations with the operadic composition. That this is indeed a cyclic operad follows according to [@opsprops Proposition 42.].
### The combinatorial ${\EuScript C}_{\infty}$ operad
The ${\EuScript C}_{\infty}$ operad is the dg operad whose structure is induced by the one of the ${\EuScript O}_{\infty}$ operad, by replacing operadic trees with cyclic operadic trees, while preserving the faces of operadic polytopes as components of operations. Here, the operadic polytope associated to a cyclic operadic tree $({{\mathcal{T}}},\sigma,\tau)$ remains simply the hypergraph polytope for the edge-graph ${\bf H}_{{\mathcal{T}}}$ of the tree ${{\mathcal{T}}}$. In particular, if ${{\mathcal{T}}}$ has only one vertex, its associated edge-graph is the empty hypergraph ${\bf H}_{\emptyset}$ (i.e. the unique hypergraph whose set of vertices is empty), whose set of constructs is the singleton containing the empty construct. Observe that this modification is well defined, since equivalent cyclic operadic trees have the same associated edge-graph.
More precisely, for $k\geq 1$, $n_1,\dots,n_k\geq 1$ and $n=(\sum^k_{i=1} n_i)-k+1$, we define ${\EuScript C}_{\infty}(n_1,\dots,n_k;n)$ to be the vector space spanned by the equivalence classes of quadruples $({{\mathcal{T}}},\sigma,\tau,C)$, where $({{\mathcal{T}}},\sigma,\tau)\in {\tt{CTree}}(n_1,\dots,n_k;n)$ and $C:{\bf H}_{{{\mathcal{T}}}}$. For $({{\mathcal{T}}}_1,\sigma_1,\tau_1,C_1)\in {\EuScript C}_{\infty}(n_1,\dots,n_k;n)$ and $({{\mathcal{T}}}_2,\sigma_2,\tau_2,C_2)\in {\EuScript C}_{\infty}(m_1,\dots,m_l;n_i)$, we define the partial composition operation $\circ_i$ by $$({{\mathcal{T}}}_1,\sigma_1,\tau_1,C_1)\circ_i ({{\mathcal{T}}}_2,\sigma_2,\tau_2,C_2):=({{\mathcal{T}}}_1\bullet_i \underline{{{\mathcal{T}}}_2},\sigma_1\bullet_i \underline{\sigma_2},\tau_1,C_1\bullet_i C_2),$$ where $(\underline{{{\mathcal{T}}}_2},\underline{\sigma_2},\underline{\tau_2})$ is the canonical representative of the class determined by $({{\mathcal{T}}}_2,\sigma_2,{\tau_2})$, and $C_1\bullet_i C_2$ is defined exactly as in §\[Oinf\]. This determines an operad on vector spaces which is easily proven to be free over the equivalence classes represented by left-recursive cyclic operadic trees (i.e. cyclic operadic trees whose underlying operadic trees are left-recursive): $${\EuScript C}_{\infty}\simeq {{\mathcal{T}}}_{\mathbb N}\Bigg(\, \displaystyle\bigoplus_{k\geq 1}\,\,\bigoplus_{n_1,\dots,n_k\geq 1}\,\bigoplus_{({{\mathcal{T}}},{\tau})\in {\tt{CTree}}(n_1,\dots,n_k;n)}\,{\Bbbk}\Bigg).$$
Indeed, the corresponding isomorphism $\alpha$ is defined by introducing the decomposition of an operation $({{\mathcal{T}}},\sigma,\tau,C)\in{\EuScript C}_{\infty}(n_1,\dots,n_k;n)$ by the construction analogous to the one from the proof of Theorem \[free\], which, in addition, takes into account the data given by $\tau$, as follows. Denote, for $1\leq i\leq k$, with $\sigma_i$ the index given by $\sigma$ to the $i$-th vertex in the left-recursive ordering of ${{\mathcal{T}}}$.
- If $C$ is the maximal construct $e({{\mathcal{T}}})$, then
$\alpha({{\mathcal{T}}},\sigma,\tau,C)$ is the corolla labelled by $({{\mathcal{T}}},\tau)$, with inputs coloured by $n_{\sigma_1},\dots,n_{\sigma_k}$ and output coloured by $n$.
- Suppose that $C=X\{C_1,\dots,C_p\}$, where ${\bf H}_{{\mathcal{T}}}\backslash X\leadsto {\bf H}_{{{\mathcal{T}}}_1},\dots, {\bf H}_{{{\mathcal{T}}}_p}$ and $C_i:{\bf H}_{{{\mathcal{T}}}_i}$. Let $({{\mathcal{T}}}_X,\tau)$ be the left-recursive cyclic operadic tree obtained from $({{\mathcal{T}}},\sigma,\tau)$ by collapsing all the edges from $e({{\mathcal{T}}})\backslash X$. Note that this construction preserves $\tau$. Once again, the collapse of the edges ${\it e}({{\mathcal{T}}})\backslash X$ that defines ${{\mathcal{T}}}_X$ is, in fact, the collapse of the subtrees ${{\mathcal{T}}}_i$, $1\leq i\leq p$, of ${{\mathcal{T}}}$. Denote with $\tau_i$ the indexing of the leaves of ${{\mathcal{T}}}_i$ that sends $0$ to the root of ${{\mathcal{T}}}_i$. We define $$\quad\quad\quad\alpha({{\mathcal{T}}},\sigma,\tau,C)=\bigg((\cdots(\alpha({{\mathcal{T}}}_X,\tau,X)\circ_{\rho({{\mathcal{T}}}_{1})} \alpha({{\mathcal{T}}}_{1},\tau_1,C_{1}) )\cdots)\circ_{{\rho({{\mathcal{T}}}_{p})}}\alpha({{\mathcal{T}}}_{p},\tau_p,C_{p})\bigg)^{\sigma_C},$$ where $\sigma_C$ is the permutation determined uniquely thanks to Lemma \[subtreedecomposition\].
The free operad representation of the operation $({{\mathcal{T}}},\sigma,\tau,C)$, where
($({{\mathcal{T}}},\sigma,\tau)$ is the tree with vertices indexed by $2$, $3$ and $1$, from bottom to top, with internal edges $x$ and $y$, and with the marked $0$-th leaf attached to the top vertex)
is given by
(the corresponding grafting of labelled corollas; diagram not reproduced)
The space ${\EuScript C}_{\infty}(n_1,\dots,n_k;n)$ is then graded by setting $|({{\mathcal{T}}},\sigma,\tau,C)|:=e({{\mathcal{T}}})-v(C)$. As we did for the operad ${\EuScript O}_{\infty}$, the free operad structure of ${\EuScript C}_{\infty}$ is used for the introduction of signs for the corresponding dg extension. In particular, the differential $d_{{\EuScript C}_{\infty}}$ of ${\EuScript C}_{\infty}$ acts like the differential $d_{{\EuScript O}_{\infty}}$ of ${\EuScript O}_{\infty}$, i.e. it splits the vertices of constructs and does not change the underlying cyclic operadic tree; exceptionally, $d_{{\EuScript C}_{\infty}}$ maps the empty construct to $0$.
We can, therefore, conclude.
The operad ${\EuScript C}_{\infty}$ is the minimal model for the operad ${\EuScript C}$.
M. Batanin, M. Markl, Koszul duality in operadic categories, [arXiv:1812.02935v2](https://arxiv.org/abs/1812.02935), 2019.
C. Berger, I. Moerdijk, Axiomatic homotopy theory for operads, [*Comment. Math. Helv.*]{} 78, 805–831, 2003.
C. Berger, I. Moerdijk, The Boardman-Vogt resolution of operads in monoidal model categories, [*Topology*]{} 45 (5), 807–849, 2006.
C. Berger, I. Moerdijk, Resolution of coloured operads and rectification of homotopy algebras, Categories in algebra, geometry and mathematical physics, 31–58, [*Contemp. Math.*]{} 431, Amer. Math. Soc., Providence, 2007.
J. M. Boardman, R. M. Vogt, Homotopy invariant algebraic structures on topological spaces, [*Lecture Notes in Mathematics*]{}, Vol. 347, Springer-Verlag, Berlin, 1973.
M. P. Carr, S. L. Devadoss, Coxeter complexes and graph-associahedra, [*Topology Appl.*]{} 153 (12), 2155–2168, 2006.
D.-C. Cisinski, I. Moerdijk, Dendroidal sets and simplicial operads, [*Journal of Topology*]{} 6 (3), 705–756, 2013.
P.-L. Curien, J. Ivanović, J. Obradović, Syntactic aspects of hypergraph polytopes, [*J. Homotopy Relat. Struct.*]{}, <https://doi.org/10.1007/s40062-018-0211-9>, 2018.
M. Dehling, B. Vallette, Symmetric homotopy theory for operads, to appear in [*Algebraic & Geometric Topology*]{}, [arXiv:1503.02701](https://arxiv.org/abs/1503.02701), 2015.
K. Došen, Z. Petrić, Hypergraph polytopes, [*Topology and its Applications*]{} 158, 1405–1444, 2011.
K. Došen, Z. Petrić, Weak Cat-operads, [*Logical Methods in Computer Science*]{} 11 (1), 1–23, 2015.
G. C. Drummond-Cole, B. Vallette, The minimal model for the Batalin–Vilkovisky operad, [*Selecta Mathematica*]{} 19 (1), 1–47, 2013.
E. M. Feichtner, B. Sturmfels, Matroid polytopes, nested sets and Bergman fans, [*Port. Math. (N.S.)*]{} 62, 437–468, 2005.
E. Getzler, J. D. S. Jones, Operads, homotopy algebra and iterated integrals for double loop spaces, [arXiv:hep-th/9403055](https://arxiv.org/abs/hep-th/9403055), 1994.
V. Ginzburg, M. M. Kapranov, Koszul duality for operads, [*Duke Math. Journal*]{} 76 (1), 203–272, 1994.
V. Hinich, Homological algebra of homotopy algebras, [*Comm. Algebra*]{} 25, 3291–3323, 1997.
B. Jurčo, L. Raspollini, C. Saemann, M. Wolf, $L_{\infty}$-Algebras of Classical Field Theories and the Batalin–Vilkovisky Formalism, [arXiv:1809.09899v2](https://arxiv.org/abs/1809.09899), 2018.
J.-L. Loday, B. Vallette, Algebraic Operads, Grundlehren der mathematischen Wissenschaften, Springer, 2012.
A. Lukács, Cyclic Operads, Dendroidal Structures, Higher Categories, PhD Thesis, 2010.
M. Markl, Models for operads, [*Communications in Algebra*]{} 24 (4), 1471–1500, 1996.
M. Markl, Homotopy diagrams of algebras, Proceedings of the 21st Winter School “Geometry and Physics”, Circolo Matematico di Palermo, 161–180, <http://eudml.org/doc/221412>, 2002.
M. Markl, Operads and PROPs, [*Handbook of Algebra*]{}, Vol. 5, Elsevier, 87–140, 2008.
M. Markl, S. Shnider, J. Stasheff, Operads in Algebra, Topology and Physics, Amer. Math. Soc., Providence, 2002.
N. Masuda, H. Thomas, A. Tonks, B. Vallette, The diagonal of the associahedra, [arXiv:1902.08059](https://arxiv.org/abs/1902.08059), 2019.
P. McMullen, E. Schulte, Abstract Regular Polytopes, Cambridge University Press, Cambridge, 2002.
A. Postnikov, Permutohedra, associahedra, and beyond, [*Int. Math. Res. Not.*]{} 2009, 1026–1106, 2009.
A. Postnikov, V. Reiner, L. Williams, Faces of generalized permutohedra, [*Doc. Math.*]{} 13, 207–273, 2008.
M. Robertson, The homotopy theory of simplicially enriched multicategories, [arXiv:1111.4146](https://arxiv.org/abs/1111.4146), 2011.
M. Spitzweck, Operads, algebras and modules in general model categories, PhD thesis, Bonn, 2001.
M. Spivak, Calculus On Manifolds: A Modern Approach To Classical Theorems Of Advanced Calculus, Westview Press, 5th edition, 1971.
J. D. Stasheff, Homotopy associativity of $H$-spaces, I, [*Trans. Amer. Math. Soc.*]{} 108 (2), 275–292, 1963.
J. D. Stasheff, Homotopy associativity of $H$-spaces, II, [*Trans. Amer. Math. Soc.*]{} 108 (2), 293–312, 1963.
P. Van der Laan, Coloured Koszul duality and strongly homotopy operads, PhD Thesis, [arXiv:math/0312147v2](https://arxiv.org/abs/math/0312147), 2003.
R. M. Vogt, Cofibrant operads and universal $E_{\infty}$ operads, [*Topology and its Applications*]{} 133, 69–87, 2003.
B. C. Ward, Massey products for graph homology, [arXiv:1903.12055v1](https://arxiv.org/abs/1903.12055), 2019.
|
---
abstract: 'We study the decidability and expressiveness issues of $\mu$-calculus on data words and data $\omega$-words. It is shown that both the full logic and the fragment which uses only least fixpoints are undecidable, while the fragment containing only greatest fixpoints is decidable. Two subclasses, namely BMA and BR, obtained by limiting the compositions of formulas, are exhibited together with their automata characterizations. Furthermore, Data-LTL and two-variable first-order logic are expressed within the unary alternation-free fragment of BMA. Finally, basic inclusions among the fragments are discussed.'
author:
- |
Thomas Colcombet and Amaldev Manuel [^1]\
[LIAFA, Université Paris-Diderot]{}\
[{thomas.colcombet, amal}@liafa.univ-paris-diderot.fr]{}\
bibliography:
- 'mu.bib'
title: '$\mu$-calculus on data words'
---
[^1]: The research leading to these results has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 259454.
|
---
abstract: 'Driven-dissipative phase transitions are currently a topic of intense research due to the prospect of experimental realizations in quantum optical setups. The most paradigmatic model presenting such a transition is the Kerr model, which predicts the phenomenon of optical bistability, where the system may relax to two different steady-states for the same driving condition. These states, however, are inherently out-of-equilibrium and are thus characterized by the continuous production of irreversible entropy, a key quantifier in thermodynamics. In this paper we study the dynamics of the entropy production rate in a quench scenario of the Kerr model, where the external pump is abruptly changed. This is accomplished using a recently developed formalism, based on the Husimi $Q$-function, which is particularly tailored for driven-dissipative and non-Gaussian bosonic systems \[Phys. Rev. Res. [**2**]{}, 013136 (2020)\]. Within this framework the entropy production can be split into two contributions, one being extensive with the drive and describing classical irreversibility, and the other being intensive and directly related to quantum fluctuations. The latter, in particular, is found to reveal the high degree of non-adiabaticity, for quenches between different metastable states.'
author:
- 'Bruno O. Goes'
- 'Gabriel T. Landi'
bibliography:
- 'references.bib'
title: 'Entropy production dynamics in quench protocols of a driven-dissipative critical system'
---
Introduction {#Sec:Intro}
============
Entropy can be spontaneously generated in a physical process. In addition, when a system is connected to an environment, there may also be a flux of entropy. Hence, the entropy of an open system, classical or quantum, evolves according to, $$\label{eq:Entropy_evolution_OS}
\frac{dS}{dt} = \Pi - \Phi,$$ where $\Pi \geq 0$ is the irreversible entropy production rate and $\Phi$ is the entropy flux from the system to the reservoir. Steady states are characterized by $dS/dt = 0$. These may either be equilibrium steady states (ESSs), characterized by $\Pi_{\text{ESS}} = \Phi_{\text{ESS}} = 0$, or non-equilibrium steady states (NESSs), which occur in systems connected to multiple baths, or which are externally driven. In this case $\Pi_{\text{NESS}}=\Phi_{\text{NESS}} > 0$, which means that entropy is being continuously produced within the system, all of which flows to the environment. Entropy production is hence the core concept in quantifying how far from equilibrium a process takes place, or in other words, how irreversible it is.
A particularly interesting class of non-equilibrium processes are those presenting driven-dissipative phase transitions (DDPTs) [@hartmann_quantum_2008; @diehl_quantum_2008; @minganti_spectral_2018], which occur in quantum optical systems subject to a competition between dissipation and a coherent drive. When a non-linear medium is involved, these systems may undergo a phase transition, where the NESS abruptly changes. This therefore falls in the class of dissipative transitions (also called non-equilibrium phase transitions in the classical context). Driven-dissipative transitions can be continuous or discontinuous [@marro_dickman_1999; @tomadin_nonequilibrium_2011; @carmichael_breakdown_2015] and are associated with the closing of the Liouvillian super-operator gap [@kessler_dissipative_2012; @minganti_spectral_2018]. They therefore offer the possibility of exploring, in a controlled way, many-body quantum phases without any classical analog. And for this reason, they have been the focus of significant theoretical studies [@risken_quantum_1987; @kessler_dissipative_2012; @lee_unconventional_2013; @torre_keldysh_2013; @minganti_spectral_2018; @sieberer_dynamical_2013; @carmichael_breakdown_2015; @chan_limit-cycle_2015; @mascarenhas_matrix-product-operator_2015; @sieberer_nonequilibrium_2014; @lee_dissipative_2014; @weimer_variational_2015; @casteels_power_2016; @mendoza-arenas_beyond_2016; @jin_cluster_2016; @bartolo_exact_2016; @casteels_critical_2017; @savona_spontaneous_2017; @rota_critical_2017; @biondi_nonequilibrium_2017; @casteels_quantum_2017; @raghunandan_high-density_2018; @gelhausen_dissipative_2018; @foss-feig_emergent_2017; @hannukainen_dissipation-driven_2018; @gutierrez-jauregui_dissipative_2018; @vicentini_critical_2018; @hwang_dissipative_2018; @vukics_finite-size_2019; @patra_driven-dissipative_2019; @tangpanitanon_hidden_2019], as well as experimental investigations [@fink_observation_2017; @fitzpatrick_observation_2017; @fink_signatures_2018; @rodriguez_probing_2017], in the last decade.
Since the entropy production rate $\Pi$ is the fundamental quantity that characterizes a non-equilibrium process, one may ask how it behaves dynamically when the system is forced to adjust from one steady-state to another. A particularly interesting scenario is when the system is subject to a quantum quench; i.e., a sudden change in one of the system’s parameters [@calabrese_time_2006]. In a thermodynamic language, this is a highly non-adiabatic process, analogous to the free expansion of a gas after a sudden change in volume. Hence, on top of the entropy production due to the intrinsic non-equilibrium nature of the NESS, there will also be a contribution due to this non-adiabatic transient dynamics. Quench dynamics have been the subject of intense research in the last two decades, although most of the work has been on closed systems undergoing unitary dynamics. Examples include the evolution of the expectation values of observables in quantum spin chains [@calabrese_time_2006], the connection with work statistics [@silva_statistics_2008; @dorner_emergent_2012; @gambassi_large_2012; @varizi_quantum_2020], and the change in diagonal entropy [@polkovnikov_microscopic_2008; @polkovnikov_microscopic_2011; @polkovnikov_colloquium_2011; @alba_entanglement_2017]. Experimental studies have also been carried out in ultra-cold atoms in optical lattices [@orzel_squeezed_2001; @greiner_collapse_2002; @greiner_quantum_2002; @kinoshita_quantum_2006; @bloch_quantum_2012; @leonard_monitoring_2017], where the long coherence times allow one to assess the dynamics over extended time scales.
DDPTs, however, are usually associated with photon loss dissipation, and are thus inherently open. As a consequence, both the NESS and the transient dynamics after a quench should be associated with a finite entropy production. Very little is known, however, about the behavior of the entropy production across DDPTs, especially concerning the quench dynamics. Unfortunately, photon losses are not a standard thermal process, since they effectively occur at zero temperature. As a consequence, the standard formulation of entropy production does not apply [@Timpanaro2019a]. Recently, we proposed a theory of entropy production suited for photon loss dissipation using concepts from quantum phase space [@santos_wigner_2017]. This allowed us to incorporate not only thermal, but also vacuum fluctuations. This theory is suited for experiments in quantum optical setups, as illustrated in [@brunelli_experimental_2018; @Rossi2020]. Originally, it was formulated for Gaussian states and processes, but recently it has been extended to the non-Gaussian case [@goes_quantum_2020]. This makes it particularly suited for dealing with DDPTs, especially those involving quenches, where Gaussianity is seldom preserved.
In Ref. [@goes_quantum_2020] this formalism was applied to quantify the entropy production in a NESS. The theory, however, is also suited for describing entropy production dynamically. This article aims to demonstrate the applicability and usefulness of the framework of [@goes_quantum_2020] to study the dynamics of the entropy production in a quench scenario for DDPTs. We focus on the paradigmatic Kerr bistability model [@bartolo_exact_2016; @casteels_critical_2017; @minganti_spectral_2018; @roberts_driven-dissipative_2020], which describes an optical cavity filled with a non-linear medium, coherently pumped by a laser, and subjected to an incoherent single photon loss mechanism (see Fig. \[fig:KBM\_portrait\]). This model presents a discontinuous DDPT as a function of the external laser drive, which at the mean field level can be associated with a bistable behavior of the cavity field. This, as we hope to show, will provide an insightful and timely analysis of a problem that lies at the boundary between statistical physics and quantum optics.
The paper is organized as follows: in Sec. \[Sec:The\_KBM\] we introduce the Kerr bistability model, its thermodynamic limit and briefly discuss the spectral properties of the Liouvillian; in Sec. \[Sec:Wehrl\_entropy\_prod\] we briefly review Ref. [@goes_quantum_2020] and give the expressions for the entropy production/flux rate. The formalism is then applied to the sudden quench dynamics scenario in Sec. \[sec:Quench\_dyn\_ent\_prod\]. We first present a Gaussianity test, to show the intrinsic non-Gaussian structures that emerge from the quench dynamics. This is then followed by an exposition of the main results, where we analyze the relevant thermodynamics during the quench evolution. Conclusions are drawn in Sec. \[Sec:conclusions\], where we also discuss possible directions of future research.
Kerr bistability model {#Sec:The_KBM}
======================
The Model {#sec:Explaining_the_model}
---------
In this section we review the Kerr bistability model (KBM), which will be the main focus of this work. The model is described, in a frame rotating with the pump frequency $\omega_p$, by the Hamiltonian $$\label{H}
H= \Delta\dg{a}a +i\Epsilon(\dg{a}-a)+\frac{U}{2}\dg{a}\dg{a}aa,$$ where $\dg{a}$ ($a$) is the single mode creation (annihilation) operator, obeying the bosonic algebra $[a,\dg{a}]=1$. This model describes an optical cavity, filled with a non-linear medium responsible for an effective photon-photon interaction (described by the parameter $U$). The parameter $\Delta = \omega_c - \omega_p$ is the detuning between the cavity and pump frequencies. Finally, $\Epsilon$ describes the coherent drive amplitude. The system is also subjected to single photon losses at rate $\kappa$ (see Fig. \[fig:KBM\_portrait\]). These are described within the Born-Markov approximation, so that the state of the system $\rho$ evolves according to the Gorini–Kossakowski–Sudarshan–Lindblad (GKLS) master equation ($\hbar = 1$) [@drummond_quantum_1980], $$\label{M}
\pd{t}{\rho}=-i[H,\rho]+2\kappa\bigpar{a\rho\dg{a}-\frac{1}{2}\{\dg{a}a,\rho\}} = \mathcal{L}(\rho),$$ where $\mathcal{L}$ is the Liouvillian super-operator.
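As a concrete illustration, the Hamiltonian and the GKLS master equation above can be set up numerically in a few lines. The sketch below uses the open-source QuTiP library; the Fock-space truncation and the parameter values are illustrative assumptions of the sketch, not quantities prescribed by the text.

```python
# Minimal sketch of the Kerr model and its GKLS master equation using QuTiP.
# The Fock cutoff and all parameter values below are illustrative choices.
import numpy as np
from qutip import destroy, liouvillian, steadystate, expect

n_fock = 40                          # truncation of the bosonic Hilbert space
kappa, Delta, U, E = 0.5, -2.0, 0.05, 2.0

a = destroy(n_fock)
H = Delta * a.dag() * a + 1j * E * (a.dag() - a) \
    + 0.5 * U * a.dag() * a.dag() * a * a

# The dissipator 2*kappa*(a rho a^dag - {a^dag a, rho}/2) corresponds to the
# single collapse operator sqrt(2*kappa)*a in standard Lindblad form.
c_ops = [np.sqrt(2 * kappa) * a]

L = liouvillian(H, c_ops)            # the super-operator L
rho_ss = steadystate(H, c_ops)       # unique steady state at finite truncation

print("NESS photon number:", expect(a.dag() * a, rho_ss))
```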
Thermodynamic limit {#sec:Thermo_limit}
-------------------
The system described by Eq. can undergo a phase transition for sufficiently large pump intensities $\Epsilon$. In order to clarify the nature of this transition, and also better connect it to the standard statistical mechanics literature, it is convenient to define a fictitious thermodynamic limit, which is where the transition actually takes place. That is, we introduce a dimensionless parameter $N$ and define the thermodynamic limit to correspond to $N\to\infty$. This parameter is introduced in a reparametrization of the pump intensity, as $\Epsilon=\sqrt{N}\epsilon$, where $\epsilon$ is finite. The first moment is known to behave as $\langle a \rangle \propto \Epsilon$. We therefore define $\langle a \rangle:=\mu=\sqrt{N}\alpha$, where $\alpha$ is finite and represents the order parameter of the system. These parametrizations show that the pump term is extensive, i.e. $\Epsilon(\mu - \bar{\mu}) \propto O(N)$. In order to properly define the thermodynamic limit, we take as a physical assumption, that this should also hold for the average energy in general. That is, $\mean{H}\sim O(N)$. Hence, we must reparametrize all other parameters accordingly. In the case of , this amounts solely to a rescaling of $U$. Since $\mean{\dg{a}\dg{a}aa}\propto O(N^2)$, we therefore reparametrize $U=u/N$ [@goes_quantum_2020].
To summarize, we introduce a parameter $N$ by rescaling $\Epsilon=\sqrt{N}\epsilon$ and $U=u/N$. Both $\epsilon$ and $u$ are now taken to be finite and the thermodynamic limit is defined as $N\to \infty$.
Mean-field behavior and exact solution for the steady-state
-----------------------------------------------------------
Within the mean-field approximation, one sets $\langle a^\dagger a a \rangle \simeq N^{3/2} |\alpha|^2 \alpha$. As a consequence, the equation of motion for the scaled amplitude $\alpha$, becomes $$\label{eq:KBM_amp_dyn_Rescaled_MFA}
\pd{t}\alpha = -(\kappa +i\Delta+iu |\alpha|^2)\alpha+ \epsilon,$$ When $\Delta<-\sqrt{3}\kappa$, the steady-state of this equation is known to present a bistable behavior [@drummond_quantum_1980]. This is illustrated in Fig. (\[fig:firstmoment\_with\_plotmarkers\]). The bistability region occurs for $\epsilon$ within the interval $$\label{eq:KBM_MFA_sol}
\epsilon_\pm = \sqrt{n_\pm \big[\kappa^2 + (\Delta + n_\pm u)^2\big]}, \quad n_{\pm} = \frac{-2\Delta \pm \sqrt{\Delta^2 - 3\kappa^2}}{3u}.$$ For $\epsilon_- < \epsilon < \epsilon_+$, Eq. presents 3 steady-state solutions, two of which are stable. At the level of the full master equation , however, the nature of this bistability is somewhat subtle. For, as shown in [@drummond_quantum_1980] (see also [@vogel_quasiprobability_1989; @Habraken_2012; @bartolo_exact_2016; @roberts_driven-dissipative_2020]), the steady-state is always unique and can, in fact, be solved analytically. Indeed, the moments of the system are found to be given by [@drummond_quantum_1980] $$\label{eq:KBM-Exact_moments}
\mean{(\dg{a})^na^m} = \sqrt{2}\frac{\cj{\xi}^n\xi^m\Gamma(\cj{x})\Gamma(x)}{\Gamma(\cj{x}+n)\Gamma(x+m)}\frac{_0{F}_2(\cj{x}+n,x+m;|\xi|^2)}{_0{F}_2(\cj{x},x;|\xi|^2)},$$ where $\xi = 2\Epsilon/iU$ and $x = 2(i\Delta + \kappa)/iU$. Here $_0{F}_2(a,b;c)$ and $\Gamma$ denote the hyper-geometric and gamma functions, respectively. The predictions of this exact solution are shown as the black-solid lines in Fig. . As can be seen, at a certain critical value $\epsilon_c$ the system jumps abruptly from a state with low photon occupation, to another with high occupation. Henceforth, we shall refer to the state for $\epsilon < \epsilon_c$ as the dark phase and that for $\epsilon > \epsilon_c$ as the bright phase (there is no closed-form expression for $\epsilon_c$, which has to be computed numerically).
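The mean-field branches and the edges of the bistability region are easy to reproduce numerically, since the steady state of the mean-field equation reduces to a cubic in $n=|\alpha|^2$. The sketch below uses NumPy; $\kappa$, $\Delta$ and $u$ follow the values quoted in the figure caption, while the value of $\epsilon$ is an illustrative choice. The exact moments quoted above could be evaluated along the same lines with an arbitrary-precision $_0F_2$ (e.g. `mpmath.hyper`), which is not shown here.

```python
# Sketch of the mean-field analysis of the Kerr bistability.
import numpy as np

kappa, Delta, u = 0.5, -2.0, 1.0     # values from the figure caption
eps = 0.95                           # illustrative pump, inside the bistable window

# Steady state of the mean-field equation: n*(kappa^2 + (Delta + u*n)^2) = eps^2,
# a cubic equation in n = |alpha|^2.
coeffs = [u**2, 2 * u * Delta, kappa**2 + Delta**2, -eps**2]
roots = np.roots(coeffs)
branches = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
print("mean-field |alpha|^2 branches:", branches)

# Turning points n_pm and the pump amplitudes delimiting the bistability region.
n_pm = (-2 * Delta + np.array([1.0, -1.0]) * np.sqrt(Delta**2 - 3 * kappa**2)) / (3 * u)
eps_edges = np.sqrt(n_pm * (kappa**2 + (Delta + u * n_pm)**2))
print("bistability window in epsilon:", sorted(eps_edges))
```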
![NESS expectation value of the first moment of the Kerr model , as a function of the rescaled pump $\epsilon$. The mean-field solution is shown in gray-dotdashed lines, while the exact solution is shown in black (for $N = 20$). We also mark in the figure the points $\epsilon_i = 0.5$ and $\epsilon_f$, of the quench protocol, that will be used in Sec. ; viz., (a)-(f) $\epsilon_f = 0.6$, $0.8$, $\epsilon_c$, $1.1$, $\epsilon_+$, $1.3$, where $\epsilon_c = 0.933$ is the critical pump and $\epsilon_+ = 1.16616$ marks the edge of the bistability region. Other parameters were fixed at $\kappa = 1/2$, $\Delta = -2$ and $u = 1$. []{data-label="fig:firstmoment_with_plotmarkers"}](Abs_alpha_w_plotmarkers_Main.pdf){width="45.00000%"}
Spectral properties of the phase transition {#sec:spectral_properties}
-------------------------------------------
This apparent contradiction between the mean-field and exact solutions is resolved by analyzing the full spectrum of the Liouvillian , defined by [@kessler_dissipative_2012; @minganti_spectral_2018] $$\mathcal{L}(\rho_k) = \zeta_k \rho_k,$$ where $\rho_k$ are the eigenmatrices associated with the eigenvalue $\zeta_k$ (which may be complex, since $\mathcal{L}$ is not Hermitian). The NESS, in particular, is associated with the zero eigenvalue, $\zeta_0 = 0$, $$\mathcal{L}(\rho_{\text{NESS}}) = 0$$ It can be shown that, in general, $\Re{\zeta_k} \leqslant 0$ [@breuer2002theory], which ensures stability of the dynamics under GKLS evolution. We can thus order the eigenvalues as $0 = |\Re{\zeta_0}| < |\Re{\zeta_1}|< \ldots$. The Liouvillian gap is then defined as $\lambda = |\Re{\zeta_1}|$. Physically, it determines the slowest relaxation rate in the long-time limit. Or, what is equivalent, the tunneling time $\tau_{\text{tun}}$ between the two branches of optical bistability [@risken_quantum_1987].
As studied in detail in Refs. [@minganti_spectral_2018; @kessler_dissipative_2012; @casteels_critical_2017], in the Kerr model the gap closes asymptotically within the entire bistability region $\epsilon \in [\epsilon_-,\epsilon_+]$. This is illustrated in Fig. \[fig:gap\_closing\]. As a consequence, for any finite $N$, there will always be a unique steady-state, but the gap between this and the first excited state becomes vanishingly small as $N$ increases, thus giving rise to a bistable solution (see also [@macieszczak_towards_2016; @macieszczak_theory_2020]). This hence reconciles the mean-field and exact solutions.
![Liouvillian gap $\lambda$ as a function of $\epsilon$. The dashed black vertical line represents the critical pump $\epsilon_c = 0.933$. The orange patch marks the bistable region, as given by Eq. . Other parameters are the same as in Fig. \[fig:firstmoment\_with\_plotmarkers\].[]{data-label="fig:gap_closing"}](Liouvillian_gap_plot.pdf){width="45.00000%"}
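The gap itself can be estimated by brute-force diagonalization of the Liouvillian at a modest Fock truncation, as in the sketch below (QuTiP/NumPy). The cutoff, the value of $N$ and the pump amplitude are illustrative assumptions; dense diagonalization is only practical for small truncations.

```python
# Sketch: Liouvillian gap of the Kerr model at finite N (illustrative values).
import numpy as np
from qutip import destroy, liouvillian

n_fock, N = 30, 5                        # Fock cutoff and scaling parameter
kappa, Delta, u, eps = 0.5, -2.0, 1.0, 0.95
E, U = np.sqrt(N) * eps, u / N           # rescaling used for the thermodynamic limit

a = destroy(n_fock)
H = Delta * a.dag() * a + 1j * E * (a.dag() - a) \
    + 0.5 * U * a.dag() * a.dag() * a * a
L = liouvillian(H, [np.sqrt(2 * kappa) * a])

evals = np.linalg.eigvals(L.full())      # dense spectrum; fine for small cutoffs
re = np.sort(np.abs(evals.real))
gap = re[np.argmax(re > 1e-10)]          # smallest nonzero |Re zeta_k|
print("Liouvillian gap:", gap)
```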
Wehrl entropy production rate for the KBM {#Sec:Wehrl_entropy_prod}
=========================================
To study the thermodynamics of the KBM model, we use a phase space method based on the Husimi $Q$-function, introduced in Ref. [@goes_quantum_2020]. A quantum-phase space formulation of the entropy production was first introduced in Ref. [@santos_wigner_2017], and also experimentally assessed in Ref. [@brunelli_experimental_2018]. The problem was formulated in terms of the Wigner function, which generates physical probability densities for Gaussian systems subjected to Gaussian dynamics. This procedure makes it possible to circumvent the so-called ultra-cold catastrophe [@uzdin2019passivity]; that is, the divergence of the usual entropy production rate and entropy flux rate in the zero temperature limit. The Wigner function, however, can become negative, which would lead to complex entropies. To amend this, one can alternatively use the Husimi $Q$-function, defined as $$Q(\mu,\cj{\mu}) = \frac{1}{\pi}\bra{\mu}\rho\ket{\mu},$$ where $\ket{\mu}$ is a coherent state and $\cj{\mu}$ denotes complex conjugation.
S(Q) = - \int \dd^2\mu \; Q\ln Q.$$ Physically, $S(Q)$ quantifies the entropy of the system, convoluted with additional noise introduced by measuring in the coherent state basis. As a consequence, $S(Q)$ upper bounds the von Neumann entropy, $S(\rho)=-\Tr{\rho\ln{\rho}}$, i.e. $S(\rho)\leq S(Q)$. The Wehrl entropy may thus be viewed as a semi-classical approximation to the von Neumann entropy. But despite these limitations, this is currently the only available framework for dealing with the thermodynamics of DDPTs.
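Numerically, $S(Q)$ can be estimated by sampling the Husimi function on a phase-space grid and discretizing the integral defining the Wehrl entropy, as in the sketch below (QuTiP/NumPy). The grid, the parameters and the renormalization step are assumptions of the sketch; the value obtained is also only defined up to the additive constant set by the phase-space measure convention.

```python
# Sketch: Husimi Q-function and Wehrl entropy of a numerically obtained state.
import numpy as np
from qutip import destroy, steadystate, qfunc

n_fock = 40
kappa, Delta, U, E = 0.5, -2.0, 0.05, 2.0      # illustrative parameters
a = destroy(n_fock)
H = Delta * a.dag() * a + 1j * E * (a.dag() - a) \
    + 0.5 * U * a.dag() * a.dag() * a * a
rho = steadystate(H, [np.sqrt(2 * kappa) * a])

xvec = np.linspace(-8, 8, 201)
Q = qfunc(rho, xvec, xvec)                     # Husimi function on the grid
dmu = (xvec[1] - xvec[0])**2
Q = Q / (Q.sum() * dmu)                        # enforce normalization on the grid

S_wehrl = -np.sum(Q[Q > 0] * np.log(Q[Q > 0])) * dmu
print("Wehrl entropy S(Q):", S_wehrl)
```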
We proceed by mapping the master equation Eq. into a quantum Fokker-Planck equation $$\label{eq:QFPeqn}
\pd{t}Q = \mathcal{U}(Q) + \pd{\mu}J(Q)+\pd{\cj{\mu}}\cj{J}(Q)$$ where $\mathcal{U}(Q)$ and $J(Q)$ are differential operators associated with the unitary and dissipative parts of . The exact form of these quantities can be found using standard correspondence tables [@gardiner2004quantum]. The latter, in particular, reads $$\label{J}
J(Q)= - \kappa(\mu Q + \pd{\cj{\mu}}Q),$$ and can be interpreted as the irreversible quasi-probability current associated with the photon loss dissipator. One may verify that $J(Q)$ vanishes when $Q$ is the vacuum state, which is the fixed point of the dissipator (although not of the whole master equation, due to the presence of the unitary term). This corroborates its interpretation as a quasiprobability current [@santos_wigner_2017].
The unitary term, on the other hand, is somewhat lengthier and reads $$\label{eq:KBM_Q_unitary_contribution}
\mathcal{U}(Q) = \mathcal{U}_{\Delta} +\mathcal{U}_{\Epsilon} +\mathcal{U}_{U},$$ where $$\begin{aligned}
\label{eq:KBM_Q_unitary_contribution_Delta}
\mathcal{U}_{\Delta} &= i \Delta(\mu\pd{\mu}{Q}-\cj{\mu}\pd{\cj{\mu}}{Q}),\\[0.2cm]
\label{eq:KBM_Q_unitaryE}
\mathcal{U}_{\Epsilon} &= -\Epsilon (\pd{\mu}{Q}+\pd{\cj{\mu}}{Q}),\\[0.2cm]
\label{eq:KBM_Q_unitaryU}
\mathcal{U}_{U}&= \frac{i U}{2}[2|\mu|^2(\mu \pd{\mu}{Q}-\cj{\mu}\pd{\cj{\mu}}{Q})+\mu^2\dpd{\mu}{Q}-\cj{\mu}^2\dpd{\cj{\mu}}{Q}].\end{aligned}$$ One notices, in particular, that the photon-photon interaction term, being quartic, leads to terms in $\mathcal{U}_U$ containing second-order derivatives. This is a special feature of quantum non-Gaussian models. In classical Fokker-Planck equations, second-order derivatives stem only from dissipation. In quantum models, on the other hand, one may in general obtain derivatives of arbitrary orders.
Differentiating with respect to time, we find $$\label{eq:Wehrl_rate}
\frac{dS}{dt} = - \int \dd^2\mu \; (\pd{t}Q)\ln Q.$$ Inserting in , we can then split $dS/dt$ as \[c.f. Eq. \] $$\label{eq:Wehrl_rate_comparison}
\frac{dS}{dt} = \Pi_{\mathcal{U}} + \Pi_J - \Phi,$$ where $$\begin{aligned}
\label{eq:Pi_u_def}
\Pi_{\mathcal{U}} &:= -\int \dd^2\mu\; \mathcal{U}(Q)\ln Q,\\
\label{eq:dissipative_contribution_id}
\Pi_J - \Phi &:= -\int \dd^2\mu\;\Big\{\pd{\mu}J(Q)+\pd{\cj{\mu}}\cj{J}(Q)\Big\}\ln Q.\end{aligned}$$ The term $\Pi_{\mathcal{U}}$ represents how the unitary term contributes to $dS/dt$. This is a feature which is unique to phase-space entropies, such as , and is associated with their coarse-grained nature. It will be discussed further below. The terms $\Pi_J$ and $\Phi$, on the other hand, refer to the splitting of the dissipative contribution into a term associated with an entropy production rate, $\Pi_J$, and another identified as an entropy flux rate $\Phi$.
Dissipative contribution {#sec:Dissipative_Pi_Phi}
------------------------
As shown in [@goes_quantum_2020; @santos_wigner_2017], the correct splitting of $\Pi_J$ and $\Phi$ in Eq. should have the form
$$\begin{aligned}
\label{eq:entropy_production_rate}
\Pi_J &= \frac{2}{\kappa}\int \dd^2\mu\; \frac{|J(Q)|^2}{Q} \geq 0,\\
\label{eq:Total_entropy_flux_rate}
\Phi &= -\int \dd^2\mu\; \big(\cj{\mu}\, J(Q) + \mu\, \cj{J}(Q)\big) = 2\kappa \mean{\dg{a}a}.\end{aligned}$$
The quantity $J(Q)/Q$ can be interpreted as a phase-space velocity [@seifert_stochastic_2012], so that $\Pi_J$ is seen to be proportional to the mean squared phase space velocity, $\mean{|J(Q)/Q|^2}$.
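As a rough numerical illustration, both quantities can be evaluated directly from the Husimi function of a numerically obtained state: the current $J(Q)$ is built by finite differences and the integrals are discretized on the grid. The phase-space convention ($g=2$, so that $\mu = x + ip$), the grid, the assumed array indexing and the parameter values are all assumptions of this sketch.

```python
# Sketch: Pi_J and Phi from the Husimi current J(Q) on a phase-space grid.
import numpy as np
from qutip import destroy, steadystate, qfunc, expect

n_fock = 40
kappa, Delta, U, E = 0.5, -2.0, 0.05, 2.0      # illustrative parameters
a = destroy(n_fock)
H = Delta * a.dag() * a + 1j * E * (a.dag() - a) \
    + 0.5 * U * a.dag() * a.dag() * a * a
rho = steadystate(H, [np.sqrt(2 * kappa) * a])

xvec = np.linspace(-8, 8, 401)
dx = xvec[1] - xvec[0]
Q = qfunc(rho, xvec, xvec, g=2)                # assumed indexing: Q[p_index, x_index]
Q = Q / (Q.sum() * dx**2)
X, P = np.meshgrid(xvec, xvec)
mu = X + 1j * P

dQ_dx = np.gradient(Q, dx, axis=1)
dQ_dp = np.gradient(Q, dx, axis=0)
dQ_dmubar = 0.5 * (dQ_dx + 1j * dQ_dp)         # d/d(mu_bar) with mu = x + i p
J = -kappa * (mu * Q + dQ_dmubar)              # irreversible current J(Q)

mask = Q > 1e-12
Pi_J = (2 / kappa) * np.sum(np.abs(J[mask])**2 / Q[mask]) * dx**2
Phi = 2 * kappa * expect(a.dag() * a, rho)     # entropy flux = photon loss rate
print("Pi_J:", Pi_J, " Phi:", Phi)             # at the NESS, Pi_U + Pi_J = Phi
```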
The entropy flux, Eq. , is associated with the photon loss rate. To see that, we start from Eq. and compute the time-evolution of the cavity occupation $\langle a^\dagger a \rangle$, which reads $$\frac{\dd \langle a^\dagger a \rangle}{\dd t} = \Epsilon \langle a^\dagger + a\rangle - 2 \kappa \langle a^\dagger a\rangle.$$ The first term describes the effect of the external coherent pump in populating the cavity, while the second describes the loss rate to the environment. The entropy flux is thus found to be directly the photon loss rate, corroborating its interpretation as a “flux”.
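The identity above is also easy to verify numerically along a short time evolution, as in the sketch below (QuTiP; initial state, time window and parameter values are illustrative choices).

```python
# Sketch: checking d<a^dag a>/dt = E <a^dag + a> - 2 kappa <a^dag a> in time.
import numpy as np
from qutip import destroy, basis, mesolve

n_fock = 40
kappa, Delta, U, E = 0.5, -2.0, 0.05, 2.0
a = destroy(n_fock)
H = Delta * a.dag() * a + 1j * E * (a.dag() - a) \
    + 0.5 * U * a.dag() * a.dag() * a * a

tlist = np.linspace(0, 5, 501)
res = mesolve(H, basis(n_fock, 0), tlist, [np.sqrt(2 * kappa) * a],
              e_ops=[a.dag() * a, a.dag() + a])
n_t, x_t = res.expect

lhs = np.gradient(n_t, tlist)                  # numerical time derivative
rhs = E * x_t - 2 * kappa * n_t
print("max deviation:", np.max(np.abs(lhs - rhs)))
```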
Within the context of DDPTs, it is interesting to analyze the behavior of $\Pi_J$ and $\Phi$ in the thermodynamic limit (Sec. \[sec:Thermo\_limit\]). In this case it is convenient to define the fluctuation operator $\delta \dg{a}\delta a = \dg{a}a - N|\alpha|^2$. Substituting in Eq. leads to $$\label{eq:splitting_entropy_flux_rate}
\Phi = \Phi_{\text{ext}} + \Phi_q = 2\kappa |\alpha|^2 N + 2\kappa \mean{\delta \dg{a} \delta a},$$ where $\Phi_{\text{ext}} \propto O(N)$ is a clearly extensive contribution, while $\Phi_q$ is the contribution associated with quantum fluctuations.
A similar splitting can be done for the entropy production rate . We define the displaced phase space variable $\nu = \mu - \alpha\sqrt{N}$, which leads to a splitting of the current as $J = -\sqrt{N}\kappa \alpha Q + J_\nu(Q) $, where $J_\nu(Q) = - \kappa(\nu Q + \pd{\cj{\nu}}Q)$. Substituting this in , we then find $$\label{eq:Pi_d}
\Pi_J = \Pi_{\text{ext}}+\Pi_d =2\kappa|\alpha|^2 N +\frac{2}{\kappa}\int \dd^2\nu\;\frac{|J_\nu(Q)|^2}{Q}.$$ As can be seen, the first contribution to the entropy production rate is $\Pi_{\text{ext}}= \Phi_{\text{ext}}$. This means that the part of the entropy flow associated with the first moments $\alpha$ produces an equal amount of entropy (and hence does not affect the system entropy, according to Eq. ). The other contribution, $\Pi_d$, concerns only the quantum fluctuations (we do not call it $\Pi_q$ since there will also be a quantum contribution to $\Pi$ from the unitary part, as we now discuss).
Unitary contribution {#sec:unitary_contribution}
--------------------
Next we turn to the unitary term $\Pi_{\mathcal{U}}$ in . An interesting feature of the Husimi $Q$-function is that the only unitary terms which contribute to $dS/dt$ are those involving second-order derivatives. That is, the terms associated with $\Delta a^\dagger a$ and $i \Epsilon(a^\dagger -a)$, Eqs. and , vanish. This can be proven by integrating by parts multiple times. The same is also true for the first two terms in Eq. . Thus, the only non-trivial contributions come from the last two terms. Integrating by parts multiple times, these can be written as
$$\label{eq:Pi_U_contribution}
\Pi_{\mathcal{U}} =\frac{i U}{2} \int \dd^2\mu\frac{( \mu ^2(\pd{\mu}Q)^2-\cj{\mu}^2 (\pd{\cj{\mu}}(Q))^2)}{Q}.$$
This contribution to the entropy production stems from the second-order derivatives in the quantum Fokker-Planck equation. Usually, second-order derivatives are associated with diffusion. Since, in this case, these derivatives refer to the unitary part, this cannot be standard diffusion, which would not conserve energy. Instead, the structure appearing in the last two terms of Eq. represents a special type of diffusion, which is compatible with unitarity. This type of phenomenon was studied in [@Altland2012; @Altland2012a], in the context of the Dicke model, where the authors showed that it corresponds to a type of decoherence, linked with the coarse-graining introduced by the phase-space representation.
Unlike $\Pi_J$, which is always non-negative (as expected by the 2nd law), the sign of $\Pi_\mathcal{U}$ is not well defined. This, in a sense, is expected since this quantity measures the effect of the unitary dynamics on $S(Q)$. And one cannot expect that all unitary dynamics always lead to an increase in the Wehrl entropy. Notwithstanding, we have recently carried out a study of this type of term in the context of dynamical phase transitions in the Lipkin-Meshkov-Glick model [@goes_wehrl_2020]. There, we found that even though $\Pi_\mathcal{U}$ did not have a well defined sign, this became asymptotically the case in the thermodynamic limit. Thus, our expectation is that for critical models, this term should tend to be non-negative in the thermodynamic limit (although it is not clear whether this could somehow be proved in general).
We have also carried out a detailed analysis of $\Pi_\mathcal{U}$ in the NESS of the KBM model [@goes_quantum_2020]. We found that, quite surprisingly, it behaved similarly to what is expected from the *classical* entropy production across non-equilibrium transitions [@herpich_collective_2018; @noa_entropy_2019; @crochik_entropy_2005; @shim_macroscopic_2016; @herpich_universality_2019].
Quantum entropy production dynamics {#sec:Quench_dyn_ent_prod}
===================================
Quench protocol and computational details
-----------------------------------------
We now arrive at the core of our paper, where we probe the dynamics of the different entropic quantities during a quench protocol of the KBM model. The quench makes the system evolve from the initial NESS, with pump $\epsilon_i$, to a final NESS, with pump $\epsilon_f$. Both initial and final states are already non-equilibrium in nature. In addition, however, there will also be an entropy production associated with the quench dynamics, a highly non-adiabatic process.
From this point on, all quantities will be given in units of $\kappa = 1/2$. We fix $\Delta = -2$ and $u = 1$. We also start all quenches from $\epsilon_i = 0.5$. The only free parameters are then $\epsilon_f$ and $N$. We will make frequent reference to Fig. \[fig:firstmoment\_with\_plotmarkers\], which shows the different values we chose for $\epsilon_f$. For these parameters, the mean-field bistability region occurs between $\epsilon_- = 0.701373$ and $\epsilon_+ =1.16616$ \[c.f. Eq. \]. Moreover, the critical point (determined numerically) is at $\epsilon_c = 0.933$. The system is thus always initialized in the dark phase, $\epsilon_i=0.5 < \epsilon_c$. We recall the nomenclature we adopted: the dark phase refers to the lower branch, where $\epsilon < \epsilon_c$, and the bright phase refers to the upper branch, where $\epsilon > \epsilon_c$. Moreover, the region $[\epsilon_-, \epsilon_+]$ is referred to as the bistability region.
All simulations are done by expressing the relevant operators in the Fock basis, using a sufficiently large number of basis states $n_\text{max}$ to ensure convergence. Manipulations involving the Liouvillian are then carried out using standard vectorization methods [@Vectorization]. The quench protocol we investigated can be summarized as follows:
1. The system is initialized in the steady-state of the Liouvillian $\mathcal{L}_i$, associated with $\epsilon = \epsilon_i$. That is, $\rho_i$ is the solution of $\mathcal{L}_i(\rho_i) = 0$.
2. At time $t=0$ we abruptly change the pump amplitude to a certain value $\epsilon_f$, which defines a new Liouvillian $\mathcal{L}_f$;
3. For $t>0$ the state evolution is governed by the final Liouvillian according to $$\rho_t=e^{\mathcal{L}_f t}\rho_i,$$ which is the formal solution of . $\rho_t$ is computed numerically by discretizing the evolution in small time steps, usually $\Delta t = 0.2$;
4. At each time step we compute $\langle a \rangle$ and $\langle a^\dagger a \rangle$. From the former we find $\alpha = \langle a \rangle/\sqrt{N}$ and from the latter we obtain the net entropy flux rate , $\Phi = 2 \kappa \langle a^\dagger a \rangle$.
5. The Husimi $Q$-function is computed numerically at each time step, by constructing approximate coherent states, $$\ket{\mu} = e^{-|\mu|^2/2} \sum\limits_{n=0}^{n_{\text{max}}} \frac{\mu^n}{\sqrt{n!}} \ket{n}.$$ We compute $Q(\mu, \bar{\mu})$ for a sufficiently fine grid of points ($\mu, \bar{\mu})$ in the complex plane.
6. The entropy production rates $\Pi_J$ and $\Pi_\mathcal{U}$ in Eqs. and are computed numerically, by integrating the corresponding functions over the complex plane grid. Derivatives of the Husimi function can be computed efficiently using Bargmann states [@gardiner2004quantum], as detailed in Appendix \[App:Details\_Pies\].
These simulations are computationally costly and the cost increases significantly with $N$. The reason is twofold. First, larger values of $N$ require a larger number of basis elements $n_\text{max}$, to ensure convergence. Second, larger $N$ also requires larger grids in the complex plane, which significantly increases the cost of the numerical integrations involved in computing $\Pi_J$ and $\Pi_\mathcal{U}$.
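For concreteness, a compact sketch of steps 1-5 is given below, using QuTiP. The Hamiltonian form and the $N$-scalings of the Kerr and pump terms are assumptions consistent with the conventions above, and the truncation, grid and time step are illustrative rather than converged choices; the Husimi function is evaluated directly as $Q = \langle\mu|\rho|\mu\rangle/\pi$, as defined in Appendix \[App:heterodyne\_mesurement\], to avoid library-specific scaling conventions.

```python
import numpy as np
import qutip as qt

N, n_max = 5, 60                       # n_max must be increased until converged
kappa, Delta, u = 0.5, -2.0, 1.0
eps_i, eps_f = 0.5, 1.1

a = qt.destroy(n_max)

def liouvillian(eps):
    # Assumed KBM Hamiltonian with U = u/N and Eps = eps*sqrt(N)
    H = (Delta * a.dag() * a
         + 0.5 * (u / N) * a.dag() ** 2 * a ** 2
         + 1j * eps * np.sqrt(N) * (a.dag() - a))
    return qt.liouvillian(H, [np.sqrt(2 * kappa) * a])

rho_i = qt.steadystate(liouvillian(eps_i))             # step 1: initial NESS

times = np.arange(0.0, 40.0, 0.2)                      # steps 2-3: sudden quench
result = qt.mesolve(liouvillian(eps_f), rho_i, times)

alpha = np.array(qt.expect(a, result.states)) / np.sqrt(N)           # step 4
Phi = 2 * kappa * np.array(qt.expect(a.dag() * a, result.states))

xs = np.linspace(-6, 6, 121)           # step 5: Husimi Q = <mu|rho|mu>/pi
rho_T = result.states[-1]              # grid must stay where the truncation is reliable
Q = np.array([[qt.expect(rho_T, qt.coherent(n_max, x + 1j * y)) / np.pi
               for x in xs] for y in xs])
```

The phase-space integrals of step 6 can then be carried out on this grid, following the expressions of Appendix \[App:Details\_Pies\].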
We begin in Sec. \[sec:Gauss\_tests\] by showing that, in general, the dynamics is not Gaussian. This is interesting since, in the thermodynamic limit, the NESS itself would be Gaussian. However, due to the highly non-adiabatic nature of the quench, the intermediate states will in general deviate significantly from Gaussianity. Next we turn to the properties of the entropy production rate and flux rate. In Sec. \[Sec:Entropy\_flux\_dyn\] we study the entropy flux rate $\Phi$ in Eq. , and in the subsequent subsection we study the component $\Pi_d$ \[of $\Pi_J$, Eq. \] and the unitary contribution $\Pi_\mathcal{U}$ in .
Gaussianity of the state during the dynamics {#sec:Gauss_tests}
--------------------------------------------
We can quantify the degree of non-Gaussianity in $\rho_t$ by comparing it with a corresponding Gaussian state $\rho_t^G$ having the same first and second moments. Following [@genoni_quantifying_2010], we do this using the quantum Kullback-Leibler divergence $$\label{eq:gaussian_measure}
\mathcal{G}(\rho_t) = D(\rho_t || \rho_t^G) = \tr\Big\{ \rho_t \ln \rho_t - \rho_t \ln \rho_t^G\Big\}.$$ The state is Gaussian if $\mathcal{G} = 0$ and non-Gaussian otherwise. The results for $\mathcal{G}(\rho_t)$, for the quenches laid out in Fig. \[fig:firstmoment\_with\_plotmarkers\], are presented in Fig. \[fig:gaussian\_measure\].
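A possible numerical implementation of this measure is sketched below. It uses the identity $D(\rho_t\|\rho_t^G) = S(\rho_t^G) - S(\rho_t)$, valid when $\rho_t^G$ is the Gaussian state with the same first and second moments as $\rho_t$, together with the quadrature convention $x = (a+a^\dagger)/\sqrt{2}$, $p = (a-a^\dagger)/(i\sqrt{2})$ (our own choice).

```python
import numpy as np
import qutip as qt

def non_gaussianity(rho, n_max):
    """G(rho) = S(rho_G) - S(rho), with rho_G the Gaussian reference state."""
    a = qt.destroy(n_max)
    x = (a + a.dag()) / np.sqrt(2)
    p = (a - a.dag()) / (1j * np.sqrt(2))
    R = [x, p]
    m = [float(np.real(qt.expect(r, rho))) for r in R]
    sigma = np.empty((2, 2))                 # symmetrized covariance matrix
    for i in range(2):
        for j in range(2):
            sym = qt.expect(R[i] * R[j] + R[j] * R[i], rho) / 2
            sigma[i, j] = np.real(sym) - m[i] * m[j]
    nu = np.sqrt(np.linalg.det(sigma))       # symplectic eigenvalue (vacuum: 1/2)
    d = max(nu - 0.5, 1e-15)                 # guard against log(0) for Gaussian pure states
    S_gauss = (nu + 0.5) * np.log(nu + 0.5) - d * np.log(d)
    return S_gauss - qt.entropy_vn(rho)      # von Neumann entropy in nats
```

Evaluating this at each time step of the quench yields the degree of non-Gaussianity as a function of time.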
![Degree of non-Gaussianity \[Eq. \], as a function of time, for the different quench protocols in Fig. \[fig:firstmoment\_with\_plotmarkers\]. Each curve corresponds to a different value of $N$, as shown in panel (a). Other parameters are the same as Fig. .[]{data-label="fig:gaussian_measure"}](Gauss_test_1.pdf){width="45.00000%"}
The quench in figure \[fig:gaussian\_measure\](a) goes to $\epsilon_f=0.6 < \epsilon_-$ and thus remains in the dark phase, outside the bistability region. We observe that most evolutions remain nearly Gaussian, except $N=1$. This is consistent with the fact that Gaussianity, even in the NESS, is only expected in the thermodynamic limit. In figure \[fig:gaussian\_measure\](b) the quench goes to $\epsilon_f = 0.8$. Again, it is still in the dark phase, but now it lies inside the bistability region and below the critical point; that is, $\epsilon_- < \epsilon_f < \epsilon_c$. Now we observe that $N=5$ is not even close to being Gaussian, but one does roughly find Gaussianity for $N=10$ onward.
The real action starts in Fig. \[fig:gaussian\_measure\](c), where $\epsilon_f=\epsilon_c$. The corresponding final NESS will be roughly a mixture of the dark and bright phases. We observe that the dynamics is markedly non-Gaussian for any value of $N$. But for larger values of $N$ one still finds that there is a tendency toward becoming Gaussian.
The situation changes completely in figure \[fig:gaussian\_measure\](d), where we show results for $\epsilon_f=1.1$, thus representing a quench from the dark to the bright phase, but within the bistability region, $\epsilon_c < \epsilon_f < \epsilon_+$ (see Fig. \[fig:firstmoment\_with\_plotmarkers\]). In this case we find that $\mathcal{G}(\rho_t)$ *increases* with $N$. This indicates that, in this case, a Gaussian limit would never be reached, no matter the values of $N$ used. It therefore represents the most dramatic example, over all quenches, of a markedly non-Gaussian dynamics. A similar behavior is observed in Fig. \[fig:gaussian\_measure\](e), where $\epsilon_f = \epsilon_+$. Unlike (d), however, we now see that $\mathcal{G}(\rho_t)$ tends to fall for long times. This is related to the time-scales of the problem, which are much slower in Fig. \[fig:gaussian\_measure\](d).
Finally, in figure \[fig:gaussian\_measure\](f) we show results for $\epsilon_f = 1.3 > \epsilon_+$. It is found that, for intermediate times, the state is non-Gaussian for any size $N$, but quickly tends back toward zero (notice the different horizontal scale in comparison with (d) and (e)).
The above results therefore clearly show that, during the quench dynamics, the state will in general be highly non-Gaussian. This justifies the use of the full numerical algorithm described above, in contrast to, say, Gaussianization techniques. It also highlights one of the advantages of using the Husimi function, which is non-negative for any quantum state (in contrast, for instance, with the Wigner function which can become negative).
Entropy flux rate dynamics {#Sec:Entropy_flux_dyn}
--------------------------
Fig. \[fig:Phi\_T\_quench\] summarizes the results for the entropy flux rate . This quantity is dominated by the first term, which is extensive in $N$. The dashed black lines represent the values of $\Phi_{\text{NESS}}$ computed from the exact solution . In Figs. \[fig:Phi\_T\_quench\](c) and \[fig:Phi\_T\_quench\](d), the MFA solution, Eq. , is represented by red dash-dotted lines, for comparison.
In Figs. \[fig:Phi\_T\_quench\](a) and \[fig:Phi\_T\_quench\](b) the system relaxes to the exact NESS after a relatively short transient (when contrasted to (d)-(f)). This is related to the fact that both the initial and final NESSs belong to the same manifold of dark-phase states. The behaviour of the flux is in agreement with the non-Gaussianity in Fig. \[fig:gaussian\_measure\]. In Fig. \[fig:Phi\_T\_quench\](c), where the quench goes to the critical point $\epsilon_c$, the system tends to relax to the NESS predicted by the MFA solution in Eq. as $N\rightarrow \infty$. Recall that this quench was the first to present a non-Gaussian dynamics.
In Fig. \[fig:Phi\_T\_quench\](d), where the quench is toward the bright phase, we begin to observe a non-trivial dependency on $N$ (all plots refer to $\Phi/N$). Recall that, in this case, the degree of non-Gaussianity increases with $N$. One also notices that in this case the system does not relax to the exact NESS, nor to the MFA solution, even for the significantly larger simulation times (compare the horizontal axis with that of the other panels). Given a sufficient amount of time, the system would likely relax. However, this requires significantly longer simulations, which are outside the numerical capabilities of our code.
Finally, in Figs. \[fig:Phi\_T\_quench\](e) and (f), one observes a much more orderly transition between the two NESSs, as depicted by the dashed black lines. The transient time scales also tend to be relatively $N$-independent, except for $N = 1$.
In Fig. \[fig:Phiq\_quench\] we present the plots for the quantum entropy flux $\Phi_q$ in Eq. . This quantity behaves similarly to $\Pi_d$, which will be discussed in the next section. Thus, we shall comment more thoroughly on it below. At this point, we just call attention to the overall vertical scale of $\Phi_q$, in comparison with $\Phi$ in Eq. . The latter is normalized by $N$. Notwithstanding, for certain quenches one finds that $\Phi_q$ can reach values which are comparable in magnitude to $\Phi_\text{ext}$. This is particularly true for $\epsilon_f > \epsilon_c$ (Figs. \[fig:Phiq\_quench\](d)-(f)).
Quantum entropy production dynamics {#quantum-entropy-production-dynamics}
-----------------------------------
![ $\Pi_d$, Eq. , as a function of time. The black dashed lines correspond to the NESS values in the thermodynamic limit (the line in panel (c) is not shown since it falls outside the scale). Other details are as in Fig. \[fig:gaussian\_measure\]. []{data-label="fig:Pies1_quench"}](Pies_1_Quench_dyn_main.pdf){width="45.00000%"}
In this section we present and compare the dynamics of the quantum entropy production rate due to dissipation, $\Pi_d$ (Fig. \[fig:Pies1\_quench\]), and due to the unitary contribution, $\Pi_{\mathcal{U}}$ (Fig. \[fig:Pies2\_quench\]). We begin by comparing the two quantities in the cases of (a) $\epsilon_f = 0.6$, (b) $\epsilon_f = 0.8$ and (c) $\epsilon_f = \epsilon_c$. In these three cases, we observe that $\Pi_d$ and $\Pi_{\mathcal{U}}$ are of the same order of magnitude. Physically, this means that irreversibility due to dissipation is comparable to the one introduced by the coarse-graining. We call attention to a non-trivial $N$-dependency in $\Pi_d$ in Fig. \[fig:Pies1\_quench\](c), where the curve for $N= 5$ is actually above $N=1$, unlike all other cases. It is not clear to us precisely why this happens.
Conversely, for the quenches to (d) $\epsilon_f = 1.1$, (e) $\epsilon_f = \epsilon_+$ and (f) $\epsilon_f = 1.3$, we find that $\Pi_d \gg \Pi_\mathcal{U}$; that is, dissipation becomes the main source of irreversibility. In all these cases both $\Pi_d$ and $\Pi_{\mathcal{U}}$ present an $N$-dependent transient, peak at some instant of time, and then eventually relax back to the NESS value. Notice how the maximal values reached by $\Pi_d$ are significantly higher than those of the NESS. This allows one to conclude that the transition from a dark to a bright NESS is accompanied by a significant production of entropy. Please note also the horizontal scales, showing that the relaxation times in Figs. \[fig:Pies1\_quench\](d) and \[fig:Pies2\_quench\](d) are significantly larger. Conversely, in Figs. \[fig:Pies1\_quench\](f) and \[fig:Pies2\_quench\](f), the relaxation times reduce significantly. Notwithstanding, the maximum entropy production rate is still very high.
These results show that the thermodynamic response of the system depends sensitively on the final pump value $\epsilon_f$. When the quench is to the same phase we find that $\Pi_d \approx \Pi_{\mathcal{U}}$, while when we abruptly change the phase $\Pi_d \gg \Pi_{\mathcal{U}}$: in the former, the contributions of the dissipation and of the uncertainty introduced by the coarse-graining are comparable, while in the latter the dissipation dominates over the measurement procedure. When the system is quenched toward the bright phase, one finds a strong dependency on $N$, together with a peak of the entropy production rates before the relaxation to the NESS. Whether or not $\epsilon_f$ lies inside the bistability region plays an essential role in the time-scales involved, which become significantly longer due to the metastable character of this region.
Conclusion and outlook {#Sec:conclusions}
======================
Entropy production is the key concept characterizing systems out of equilibrium. And much is still unknown about how it behaves as a system undergoes a non-equilibrium transition. In this paper we have employed a phase-space formalism to characterize the entropy production of the Kerr bistability model, a prototypical model of driven-dissipative quantum optical systems, presenting a discontinuous transition. This paper complements [@goes_quantum_2020], which studied the NESS of this model. Here our focus was on dynamical aspects of the entropy production in a sudden quench scenario.
We showed that, in general, the state during the dynamics is highly non-Gaussian. By analyzing the quenches to different representative configurations, we have found that the thermodynamic response of the system presents at least 3 markedly different behaviours. For quenches within the same phase, the unitary and dissipative contributions to the entropy production are of the same order of magnitude and the time-scales involved are all relatively short and $N$-independent. For quenches from the dark to the bright phase, but still inside the bistability region, we find that $\Pi_d \gg \Pi_\mathcal{U}$ and the time-scales become significantly longer. Finally, for quenches from the dark to the bright phase, but outside the bistability region, one still has $\Pi_d \gg \Pi_\mathcal{U}$, but now the time-scales become once again relatively short and $N$-independent.
By gauging how far the system is from equilibrium, together with the relative contribution from unitary and dissipative dynamics, this study has therefore helped shed light on the intricate interplay between criticality and dissipation in non-equilibrium transitions. It also serves to tighten the connection between quantum optical models and statistical mechanics.
Finally, we believe that these results also show the flexibility of the Wehrl entropy production framework in dealing with driven-dissipative quantum optical systems far from equilibrium. In future research, we plan to consider infinitesimal quenches, as well as models presenting continuous transitions. From a computational point of view, these studies would benefit tremendously from more efficient methods of computing the Husimi $Q$-function and the associated phase space quantities. This would be an absolute necessity if one is to extend these studies, for instance, to multimode systems such as the Dicke model or the optical parametric oscillators.
*Acknowledgements -* The authors acknowledge fruitful discussions with C. Fiore and J. Goold. G. T. L. acknowledges the financial support of the São Paulo Funding Agency FAPESP (Grants No. 2017/50304-7, 2017/07973-5 and 2018/12813-0) and the Brazilian funding agency CNPq (Grant No. INCT-IQ 246569/2014-0). B. O. G. acknowledges the financial support from the Brazilian funding agencies CNPq and Capes, and thanks D. M. Valente, S. V. Moreira and J. P. Santos for insightful comments on the work.
\[Appendix\]
Heterodyne measurements {#App:heterodyne_mesurement}
=======================
In order to clarify the physical content of the Wehrl entropy, Eq. , we briefly review here the interpretation of the Husimi $Q$-function as the probability distribution of the outcomes of a heterodyne measurement. We start by considering a generalized measurement, described by the continuous set of Kraus operators $$M_{\mu} = \frac{1}{\sqrt{\pi}}\ket{\mu}\bra{\mu}.$$ This set is properly normalized, which is a consequence of the (over)completeness of the coherent-state basis, $$\int \dd^2\mu \;\dg{M}_{\mu} M_{\mu}=\int \dd^2\mu \;\;\frac{\ket{\mu}\bra{\mu}}{\pi} =1.$$ Thus, the probability of obtaining outcome $\mu$ will be given by $$p_{\mu}=\tr{M_{\mu}\rho \dg{M}_{\mu}} = \frac{1}{\pi}\bra{\mu}\rho\ket{\mu} \equiv Q(\mu,\cj{\mu}),$$ which is precisely the Husimi $Q$-function. The Wehrl entropy can therefore be interpreted in an operational sense, as the entropy associated with the outcome probability distribution of a heterodyne measurement. This highlights its coarse-grained character, as it also encompasses the additional noise introduced by the measurement.
Computation of $\Pi_J$ and $\Pi_{\mathcal{U}}$ from Bargmann states {#App:Details_Pies}
==================================================================
To compute $\Pi_J$ and $\Pi_{\mathcal{U}}$ numerically, we make use of the Bargmann state, defined as $$\label{eq:def_Barg_state}
||\mu\rangle = \exp{\frac{|\mu|^2}{2}}\ket{\mu},$$ where $\ket{\mu}$ is the usual coherent state. This state has the useful property that the action of the derivatives with respect to $\mu$ and $\cj{\mu}$ is mapped into the action of the creation/annihilation operators as $$\label{def:Barg_derivatives}
\begin{split}
\pd{\mu}{||\mu\rangle} &= \dg{a}||\mu\rangle\\
\pd{\cj{\mu}}{\langle\mu||} &= \langle\mu||a
\end{split}$$ We can use this to write the phase-space currents in a computationally more convenient way. First we express the $Q$-function as $$Q(\mu,\cj{\mu}) = \frac{\exp{-|\mu|^2}}{\pi}\langle\mu||\rho||\mu\rangle.$$ Using Eq. to compute $\pd{\cj{\mu}}{Q}$, we then find $$\label{eq:1st_Barg_derivative}
\begin{split}
\pd{\cj{\mu}}{Q} &= - \mu Q +\frac{\exp{-|\mu|^2}}{\pi}\langle\mu||a\rho||\mu\rangle\\
&= - \mu Q +\frac{1}{\pi}\langle\mu|a\rho|\mu\rangle,
\end{split}$$ where, in the second line, we already recast the state in terms of usual coherent states. We then find that $$J(Q) = - \frac{\kappa}{\pi}\langle\mu|a\rho|\mu\rangle$$ which allows us to write $\Pi_J$ in Eq. as, $$\Pi_J = \frac{2\kappa}{\pi}\int \dd^2\mu\;\frac{|\langle\mu|a\rho|\mu\rangle|^2}{\langle \mu |\rho | \mu \rangle}.$$ Now, we turn to the computation of Eq. . First, we note it can be rewritten as, $$\Pi_{\mathcal{U}} = U \int \dd^2\mu\; \Im{\frac{\cj{\mu}^2(\pd{\cj{\mu}}{Q})^2}{Q}}.$$ Hence, using Eq. and the definition of $Q(\mu,\cj{\mu})$, we obtain $$\Pi_{\mathcal{U}} = U \int \dd^2\mu\; \Im{\frac{|\mu|^2}{\pi}\bigpar{|\mu|^2\bra{\mu}\rho\ket{\mu} + 2\cj{\mu}\bra{\mu} a\rho\ket{\mu}} + \cj{\mu}\frac{\bra{\mu}a\rho\ket{\mu}^2}{\bra{\mu}\rho\ket{\mu}}}$$ During the simulations, the quantities $\langle \mu |\rho | \mu \rangle$ and $\langle\mu|a\rho|\mu\rangle$ are directly computed from $\rho_t$.
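A direct, if slow, numerical implementation of the expression for $\Pi_J$ above is sketched here. The grid extent and resolution are placeholders, the Fock truncation must comfortably exceed the largest $|\mu|^2$ on the grid, and $\Pi_\mathcal{U}$ can be accumulated in the same loop from the analogous overlaps.

```python
import numpy as np
import qutip as qt

def pi_J(rho, kappa, n_max, half_width=5.0, n_grid=101):
    """Pi_J = (2 kappa / pi) * int d^2mu |<mu|a rho|mu>|^2 / <mu|rho|mu>."""
    a = qt.destroy(n_max)
    arho = a * rho
    xs = np.linspace(-half_width, half_width, n_grid)
    dA = (xs[1] - xs[0]) ** 2                     # area element d^2 mu
    total = 0.0
    for x in xs:
        for y in xs:
            mu = qt.coherent(n_max, x + 1j * y)
            den = np.real(qt.expect(rho, mu))     # <mu|rho|mu>
            if den > 1e-14:                       # skip negligible-weight points
                total += abs(qt.expect(arho, mu)) ** 2 / den
    return 2 * kappa / np.pi * total * dA
```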
---
abstract: |
The presence of magneto-acoustic waves in magnetic structures in the solar atmosphere is well-documented. Applying the technique of solar magneto-seismology (SMS) allows us to infer the background properties of these structures. Here, we aim to identify properties of the observed magneto-acoustic waves and study the background properties of magnetic structures within the lower solar atmosphere.
Using the Dutch Open Telescope (DOT) and Rapid Oscillations in the Solar Atmosphere (ROSA) instruments, we captured two series of short-cadence, high-resolution intensity images of two isolated magnetic pores. Combining wavelet analysis and empirical mode decomposition (EMD), we determined characteristic periods within the cross-sectional (*i.e.,* area) and intensity time series. Then, by applying the theory of linear magnetohydrodynamics (MHD), we identified the mode of these oscillations within the MHD framework.
Several oscillations have been detected within these two magnetic pores. Their periods range from 3 to 20 minutes. Combining wavelet analysis and EMD enables us to confidently find the phase difference between the area and intensity oscillations. From these observed features, we concluded that the detected oscillations can be classified as slow sausage MHD waves. Further, we determined several key properties of these oscillations such as the radial velocity perturbation, magnetic field perturbation and vertical wavenumber using solar magneto-seismology. The estimated range of the related wavenumbers reveals that these oscillations are trapped within these magnetic structures. Our results suggest that the detected oscillations are standing harmonics, and this allows us to estimate the expansion factor of the waveguides by employing SMS. The calculated expansion factor ranges from 4 to 12.
author:
- 'N. Freij$^{1}$, I. Dorotovič$^{2}$, R. J. Morton$^{3}$, M. S. Ruderman$^{1,4}$, V. Karlovský$^{5}$ and R. Erdélyi$^{1,6}$'
bibliography:
- 'saus\_seis.bib'
title: 'On the properties of slow MHD sausage waves within small-scale photospheric magnetic structures'
---
Introduction {#Intro}
============
Improvements in space- and ground-based solar observations have permitted the detection and analysis of small-scale waveguide structures in the Sun’s lower atmosphere. One such structure is a magnetic pore: a magnetic concentration with a diameter that ranges from $0.5$ to $6$ Mm with magnetic fields of $1$ to $3$ kG that typically last for less than a day [@1970SoPh...13...85S]. Magnetic pores are highly dynamic objects due to e.g., constant buffeting from the surrounding granulation in the photosphere. A collection of flows and oscillations have been observed within and around magnetic pores . The major apparent difference between a sunspot and a magnetic pore is the lack of a penumbra: a region of strong and often very inclined magnetic field that surrounds the umbra.
It is important to understand which magnetohydrodynamic (MHD) waves or oscillatory modes can be supported in magnetic flux tubes in the present context. The reason for this is two-fold: it clarifies the observational signatures of each mode, and it indicates whether that mode will manifest given the conditions of the local plasma. Furthermore, absorption of the global acoustic *p*-mode and flux tube expansion will induce a myriad of MHD waves. @roberts investigated how the slow mode may be extracted elegantly from the governing MHD equations, considering the special case of a vertical uniform magnetic field in a vertically stratified medium. The approach may, in principle, be generalized to non-uniform magnetic fields [@luna-cardozo] and by taking into account non-linearity, background flows and dissipative effects. However, as we will show below, a first useful insight can still be made within the framework of ideal linear MHD applied to a static background.
It is very difficult to directly (or often even indirectly) measure the background physical parameters (plasma-$\beta$ or density, for example) of localised solar structures. For the magnetic field, the most common method is to measure the Stokes profiles of element lines in the lower solar atmosphere and then perform Stokes inversion in order to determine the magnetic field vectors. More recently, the development of solar magneto-seismology (SMS) has allowed the estimation of the local plasma properties which are generally impossible to measure directly [@Andries2009; @Ruderman2009]. While this technique has been used for many years in the solar corona, only recently has it been applied to the lower solar atmosphere. For example, @fujimura accomplished this by observing and identifying wave behaviour in lower solar structures and interpreting the observed waves as standing MHD waves. A recent review on lower solar atmospheric application of MHD waves is given by e.g. @banerjee [@jess2015multiwavelength] and partially by @2013SSRv..175....1M in the context of Alfv[é]{}n waves.
Extensive numerical modelling of wave propagation in small-scale flux tubes has been undertaken by @khomenko [@hasan2008dynamics; @fedun2; @fedun1; @2011ApJ...730L..24K; @2011AnGeo..29..883S; @2012ApJ...755...18V; @wedemeyer2012magnetic; @Mumford2015]. These models are of localised magnetic flux tubes and the effect of vertical, horizontal or torsional coherent (sub) photospheric drivers mimicking plasma motion at (beneath) the solar surface on these flux tubes. It was found that extensive mode conversion may take place within flux tubes as well as the generation of slow and fast MHD modes or the Alfv[é]{}n mode that depended on the exact driver used.
@vogler and @cameron, using the MURaM code, simulated larger scale magnetic structures, including pores, to build up a detailed picture of the physical parameters (density, pressure and temperature) as well as flows in and around these structures, which has good observational agreement.
[@doretala] observed the evolution of a magnetic pore’s area for 11 hours in the sunspot group NOAA 7519 [see @sobotka; @doretalb]. They reported that the periodicities of the detected perturbations were in the range of 12-97 minutes and were interpreted as slow magneto-acoustic-gravity sausage MHD waves. @morton, using the Rapid Oscillations in the Solar Atmosphere (ROSA) instrument installed on the Dunn Solar Telecope (DST), also detected sausage oscillations in a solar pore. The lack of Doppler velocity data made it difficult to conclude whether the waves were propagating or standing. The oscillatory phenomena were identified using a relatively new technique (at least to the solar community), known as Empirical Mode Decomposition (EMD). The EMD process decomposes a time series into Intrinsic Mode Functions (IMFs) which contain the intrinsic periods of the time series. Each IMF contains a different time-scale that exists in the original time series [see @terradas]. This technique was first proposed by @huang and offers certain benefits over more traditional methods of period analysis, such as wavelets or Fourier transforms.
observed several large magnetic structures and analysed the change in time of the cross-sectional area and total intensity of these structures. Phase relations between the cross-sectional area and total intensity have been investigated by @michal2013 [@moreels2013phase]. The phase difference found observationally was 0$\degree$, i.e., in-phase, which matches the phase relation for slow MHD sausage waves. Further, these magnetic structures were able to support several oscillations with periods not too dissimilar to standing mode harmonics in an ideal case. [@2015ApJ...806..132G] observed a magnetic pore within Active Region NOAA 11683, using high-resolution scans of multiple heights of the solar atmosphere with ROSA and the Interferometric Bidimensional Spectrometer (IBIS) on the DST. They showed that sausage modes were present in all the observed layers, and that these modes were damped whilst they propagated into the higher levels of the solar atmosphere. The estimated energy flux suggests that these waves could contribute to the heating of the chromosphere.
Standing waves are expected to exist in the lower solar atmosphere bounded by the photosphere and transition region [@mein; @leibacher]. Numerical models also predict this behaviour [@zhugzhda1; @erdelyi; @malins]. Standing waves have been potentially seen in the lower solar atmosphere; using the Hinode space-borne instrument suite, @fujimura observed pores and inter-granular magnetic structures, finding perturbations in the magnetic field, velocity and intensity. The phase difference between these quantities gave an unclear picture as to what form of standing waves these oscillations were. Standing slow MHD waves have been detected in coronal loops with SoHO and TRACE (for reviews see, e.g., [@wang2011standing; @2012RSPTA.370.3193D]) and transverse (kink) oscillations have been detected in coronal loops [e.g @1999ApJ520880A; @taroyan; @oshea; @2008ApJ...687L..45V for a review see [@Andries2009; @Ruderman2009]]. Harmonics of a standing wave have potentially been seen in flare loops using ULTRACAM [e.g., @mathioudakis]. @fleck also reported the observation of standing waves in the lower solar chromosphere, by measuring the brightness and velocity oscillations in Ca II lines.
In this article, we exploit phase relations between the area and intensity of two magnetic pores, in order to identify the wave mode of the observed oscillations. This information combined with the methods of solar magneto-seismology, allows us to determine several key properties of these oscillations and of the magnetic structures themselves. Section \[DnA\] details the observational data, its reduction and the analysis method. Section \[Wave\] discusses the theory of the applicable MHD wave identification as well as the solar magneto-seismology equations used to estimate properties of the observed oscillations. Section \[res\] contains the results of the data analysis, while Sect. \[conc\] summarises.
Data and Method of Analysis {#DnA}
===========================

Two high-resolution datasets are investigated within this article. The first dataset was acquired using the Dutch Open Telescope (DOT) [@rutten], located at La Palma in the Canary Islands. The data were taken on $12$th August $2007$ with a G-band ($430.5$ nm) filter which samples the low photosphere and has a formation height of around $250$ km above the solar surface. The observation started at $08$:$12$ UTC and lasted for $92$ minutes, with a cadence of $15$ seconds and a total field-of-view (FOV) of $60$ Mm by $40.75$ Mm. The DOT is able to achieve high spatial resolution ($0.071''$ per pixel) thanks to the DOT reduction pipeline. This comes at the cost of temporal cadence, which is decreased to $30$ seconds as the data reduction uses speckle reconstruction . Note that the DOT does not have an adaptive optics system.
The second dataset was obtained on the $22$nd August $2008$ with the Rapid Oscillations in the Solar Atmosphere (ROSA) imaging system situated at the Dunn Solar Telescope (see @jess1 for details on experimental setup and data reduction techniques). Observation started at $15$:$24$ UTC, and data were taken using a $417$ nm bandpass filter with a width of $0.5$ nm. The $417$ nm spectral line corresponds to the blue continuum which samples the lower photosphere and the formation height of the filter wavelength corresponds to around $250$ km above the solar surface. It should be noted that this is an average formation height. This is because the contributions to the line are from a wide range of heights and the lines also form at different heights depending on the plasma properties [@gband].
ROSA has the ability for high spatial ($0.069''$ per pixel) and temporal ($0.2$ s) resolutions. After processing through the ROSA pipeline the cadence was reduced to $12.8$ s to improve image quality via speckle reconstruction [@20764]. To ensure alignment between frames, the broadband time series was Fourier co-registered and de-stretched . Count rates for intensity are normalised by the ROSA pipeline.
The methodology of this analysis follows the one also applied by @morton and . The area of the pore is determined by summing the pixels whose intensity lies more than $3\sigma$ below the median intensity of the background, which is taken to be a large quiet-Sun region. This method contours the pore area well, but not perfectly, as the intensity between the pore and the background granulation is not a hard boundary. The top row of Figure \[overview\] shows the magnetic pores at the start of the observation sequence, by DOT and ROSA, respectively. Further, the output from the area analysis is shown in the bottom row for both magnetic pores. A strong linear trend can be observed for the DOT pore. The intensity time series was determined by the total intensity of all the pixels within the pore. To search for periodic phenomena in the time series, two data analysis methods were used: wavelets and EMD. The wavelet analysis employs an algorithm that is a modified version of the tool developed by @torrence. The standard Morlet wavelet, which is a plane sine wave with the amplitude modulated by a Gaussian function, was chosen due to its high resolution in the frequency domain. The EMD code employed here is the one of @terradas. First, we de-trended each time series by linear regression and then applied wavelet analysis to determine the periodicity of the oscillations as a function of time. Secondly, a cross-wavelet analysis is applied to calculate the phase difference between the area and intensity series as a function of time. Although it is possible to obtain a better visual picture of the phase relation between the two signals by using EMD, the results agreed with the cross-wavelet analysis when checked.
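For reference, a minimal sketch of the area/intensity extraction and of the linear de-trending described above is given below; the array names (`frames`, a time-ordered image cube, and `quiet`, a boolean quiet-Sun mask) and the pixel area are hypothetical, and the $3\sigma$ criterion is implemented as a threshold of the quiet-Sun median minus $3\sigma$.

```python
import numpy as np

def pore_series(frames, quiet, pixel_area_km2):
    """Return pore area [km^2] and total pore intensity for each frame."""
    areas, intensities = [], []
    for img in frames:
        bg = img[quiet]
        threshold = np.median(bg) - 3.0 * np.std(bg)
        pore = img < threshold                    # pixels belonging to the pore
        areas.append(pore.sum() * pixel_area_km2)
        intensities.append(img[pore].sum())
    return np.array(areas), np.array(intensities)

def detrend(series):
    """Remove a linear trend before the wavelet/EMD analysis."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept)
```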
MHD wave theory {#Wave}
===============
The sausage mode {#Saus}
----------------
We aim to identify MHD sausage modes and, as such, it is important to have a theoretical understanding of these modes. Assume that a magnetic pore is adequately modelled by a cylindrical waveguide with a straight background magnetic field, i.e., $\textbf{B}_0=B_0\hat{\textbf{z}}$. We note that, for reasons of clarity, in the following discussion the theory does not take into account gravitational effects on wave propagation. However, the influence of gravity may be important for wave propagation in magnetic pores, especially at the photospheric level where the predicted scale height is comparable to the wavelengths of observed oscillations. Therefore we should be cautious with the interpretations. The velocity perturbation is denoted as $\textbf{v}_1= (v_r,v_{\theta},v_z)$. From the theory of ideal linear MHD waves in cylindrical wave-guides, for the $m=0$ modes (here $m$ is the azimuthal wave number), i.e., for axi-symmetric perturbations, the equations determining $v_r$ and $v_z$ decouple from the governing equation for $v_{\theta}$. Hence, we will have magneto-acoustic modes described by $v_r$ and $v_z$, and the torsional Alfvén mode described by $v_{\theta}$. We are interested in the slow magneto-acoustic mode in this paper, so we neglect the $v_{\theta}$ component. The same applies to the component of the magnetic field in the $\theta$-direction. The linear magneto-acoustic wave motion is then governed by the following ideal MHD equations: $$\begin{aligned}
&&\rho_0 \frac{\partial v_r}{\partial t}=-\frac{\partial}{\partial r}
\left(p_1+\frac{B_0b_z}{\mu_0}\right)+\frac{B_0}{\mu_0}\frac{\partial b_r}{\partial z},
\label{eq:mom_r}\\
&&\rho_0\frac{\partial v_z}{\partial t}=-\frac{\partial p_1}{\partial z},
\label{eq:mom_z}\\
&&\frac{\partial b_r}{\partial t}=B_0\frac{\partial v_r}{\partial z},
\label{eq:mag_r}\\
&&\frac{\partial b_z}{\partial t}=-B_0\frac{1}{r}\frac{\partial (rv_r)}{\partial r},
\label{eq:mag_z}\\
&&\frac{\partial p_1}{\partial t}=-\rho_0
c_s^2\left(\frac{1}{r}\frac{\partial(rv_r)}{\partial r}+\frac{\partial v_z}{\partial z}\right),
\label{eq:press1}\\
&&\frac{\partial \rho_1}{\partial t}=-\rho_0\left(\frac{1}{r}\frac{\partial (rv_r)}{\partial r}+\frac{\partial v_z}{\partial z}\right).
\label{eq:den}
\end{aligned}$$ Here, $p$ is the gas pressure, $\rho$ is the density and $\textbf{b} = (b_r,b_{\theta},b_z)$ is the perturbed magnetic field. We have assumed that the plasma motion is adiabatic. The subscripts $0$ and $1$ refer to unperturbed and perturbed states, respectively.
Now, assume that the wave is harmonic and propagating and let $v_r=A(r)\cos(kz-\omega t)$. We then obtain the following equations for the perturbed variables, $$\begin{aligned}
&&\omega b_r=-B_0kv_r,\\
&&\rho_0\left(\frac{v_A^2k^2}{\omega}-\omega\right)A(r)\sin(kz-\omega t)=\frac{\partial}{\partial r}\left(p_1+\frac{B_0b_z}{\mu_0}\right)
&&\label{eq:n1}\\
&&\rho_0\frac{\partial v_z}{\partial t}=-\frac{\partial p_1}{\partial z},
\label{eq:n2}\\
&& b_z=\frac{B_0}{\omega}\frac{1}{r}\frac{\partial (rA(r))}{\partial r}\sin(kz-\omega t),
\label{eq:n3}\\
&&\frac{\partial p_1}{\partial t}=c_s^2\frac{\partial\rho_1}{\partial t}=-\rho_0 c_s^2\left(\frac{1}{r}\frac{\partial(rv_r)}{\partial r}+\frac{\partial v_z}{\partial z}\right)
\label{eq:n4}
\end{aligned}$$ Integrating Equation (\[eq:n4\]) with respect to $t$ and using (\[eq:n2\]) gives $$\begin{aligned}
&&p_1=c_s^2\rho_1=-\frac{\omega\rho_0
c_s^2}{(c_s^2k^2-\omega^2)}\frac{1}{r}\frac{\partial
(rA(r))}{\partial r}\sin(kz-\omega t).
\label{eq:n5}
\end{aligned}$$ Comparing Equation (\[eq:n3\]) to (\[eq:n5\]) it can be noted that the magnetic field, $b_z$, and the pressure (density) are $180$ degrees out of phase. This depends on the sign of $c_s^2k^2-\omega^2$, which is assumed to be positive. Consideration of Equations (\[eq:n1\]), (\[eq:n3\]) and (\[eq:n5\]) leads to the conclusion that $v_r$ is $90\degree$ out of phase with $b_z$ and $-90\degree$ out of phase with $p_1$.
The flux conservation equation for the perturbed variables gives the following relation, $$\begin{aligned}
&&B_0S_1=-b_{1z}S_0,
\label{eq:flux}
\end{aligned}$$ where $S$ refers to the cross-sectional area of the flux tube. We conclude that the perturbation of the area is out of phase with the perturbation of the z-component of the magnetic field, hence, the area is in-phase with the fluctuations of the thermodynamic quantities. Perhaps more importantly, we re-write Equation (\[eq:flux\]) as $$\begin{aligned}
&&\frac{S_1}{S_0}=-\frac{b_{1z}}{B_0}.
\label{eq:mag_area}
\end{aligned}$$ Hence, if we are able to measure oscillations of a pore’s area, we can calculate the percentage change in the magnetic field due to these oscillations (assuming conservation of flux in the pore). This was previously suggested by [@2015ApJ...806..132G]. Exploiting this relation will allow a comparison to be made between the observed changes in pore area and the magnetic oscillations found from Stokes profiles (e.g. [@Balthasar2000]). Further, as there are known difficulties with using the Stokes profiles, observing changes in pore area could provide a novel way of validating or refuting the observed magnetic oscillations derived from Stokes profiles. These simplified phase relations were confirmed in a more complicated case by e.g., [@michal2013] and [@moreels2013phase], who also derived the phase relations for other linear MHD waves.
By measuring the change in pore area with time, we will also be able to estimate the amplitude of the radial velocity perturbation. The changes in area are related to changes in radius of the flux tube by $$\begin{aligned}
&&\frac{S_1}{S_0}=\frac{2r_1}{r_0},
\label{eq:area_rad}
\end{aligned}$$ where $r_0$ and $r_1$ are the unperturbed radius and perturbation of the radius, respectively, assuming the flux tube has a cylindrical geometry. Once a periodic change in radius is identified, the radial velocity of the perturbation can then be calculated using the following relation $$\begin{aligned}
&&v_r=\frac{\partial r_1}{\partial t}=\frac{2\pi r_1}{P}.
\label{eq:rad_vel}
\end{aligned}$$
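As a worked illustration of Equations (\[eq:mag\_area\]), (\[eq:area\_rad\]) and (\[eq:rad\_vel\]), the short sketch below converts an entirely hypothetical area amplitude, mean radius and period into the radius perturbation, radial velocity amplitude and relative magnetic field perturbation:

```python
import numpy as np

S1 = 3.0e5        # area perturbation amplitude [km^2]   (hypothetical)
r0 = 1.5e3        # mean pore radius            [km]     (hypothetical)
P = 8.0 * 60.0    # oscillation period          [s]      (hypothetical)

S0 = np.pi * r0 ** 2            # cross-sectional area of a circular pore
r1 = 0.5 * (S1 / S0) * r0       # Eq. (area_rad): S1/S0 = 2 r1/r0
v_r = 2 * np.pi * r1 / P        # Eq. (rad_vel):  v_r = 2 pi r1 / P  [km/s]
db_over_B = S1 / S0             # Eq. (mag_area): |b_z1|/B_0 = S1/S0

print(f"r1 = {r1:.0f} km,  v_r = {v_r:.2f} km/s,  |b1|/B0 = {100 * db_over_B:.1f}%")
```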
Note that the term “sausage mode” was introduced for waves in magnetic tubes with a circular cross-section. The main property of these waves that distinguishes them from other wave modes is that they change the cross-sectional area. The cross-sections of observed pores are typically non-circular. However, it seems reasonable to use the term sausage mode for any wave mode that changes the cross-sectional area. Several preceding papers have looked into non-circular, e.g., elliptic, shapes and found the effects on the MHD waves within these tubes to be marginal (see and ).
Period ratio of standing slow MHD wave {#stand}
--------------------------------------
The period of a standing wave in a uniform and homogeneous flux tube is given by $P \approx 2L/nc_{ph}$, where $L$ is the tube length, $n$ is an integer determining the wave mode harmonic and $c_{ph}$ is the phase speed of the wave. This relation holds for ideal homogeneous tubes; however, this is not the case for the solar atmosphere between the photosphere and the transition region. @luna-cardozo modelled the effect of density stratification and of expansion with height of the flux tube on the ratio of the fundamental and first overtone periods for a vertical flux tube sandwiched between the photosphere and transition region. Their analysis studied the slow standing MHD sausage mode and assumed a thin flux tube with a small radial expansion with height. They investigated two cases: case one is where the flux tube undergoes weak magnetic expansion with constant density, finding $$\begin{aligned}
&&\frac{\omega_{2}}{\omega_{1}}= 2 - \frac{15}{2}\frac{\beta_{f}}{(6+5\beta_{f})\pi^{2}}(\Gamma-1),
\label{mag_strat}
\end{aligned}$$ where $\omega_{i}$ is the period of specific harmonic or overtone (i.e., 1, 2), $\beta_{f}$ is the plasma-$\beta$ at the base of the flux tube and $\Gamma$ is the ratio of the radial size of the flux tube at the apex to the foot-point. Here, Equation (\[mag\_strat\]) is Equation (43) from [@luna-cardozo]. Case two is where the flux tube has density stratification but a constant vertical magnetic field, finding, $$\begin{aligned}
&&\frac{\omega_{2}}{\omega_{1}}= \left[\frac{16\pi^{2} + \displaystyle\left(\ln\frac{1 - \sqrt{1 - \kappa_{1}}}{1 + \sqrt{1 - \kappa_{1}}}\right)^{2}}{4\pi^2 + \displaystyle\left(\ln\frac{1 - \sqrt{1 - \kappa_{1}}}{1 + \sqrt{1 - \kappa_{1}}}\right)^{2}} \right]^{1/2},
\label{den_strat}
\end{aligned}$$ where $\kappa_{1}$ is the square root of the ratio of the density at the top of the fluxtube to the density at the footpoint ($\kappa_{1} = (\rho_{apex}/\rho_{footpoint})^{0.5}$). Here, Equation (\[den\_strat\]) is Equation (40) from [@luna-cardozo]. Here, the upper end of the flux tube may well be the transition region while the footpoint is in the photosphere. It should be noted that the form of Equation (\[den\_strat\]) depends on the longitudinal density profile; here,a density profile where the tube speed increased linearly with height was used. This may or may not model a realistic pore and given the uncertainty of the equilibrium quantities this must be kept in mind in order to avoid over-interpretation. Equations(\[mag\_strat\]) and (\[den\_strat\]) modelling the frequency ratio of standing oscillations indicate that the ratio of the first harmonic to the fundamental will be less than two for fluxtube expansion while the density stratification could increase this value. Further, the thin fluxtube approximation is used to derive these equations. Obviously, in a real flux tube, both the density and magnetic stratification would be present at the same time and would alter the ratio. This is not accounted for at the moment. Further, Equations (\[mag\_strat\]) and (\[den\_strat\]) are independent of height which may limit the results as it has been suggested that the height to the transition region varies [@tian2009solar].
Results and Discussion {#res}
======================
Figures \[DOT\_wls\] and \[ROSA\_wls\] show the results of wavelet analysis of the area and intensity time series for the DOT and DST telescopes, respectively. The original signal is displayed above the wavelet power spectrum, the shaded region marks the cone of influence (COI), where edge effects of the finite length of the data affect the wavelet transform results. The contours show the confidence level of 95%.
DOT Pore
--------
There are four distinct periods found in the area time series of the pore: 4.7, 8.5, 20 and 32.6 minutes. The last period is outside the COI due to the duration of the time series, so it has been disregarded. It should be noted that periods of 8, 14 and 35 minutes have been observed in sunspots by @kobanov. This is important as pores and sunspots share a number of common features. The intensity wavelet shows four periods of oscillations: 4.7, 8.6, 19.7 and 35 minutes. These periods are similar, if not identical, to the periods of the area oscillations, which enables a direct comparison of the two quantities. Significant co-temporal power can be observed in both the intensity and area wavelets.
Using the cross-wavelet in conjunction with the EMD allows us to verify the phase difference between the area and intensity signals for each period. These methods show that the phase difference is very close to 0, *i.e.*, the oscillations are in-phase, meaning that they are slow sausage MHD waves. Further, the percentage change in intensity is also of the same order as reported in @Balthasar2000 and @fujimura. This suggests we are most likely observing the same oscillatory phenomena as these authors.
We also have to be certain that any change in area we observe is due to the magneto-acoustic wave rather than a change in the optical depth of the plasma. @fujimura provide an insight into the expected differences between the phase of magnetic field and intensity oscillations due to waves or the opacity effect. They demonstrate that the magnetic field (pore area) should be in-phase (out-of-phase) with the intensity if the oscillations are due to changes in optical depth. We note that this is the same relationship expected for the fast magneto-acoustic sausage mode. Hence, the identification of the fast magneto-acoustic mode in pores may prove difficult with only limited datasets.
The application of Equations (\[eq:area\_rad\]) and (\[eq:rad\_vel\]) requires information about the amplitude of the area perturbation. This can be achieved using either an FFT power spectrum or the IMF's amplitude from the EMD analysis. Here, we use EMD for the amplitudes (which are time-averaged values) and they are $3.87\times10^5$ km$^2$, $3.61\times10^5$ km$^2$ and $5.90\times10^5$ km$^2$ for the oscillations with periods of 4.7, 8.5 and 20 minutes, respectively. It was not possible to find the amplitude of the largest period, as it did not appear in the EMD output. The values of area perturbation translate (using Equation \[eq:area\_rad\]) to 37, 34 and 56 km, respectively, for the amplitude of the radial perturbation. Note that the increase in radius is about $100$ km, meaning the perturbation is only of the order of 1 pixel (at the DOT's resolution).
Using the values above, allows us to calculate the radial velocity perturbation for each period (by means of Equation \[eq:rad\_vel\]). For the periods of 4.7, 8.5 and 20 minutes, we determine the radial velocity perturbation as 0.82, 0.42 and 0.29 km s$^{-1}$, respectively. The obtained radial speeds are very sub-sonic as the sound speed is $\approx$ 10 km s$^{-1}$ in the photosphere. They are, however, of the order of observed horizontal flows around pores.
Further, it is also possible to estimate the percentage change in magnetic field expected from the identified linear slow MHD sausage modes. The percentage change in pore area, hence magnetic field, is found to be $$\frac{A_1}{A_0} = \frac{b_1}{B_0} \rightarrow 4-7\%.$$ For another magnetic pore, the percentage change was found to be similar at 6% [@2015ApJ...806..132G]. Let us now assume that the equilibrium magnetic field strength of the pore takes typical values of 1000-2000 G. Then, the amplitude of the magnetic field oscillations should be 40-140 G. The lower end of this estimated range of percentage change in magnetic field agrees well with the percentage changes in the magnetic field obtained using Stokes profiles by, for example, @Balthasar2000 and @fujimura. However, the upper end of the range, i.e. $\sim 140$ G, appears twice as large as any of the previously reported periodic variations in magnetic field. This apparent difference could be due to the spatial resolution of the magnetograms averaging out the magnetic field fluctuations. A summary of our findings can be found in Table \[tab:ampl\].
| DOT | $r_{1}$ | $v_{r_1}$ | $b_{z1}/B_{0}$ | ROSA | $r_{1}$ | $v_{r_1}$ | $b_{z1}/B_{0}$ |
|---|---|---|---|---|---|---|---|
| Period 1 (4.7 mins) | 37 km | 0.82 km s$^{-1}$ | 4.34% | Period 1 (2-3 mins) | 69.1 km | 3.03 km s$^{-1}$ | 26.3% |
| Period 2 (8.4 mins) | 34 km | 0.42 km s$^{-1}$ | 4.04% | Period 2 (5.5 mins) | 74.2 km | 1.41 km s$^{-1}$ | 28.2% |
| Period 3 (19.7 mins) | 56 km | 0.29 km s$^{-1}$ | 6.60% | Period 3 (10 mins) | 117 km | 1.23 km s$^{-1}$ | 44.5% |

  : Amplitudes of the radius, radial velocity and relative magnetic field perturbations derived for the DOT and ROSA pores.[]{data-label="tab:ampl"}
Now, we estimate the wavelength (wavenumber) for each mode. An important fact needs to be remembered: the velocity perturbation determined is radial, not vertical. Further, since the waveguide is strongly stratified, we define the wavelength as the distance between the first two nodes, which is the half wavelength of the wave. However, in this regime, the vertical phase speed of the slow sausage MHD wave is the tube speed, which is $c_T\approx4.5$ km s$^{-1}$ using typical values for the photospheric plasma [@1983SoPh...88..179E; @evans]. For the periods of $4.7$, $8.5$ and $20$ minutes we obtain estimates of the wavelength (wavenumber) as $1269$ km ($4.95$ $\times10^{-6}$ m$^{-1}$), $2268$ km ($2.77$ $\times10^{-6}$ m$^{-1}$) and $5319$ km ($1.18$ $\times10^{-6}$ m$^{-1}$), respectively. Note that these wavelengths are larger than the scale height in the photosphere ($\approx 160$ km) or the lower chromosphere. The observed pore had an average radius of $a=1.5$ Mm, giving $ka = 8, 5, 2$. See Table \[tab:wavelength\] for a summary.
| DOT | $\lambda_z$ | $k_z$ | $k_{z}a$ | ROSA | $\lambda_z$ | $k_z$ | $k_{z}a$ |
|---|---|---|---|---|---|---|---|
| Period 1 (4.7 mins) | 1269 km | 4.95 $\times10^{-6}$ m$^{-1}$ | 8 | Period 1 (2-3 mins) | 540-810 km | 7.76-12 $\times10^{-6}$ m$^{-1}$ | 4-6 |
| Period 2 (8.4 mins) | 2268 km | 2.77 $\times10^{-6}$ m$^{-1}$ | 5 | Period 2 (5.5 mins) | 1485 km | 4.2 $\times10^{-6}$ m$^{-1}$ | 2 |
| Period 3 (19.7 mins) | 5319 km | 1.18 $\times10^{-6}$ m$^{-1}$ | 2 | Period 3 (10 mins) | 2700 km | 2.33 $\times10^{-6}$ m$^{-1}$ | 1 |

  : Estimated vertical wavelengths and wavenumbers of the detected oscillations for the DOT and ROSA pores.[]{data-label="tab:wavelength"}
ROSA Pore
---------
There are four distinct periods found in the area time series of the pore observed by ROSA: 2-3, $5.5$, $10$ and $27$ minutes. All of these reported periods are at or above the $95\%$ confidence level. A few words about two of the periods are in order. Firstly, the power of the $2$-$3$ minute period is spread broadly and, as such, it is hard to differentiate the exact period. Secondly, the $10$-minute period slowly migrates to $13.5$ minutes as the time series comes to its end. The intensity wavelet shows four periods of oscillations: 2-3, 5.5, 10 and 27 minutes. As for the pore observed by the DOT, the oscillations found in the area and intensity data share similar periods. Also, there is another period, at 1-2 minutes at the start of the time series, that is below the $95\%$ confidence level for white noise. This is similar behaviour to that found for the DOT pore.
We found that the phase difference between the area and intensity periods is 0. This means, as before, that these oscillations are in-phase and are interpreted as signatures of slow sausage MHD waves. While we have chosen not to discuss the out-of-phase behaviour, there are small regions of 45$\degree$ phase difference that have been previously reported . This needs to be investigated in the future, as the authors are unaware of which MHD mode would cause this behaviour; however, it has been suggested that it is due to noise within the dataset . As for the DOT pore, the same properties can be obtained for each period observed within the ROSA pore; these are summarized in Tables \[tab:ampl\] and \[tab:wavelength\].
The amplitudes for the area oscillations are 2.29 $\times10^5$ km$^2$, 2.45 $\times10^5$ km$^2$ and 3.87 $\times10^5$ km$^2$ for periods of 2-3, 5.5 and 10 minutes, respectively. The $13.5$-minute period is found by the EMD process as well and has an amplitude which is the same as that of the $10$-minute period. Again, it was not possible to find the amplitude of the largest period. These then lead to radial perturbation amplitudes of 69.1, 74.2 and 117 km and radial velocity perturbations of 3.03, 1.41 and 1.23 km s$^{-1}$, respectively. The increase in radius is around $100$ km, meaning the perturbation is only of the order of 2 pixels (at ROSA's resolution), i.e., the radius of the structure changes by about 2 pixels. Once again, the radial velocity perturbations are found to be sub-sonic.
The percentage change in the pore’s area, and thus in the magnetic field, is given by $$\frac{A_1}{A_0} = \frac{b_1}{B_0} \rightarrow 25 - 45\%.$$ From the above relation we conclude that the size of the magnetic field oscillation is in the region of $200$-$400$ G. This is a substantial increase when compared to the measurements of the pore detected by DOT, as the amplitudes of these oscillations are of the same order while the cross-sectional area of the pore is an order of magnitude smaller. This suggests that the oscillation strength might be independent of the scale of the structure .
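This follows from flux conservation, $B_{0}A_{0}=\mathrm{const}$, so that $b_{1}/B_{0}=A_{1}/A_{0}$; a short sketch (the background field strengths below are illustrative assumptions only, since the field of this pore could not be measured precisely, as discussed later):

```python
relative_area_change = (0.25, 0.45)   # A1/A0 range quoted above

for B0 in (800.0, 900.0):             # assumed background field strengths in gauss
    for ratio in relative_area_change:
        b1 = B0 * ratio               # flux conservation: b1/B0 = A1/A0
        print(f"B0 = {B0:5.0f} G, A1/A0 = {ratio:.2f} -> b1 = {b1:5.0f} G")
```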
Once again, we determine the wavelength (wavenumber) for each period, using the tube speed as defined in the previous section. For the periods of $2$-$3$, $5.5$ and $10$ minutes we obtain estimates of the wavelength (wavenumber) of $540$-$810$ km ($7.76\times10^{-6}$ m$^{-1}$), 1485 km ($3.58\times10^{-6}$ m$^{-1}$) and 2.2 Mm ($2.85\times10^{-6}$ m$^{-1}$), respectively. For the observed pore radius, $a=0.5$ Mm, we obtain values of $ka = 2, 1.8, 1.5$ and $1.5$.
Standing Oscillations
---------------------
| DOT Period (Mins) | Ratio ($P_{1}/P_{i}$) | ROSA Period (Mins) | Ratio ($P_{1}/P_{i}$) |
|------|------|------|------|
| 8.5 | - | 10 | - |
| 4.7 | 1.81 | 5.5 | 1.81 |
|  |  | 2-3 | 3.3-5 |

  : Period ratios $P_{1}/P_{i}$ of the oscillations found in the DOT and ROSA pores.
![The range of solutions for Equation (\[mag\_strat\]). The hatched areas are where the period ratios are either less than one or greater than 2. The horizontal line divides the image into weak ($<2$ kG) and strong ($>2$ kG) field regions for the plasma-$\beta$. The blue contour lines indicate the period ratios observed in this paper and previously reported values .[]{data-label="fig:harm"}](harmonics.pdf){width="45.00000%"}
With the important understanding that the observed waves are trapped, there is a possibility that they are standing waves. Assuming that the pore can be modelled as a straight homogeneous magnetic flux tube which does not expand with height, the sharp gradients (often modelled as discontinuities) of the temperature/density at the photosphere and at the transition region form a resonant cavity which can support standing waves [see @fleck; @malins].
Calculating the harmonic periods ($P \approx 2L/(nc_{ph})$, where $L$ is the distance between the boundaries ($2$ Mm) and $n$ is the harmonic number), a fast MHD oscillation ($c_{ph}\approx12$ km s$^{-1}$) would have a fundamental period of $\sim$333 s, while the fundamental period of a slow MHD wave ($c_{ph}\approx5.7$ km s$^{-1}$) would be $\sim$700 s. Other slow MHD sausage waves have been observed with phase speeds similar to this . The observed waves are interpreted as slow MHD sausage waves; the ideal homogeneous estimate for this mode is closest to the observed periods, but it still differs by about two minutes. Therefore, the basic assumption of an ideal homogeneous flux tube (constant $L$, constant $c_{ph}$, etc.) is inadequate to explain the results presented in this paper. There are several further considerations that need to be taken into account. From observations, many magnetic structures are not cylindrical or symmetrical and are often irregular in shape. Further to this, large-scale magnetic structures have been thought to be made up of either a tight collection of small-scale flux tubes or one large monolithic structure [@priest1984solar and references within]. Also, these magnetic structures extend from the photosphere to the transition region, which means that the plasma-$\beta$ will vary by two orders of magnitude, changing the dynamics of the MHD waves considerably. We have also ignored the effect of gravity (*i.e.,* density stratification) , as well as the equally important fact that flux tubes expand with height (*i.e.,* magnetic stratification), which alters the ratio of the periods, *i.e.,* $P_{1}/P_{2}\neq2$ [@luna-cardozo]. All of these effects will further affect the wave dynamics inside flux tubes.
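A quick check of these cavity estimates (a sketch under the same idealised assumptions, namely a straight homogeneous tube of length $L=2$ Mm):

```python
def harmonic_period(cavity_length_m, phase_speed_ms, n):
    """Period of the n-th harmonic of a resonant cavity, P = 2L / (n * c_ph)."""
    return 2.0 * cavity_length_m / (n * phase_speed_ms)

L_cavity = 2.0e6   # photosphere-to-transition-region distance in m
for label, c_ph in (("fast", 12.0e3), ("slow", 5.7e3)):
    for n in (1, 2):
        P = harmonic_period(L_cavity, c_ph, n)
        print(f"{label} MHD wave, harmonic n = {n}: P = {P:5.0f} s ({P / 60.0:4.1f} min)")
```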
Here, we will ignore periods greater than $10$ minutes since, as shown above, in the ideal homogeneous case the largest possible period is $11.6$ minutes for MHD waves (with the above assumptions). We will consider two effects: density stratification and magnetic expansion with height in the radial direction. For the first case, Equation (\[den\_strat\]) is evaluated with typical density values from the VAL-III C model @1981ApJS...45..635V at the apex (transition region) and footpoint (photosphere) of the flux tube. The VAL-III C model is an estimation of a quiet-Sun region, and the interior density ratio between the photosphere and transition region of a flux tube need not necessarily differ greatly from that of the exterior atmosphere (see Figures 3 and 1 of [@GFME13a] and [@GFE14], respectively). The resulting value for the period ratio in this circumstance is 1.44 (density values of 2.727$\times10^{-7}$ and 2.122$\times10^{-13}$ g cm$^{-3}$ for the footpoint and apex, respectively). Using the model given by @Maltby1986, which models a sunspot umbra, this period ratio is 1.38 (density values of 1.364$\times10^{-6}$ and 9.224$\times10^{-14}$ g cm$^{-3}$ for the footpoint and apex, respectively). This does not correspond well to the results in this paper; it matches only one previously reported result, a highly-dynamical, non-radially-uniform sunspot . The predicted ratio is substantially smaller than what is detected here, which would place the first harmonic at $\approx 5.9$ minutes. This model therefore does not seem to be applicable to the observational results presented here. The reason for this, the authors believe, is the effect of finite radius. The dispersion relation for slow MHD waves in a flux tube of finite radius shows that the dispersion related to the finite tube radius increases the wave frequency. The shorter the wavelength, the stronger the dispersion effect is. Hence, the relative increase of the first overtone frequency due to the effect of finite radius is larger than that of the fundamental. This shortens the period of the first overtone relative to the fundamental, which shifts the period ratio to values larger than those obtained theoretically in the thin tube approximation.
Figure \[fig:harm\] details the various solutions (*i.e.,* period ratios) for Equation (\[mag\_strat\]) over a large range of plasma-$\beta$ and expansion ratio ($\Gamma$). It is difficult to estimate how much a flux tube expands with height; therefore, we explore the parameter space widely, taking $\Gamma$ in the range 0-15. The plasma-$\beta$ values are divided into strong ($\geq2$ kG) and weak ($\leq2$ kG) field regions, as the magnetic field of the hypothesised flux tubes will vary from $0.5$ kG to $4$ kG. The magnetic pores were observed before the launch of NASA’s Solar Dynamics Observatory (SDO), so the best magnetic data come from the Michelson Doppler Imager (MDI) instrument on-board NASA’s Solar and Heliospheric Observatory (SOHO). As such, the magnetic field of these pores is hard to know precisely due to their small scale and MDI’s large pixel size. However, ground-based observations of similarly sized pores reveal magnetic fields ranging from $1$ kG to $2.5$ kG. The blue contour lines show the parameter space that matches the period ratios reported in this article and the ones in . For example, if the plasma-$\beta$ is around 1, the expansion factors for the three period ratios reported here are around $4$, $6$ and $9$. If we have plasma-$\beta$ $\ll 1$, the expansion ratio starts to increase rapidly.
Once again, this effect can be dominant when the flux tube expands too much; however, it is unlikely that a flux tube would expand by such a large amount. @1982GApFD21237B, for example, suggests that when the internal gas pressure exceeds the external gas pressure the flux tube becomes unstable, and this occurs when the flux tube expands greatly with height.
For the cases presented in this paper, the flux tube has to expand four to six times to reproduce the observed period ratios. In a number of numerical simulations that model these types of flux tubes, the magnetic field expands approximately 4-10 times, which is not too dissimilar to our findings [see also @khomenko; @fedun2; @fedun1]. It should be noted that these estimates for the expansion are for flux tubes with field strengths of less than $2$ kG.
Unfortunately, as yet, little is known about the source of the oscillations analysed in this paper. One possible origin of MHD sausage waves is suggested by e.g. @khomenko and @fedun2, where magneto-acoustic wave propagation in small-scale flux tubes was modelled using non-linear MHD simulations. One of the results of their simulations is that five-minute vertical drivers can generate a mixture of upward-propagating slow and fast sausage modes in localised magnetic flux tubes. Furthermore, @fedun1 model the effect of photospheric vortex motion on a thin flux tube, finding that vortex motions can excite dominantly slow sausage modes. However, these simulations need to be developed further before we may comfortably link them to our assertions.
Another potential source is mode conversion, which will occur in the lower region of the photosphere within sunspots and magnetic pores. For example, [@0004-637X-746-1-68] modelled a background sunspot-like atmosphere and, solving the non-linear ideal MHD equations for this system, found that the fast MHD wave will turn into a slow MHD sausage wave at the Alfvén-acoustic equipartition level (where the sound speed is equal to the Alfvén speed), and the reverse is also true. The fast-MHD-wave-to-Alfvén conversion occurs higher up, where there is a steep Alfvén speed gradient, as the fast MHD wave will reflect from this boundary. Below this level the MHD waves are fast, and above this level slow MHD waves can be supported. This level occurs at approximately 200 km in their model. The observations used within this paper are thought to form at a height of around 250 km. Further, sunspot umbrae are depressed in height and it would likely be the same for magnetic pores. These facts can offer an insight into the formation height of the G-band, since we believe that we are observing primarily a slow acoustic mode modified by the magnetic field, *i.e.,* the slow MHD sausage wave.
A word of caution: without LOS Doppler data, it is difficult to know whether the oscillations reported are standing or propagating. The data available for magnetic pores do not cover higher levels of the solar atmosphere such as the chromosphere or the transition region. The data presented here only represent a slice of the flux tube near the photosphere. Future work is needed to acquire simultaneous observations of magnetic pores in several wavelengths in order to sample the solar atmosphere at different heights. Detailed spectral images would allow other LOS quantities, such as the Doppler velocity and magnetic field, to be measured. This way, the oscillations could be confidently determined as standing or propagating from their different phase relations.
Conclusions {#conc}
===========
The use of high-resolution data with short cadence, coupled with two methods of data analysis (wavelets and EMD), has allowed the observation of small-scale wave phenomena in magnetic waveguides situated on the solar surface. Studying the area and intensity perturbations of magnetic pores enables the investigation of the phase relations between these two quantities with the use of wavelets and EMD. The in-phase ($0^\circ$ phase difference) behaviour reveals that the observed oscillations are indicative of slow sausage MHD waves. Further, with the amplitude of the oscillations measured, several properties could be estimated, such as the amplitude of the magnetic field perturbation and the radial speed of the perturbation. The magnetic field perturbations caused by the slow MHD waves are of the order of $10$% and have radial speeds that are sub-sonic when compared to the sound speed at the photosphere. With the MHD mode of these waves identified, the obtained vertical wavelength indicates that the flux tubes would have a strong reflection at the transition region boundary, further indicating a chromospheric resonator. Finally, the investigation of the period ratio of the oscillations suggests that the fundamental and first harmonic have been observed within these flux tubes. The observed period ratio, coupled with magneto-seismology, enabled an expansion factor to be calculated that is in very good agreement with values found in numerical models used for MHD wave simulations.
The authors want to thank the anonymous referee for their comments, which have improved the clarity of this paper. The authors thank J. Terradas for providing the EMD routines used in the data analysis. Wavelet power spectra were calculated using a modified version of the wavelet transform algorithm originally developed and provided by C. Torrence and G. Compo, available at URL: http://paos.colorado.edu/research/wavelets/. The DOT is operated by Utrecht University (The Netherlands) at the Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias (Spain), funded by the Netherlands Organisation for Scientific Research NWO, The Netherlands Graduate School for Astronomy NOVA, and SOZOU. The DOT efforts were part of the European Solar Magnetism Network. NF, MSR and RE are grateful to the STFC and The Royal Society for the support received. RE is also grateful to NSF, Hungary (OTKA, Ref. No. K83133).
---
abstract: 'Monolayer transition metal dichalcogenides (TMDs) offer new opportunities for realizing quantum dots (QDs) in the ultimate two-dimensional (2D) limit. Given the rich control possibilities of electron valley pseudospin discovered in the monolayers, this quantum degree of freedom can be a promising carrier of information for potential quantum spintronics exploiting single electrons in TMD QDs. An outstanding issue is to identify the degree of valley hybridization, due to the QD confinement, which may significantly change the valley physics in QDs from its form in the 2D bulk. Here we perform a systematic study of the intervalley coupling by QD confinement potentials on extended TMD monolayers. We find that the intervalley coupling in such geometry is generically weak due to the vanishing amplitude of the electron wavefunction at the QD boundary, and hence valley hybridization shall be well quenched by the much stronger spin-valley coupling in monolayer TMDs and the QDs can well inherit the valley physics of the 2D bulk. We also discover sensitive dependence of intervalley coupling strength on the central position and the lateral length scales of the confinement potentials, which may possibly allow tuning of intervalley coupling by external controls.'
author:
- 'Gui-Bin Liu'
- Hongliang Pang
- Yugui Yao
- Wang Yao
bibliography:
- 'refs.bib'
title: Intervalley coupling by quantum dot confinement potentials in monolayer transition metal dichalcogenides
---
[^1]
Introduction
============
Semiconductor quantum dots (QDs) have been widely explored in the past several decades for technological applications such as lasers and medical markers [@Tartakovskii_Tartakovskii_2012____Quantum]. Spin and other quantum degrees of freedom of single electron or hole confined in QDs are also extensively researched as potential carriers of information for quantum computing and quantum spintronics [@Hanson_Vandersypen_2007_79_1217__Spins; @Liu_Sham_2010_59_703__Quantum; @Loss_DiVincenzo_1998_57_120__Quantum; @Reimann_Manninen_2002_74_1283__Electronic; @Burkard_Loss_1999_59_2070__Coupled]. QDs in conventional semiconductors are realized either by forming nanocrystals or by lateral confinements in two-dimensional (2D) heterostructures. Such lateral confinements can be provided by the thickness variations of the heterostructures or patterned electrodes [@Liu_Sham_2010_59_703__Quantum; @Hanson_Vandersypen_2007_79_1217__Spins]. The emergence of graphene [@CastroNeto_Geim_2009_81_109__electronic; @Geim_Novoselov_2007_6_183__rise], the 2D crystal with single atom thickness, has offered a new system to host QDs. Because of the zero gap nature of graphene, QD confinement has to be facilitated by terminations of the monolayer, and the various geometries explored include the graphene nano-island connected to source and drain contacts via narrow graphene constrictions [@Ponomarenko_Geim_2008_320_356__Chaotic; @Guttinger_Ensslin_2009_103_46810__Electron; @Wang_Chang_2011_99_112117__Gates], and graphene nanoribbon of armchair edges with patterned electrodes [@Trauzettel_Burkard_2007_3_192__Spin]. The properties of these graphene QDs are affected significantly by the edges, and precise control on edge terminations are thus desired.
Monolayer group-VIB transition metal dichalcogenides (TMDs) are newly emerged members of the 2D crystal family [@Wang_Strano_2012_7_699__Electronics]. These TMDs have the chemical composition of MX$_{2}$ (M = Mo,W; X = S,Se), and the monolayer has a structure of X-M-X covalently bonded hexagonal quasi-2D network. The 2D bulk has a direct bandgap in the visible frequency range [@splendiani_emerging_2010; @mak_atomically_2010], ideal for optoelectronic applications. This bandgap also makes possible the realization of QDs without the assistance of termination edges [@Kormanyos_Burkard_2014_4_11034__Spin]. QDs can be defined by lateral confinement potentials on an extended monolayer, e.g. by patterned electrodes, similar to the QDs in III-V heterostructures [@Hanson_Vandersypen_2007_79_1217__Spins]. As the conduction and valence band edges are shifted in the same way by the electrodes, the QDs will have a unique type-II band alignment with its surroundings (cf. figure 1). Moreover, QD confinement potentials can also be realized by the lateral heterojunctions between different TMDs within a single uniform crystalline monolayer. Various band alignment can form between different TMDs with different bandgaps and workfunctions. Lateral heterostructures with MoSe$_{2}$ islands surrounded by WSe$_{2}$ on a crystalline monolayer has been realized very recently [@Huang_Xu_2014____Lateral]. This can realize QD confinement, also with type-II band alignment, for electrons at the MoSe$_{2}$ region. The band edge discontinuity at the heterojunction, with an order of magnitude of 0.2–0.4 eV [@Rivera_Xu_2014___1403.4985_Observation; @Lee_Kim_2014___1403.3062_Atomically; @Furchi_Mueller_2014___1403.2652_Photovoltaic; @Cheng_Duan_2014___1403.3447_Electroluminescence; @Fang_Javey_2014___1403.3754_Strong], forms the potential well (with a vertical wall).
Besides the unique 2D geometries, monolayer TMD QDs are highly appealing because of the interesting properties of the 2D bulk. The conduction and valence band edges are both at the degenerate $K$ and $-K$ valleys at the corners of the hexagonal Brillouin zone. Remarkably, the direct-gap optical transitions are associated with a valley dependent selection rule: left- (right-) handed circular polarized light excites interband transitions in the $K$ ($-K$) valley only [@Yao_Niu_2008_77_235406__Valley; @Xiao_Yao_2012_108_196802__Coupled]. Based on this selection rule, optical pumping of valley polarization [@zeng_valley_2012; @mak_control_2012; @cao_valley_selective_2012], and optical generation of valley coherence [@Jones_Xu_2013_8_634__Optical], have been demonstrated. Another remarkable property of the 2D TMDs is the strong interplay between spin and the valley pseudospin [@Gong_Yao_2013___1303.3932_Magnetoelectric; @Jones_Xu_2014_10_130__Spin; @Xu_Heinz_2014____]. These quantum degrees of freedom with versatile controllability well suggest that single electron in monolayer TMD QDs can be promising carrier of information for quantum spintronics, provided that the bulk properties of interest can be inherited by the QDs. Since these interesting bulk properties are associated with the valley pseudospin, a central problem is whether valley is still a good quantum number and whether the bulk valley physics is preserved in the QDs. A natural concern is that the lateral QD confinement may result in valley hybridization, like in silicon QDs [@Saraiva_Koiller_2009_80_81305__Physical; @Koiller_Das_2004_70_115207__Shallow; @Goswami_Eriksson_2007_3_41__Controllable; @Friesen_Coppersmith_2010_81_115324__Theory; @Culcer_Das_2010_82_155312__Quantum; @Friesen_Coppersmith_2007_75_115318__Valley; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Boykin_Lee_2004_84_115__Valley], which may completely change the valley physics in QDs.
In this paper, we present a systematic study on the intervalley coupling strength by QD confinement potentials on extended TMD monolayer. The numerical calculations used two methods: (i) the envelope function method (EFM) [@Kohn_canonical_transformation; @Pantelides_Sah_1974_10_621__Theory; @DiVincenzo_Mele_1984_29_1685__Self; @Wang_Zunger_1999_59_15806__Linear], in conjunction with first-principles wavefunctions of the band-edge Bloch states of the 2D bulk; and (ii) the real-space tight-binding (RSTB) approach based on a three-band model for monolayer TMDs [@Liu_Xiao_2013_88_85433__Three]. The EFM is limited to circular shaped QDs and is used primarily as a benchmark for the RSTB approach which can handle arbitrarily shaped QDs. The intervalley coupling is defined here as the off-diagonal matrix element, due to the QD potentials, between states with the same spin but opposite valley index. For circular QD potentials with various lateral size, potential depth and smoothness, the intervalley couplings from the two different approaches agree well, which justify both methods. We then use the RSTB method to address QD potentials of lower symmetry, including the triangular, hexagonal, and square shaped QDs, where the intervalley coupling strength is investigated as functions of lateral size, depth and smoothness of confinement potentials.
Our main findings are summarized below. For confinement potentials with the $C_{3}$ rotational symmetry (like that of the lattice), both the numerical results and the symmetry analysis show that the intervalley coupling strength depends sensitively on the central position of the potential: the coupling is maximized (zero) if the potential is centered at a M (X) site (cf. figure 1). The results stated below refer to the M-centered potentials. The intervalley coupling as a function of the lateral size of the QDs exhibits fast oscillations with a nearly exponentially decaying envelope. When the wall of the potential well changes from a vertical one to a sloping one, the intervalley coupling strength has a fast decrease by 2-3 orders of magnitude, and saturates when the length scale of the slope is beyond five lattice constants. For potentials with sloping walls, the intervalley coupling increases with increasing potential depth. For potentials with vertical walls, the dependence on the potential depth is non-monotonic. Interestingly, when all parameters are comparable, the intervalley coupling can be smaller by several orders of magnitude in certain QD potentials. These include circular shaped QDs, and triangular and hexagonal shaped QDs with all sides along the zigzag crystalline axes. The latter two types of confinement potentials can be relevant in QDs formed by lateral heterostructures of different TMDs [@Huang_Xu_2014____Lateral]. In contrast to graphene QDs [@Trauzettel_Burkard_2007_3_192__Spin] and silicon QDs [@Saraiva_Koiller_2009_80_81305__Physical; @Koiller_Das_2004_70_115207__Shallow; @Goswami_Eriksson_2007_3_41__Controllable; @Friesen_Coppersmith_2010_81_115324__Theory; @Culcer_Das_2010_82_155312__Quantum; @Friesen_Coppersmith_2007_75_115318__Valley; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Boykin_Lee_2004_84_115__Valley], the intervalley coupling here is much smaller, as the electron wavefunction vanishes at the boundary of the QDs. For all QDs studied, the largest intervalley coupling is upper bounded by 0.1 meV, found for small QDs with diameters of 20 nm and with confinement potentials of vertical walls. Such intervalley coupling is much smaller compared to the diagonal energy difference between states with the same spin but opposite valley index, which arises from the spin-valley coupling and ranges from several to several tens of meV in different monolayer TMDs [@Liu_Xiao_2013_88_85433__Three]. Therefore, our results mean that valley hybridization is in general negligible, and valley pseudospin in monolayer TMD QDs is a good quantum degree of freedom as in the 2D bulk, with the optical addressability for quantum controls.
The rest of the paper is organized as follows. In section \[sec:EFM\], we formulate the EFM for the two-band $\kp$ model of the monolayer TMDs [@Xiao_Yao_2012_108_196802__Coupled], which can be exactly solved for circular shaped QDs. We then present the numerical results for intervalley coupling, calculated in conjunction with the first-principles wavefunctions. In section \[sec:Sym\], we present the symmetry analysis on the M-centered and X-centered potentials with $C_{3}$ rotational symmetry. Section \[sec:RSTB\] is on the RSTB approach. Numerical results by the RSTB method for circular QDs are first compared with the EFM results, and then the RSTB results for variously shaped QDs are presented. The results are qualitatively the same and quantitatively comparable for different MX$_{2}$, and MoS$_{2}$ is used as the example throughout the paper.
Circular QDs in envelope function method\[sec:EFM\]
===================================================
The model and method
--------------------
![Schematics of a circular confinement potential on a MoS$_{2}$ monolayer centered at (a) Mo site (larger blue dot) and (b) S site (smaller orange dot). $\omega\equiv e^{i\frac{2\pi}{3}}$, $\omega^{*}$ and $1$ are, respectively, the projections of the plane wave component of the Bloch function at $K$ on each Mo site (i.e. $e^{iK\cdot\vr_{{\rm Mo}}}$), referred to as the lattice phase factor in the text. (c) The spatial profile of the conduction and valence band edges for the circular potential well with radius $R$ and depth $V$. The length scale $L$ characterizes the smoothness of the potential.\[fig:well\]](fig1){width="15cm"}
We consider first a QD defined by a circular shaped potential $U(r)$ in a TMD monolayer. Such confinement potential can be characterized by three parameters: the lateral size, the depth, and the smoothness of the potential well. Without loss of generality, we assume the potential has the form $$U(r)=\begin{cases}
0, & r<R-\frac{L}{2}\\
\frac{V}{2}\big[\dfrac{\tanh\frac{C(r-R)}{L/2}}{\tanh C}+1\big], & R-\frac{L}{2}\le r\le R+\frac{L}{2}\\
V, & r>R+\frac{L}{2}
\end{cases}\label{eq:potential}$$ where $R$ and $V$ are respectively the lateral size and the depth of the potential well. $L$ characterizes the smoothness of the potential well boundary [see figure \[fig:well\](c)]. Large $L$ corresponds to smooth potentials (e.g. in confinement generated by patterned electrodes), while $L=0$ corresponds to a potential well with a vertical wall (e.g. in lateral heterostructures of different TMDs). The parameter $C=2.5$ is used here.
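For later reference, the confinement profile of equation (\[eq:potential\]) is straightforward to evaluate; a short sketch (function and variable names are ours, not from the original work):

```python
import numpy as np

def qd_potential(r, R=80.0, L=10.0, V=0.005, C=2.5):
    """Circular QD confinement potential of equation (eq:potential).

    Lengths in units of the lattice constant a, energies in units of t;
    for L = 0 the sloping wall reduces to an abrupt step at r = R.
    """
    r = np.asarray(r, dtype=float)
    U = np.full_like(r, V)                          # outer region, r > R + L/2
    U[r < R - L / 2] = 0.0                          # flat well bottom
    if L > 0:
        edge = (r >= R - L / 2) & (r <= R + L / 2)  # sloping wall of width L
        U[edge] = 0.5 * V * (np.tanh(C * (r[edge] - R) / (L / 2)) / np.tanh(C) + 1.0)
    return U

print(qd_potential(np.array([0.0, 70.0, 80.0, 85.0, 120.0])))
```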
For typical confinement with $R$ much larger than the lattice constants, the bound states in the QDs are formed predominantly from the band-edge Bloch states in the $\pm K$ valleys of the 2D bulk. In the EFM, we start off with the two-band $k\cdot p$ Hamiltonian derived for the band edges at the $\pm K$ valleys of monolayer TMDs [@Xiao_Yao_2012_108_196802__Coupled; @Liu_Xiao_2013_88_85433__Three], where valley is a discrete index. Thus the EFM leads to confined wavefunctions formed with the $K$ and $-K$ valley Bloch states respectively, denoted as $\Psi^{\tau,s}$ where $\tau=\pm$ is the valley index denoting the $\pm K$ valley, and $s=+(\uparrow)$ or $-(\downarrow)$ is the spin index. Taking these states as the basis, intervalley coupling here refers to the off-diagonal matrix elements between these states due to the confinement potential $U(r)$ [@Saraiva_Koiller_2009_80_81305__Physical].
As confinement potential is spin-independent, intervalley coupling vanishes between states with opposite spin index. For holes, the band edges of the 2D bulk are spin-valley locked because of the giant spin-orbit coupling [@zhu_giant_2011; @Xiao_Yao_2012_108_196802__Coupled], i.e. valley $K$ ($-K$) has spin up (down) only. Thus, intervalley coupling is absent for holes. We concentrate on the intervalley coupling of confined electron states $\Psi^{+,s}$ and $\Psi^{-,s}$ in the two valleys with the same spin index, which is defined as $$V_{{\rm inter}}^{s}=\bkthree{\Psi^{+,s}}{U(r)}{\Psi^{-,s}}
=
\int\Psi^{+,s}(\vr)^{*}U(r)\Psi^{-,s}(\vr)\,{\rm d}\vr.
\label{eq:Vvcs}$$ $V_{\rm inter}^{s}$ here is the off-diagonal matrix element, due to the QD potentials, between states with the same spin but opposite valley index. This quantity determines to what degree the two bulk valleys can hybridize in the QD confinement, by competing with the diagonal energy difference between $\Psi^{-,s}$ and $\Psi^{+,s}$. If $V_{\rm inter}^{s}$ is much smaller than the diagonal energy difference, then the bulk valley index is still a good quantum number in QD. If the coupling is comparable or larger than the diagonal energy difference, then the QD eigenstates will be superpositions of the two bulk valleys. In the limit that $V_{\rm inter}^{s}$ is much larger than the diagonal energy difference, the QD eigenstates are the symmetric and antisymmetric superpositions of the two valleys, and their energy splitting is given by $2|V_{\rm inter}^s|$. Due to the time-reversal symmetry, $|V_{{\rm inter}}^{\uparrow}|=|V_{{\rm inter}}^{\downarrow}|$. Hence we consider here only spin-up case and drop the spin index $s$ in superscripts for brevity.
To obtain $\Psi^{\tau}$ in EFM, we substitute $-i\nabla$ for $\vk$ in the $k\cdot p$ Hamiltonian of the 2D bulk[@Xiao_Yao_2012_108_196802__Coupled; @Liu_Xiao_2013_88_85433__Three], and the confinement potential $U(r)$ is added as an onsite energy term. The QD Hamiltonian in each valley is then [@Hewageegana_Apalkov_2008_77_245426__Electron; @Pereira_Peeters_2007_7_946__Tunable; @Chen_Chakraborty_2007_98_186803__Fock; @Matulis_Peeters_2008_77_115423__Quasibound; @Recher_Trauzettel_2009_79_85407__Bound; @Pereira_Farias_2009_79_195403__Landau; @Hewageegana_Apalkov_2009_79_115418__Trapping] $$\begin{aligned}
H^{\tau} & = & \begin{bmatrix}\frac{\Delta}{2}+U(r) & at(-i\tau\frac{\partial}{\partial x}-\frac{\partial}{\partial y})\\
at(-i\tau\frac{\partial}{\partial x}+\frac{\partial}{\partial y}) & -\frac{\Delta}{2}+\tau s\lambda+U(r)
\end{bmatrix},\label{eq:Htau}\end{aligned}$$ where $\Delta\sim2$ eV is the bulk bandgap of the TMD monolayer, $a$ is the lattice constant, and $t$ is the effective hopping. The bases of the Hamiltonian are the conduction and valence Bloch states at the $\pm K$ points, $\varphi_{\alpha}^{\tau}(\vr)=e^{i\tau\vK\cdot\vr}u_{\alpha}^{\tau}(\vr)$ ($\alpha=$ c, v, and $u_{\alpha}^{\tau}$ is the cell periodic part), and the eigenfunction of equation (\[eq:Htau\]) is a spinor of envelope functions $$\psi^{\tau}=\begin{bmatrix}\psi_{{\rm c}}^{\tau}(\vr)\\
\psi_{{\rm v}}^{\tau}(\vr)
\end{bmatrix},$$ which satisfy the Schrodinger equation $H^{\tau}\psi^{\tau}=E\psi^{\tau}$. The overall wavefunctions of the QD states are then $$\Psi^{\tau}(\vr)=\psi_{{\rm c}}^{\tau}(\vr)\varphi_{{\rm c}}^{\tau}(\vr)+\psi_{{\rm v}}^{\tau}(\vr)\varphi_{{\rm v}}^{\tau}(\vr).$$
Since $U(r)$ has the rotational symmetry, it is easy to verify that the eigen spinor $\psi^{\tau}$ is of the form $$\psi^{\tau}=\begin{bmatrix}b_{{\rm c}}^{\tau}(r)e^{im\theta}\\
ib_{{\rm v}}^{\tau}(r)e^{i(m+\tau)\theta}
\end{bmatrix},\label{eq:psib}$$ where $m$ is integer, and $(r,\theta)$ are the polar coordinates of $\vr$. $b_{{\rm c}}^{\tau}$ and $b_{{\rm v}}^{\tau}$ are real radial functions that satisfy
$$\left\{ \begin{array}{lc}
{\displaystyle \tau\frac{\partial b_{{\rm c}}^{\tau}}{\partial r}-\frac{m}{r}b_{{\rm c}}^{\tau}=[-\frac{\Delta}{2}+s\tau\lambda+U(r)-E]b_{{\rm v}}^{\tau}} & \vphantom{\Bigg[}\\
{\displaystyle \tau\frac{\partial b_{{\rm v}}^{\tau}}{\partial r}+\frac{m+\tau}{r}b_{{\rm v}}^{\tau}=-[\frac{\Delta}{2}+U(r)-E]b_{{\rm c}}^{\tau}}
\end{array}\right.\label{eq:Eqbcbv}$$
$b_{{\rm c}}^{\tau}$ and $b_{{\rm v}}^{\tau}$ are Bessel functions in the region where $U(r)$ is a constant ($r<R-\frac{L}{2}$ or $r>R+\frac{L}{2}$), and need to be solved numerically in the region $R-\frac{L}{2}\le r\le R+\frac{L}{2}$. Boundary conditions are then matched at $r=R-\frac{L}{2}$ and $r=R+\frac{L}{2}$ to determine the solutions and eigenvalues. Bound states are found to exist in the energy range $(\frac{\Delta}{2},\frac{\Delta}{2}+V)$, as expected. Here $t$ is used as the unit of energy and $a$ as the unit of length.
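As an alternative to the Bessel-function matching, the radial system can be integrated numerically; the sketch below (an illustration only, not the routine used for the results reported here) performs a simple shooting search for the $m=0$, $\tau=+1$, spin-up ground state within the window $(\frac{\Delta}{2},\frac{\Delta}{2}+V)$, using the parameter values quoted in the next subsection.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Units: energies in t, lengths in a. Band parameters as quoted below
# (t = 1.1 eV, Delta = 1.66 eV, 2*lambda = 0.15 eV); QD parameters are illustrative.
Delta, lam = 1.66 / 1.1, 0.075 / 1.1
R, L, V, C = 80.0, 10.0, 0.005, 2.5
tau, s, m = 1, 1, 0

def U(r):
    if r < R - L / 2:
        return 0.0
    if r > R + L / 2:
        return V
    return 0.5 * V * (np.tanh(C * (r - R) / (L / 2)) / np.tanh(C) + 1.0)

def rhs(r, y, E):
    bc, bv = y
    dbc = (m / r) * bc + (-Delta / 2 + s * tau * lam + U(r) - E) * bv
    dbv = -((m + tau) / r) * bv - (Delta / 2 + U(r) - E) * bc
    return [dbc, dbv]

def tail(E, r_max=160.0, r0=1e-6):
    """b_c at r_max; its sign flips as E crosses a bound-state energy."""
    y0 = [1.0, -(Delta / 2 - E) * r0 / 2.0]        # regular small-r behaviour for m = 0
    sol = solve_ivp(rhs, (r0, r_max), y0, args=(E,), rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# scan the bound-state window and refine the lowest sign change of the tail
energies = np.linspace(Delta / 2 + 1e-5, Delta / 2 + V - 1e-5, 200)
tails = [tail(E) for E in energies]
for E1, E2, t1, t2 in zip(energies[:-1], energies[1:], tails[:-1], tails[1:]):
    if t1 * t2 < 0.0:
        E0 = brentq(tail, E1, E2, xtol=1e-12)
        print(f"ground state at {E0 - Delta / 2:.6f} t above the bulk conduction band edge")
        break
```

The shooting criterion simply replaces the analytic matching condition at the well boundary; it is convenient when the potential profile has no closed-form solution in the wall region.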
The intervalley coupling defined in Eq. (\[eq:Vvcs\]) can then be expressed as $$\begin{aligned}
V_{{\rm inter}} & = & \bkthree{\sum_{\alpha}\psi_{\alpha}^{+}e^{i\bm{K}\cdot\vr}u_{\alpha}^{+}}U{\sum_{\beta}\psi_{\beta}^{-}e^{-i\vK\cdot\vr}u_{\beta}^{-}}=\sum_{\alpha,\beta}V_{\alpha\beta}\ ,\label{eq:Vinter}\end{aligned}$$ where
$$V_{\alpha\beta}=\sum_{\Delta\bm{G}}S_{\alpha\beta}(\Delta\bm{G})I_{\alpha\beta}(\Delta\bm{G}),\label{eq:Vab}$$
$$I_{\alpha\beta}(\Delta\bm{G})=\int\psi_{\alpha}^{+}(\vr)^{*}\psi_{\beta}^{-}(\vr)e^{i(\Delta\bm{G}-2\bm{K})\cdot\vr}U(r)\,{\rm d}\vr,\label{eq:IdG}$$
and $$\begin{aligned}
S_{\alpha\beta}(\Delta\bm{G}) & = & \sum_{\bm{G}}c_{\alpha}^{+}(\bm{G})^{*}c_{\beta}^{-}(\bm{G}+\Delta\bm{G}).\label{eq:SdG}\end{aligned}$$ In the above equations, $\alpha,\beta\in\{{\rm c,v}\}$, and $c_{\alpha}^{\tau}(\bm{G})$ are the Fourier coefficients of the periodic part of the Bloch function $u_{\alpha}^{\tau}(\vr)=\sum_{\bm{G}}c_{\alpha}^{\tau}(\bm{G})e^{i\bm{G}\cdot\vr}$. $\bm{G}$ is a reciprocal lattice vector (RLV), and $\Delta\bm{G}=\bm{G}'-\bm{G}$ is also a RLV. Here we are interested in the QD ground states only, which correspond to the $m=0$ case in equation (\[eq:psib\]) [see figure \[fig:ene\](a)]. The coefficients $c_{\alpha}^{\tau}(\bm{G})$ are obtained here from the first-principles all-electron wavefunctions calculated by the ABINIT package [@abinit1; @note2].
We note that the Hamiltonian in equation does not contain the conduction band spin-valley interaction [@Liu_Xiao_2013_88_85433__Three; @Kormanyos_Falko_2013_88_45416__Monolayer], which is of the Ising form that changes only the diagonal energies but not the wavefunctions of the states $\Psi^{+,s}$ and $\Psi^{-,s}$. Thus, this spin-valley interaction does not affect the intervalley coupling strength defined here, but it leads to an energy difference between states $\Psi^{+,s}$ and $\Psi^{-,s}$ that ranges from several to several tens meV [@Liu_Xiao_2013_88_85433__Three; @Kormanyos_Falko_2013_88_45416__Monolayer]. Valley hybridization in the QDs is determined by the competition between $|V_{{\rm inter}}|$ and this diagonal energy difference.
Numerical results \[sub:Numerical-results\]
-------------------------------------------
![Energy levels and envelope functions for circular QDs calculated with EFM. (a) Energy levels of the lowest several bound states measured from the bulk band edge. Three different potentials with $V=$ 0.2, 0.05, and 0.005, respectively, are shown for comparison. Other parameters are $R=80$ and $L=10$. (b) The radial envelope functions $b_{{\rm c}}^{+}(r)$ and $b_{{\rm v}}^{+}(r)$ [see equation (\[eq:psib\])] for the ground states in (a), i.e. the lowest-energy ones with $m=0$. Note that $b_{{\rm v}}^{+}(r)$ is magnified by a factor of 80 for clarity. The green shaded region corresponds to the sloping wall of the potential well. Units for energy and length are $t$ and $a$ respectively.\[fig:ene\]](figEne-w){width="14cm"}
In this subsection we present the numerical calculation results. The parameters from [@Xiao_Yao_2012_108_196802__Coupled] are used: $t=1.1\,$eV, $a=3.193\,$Å, $\Delta=1.66\,$eV, and $2\lambda=0.15\,$eV. We consider potentials centered at a Mo site, as shown in figure \[fig:well\](a). As we will prove in Section \[sec:Sym\], the intervalley coupling is strongest for such Mo-centered confinement potential, and will vanish if the circular confinement potential is centered at a S site.
### Energy levels and bound states
We first look at the energy eigenvalues and eigenstates of equation (\[eq:Eqbcbv\]), which give the energy spectrum in the circular QD confinement, and the radial envelope functions used to calculate the intervalley coupling [cf. equation (\[eq:Vinter\])]. The energy spectra are shown in figure \[fig:ene\](a) for $R=80$, $L=10$ and three well depths $V=$ 0.2, 0.05, and 0.005. For $V=0.2$ and 0.05, only the low-energy parts of the spectra are shown. $m$ is the azimuthal quantum number of the envelope function defined in equation (\[eq:psib\]). Obviously, the ground state has $m=0$. The spectra shown are for the $K$ valley ($\tau=1$), and it is obvious from equation (\[eq:Eqbcbv\]) that states of $\tau$, $s$, $m$ are degenerate with states of $-\tau,$ $-s$, $-m$. For the ground states, the radial envelope functions $b_{{\rm c}}^{+}(r)$ and $b_{{\rm v}}^{+}(r)$ are shown in figure \[fig:ene\](b), from which we can see that they are smooth and localized within the well, and that the $b_{{\rm v}}^{+}(r)$ components are orders of magnitude smaller than $b_{{\rm c}}^{+}(r)$. This means that for an electron confined in the quantum dot, the wavefunction is predominantly contributed by the conduction band edge Bloch functions, and the contribution from the valence band edge is negligible. In general, the full two-band Hamiltonian shall be solved to address the confinement of massive Dirac fermions, as we did here. Nevertheless, as shown here, the much simplified effective mass approximation can be well justified for monolayer TMDs with their large bandgap. Importantly, the wavefunction has vanishing amplitude at the QD boundary [green shaded region in figure \[fig:ene\](b)]. This is in contrast to the wavefunctions in graphene QDs [@Trauzettel_Burkard_2007_3_192__Spin] and silicon QDs [@Saraiva_Koiller_2009_80_81305__Physical; @Koiller_Das_2004_70_115207__Shallow; @Goswami_Eriksson_2007_3_41__Controllable; @Friesen_Coppersmith_2010_81_115324__Theory; @Culcer_Das_2010_82_155312__Quantum; @Friesen_Coppersmith_2007_75_115318__Valley; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Boykin_Lee_2004_84_115__Valley], where significant valley hybridization is found.
### The intervalley coupling
![(a) The magnitude of the intervalley coupling $|V_{{\rm inter}}|$ vs the radius $R$ for circular QDs with $L=0$ (black) and $L=10$ (red), and $V=0.005$, calculated in EFM. (b) and (c) are zoom-ins for the parameter range $R\in[80,85]$. Black circles denote the overall $|V_{{\rm inter}}|$. Other symbols denote the contributions $|V_{\alpha\beta}|$ [$\alpha\beta=$ cc (red cross), cv (blue triangle), vc (magenta square), and vv (green diamond)] to the intervalley coupling, for $L=0$ and 10, respectively. See equations (\[eq:Vinter\])-(\[eq:SdG\]) in the text for definitions. All lengths are in the unit of $a$ and energies in the unit of $t$.\[fig:Vvc-R\]](Vinter-R2){width="15cm"}
With the envelope functions given above, and the band-edge Bloch functions from the ABINIT package [@abinit1; @note2], we can calculate the numerical values for the intervalley coupling [cf. equations (\[eq:Vinter\])-(\[eq:SdG\])].
Figure \[fig:Vvc-R\] shows the intervalley coupling strength $|V_{{\rm inter}}|$ versus the radius of the potential $R$ for circular QDs with fixed smoothness ($L=0$ and 10) and depth ($V=0.005$) of the potential [cf. equation (\[eq:potential\])]. The curve with $L=10$ is more than two orders of magnitude smaller than the one with $L=0$, which can also be seen in figure \[fig:Vvc-L\] and is discussed in detail later. From figure \[fig:Vvc-R\](b) and (c), we can see that the dominant contribution to the intervalley coupling is from $|V_{{\rm cc}}|$, which is orders of magnitude larger than $|V_{{\rm vv}}|$, $|V_{{\rm cv}}|$, $|V_{{\rm vc}}|$ [cf. equation (\[eq:Vab\])], consistent with the fact that the wavefunction of the confined electron is predominantly from the conduction band edge Bloch functions. Figures \[fig:Vvc-R\](b) and (c) also clearly show that $|V_{{\rm inter}}|$ oscillates vs $R$ on the scale of the lattice constant $a$. The oscillation with the size of the QD is a generic feature of intervalley coupling, due to the large momentum-space separation of the two valleys, which has been noted in previous works [@Boykin_Lee_2004_84_115__Valley; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Friesen_Coppersmith_2007_75_115318__Valley]. Apart from these fast oscillations, the intervalley coupling strength decreases with the increase of $R$ in a nearly exponential way.
![The magnitude of the intervalley coupling $|V_{{\rm inter}}|$ vs $L$ for circular QDs with $R=80$, $V=0.005$ calculated in EFM. Inset shows the details of the main figure in the range of $L\in[5,10]$. All lengths are in the unit of $a$ and energies in the unit of $t$.\[fig:Vvc-L\]](Vinter-L-R80){width="8cm"}
![Intervalley coupling strength $|V_{{\rm inter}}|$ as a function of the potential well depth $V$ for circular QD with $R=80$, $L=0$ (red $\square$ and $\triangledown$, left axis) and $L=10$ (blue $\circ$ and $\vartriangle$, right axis) calculated in EFM. Results with (red $\square$, blue $\circ$ ) and without SOC (red $\triangledown$, blue $\vartriangle$) are compared. All lengths are in the unit of $a$ and energies in the unit of $t$. \[fig:Vvc-V\]](Vinter-V-R80-cmp){width="10cm"}
Figure \[fig:Vvc-L\] shows the $|V_{{\rm inter}}|$ as a function of $L$, the smoothness of the potential, for circular QDs with $R=80$ and $V=0.005$. Largest intervalley coupling is found at $L=0$ that corresponds to a circular square well, where the potential has a sudden jump at the QD boundary. It can be seen that $|V_{{\rm inter}}|$ decreases monotonically as $L$ increases up to a value of 5, and then it starts to oscillate rapidly around a constant value. The inset is the zoom in for $5\le L\le10$, which shows that the oscillation is also in the length scale of lattice constant, similar to the $|V_{{\rm inter}}|$ vs. $R$ relation.
Figure \[fig:Vvc-V\] shows the dependence of $|V_{{\rm inter}}|$ on the potential depth $V$, when $R=80$ and $L=0$ and 10. We can see that $|V_{{\rm inter}}|$ increases with increasing $V$ when $L=10$, and it first increases and then decreases with increasing $V$ when $L=0$. In this figure, we also compare the intervalley couplings calculated with and without spin-orbit coupling (SOC). For the calculation without SOC, we set $s=0$ in equation (\[eq:Htau\]), and the plane-wave coefficients in equation (\[eq:SdG\]) are also calculated without including SOC in the ABINIT package. In figure \[fig:Vvc-V\], it is clear that intervalley couplings with and without SOC coincide when $L=10$, and have a small difference when $L=0$. We also compared the $|V_{{\rm inter}}|$ vs. $R$ curve (fixing $L=10$, $V=0.005$) and the $|V_{{\rm inter}}|$ vs. $L$ curve (fixing $R=80$, $V=0.005$), for the calculations with and without SOC, and the differences are also negligible. This means that SOC has negligible effects on $|V_{{\rm inter}}|$. This is well expected as $|V_{{\rm inter}}|$ here is just the off-diagonal matrix element of the confinement potential, between the QD wavefunctions in the $K$ and $-K$ valleys. The SOC in the $K$ valleys is of the longitudinal form [@Xiao_Yao_2012_108_196802__Coupled], which does not affect the wavefunction. The SOC, however, will cause spin splittings and hence affect the spectra of the quantum dot states in the $K$ and $-K$ valleys [cf. figure \[fig:ene\](a)]. When we consider the effect of intervalley coupling on the quantum dot states, the off-diagonal matrix element $|V_{{\rm inter}}|$ will then compete with the differences in these diagonal energies, and the SOC-induced spin splitting will play an important role.
From the dependence of $|V_{{\rm inter}}|$ on $R$, $L$, and $V$ discussed above, we find that the intervalley coupling strength in a circular confinement potential varies over several orders of magnitude, depending on the length scale, smoothness, and depth of the confinement. The magnitude of the intervalley coupling is small on the whole, of the order of $\mu$eV or below. This is much smaller than the intervalley coupling in QDs formed in silicon inversion layers [@Saraiva_Koiller_2009_80_81305__Physical; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Koiller_Das_2004_70_115207__Shallow; @Goswami_Eriksson_2007_3_41__Controllable; @Friesen_Coppersmith_2007_75_115318__Valley; @Friesen_Coppersmith_2010_81_115324__Theory; @Culcer_Das_2010_82_155312__Quantum; @Boykin_Lee_2004_84_115__Valley].
Symmetry analysis for QDs with $C_{3}$ rotational symmetry\[sec:Sym\]
=====================================================================
In the above section, we have seen that the intervalley coupling is very sensitive to the geometry of the confinement potential, as evident from the fast oscillations of $|V_{{\rm inter}}|$ versus $R$ or $L$. Here, based on symmetry analysis, we will show that the intervalley coupling strength $|V_{{\rm inter}}|$ also depends on the position of the potential center, in a way that $V_{{\rm inter}}$ is largest when the potential is centered at a Mo atom [figure \[fig:well\](a)] but vanishes if centered at a S atom [figure \[fig:well\](b)]. To explain this, we analyze $V_{\alpha\beta}$ [see equation (\[eq:Vab\])] by examining the behavior of $S_{\alpha\beta}(\Delta\bm{G})$ and $I_{\alpha\beta}(\Delta\bm{G})$ under the $C_{3}$ rotation.
Using the Fourier relation $c_{\alpha}^{\tau}(\bm{G})=\frac{1}{\Omega}\int_{\Omega}u_{\alpha}^{\tau}(\vr)e^{-i\bm{G}\cdot\vr}\,{\rm d}\vr$ in which $\Omega$ is the area of the 2D cell, $S_{\alpha\beta}(\Delta\bm{G})$ can be rewritten as $$\begin{aligned}
S_{\alpha\beta}(\Delta\bm{G}) & = & \frac{1}{\Omega}\int_{\Omega}u_{\alpha}^{+}(\vr)^{*}u_{\beta}^{-}(\vr)e^{-i\Delta\bm{G}\cdot\vr}\,{\rm d}\vr\label{eq:SdGa}\\
& = & \frac{1}{\Omega}\int_{\Omega}\varphi_{\alpha}^{+}(\vr)^{*}\varphi_{\beta}^{-}(\vr)e^{-i\bm{g}\cdot\vr}\,{\rm d}\vr,\ \ \ \ \ \ \label{eq:SdGb}\end{aligned}$$ where $\bm{g}\equiv\Delta\bm{G}-2\vK$. Equation (\[eq:SdGa\]) shows that its integrand is a cell periodic function. This periodicity leads to the invariance of $S_{\alpha\beta}$ under the $C_{3}$ rotation of its integrand. For convenience, we rewrite $\mathbb{S}_{\alpha\beta}(\bm{g})\equiv S_{\alpha\beta}(\Delta\bm{G})$. Using the form of the integrand in equation (\[eq:SdGb\]), we get $$\begin{aligned}
\mathbb{S}_{\alpha\beta}(\bm{g}) & = & \frac{1}{\Omega}\int_{\Omega}C_{3}\Big[\varphi_{\alpha}^{+}(\vr)^{*}\varphi_{\beta}^{-}(\vr)e^{-i\bm{g}\cdot\vr}\Big]\,{\rm d}\vr\nonumber \\
& = & \frac{1}{\Omega}\int_{\Omega}\big[C_{3}(\varphi_{\alpha}^{+})^{*}\big]\big[C_{3}\varphi_{\beta}^{-}\big]e^{-i\bm{g}\cdot C_{3}^{-1}\vr}\,{\rm d}\vr\nonumber \\
& = & (\gamma_{\alpha}^{+})^{*}\gamma_{\beta}^{-}\frac{1}{\Omega}\int_{\Omega}(\varphi_{\alpha}^{+})^{*}\varphi_{\beta}^{-}e^{-iC_{3}\bm{g}\cdot\vr}\,{\rm d}\vr\nonumber \\
& = & (\gamma_{\alpha}^{+})^{*}\gamma_{\beta}^{-}\mathbb{S}_{\alpha\beta}(C_{3}\bm{g}),\label{eq:SC3g}\end{aligned}$$ where $\gamma_{\alpha}^{\tau}\equiv[C_{3}\varphi_{\alpha}^{\tau}(\vr)]/\varphi_{\alpha}^{\tau}(\vr)$ is the eigenvalue of $C_{3}$ operation corresponding to the Bloch function $\varphi_{\alpha}^{\tau}(\vr)$.
For the integral $I_{\alpha\beta}$, we also rewrite it as $\mathbb{I}_{\alpha\beta}(\bm{g})\equiv I_{\alpha\beta}(\Delta\bm{G}-2\vK)$. Using the integral form of the Bessel function of order $n$ (integer) $J_{n}(x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{i(x\sin\phi-n\phi)}\,{\rm d}\phi$, we can simplify $\mathbb{I}_{\alpha\beta}(\bm{g})$ to single integrals as $$\mathbb{I}_{{\rm cc}}(\bm{g})=2\pi\int_{0}^{\infty}(b_{{\rm c}}^{+})^{*}U(r)b_{{\rm c}}^{-}J_{0}(|\bm{g}|r)r\,{\rm d}r,\label{eq:Icc}$$ $$\mathbb{I}_{{\rm cv}}(\bm{g})=-2\pi e^{-i\phi}\int_{0}^{\infty}(b_{{\rm c}}^{+})^{*}U(r)b_{{\rm v}}^{-}J_{1}(|\bm{g}|r)r\,{\rm d}r,$$ $$\mathbb{I}_{{\rm vc}}(\bm{g})=2\pi e^{-i\phi}\int_{0}^{\infty}(b_{{\rm v}}^{+})^{*}U(r)b_{{\rm c}}^{-}J_{1}(|\bm{g}|r)r\,{\rm d}r,$$ $$\mathbb{I}_{{\rm vv}}(\bm{g})=-2\pi e^{-i2\phi}\int_{0}^{\infty}(b_{{\rm v}}^{+})U(r)b_{{\rm v}}^{-}J_{2}(|\bm{g}|r)r\,{\rm d}r,\label{eq:Ivv}$$ where $\phi$ is the polar angle of the 2D vector $\bm{g}$. It is obvious that under $\bm{g}\rightarrow C_{3}\bm{g}$, the integral parts of equations – do not change, and hence the ratio $\eta_{\alpha\beta}\equiv\mathbb{I}_{\alpha\beta}(C_{3}\bm{g})/\mathbb{I}_{\alpha\beta}(\bm{g})$ is determined only by the prefactor in front of the integrals: $$\eta_{{\rm cc}}=1,\ \ \ \ \eta_{{\rm cv}}=\eta_{{\rm vc}}=e^{-i\frac{2\pi}{3}},\ \ \ \ \eta_{{\rm vv}}=e^{i\frac{2\pi}{3}}.\label{eq:etaxx}$$ Because the set of $\bm{g}$’s, $\mathbb{G}$, also has the $C_{3}$ symmetry, we can choose one third elements of $\mathbb{G}$, denoted as $\mathbb{G}_{\frac{1}{3}}$, such that $\mathbb{G}=\sum_{n=0}^{2}C_{3}^{n}\mathbb{G}_{\frac{1}{3}}$. Using equations and , we have $$\begin{aligned}
V_{\alpha\beta} & = & \sum_{\bm{g}\in\mathbb{G}_{\frac{1}{3}}}\sum_{n=0}^{2}\Big[\mathbb{S}_{\alpha\beta}(C_{3}^{n}\bm{g})\mathbb{I}_{\alpha\beta}(C_{3}^{n}\bm{g})\Big]\nonumber \\
& = & \sum_{\bm{g}\in\mathbb{G}_{\frac{1}{3}}}\sum_{n=0}^{2}\Big\{[(\gamma_{\alpha}^{+})^{*}\gamma_{\beta}^{-}]^{-n}\mathbb{S}_{\alpha\beta}(\bm{g})\eta_{\alpha\beta}^{n}\mathbb{I}_{\alpha\beta}(\bm{g})\Big\}\nonumber \\
& = & (1+\lambda_{\alpha\beta}+\lambda_{\alpha\beta}^{2})\sum_{\bm{g}\in\mathbb{G}_{\frac{1}{3}}}\mathbb{S}_{\alpha\beta}(\bm{g})\mathbb{I}_{\alpha\beta}(\bm{g}),\label{eq:Vxx}\end{aligned}$$ where $$\lambda_{\alpha\beta}=[(\gamma_{\alpha}^{+})^{*}\gamma_{\beta}^{-}]^{-1}\eta_{\alpha\beta}.\label{eq:lmdxx}$$ $\eta_{\alpha\beta}$ is already given in equation (\[eq:etaxx\]), and what remains is to determine $\gamma_{\alpha}^{\tau}$.
For the MoS$_{2}$ monolayer, the conduction and valence band edge Bloch functions at the $\pm K$ points are formed predominantly by the $d_{0}$ (i.e. $d_{z^{2}}$) and $d_{\pm2}$ [i.e. $\frac{1}{\sqrt{2}}(d_{x^{2}-y^{2}}\pm id_{xy})$] orbitals of Mo atoms, respectively [@Xiao_Yao_2012_108_196802__Coupled; @Liu_Xiao_2013_88_85433__Three; @note2]. There are two contributions to the eigenvalue $\gamma_{\alpha}^{\tau}$ of the $C_{3}$ rotation on the Bloch function $\varphi_{\alpha}^{\tau}(\vr)=e^{i\tau\vK\cdot\vr}u_{\alpha}^{\tau}(\vr)$ (details in Appendix \[sec:C3rot\]): (i) the rotation of the atomic orbital around its own center, i.e. $C_{3}d_{0}(\vr)=d_{0}(\vr)$ and $C_{3}d_{\pm2}(\vr)=e^{\pm i\frac{2\pi}{3}}d_{\pm2}(\vr)$; (ii) the change of lattice phase, defined as the value of the planewave phase factor $e^{i\tau\vK\cdot\vr}$ at each lattice site (cf. figure \[fig:well\] for the case of $\tau=+1$). The change of the lattice phase factor under $C_{3}$ rotation depends on the rotation center. If the rotation center is at a Mo atom, as shown in figure \[fig:well\](a), the $C_{3}$ rotation does not change the lattice phase factors, leaving only contribution (i) in effect, and we obtain $$\gamma_{{\rm c}}^{\tau}=1\ \ \ \text{and}\ \ \ \gamma_{{\rm v}}^{\tau}=e^{i\tau\frac{2\pi}{3}}\ \ \ \ (\text{for Mo center}).\label{eq:C3Mo}$$ However, if the rotation center is at a S atom, as shown in figure \[fig:well\](b), both contributions (i) and (ii) take effect and we have $$\gamma_{{\rm c}}^{\tau}=e^{i\tau\frac{2\pi}{3}}\ \ \ \text{and}\ \ \ \gamma_{{\rm v}}^{\tau}=e^{-i\tau\frac{2\pi}{3}}\ \ \ \ (\text{for S center}).\label{eq:C3S}$$ Putting equations (\[eq:etaxx\]) and (\[eq:C3Mo\]) or (\[eq:C3S\]) into equation (\[eq:lmdxx\]), we obtain $\lambda_{\alpha\beta}=1$ for a Mo centered potential and $\lambda_{\alpha\beta}=e^{-i\frac{2\pi}{3}}$ for a S centered potential, and the final intervalley coupling is $$V_{{\rm inter}}=\begin{cases}
{\displaystyle 3\sum_{\alpha\beta}\sum_{\bm{g}\in\mathbb{G}_{\frac{1}{3}}}\mathbb{S}_{\alpha\beta}(\bm{g})\mathbb{I}_{\alpha\beta}(\bm{g})}, & \text{for Mo centered potential}\\
0, & \text{for S centered potential}
\end{cases}.\label{eq:VvcMoS}$$
From the above symmetry analysis, we conclude that the intervalley coupling induced by a circular confinement potential is sensitive to the center of the potential. The intervalley coupling is enhanced when the potential is centered at Mo atom, while it vanishes if the potential is centered at S atom. This is due to the dependence of $\gamma_{{\rm \alpha}}^{\tau}$, the eigenvalue of the Bloch function $\varphi_{\alpha}^{\tau}$ under the $C_{3}$ rotation, on the location of rotation center. Below, we show that the same conclusions can be drawn as long as the confinement potential has the $C_{3}$ symmetry, i.e. intervalley coupling is strongest (zero) if the center of the potential is at Mo (S) site.
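This selection rule can be verified numerically by evaluating the prefactor $1+\lambda_{\alpha\beta}+\lambda_{\alpha\beta}^{2}$ of equation (\[eq:Vxx\]) for both rotation centres; a small sketch using equations (\[eq:etaxx\]), (\[eq:C3Mo\]), (\[eq:C3S\]) and (\[eq:lmdxx\]):

```python
import numpy as np

w = np.exp(1j * 2 * np.pi / 3)   # C3 phase factor

eta = {"cc": 1, "cv": np.conj(w), "vc": np.conj(w), "vv": w}        # equation (eq:etaxx)
gamma = {                        # (gamma^+, gamma^-) for the c and v Bloch states
    "Mo": {"c": (1, 1), "v": (w, np.conj(w))},                      # equation (eq:C3Mo)
    "S": {"c": (w, np.conj(w)), "v": (np.conj(w), w)},              # equation (eq:C3S)
}

for centre in ("Mo", "S"):
    for ab in ("cc", "cv", "vc", "vv"):
        gp = gamma[centre][ab[0]][0]                  # gamma_alpha^+
        gm = gamma[centre][ab[1]][1]                  # gamma_beta^-
        lam = eta[ab] / (np.conj(gp) * gm)            # equation (eq:lmdxx)
        prefactor = 1 + lam + lam ** 2
        print(f"{centre}-centred, {ab}: |1 + lam + lam^2| = {abs(prefactor):.3f}")
```

The prefactor evaluates to 3 for all Mo-centred combinations and to 0 for all S-centred ones, which is the destructive interference described above.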
Noncircular QD with $C_{3}$ symmetry
------------------------------------
Consider a noncircular QD confinement potential with $C_{3}$ symmetry only, $C_{3}U(\vr)=U(\vr)$. In such a potential, which lacks circular symmetry, we do not in general have an exact solution for the massive Dirac fermion Hamiltonian in equation (\[eq:Htau\]). Nevertheless, the numerical results presented in section \[sub:Numerical-results\] have well justified the effective mass approximation, where we can construct the QD electron wavefunction based on the conduction band edge Bloch function only. The ground-state envelope function $\psi_{{\rm c}}^{\tau}$ can be obtained by solving the following effective-mass Hamiltonian $$H_{{\rm em}}=-\frac{\hbar^{2}}{2m_{{\rm eff}}}\nabla^{2}+U(\vr),$$ where $m_{{\rm eff}}$ is the effective mass of the conduction band at $\pm K$. With the same $m_{{\rm eff}}$ at $\pm K$, the envelope functions in the two valleys are the same: $\psi_{{\rm c}}^{+}=\psi_{{\rm c}}^{-}$. To analyze the symmetry, we need to know $\gamma_{{\rm c}}^{\tau}$ and $\eta_{{\rm cc}}$ [see equations (\[eq:SC3g\]) and (\[eq:etaxx\])]. $\gamma_{{\rm c}}^{\tau}$ is independent of the shape of the QD, and equations (\[eq:C3Mo\]) and (\[eq:C3S\]) are still valid here. $\eta_{{\rm cc}}$ is determined by the symmetry of the envelope function $\psi_{{\rm c}}^{\tau}$. Since $U(\vr)$ is symmetric under $C_{3}$ rotation, the non-degenerate ground-state envelope function $\psi_{{\rm c}}^{\tau}$ in each valley has to be an eigenstate of $C_{3}$, i.e. $C_{3}\psi_{{\rm c}}^{\tau}=\rho\psi_{{\rm c}}^{\tau}$ where $|\rho|=1$. Using equation (\[eq:IdG\]) and $\rho^{*}\rho=1$, we have $$\begin{aligned}
\mathbb{I}_{{\rm cc}}(C_{3}\bm{g}) & = & \int\psi_{{\rm c}}^{+}(\vr)^{*}\psi_{{\rm c}}^{-}(\vr)e^{iC_{3}\bm{g}\cdot\vr}U(\vr)\,{\rm d}\vr=\int[\rho\psi_{{\rm c}}^{+}(\vr)]^{*}[\rho\psi_{{\rm c}}^{-}(\vr)]e^{i\bm{g}\cdot C_{3}^{-1}\vr}U(\vr)\,{\rm d}\vr\nonumber \\
& = & \int[C_{3}\psi_{{\rm c}}^{+}(\vr)]^{*}[C_{3}\psi_{{\rm c}}^{-}(\vr)]e^{i\bm{g}\cdot C_{3}^{-1}\vr}[C_{3}U(\vr)]\,{\rm d}\vr=\int C_{3}[\psi_{{\rm c}}^{+}(\vr)^{*}\psi_{{\rm c}}^{-}(\vr)e^{i\bm{g}\cdot\vr}U(\vr)]\,{\rm d}\vr\nonumber \\
& = & \int\psi_{{\rm c}}^{+}(\vr)^{*}\psi_{{\rm c}}^{-}(\vr)e^{i\bm{g}\cdot\vr}U(\vr)\,{\rm d}\vr=\mathbb{I}_{{\rm cc}}(\bm{g})\end{aligned}$$ Thus $\eta_{{\rm cc}}=1$, consistent with equation (\[eq:etaxx\]). The conclusion of equation (\[eq:VvcMoS\]) is therefore valid for QD confinement potentials with $C_{3}$ symmetry.
In summary, the band edge Bloch functions at $K$ and $-K$ have the $C_{3}$ rotational symmetry of the lattice. However, they transform differently when the rotation center is Mo site or S site, as shown in Figure 1a and 1b respectively. When the QD confinement potential also has the $C_{3}$ rotational symmetry about a S site, the rotational symmetry of the band edge Bloch functions about the S site leads to destructive interference and hence vanishing intervalley coupling. This is the physical origin of the dependence of intervalley coupling on the central position of QD potential.
Real-space tight-binding method\[sec:RSTB\]
===========================================
In the EFM, except for the circular shaped confinement potential, it is difficult to find the exact wavefunction of the bound states in the QD. To investigate the intervalley coupling in the more general case of QDs with noncircular shape, we consider here the alternative approach of the real-space tight-binding (RSTB) method. Using RSTB, we first calculate intervalley couplings of circular shaped QDs and compare with the results from EFM. The agreement between the two methods well justifies the validity of the RSTB method. Then we calculate intervalley couplings of noncircular shaped QDs and analyze the results. Since SOC has negligible effects on the intervalley coupling, we did not take SOC into account in the RSTB calculation.
The RSTB model
--------------
We use a supercell with periodic boundary conditions to model a QD. The supercell is a square with side length $L_{{\rm sc}}$ (cf. figure \[fig:nonCirPot\]). Each supercell has $N=L_{{\rm sc}}\times{\rm round}(2L_{{\rm sc}}/\sqrt{3})$ lattice points (primitive cells). All lengths are in units of the lattice constant $a$. We adopt the three-band tight-binding model developed in [@Liu_Xiao_2013_88_85433__Three], considering hoppings between nearest-neighbour Mo-$d_{z^{2}}$, $d_{xy}$, and $d_{x^{2}-y^{2}}$ orbitals. The coordinates of the lattice sites of Mo atoms are denoted as $\vr_{n}$ $(n=1,2,\cdots,N)$. The tight-binding Hamiltonian of the QD is
$$H=\sum_{i=1}^{N}\sum_{\alpha}\big[\epsilon_{\alpha}+U(\vr_{i})\big]c_{i\alpha}^{\dagger}c_{i\alpha}+\sum_{\langle i,j\rangle}\sum_{\alpha,\beta}t_{i\alpha,j\beta}c_{i\alpha}^{\dagger}c_{j\beta},$$
where $U(\vr_{i})$ is the QD confinement potential at $\vr_{i}$, $\alpha,\beta=d_{z^{2}},\, d_{xy},\, d_{x^{2}-y^{2}}$ are orbital indices, and $c_{i\alpha}^{\dagger}$ is the creation operator for an electron on site $i$ with orbital $\alpha$. $t_{i\alpha,j\beta}$ is the hopping between the $\alpha$ orbital at position $i$ and the $\beta$ orbital at position $j$, and $\langle,\rangle$ denotes summation over nearest-neighbor pairs only. In practice, we write the Hamiltonian $H$ in the form of a $3N\times3N$ matrix. The part of $H$ involving hoppings between sites $i$ and $j$ can be written as a $3\times3$ block $H_{ij}=h(\vr_{j}-\vr_{i})$, where $\vr_{j}-\vr_{i}$ is either $\bm{0}$ or one of the six nearest-neighbour vectors $\vR_{1}$ to $\vR_{6}$ defined in [@Liu_Xiao_2013_88_85433__Three]. Using the real hopping parameters $\{\epsilon_{1},\epsilon_{2},t_{0},t_{1},t_{2},t_{11},t_{12},t_{22}\}$ defined in [@Liu_Xiao_2013_88_85433__Three] (the GGA-version parameters therein), we have
$$h(\bm{0})=\begin{bmatrix}\begin{array}{ccc}
\epsilon_{1} & 0 & 0\\
0 & \epsilon_{2} & 0\\
0 & 0 & \epsilon_{2}
\end{array}\end{bmatrix},\ \ \ \ \ \ \ h(\vR_{1})=\begin{bmatrix}\begin{array}{ccc}
t_{0} & t_{1} & t_{2}\\
-t_{1} & t_{11} & t_{12}\\
t_{2} & -t_{12} & t_{22}
\end{array}\end{bmatrix},$$
$$h(\vR_{2})=\begin{bmatrix}\begin{array}{ccc}
t_{0} & \frac{1}{2}\left(t_{1}-\sqrt{3}t_{2}\right) & -\frac{1}{2}\left(\sqrt{3}t_{1}+t_{2}\right)\\
-\frac{1}{2}\left(t_{1}+\sqrt{3}t_{2}\right) & \frac{1}{4}\left(t_{11}+3t_{22}\right) & -\frac{\sqrt{3}}{4}(t_{11}-t_{22})-t_{12}\\
\frac{1}{2}\left(\sqrt{3}t_{1}-t_{2}\right) & -\frac{\sqrt{3}}{4}(t_{11}-t_{22})+t_{12} & \frac{1}{4}\left(3t_{11}+t_{22}\right)
\end{array}\end{bmatrix},$$
$$h(\vR_{3})=\begin{bmatrix}\begin{array}{ccc}
t_{0} & -\frac{1}{2}\left(t_{1}-\sqrt{3}t_{2}\right) & -\frac{1}{2}\left(\sqrt{3}t_{1}+t_{2}\right)\\
\frac{1}{2}\left(t_{1}+\sqrt{3}t_{2}\right) & \frac{1}{4}\left(t_{11}+3t_{22}\right) & \frac{\sqrt{3}}{4}(t_{11}-t_{22})+t_{12}\\
\frac{1}{2}\left(\sqrt{3}t_{1}-t_{2}\right) & \frac{\sqrt{3}}{4}(t_{11}-t_{22})-t_{12} & \frac{1}{4}\left(3t_{11}+t_{22}\right)
\end{array}\end{bmatrix},$$
$$h(\vR_{4})=h(\vR_{1})^{\dagger},\ \ \ \ h(\vR_{5})=h(\vR_{2}){}^{\dagger},\ \ \ \ h(\vR_{6})=h(\vR_{3})^{\dagger}.$$
The $3N\times3N$ RSTB Hamiltonian matrix for a QD is then diagonalized to find the eigenstates and eigenenergies.
Unlike the EFM approach, where the bound state wavefunctions are first given in each valley and the intervalley coupling is then calculated as the off-diagonal matrix element between them [\[]{}cf. equation , the eigenstates and eigenenergies of the RSTB Hamiltonian $H$ already include the effects of intervalley coupling. The intervalley coupling causes a fine splitting of the energy levels, which otherwise have the two-fold valley degeneracy in the absence of SOC. This energy splitting is just $2|V_{{\rm inter}}|$. Thus, the magnitude of the intervalley coupling $|V_{{\rm inter}}|$ is read out as half the difference between the lowest two conduction-band energy eigenvalues of $H$ in the RSTB solution [@note1].
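A compact illustration of this procedure is sketched below (Python with SciPy). It is not the authors' production code: the hopping blocks follow the expressions quoted above, the numerical values of $\{\epsilon_{1},\epsilon_{2},t_{0},t_{1},t_{2},t_{11},t_{12},t_{22}\}$ are left as placeholders to be filled with the GGA parameters of [@Liu_Xiao_2013_88_85433__Three], and the lattice and neighbour bookkeeping is assumed to be supplied by the user.

```python
# Sketch of the RSTB construction: assemble the 3N x 3N Hamiltonian from the
# on-site block h(0), the nearest-neighbour blocks h(R_1)..h(R_6) quoted above,
# and the confinement potential U(r_i); then read |V_inter| off the level
# splitting.  All hopping parameters below are placeholders (fill in the GGA
# values of Liu et al. 2013 before use).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

p = dict(eps1=0.0, eps2=0.0, t0=0.0, t1=0.0, t2=0.0, t11=0.0, t12=0.0, t22=0.0)

def hopping_blocks(p):
    e1, e2 = p["eps1"], p["eps2"]
    t0, t1, t2, t11, t12, t22 = (p[k] for k in ("t0", "t1", "t2", "t11", "t12", "t22"))
    s3 = np.sqrt(3.0)
    h0 = np.diag([e1, e2, e2]).astype(complex)
    h1 = np.array([[t0, t1, t2], [-t1, t11, t12], [t2, -t12, t22]], complex)
    h2 = np.array([[t0, (t1 - s3*t2)/2, -(s3*t1 + t2)/2],
                   [-(t1 + s3*t2)/2, (t11 + 3*t22)/4, -s3*(t11 - t22)/4 - t12],
                   [(s3*t1 - t2)/2, -s3*(t11 - t22)/4 + t12, (3*t11 + t22)/4]], complex)
    h3 = np.array([[t0, -(t1 - s3*t2)/2, -(s3*t1 + t2)/2],
                   [(t1 + s3*t2)/2, (t11 + 3*t22)/4, s3*(t11 - t22)/4 + t12],
                   [(s3*t1 - t2)/2, s3*(t11 - t22)/4 - t12, (3*t11 + t22)/4]], complex)
    # h(R_4), h(R_5), h(R_6) are the Hermitian conjugates of h(R_1), h(R_2), h(R_3).
    return [h0, h1, h2, h3, h1.conj().T, h2.conj().T, h3.conj().T]

def build_H(U, neighbors, p):
    """U: length-N potential on the Mo sites; neighbors: list of (i, j, b) with
    r_j - r_i = R_b, b = 1..6.  Both directions of each bond must be listed so
    that H is Hermitian."""
    h = hopping_blocks(p)
    N = len(U)
    H = sp.lil_matrix((3*N, 3*N), dtype=complex)
    for i in range(N):
        H[3*i:3*i+3, 3*i:3*i+3] = h[0] + U[i]*np.eye(3)
    for i, j, b in neighbors:
        H[3*i:3*i+3, 3*j:3*j+3] = h[b]
    return H.tocsr()

def intervalley_coupling(H, E_ref, pair_index=0, n_eigs=10):
    """Eigenvalues nearest E_ref (chosen near the conduction-band edge) via
    shift-invert; |V_inter| is half the splitting of the lowest QD doublet,
    labelled by pair_index in the sorted spectrum returned here."""
    E = np.sort(eigsh(H, k=n_eigs, sigma=E_ref, which="LM", return_eigenvectors=False))
    return 0.5*(E[pair_index + 1] - E[pair_index])
```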
![Intervalley coupling strength $|V_{{\rm inter}}|$ of circular QDs ($L=0$) calculated with RSTB, when the supercell size $L_{{\rm sc}}$ used in the calculation is varied (cf. figure \[fig:nonCirPot\]). $|V_{{\rm inter}}|$ converges fast with increasing $L_{{\rm sc}}$. All lengths are in the unit of $a$. \[fig:Vvc-Lsc\]](Vinter-Lsc){width="8cm"}
Circular shaped QDs
-------------------
We study here circular shaped QDs using the RSTB and compare with the results from the EFM. First, we check the impact of the supercell size on the intervalley coupling. Figure \[fig:Vvc-Lsc\] shows the $|V_{{\rm inter}}|$ vs. $L_{{\rm sc}}$ relation for different circular QDs. We can see that $|V_{{\rm inter}}|$ converges fast with increasing $L_{{\rm sc}}$, which means that the finite-size effects introduced by the method can be eliminated by using a large enough supercell. Figure \[fig:Vvc-Lsc\] also shows that $|V_{{\rm inter}}|$ converges faster with increasing $L_{{\rm sc}}$ for larger potential depth $V$, just as expected.
![Comparisons of $|V_{{\rm inter}}|$ of circular QDs calculated by the RSTB method and the EFM, in the absence of SOC. (a) $|V_{{\rm inter}}|$ vs $R$ ($L=10$, $V/t=0.005$); (b) $|V_{{\rm inter}}|$ vs $L$ ($R=80$, $V/t=0.005$); (c) $|V_{{\rm inter}}|$ vs $V$ ($R=80$, $L=0$ and 10). $L_{{\rm sc}}=240$ for all. All lengths are in the unit of $a$. \[fig:Vvc-cmp\]](cmpRSTB-EFM-b){width="15cm"}
$|V_{{\rm inter}}|$ calculated using the RSTB method with a large supercell ($L_{{\rm sc}}=240$) is compared with the results from the EFM in figure \[fig:Vvc-cmp\]. For the $|V_{{\rm inter}}|$ vs. $R$ dependence [\[]{}figure \[fig:Vvc-cmp\](a)[\]]{}, the $|V_{{\rm inter}}|$ vs. $L$ dependence [\[]{}figure \[fig:Vvc-cmp\](b)[\]]{}, and the $|V_{{\rm inter}}|$ vs. $V$ dependence [\[]{}figure \[fig:Vvc-cmp\](c)[\]]{}, the RSTB data, including the rapid local oscillations, agree quantitatively well with the EFM ones. The EFM results rely on the details of the first-principles wavefunctions of Bloch states from the ABINIT package, while the RSTB calculations are based on a few hopping matrix elements only. This agreement between the two entirely different approaches justifies both the RSTB method and the EFM.
![Dependence of $|V_{{\rm inter}}|$ on the central position of the circular shaped QD potential, obtained in RSTB. (a) $|V_{{\rm inter}}|$ vs $R$ for the potentials centered at Mo-site (red circle) and S-site (blue square) respectively. $L=10$, $V=0.0055$eV, $L_{{\rm sc}}=240$. (b) $|V_{{\rm inter}}|$ as a function of the central position of the potential in a unit cell. $R=50$, $L=0$, $V=0.01$eV, $L_{{\rm sc}}=180$. All lengths are in the unit of $a$. \[fig:Vvc-cell\]](Vinter-SVCSW-cell){width="14cm"}
In figure \[fig:Vvc-cell\], we analyze the dependence of the intervalley coupling strength on the central position of the confinement potential. Figure \[fig:Vvc-cell\](a) shows $|V_{{\rm inter}}|$ vs. $R$ curves of the same circular shaped potential centered at a Mo or S site. Consistent with the symmetry analysis in Section \[sec:Sym\], $|V_{{\rm inter}}|$ is three orders of magnitude larger for the Mo-centered case than for the S-centered one; the latter can be regarded as numerical error about zero. Figure \[fig:Vvc-cell\](b) plots $|V_{{\rm inter}}|$ as a function of the central position of the confinement potential. Clearly, $|V_{{\rm inter}}|$ decreases when the potential center moves away from the Mo site and eventually vanishes when the potential center is at the S site.
Intervalley coupling in noncircular QD potentials with vertical wall ($L=0$)
----------------------------------------------------------------------------
![Schematics of noncircular QD potentials with triangular (a), hexagonal (b), and square shapes (c). The centers of the QD potentials are at the Mo sites (blue dots). $R$ is the effective radius. The outer box illustrates the supercell with size $L_{{\rm sc}}$ ($L_{{\rm sc}}=12$ here). All lengths are in the unit of $a$. \[fig:nonCirPot\]](noncir-pot){width="15cm"}
QDs can be formed by lateral heterostructures between different monolayer TMDs. Currently such heterostructures are grown by chemical vapor deposition (CVD), where the shape of the confinement is triangular [@Huang_Xu_2014____Lateral]. The lateral size of the confinement in available heterostructures is several $\mu$m, still too large for QDs. Nevertheless, growth of monolayer TMD flakes with sizes ranging from tens of nm to tens of $\mu$m and with both triangular and hexagonal shapes has been reported by various groups [@Helveg_Besenbacher_2000_84_951__Atomic; @bollinger_one_dimensional_2001; @Zande_Hone_2013_12_554__Grains; @Peimyoo_Yu_2013_7_10985__Nonblinking; @Najmaei_Lou_2013_12_754__Vapour; @Cong_Yu_2014_2_131__Synthesis; @Wu_Xu_2013_7_2768__Vapor; @Gutierrez_Terrones_2013_13_3447__Extraordinary]. Smaller QDs formed by lateral heterostructures can thus be expected. In such heterostructures, the confinement potential is due to the band edge difference between the TMDs. Thus the potential well has a vertical wall at the QD boundary, corresponding to the $L=0$ limit discussed above. Here we use the RSTB method to investigate the intervalley coupling of these noncircular QDs. We study QDs with triangular, hexagonal, as well as square shapes for comparison.
As shown in figure \[fig:nonCirPot\], the triangular, hexagonal, and square QDs are defined by the confinement potentials $U_{{\rm T}}(\vr)$, $U_{{\rm H}}(\vr)$, and $U_{{\rm S}}(\vr)$ respectively, where $U_{{\rm T/H/S}}$ equals 0 within the QDs (yellow regions) and equals the constant $V$ outside (white regions). The 'radius' of the QDs is denoted as $R$ (see figure \[fig:nonCirPot\]). For the triangular and hexagonal QDs, we have chosen the orientation of the confinement potential such that the sides are all along the zigzag crystalline axes, corresponding to those realizable by the lateral heterostructures of different dichalcogenides. For the square QDs, two sides are along the zigzag and the others are along the armchair crystalline axes.
Figure \[fig:Vvc-R-THS\] plots the intervalley coupling as a function of the QD size, for three different potential depths: $V=0.01\,$eV, $V=0.1\,$eV and $V=0.2\,$eV. With increasing $R$, the intervalley coupling strength has an overall decreasing trend, but with rapid local oscillations. These features are similar to the circular QDs (cf. figure \[fig:Vvc-R\]). Interestingly, for the triangular and hexagonal QDs, the intervalley coupling is found to be much smaller for the larger potential jump $V$ at the QD boundary. In figure \[fig:Vvc-V-THS\], the dependence of the intervalley coupling on $V$ is investigated; all curves are non-monotonic. The non-monotonic dependence of the intervalley coupling strength is also found for the circular QDs when $L=0$ (cf. figure \[fig:Vvc-V\]). We note that in heterostructures formed between different monolayer dichalcogenides (e.g. MoSe$_{2}$ and WSe$_{2}$), the conduction and valence band edge discontinuities are found to be in the range of 0.2–0.4 eV [@Rivera_Xu_2014___1403.4985_Observation]. For this range of values of $V$, the intervalley coupling is negligible for the triangular and hexagonal QDs.
Comparing QDs of different shapes, the intervalley coupling in square QDs is orders of magnitude larger than that in the triangular and hexagonal QDs. Such a sharp difference is due to the relative orientation of the sides of the confinement potential to the crystalline axes. Inhomogeneous junctions along the zigzag (armchair) crystalline axes preserve the momentum along (perpendicular to) the lines connecting the two valleys in the momentum space. Thus, the zigzag junctions introduce the minimum intervalley coupling, while the armchair junctions cause the maximum intervalley coupling. For the triangular and hexagonal QDs, all sides are along the zigzag crystalline axes, while for square QDs two sides are along the armchair directions and thus the intervalley coupling is much larger. This is further verified by examining the change of the intervalley coupling when we rotate the QD confinement potential by an angle $\theta_{{\rm rot}}$ relative to the orientation defined in figure \[fig:nonCirPot\], as shown in figure \[fig:Vvc-R-THS-rot\]. For triangular and hexagonal QDs, we find the intervalley coupling increases with $\theta_{{\rm rot}}$ significantly, and reaches a maximum at $\theta_{{\rm rot}}=30^{\circ}$ where the intervalley coupling becomes comparable with that of the square QDs. Note that at $\theta_{{\rm rot}}=30^{\circ}$, the sides of the confinement potential of the triangular and hexagonal QDs are along the armchair crystalline axes [\[]{}cf. figure \[fig:Vvc-R-THS-rot\](a) and (b)[\]]{}.
![$|V_{{\rm inter}}|$ vs $R$ relations for triangular QDs (a), hexagonal QDs (b), and square QDs (c), calculated using RSTB, for $V=0.01$eV (red circle), 0.1 eV (green square), and 0.2 eV (blue triangle) respectively. Supercell size is $L_{{\rm sc}}=240$. All lengths are in the unit of $a$. \[fig:Vvc-R-THS\]](Vinter-R-THR-a){width="15cm"}
![$|V_{{\rm inter}}|$ vs $V$ relations for triangular (red), hexagonal (green), and square (blue) QDs, calculated using RSTB. $L_{{\rm sc}}=240$ is used. All lengths are in the unit of $a$. \[fig:Vvc-V-THS\]](Vinter-V-THR){width="8cm"}
![Size and orientation dependence of $|V_{{\rm inter}}|$ for triangular QDs (a), hexagonal QDs (b), and square QDs (c), calculated using RSTB. The orientation of the QD potentials relative to the lattice is defined by the rotation angle $\theta_{{\rm rot}}$. $\theta_{{\rm rot}}=0^{\circ}$ corresponds to the configurations shown in figure \[fig:nonCirPot\]. The orientations with $\theta_{{\rm rot}}=15^{\circ}$ and $\theta_{{\rm rot}}=30^{\circ}$ are shown in the right panels here. For QDs of each shape, $|V_{{\rm inter}}|$ vs $R$ relations are compared for the orientations $\theta_{{\rm rot}}=0^{\circ}$, $\theta_{{\rm rot}}=15^{\circ}$, and $\theta_{{\rm rot}}=30^{\circ}$. For square QDs, $\theta_{{\rm rot}}=0^{\circ}$ and $\theta_{{\rm rot}}=30^{\circ}$ correspond to the same configuration. $V=0.01$eV and $L_{{\rm sc}}=240$. All lengths are in the unit of $a$. \[fig:Vvc-R-THS-rot\]](Vinter-R-THR-b){width="15cm"}
![$|V_{{\rm inter}}|$ as a function of the central position of the potential in a unit cell for triangular QDs (a), hexagonal QDs (b), and square QDs (c), calculated using RSTB. $R=40$, $V=0.01$eV, and $L_{{\rm sc}}=120$ are used. Resolution of the maps is 27$\times$27. All lengths are in the unit of $a$, and the unit of energy is eV. \[fig:cell-THS\]](cell-THR){width="12cm"}
Figure \[fig:cell-THS\] shows the dependence of $|V_{{\rm inter}}|$ on the central position of the confinement potential in a unit cell for the three kinds of noncircular QDs. It is clear that $|V_{{\rm inter}}|$ strongly depends on the potential center. For triangular and hexagonal QDs with the $C_{3}$ rotational symmetry, the dependence is similar to the circular QDs [\[]{}cf. figure \[fig:Vvc-cell\](b)[\]]{}, i.e. the intervalley coupling is strongest if the potential is centered at a Mo site, but vanishes if the potential is centered at a S site. Such behavior is absent for the square QDs, which lack the $C_{3}$ rotational symmetry.
Intervalley coupling in smooth noncircular QD potential
-------------------------------------------------------
Finally, we examine square shaped QDs but with a smooth slope for the potential change on the boundary (i.e. finite $L$), which represents a general situation for QDs defined by patterned electrodes. The calculated intervalley coupling strength $|V_{{\rm inter}}|$ as a function of QD size $R$ is shown in figure \[fig:VvcR-RndSqu\]. The peak value of the intervalley coupling decreases nearly exponentially with the QD size. The dependence on the smoothness $L$ is similar to the circular shaped QDs (cf. figure \[fig:Vvc-L\]), i.e. $|V_{{\rm inter}}|$ drops fast, by several orders of magnitude, when $L$ increases from 0 to 5, and becomes largely insensitive to further increases of $L$. From figure \[fig:VvcR-RndSqu\], we can see that the $|V_{{\rm inter}}|$ curves as functions of $R$ for $L=10$, $L=20$, and $L=30$ are basically the same, except for the location of some fine dips.
![Intervalley coupling strength for square-shaped QDs. Potentials with smooth walls [\[]{}finite $L$, see the inset of (a) for the schematic illustration[\]]{} of different slopes are compared with the one with a vertical wall. (a) and (b) for the orientations $\theta_{{\rm rot}}=0^{\circ}$ and $\theta_{{\rm rot}}=15^{\circ}$ respectively. Red square symbol is for $L=$10, green upward triangle for $L=$20, blue downward triangle for $L=$ 30, and black circle for $L=0$. $V=0.02$ eV. $L_{{\rm sc}}=240$ for $R\in[50,90]$ and $L_{{\rm sc}}=360$ for $R\in[90,150]$. All lengths are in the unit of $a$, and the unit of energy is eV. \[fig:VvcR-RndSqu\]](Vinter-R-RndSqu-with-inset){width="14cm"}
Conclusions and discussions\[sec:Conclusions\]
==============================================
To conclude, we have investigated the intervalley coupling in QDs defined by confinement potentials of various shapes on an extended TMD monolayer. The numerical results obtained using two completely different approaches agree well with each other. For confinement potentials with the $C_{3}$ rotational symmetry, the intervalley coupling is maximized (zero) if the potential is centered at a M (X) site. Comparing smooth confinement potentials with sloping walls and sharp confinement potentials with vertical walls, the intervalley coupling in the latter is much stronger. When the length scales of the QDs vary, the intervalley coupling exhibits fast oscillations, where the maxima can be two orders of magnitude larger than the neighboring minima, and the envelope of the maxima has an overall nearly exponential decay with increasing QD size. For comparable parameters, the intervalley coupling can be smaller by several orders of magnitude in circular shaped QDs, and in triangular and hexagonal shaped QDs with all sides along the zigzag crystalline axes. For all QDs studied, the largest intervalley coupling is upper bounded by 0.1 meV, found for small QDs with diameters of 20 nm and with sharp confinement potentials (i.e. vertical walls).
The intervalley coupling in these monolayer TMD QDs is much smaller than that in graphene nanoribbon QDs [@Trauzettel_Burkard_2007_3_192__Spin] and in silicon QDs [@Saraiva_Koiller_2009_80_81305__Physical; @Koiller_Das_2004_70_115207__Shallow; @Goswami_Eriksson_2007_3_41__Controllable; @Friesen_Coppersmith_2010_81_115324__Theory; @Culcer_Das_2010_82_155312__Quantum; @Friesen_Coppersmith_2007_75_115318__Valley; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Boykin_Lee_2004_84_115__Valley]. In QDs formed in 2D crystals, intervalley coupling mainly occurs at the boundary where the translational invariance is lost; thus the coupling strength is proportional to the probability distribution of the electron at the QD boundary. In the TMD QDs studied here, the electron wavefunction has vanishing amplitude at the QD boundary [\[]{}cf. green shaded region in figure \[fig:ene\](b)[\]]{}. This is a characteristic of the QDs defined by confinement potentials on an otherwise extended crystalline monolayer, as opposed to the nanoribbon QDs [@Trauzettel_Burkard_2007_3_192__Spin]. The geometry here is also qualitatively different from the QDs formed in silicon inversion layers. In those silicon QDs, the two valleys are separated by a wavevector along the $z$-direction (perpendicular to the inversion layer), and their coupling is caused by the sharp confinement with a length scale of nm in the $z$-direction [@Saraiva_Koiller_2009_80_81305__Physical; @Koiller_Das_2004_70_115207__Shallow; @Goswami_Eriksson_2007_3_41__Controllable; @Friesen_Coppersmith_2010_81_115324__Theory; @Culcer_Das_2010_82_155312__Quantum; @Friesen_Coppersmith_2007_75_115318__Valley; @Nestoklon_Ivchenko_2006_73_235334__Spin; @Boykin_Lee_2004_84_115__Valley]. Such confinement strongly squeezes the wavefunction, giving rise to its large amplitude at the interface where valley hybridization occurs, and hence there is a stronger intervalley coupling of the order of meV. In contrast, here it is the much larger lateral size $R$ of the QDs that determines the wavefunction amplitude at the QD boundary. We can expect that intervalley coupling is generically small in QDs defined by confinement potentials on an otherwise extended monolayer.
Because of the very small intervalley coupling in the monolayer TMD QDs, valley hybridization is strongly quenched by the much stronger spin-valley coupling of the electron in monolayer TMDs. As a result, the QDs can well inherit the valley physics of the 2D bulk, such as the valley optical selection rules, making the valley pseudospin of a single confined electron a perfect candidate for a quantum bit. The sensitive dependence of the intervalley coupling strength on the central position and the lateral length scales of the confinement potentials further makes possible the tuning of the intervalley coupling by external controls. The qualitative conclusions here are also applicable to QDs formed by confinement potentials on other 2D materials with a finite gap.
The work was supported by the Croucher Foundation under the Croucher Innovation Award, the Research Grant Council of HKSAR under Grant No. HKU705513P and HKU9/CRF/13G (G.B.L., H.P., and W.Y.); the NSFC with Grant No. 11304014, the National Basic Research Program of China 973 Program with Grant No. 2013CB934500 and the Basic Research Funds of Beijing Institute of Technology with Grant No. 20131842001 and 20121842003 (G.B.L.); the MOST Project of China with Grants Nos. 2014CB920903 and 2011CBA00100, the NSFC with Grant Nos. 11174337 and 11225418, the Specialized Research Fund for the Doctoral Program of Higher Education of China with Grants No. 20121101110046 (Y.Y.).
The eigenvalue of Bloch function under $C_{3}$ rotation\[sec:C3rot\]
====================================================================
Here we give a derivation of $\gamma_{{\rm \alpha}}^{\tau}$, the eigenvalue of the Bloch function $\varphi_{\alpha}^{\tau}$ under the $C_{3}$ rotation. We first express $\varphi_{\alpha}^{\tau}$ as a linear combination of atomic orbitals $d_{\alpha}$ ($\alpha=$c, v), $$\varphi_{\alpha}^{\tau}(\vr)=\frac{1}{\sqrt{N}}\sum_{\vR}e^{i\tau K\cdot(\vR+\bm{\delta})}d_{\alpha}(\vr-\vR-\bm{\delta}),$$ in which $\vR$ is a lattice vector, $\bm{\delta}$ is the position of the Mo atom in the unit cell, $d_{{\rm c}}=d_{0}=d_{z^{2}}$, and $d_{{\rm v}}=d_{\tau2}=\frac{1}{\sqrt{2}}(d_{x^{2}-y^{2}}+i\tau d_{xy})$. According to the definition $\gamma_{\alpha}^{\tau}\equiv[C_{3}\varphi_{\alpha}^{\tau}(\vr)]/\varphi_{\alpha}^{\tau}(\vr)$ and using the orthogonality of $C_{3}$ ($\vk\cdot\vr=C_{3}\vk\cdot C_{3}\vr$), we have $$\begin{aligned}
C_{3}\varphi_{\alpha}^{\tau}(\vr) & = & \frac{1}{\sqrt{N}}\sum_{\vR}e^{i\tau K\cdot(\vR+\bm{\delta})}d_{\alpha}(C_{3}^{-1}\vr-\vR-\bm{\delta})=\frac{1}{\sqrt{N}}\sum_{\vR}e^{i\tau C_{3}K\cdot C_{3}(\vR+\bm{\delta})}d_{\alpha}(C_{3}^{-1}(\vr-C_{3}(\vR+\bm{\delta}))).\end{aligned}$$ Summing over all Mo positions $\{\vR+\bm{\delta}\}$ is equivalent to summing over all positions $\{C_{3}(\vR+\bm{\delta})\}$ when the rotation center is located at either a Mo or a S site (see figure \[fig:well\]). Accordingly, with the substitution $C_{3}(\vR+\bm{\delta})\rightarrow(\vR+\bm{\delta})$, the above equation becomes $$C_{3}\varphi_{\alpha}^{\tau}(\vr)=\frac{1}{\sqrt{N}}\sum_{\vR}e^{i\tau C_{3}K\cdot(\vR+\bm{\delta})}d_{\alpha}(C_{3}^{-1}(\vr-\vR-\bm{\delta})).\label{eq:C3phi1}$$ Because $C_{3}$ is an element of the wave-vector group of $\tau K$, we have $e^{i\tau C_{3}K\cdot\vR}=e^{i\tau K\cdot\vR}$ and consequently $$e^{i\tau C_{3}K\cdot(\vR+\bm{\delta})}=e^{i\tau K\cdot\vR}e^{i\tau C_{3}K\cdot\bm{\delta}}=e^{i\tau K\cdot\vR}e^{i\tau K\cdot C_{3}^{-1}\bm{\delta}}=e^{i\tau K\cdot(C_{3}^{-1}\bm{\delta}-\bm{\delta})}e^{i\tau K\cdot(\vR+\bm{\delta})}.\label{eq:etoC3K}$$ Defining $\gamma_{d_{\alpha}}\equiv[C_{3}d_{\alpha}(\vr)]/d_{\alpha}(\vr)=d_{\alpha}(C_{3}^{-1}\vr)/d_{\alpha}(\vr)$, we have $$d_{\alpha}(C_{3}^{-1}(\vr-\vR-\bm{\delta}))=\gamma_{d_{\alpha}}d_{\alpha}(\vr-\vR-\bm{\delta}).\label{eq:gda}$$ Plugging equations and into , we obtain $$C_{3}\varphi_{\alpha}^{\tau}(\vr)=\frac{1}{\sqrt{N}}\sum_{\vR}e^{i\tau K\cdot(C_{3}^{-1}\bm{\delta}-\bm{\delta})}e^{i\tau K\cdot(\vR+\bm{\delta})}\gamma_{d_{\alpha}}d_{\alpha}(\vr-\vR-\bm{\delta})=e^{i\tau K\cdot(C_{3}^{-1}\bm{\delta}-\bm{\delta})}\gamma_{d_{\alpha}}\varphi_{\alpha}^{\tau}(\vr)$$ and therefore $$\gamma_{\alpha}^{\tau}=e^{i\tau K\cdot(C_{3}^{-1}\bm{\delta}-\bm{\delta})}\gamma_{d_{\alpha}},\label{eq:gat}$$ in which the factor $\gamma_{d_{\alpha}}$ comes from the rotation of the atomic orbital about its own center and the factor $e^{i\tau K\cdot(C_{3}^{-1}\bm{\delta}-\bm{\delta})}$ comes from the change of the lattice phase of the Mo atom after the rotation. According to equation , when the rotation center is located at a Mo site [\[]{}figure \[fig:well\](a)[\]]{}, we have $\bm{\delta}=0$ and hence $\gamma_{\alpha}^{\tau}=\gamma_{d_{\alpha}}$ [\[]{}equation ; when the rotation center is located at a S site [\[]{}figure \[fig:well\](b)[\]]{}, we have $\bm{\delta}=(1,\frac{1}{\sqrt{3}})a$ and hence $\gamma_{\alpha}^{\tau}=e^{i\tau\frac{2\pi}{3}}\gamma_{d_{\alpha}}$ [\[]{}equation .
[^1]: [email protected]
|
---
abstract: 'We discuss the finite-size effects on the chiral phase transition in QCD. We employ the Nambu-Jona-Lasinio model and calculate the thermodynamic potential in the mean-field approximation. Finite-size effects on the thermodynamic potential are taken into account by employing the multiple reflection expansion. The critical temperature is lowered and the order of the phase transition is changed from first to second as the size of the system of interest is reduced.'
author:
- 'O. Kiriyama'
- 'T. Kodama'
- 'T. Koide'
title: 'Finite-size effects on the QCD phase diagram'
---
Introduction
============
The phase structure of QCD at high temperature $T$ and/or quark chemical potential $\mu$ has attracted a great deal of interest in cosmology, compact stars and heavy-ion collisions. During the last decades, significant advances have been made in our understanding of the phase structure of hot and/or dense quark matter. At the present time, it is widely accepted that the QCD vacuum undergoes a phase transition (into a chirally symmetric phase or a color-superconducting phase) at sufficiently high $T$ and $\mu$.
The studies of the phase structure of QCD so far have been exclusively devoted to homogeneous quark matter in bulk. However, the QGP phase created in relativistic heavy-ion collisions has a finite size, and it is not obvious that this size is large enough to justify taking the thermodynamic limit. If the size of the system of interest is not large enough, we need to take into account the deviations from the thermodynamic-limit calculations. For instance, the fluctuations of the order parameter induced by the finite-size effects would lower the critical temperature and change the order of the phase transition.
There are many works concerning finite-size effects in hadron physics: lattice QCD calculations [@Fuku1; @Fuku2; @Bail; @Gopie; @QCDSF; @Koma; @Orht; @SS], finite-size scaling analyses [@FSScailing1; @FSScailing2; @FSScailing3], the plasmon [@True], baryon number fluctuations [@Singh], and so on [@Tho; @Davi; @Mus; @Rischke; @Kim; @Zak1; @Zak2; @Sto; @Amore1; @He; @Kogut; @Wong; @Fraga]. In particular, the finite-size effects on the QCD phase diagram are studied in [@He] and [@Kogut]. In the former, however, the discussion is limited to (2+1)-dimensional systems, and in the latter, the phase transition discussed is not related directly to the manifestation of chiral symmetry.
In this paper, we discuss the finite-size effects on the chiral phase transition. For the purpose of illustrating our approach, we adopt the two-flavor Nambu–Jona-Lasinio (NJL) model as an effective model of QCD. The finite-size effects are taken into account by using the multiple reflection expansion (MRE) [@BB; @BJ; @Mad3]. Then, we study the size dependence of the thermodynamic potential in the mean-field approximation.
In the MRE approximation, the finite-size effects are included in terms of the modified density of states. The MRE has been used to calculate the thermodynamic quantities (such as the energy per baryon and the free energy) of finite lumps of quark matter [@Mad4]. As far as overall structures are concerned, the results are in good agreement with those of the mode-filling calculations with the MIT bag wave functions. We emphasize that our model calculation differs from the finite-size scaling analyses (in which finite-size effects act as an external field; more specifically, mass) and the studies in a finite box in that the present model is essentially based on the MIT bag wave functions. However, it should be noted that the relevance of the MRE approximation is still inconclusive when it is applied to nonperturbative calculations [@KH; @K1; @YHT; @K2].
This paper is organized as follows. In section 2, we apply the MRE approximation to the NJL model and calculate the $T$-$\mu$ phase diagram. However, the result is unacceptable because the order of chiral phase transition is changed from second to first. In section 3, we reconsider the application of the MRE to the chiral phase transition. Summary and concluding remarks are given in section 4.
Simple application of the MRE
=========================
We consider up and down quarks to be massless and start with the following $\mathrm{SU}(2)_L\times\mathrm{SU}(2)_R$ symmetric NJL Lagrangian, $$\begin{aligned}
\mathcal{L}=\bar{q}i\gamma^{\mu}\partial_{\mu}q +G\left[(\bar{q}q)^2+(\bar{q}i\gamma_5\vec{\tau}q)^2\right],\end{aligned}$$ where $q$ denotes the quark field with two flavors $(N_f=2)$ and three colors $(N_c=3)$ and $G$ is the coupling constant. The Pauli matrices $\vec{\tau}$ act in the flavor space. The ultraviolet cutoff $\Lambda_{\mbox{\scriptsize UV}} = 0.65$ GeV and the coupling constant $G=5.01~\mathrm{GeV}^{-2}$ are determined so as to reproduce the pion decay constant $f_{\pi}=93$ MeV and the chiral condensate $\langle \bar{q}q \rangle = (-250~\mathrm{MeV})^3$ in the chiral limit [@Kle; @HK].
In the mean-field approximation, the thermodynamic potential per unit volume $\omega$ is given by $$\begin{aligned}
\omega = \frac{m^2}{4G}-\nu\int^{\Lambda_{\mbox{\scriptsize UV}}}_0
dk\rho(k)\left\{ E_k +\frac{1}{\beta}\ln\left(\left[1+e^{-\beta(E_k + \mu)}\right] \left[1 + e^{-\beta(E_k - \mu)}\right]\right) \right\}, \label{eqn:TDP1}\end{aligned}$$ where $\nu=2N_cN_f$, $\beta=1/T$ and $E_k = \sqrt{k^2 + m^2}$ with $m=-2G\langle\bar{q}q\rangle$ being the dynamically generated quark mass. The density of states $\rho(k)$ is given by $$\begin{aligned}
\rho(k) = \frac{k^2}{2\pi^2}.\end{aligned}$$
Now we consider quark matter confined in a given sphere that is embedded in the hadronic vacuum, and we do not impose the pressure-balance condition between the interior of the sphere and the external vacuum. We rather fix the radius of the sphere and examine the phase diagram. To incorporate finite-size effects into the thermodynamic potential, we replace the density of states $\rho (k)$ with the MRE density of states $\rho_{\mbox{\scriptsize
MRE}}(k,m,R)$, $$\begin{aligned}
\rho_{\mbox{\scriptsize MRE}}(k,m,R) = \frac{k^2}{2\pi^2} \left[1 + \frac{6\pi^2}{k R}f_S\left(\frac{k}{m}\right) + \frac{12 \pi^2}{(kR)^2} f_C\left(\frac{k}{m}\right) \right], \label{eqn:MRE}\end{aligned}$$ where $R$ is the radius of the sphere and the functions, $$\begin{aligned}
f_S (x) &=& -\frac{1}{8\pi}\left( 1 - \frac{2}{\pi}\arctan x \right), \\
f_C (x) &=& \frac{1}{12\pi^2}\left[ 1-\frac{3}{2}x\left( \frac{\pi}{2} -
\arctan x \right) \right],\end{aligned}$$ correspond respectively to the surface and the curvature contributions to the fermionic density of states. The parameter $m$ in Eq. (\[eqn:MRE\]) should be identified with the mass of quarks in the interior of the sphere. It should also be noted that the precise expression of the function $f_C(x)$ has not been derived within the MRE framework. Here we use the assumption proposed in Ref. [@Mad3]. We show the momentum dependence of the MRE density of states in Fig. \[fig1\]. One can see that the MRE density of states is suppressed by the finite-size effects and the suppression is more pronounced at low momenta. It can also be seen that the MRE density of states takes unphysical negative values at low momenta because of the neglect of the higher order terms in $1/R$. Thus, we shall introduce the infrared cutoff $\Lambda_{\mbox{\scriptsize IR}}$ that is defined by the largest zero of $\rho_{\mbox{\scriptsize MRE}}(\Lambda_{\mbox{\scriptsize IR}},m,R)$, as is shown in Fig. \[fig1\].
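The MRE density of states of Eq. (\[eqn:MRE\]) and the infrared cutoff defined above are straightforward to evaluate numerically. The following sketch (illustrative only, not the code used for the figures; momenta and masses in GeV, radii converted with $\hbar c=0.1973$ GeV fm) implements the density of states and locates $\Lambda_{\mbox{\scriptsize IR}}$ as its largest zero.

```python
# Sketch of the MRE density of states and of the infrared cutoff Lambda_IR
# (largest zero of rho_MRE).  Illustrative only; m > 0 is assumed here
# (for m -> 0 use the analytic limits f_S -> 0, f_C -> -1/(24 pi^2) quoted later).
import numpy as np
from scipy.optimize import brentq

hbarc = 0.1973  # GeV fm

def f_S(x):
    return -(1.0 - (2.0/np.pi)*np.arctan(x)) / (8.0*np.pi)

def f_C(x):
    return (1.0 - 1.5*x*(np.pi/2 - np.arctan(x))) / (12.0*np.pi**2)

def rho_mre(k, m, R_fm):
    """MRE density of states; k, m in GeV, R_fm in fm."""
    R = R_fm / hbarc                 # radius in GeV^-1
    x = k / m
    return (k**2/(2*np.pi**2)) * (1.0 + (6*np.pi**2/(k*R))*f_S(x)
                                      + (12*np.pi**2/(k*R)**2)*f_C(x))

def lambda_ir(m, R_fm, k_max=0.65, n_scan=20000):
    """Largest zero of rho_MRE below the UV cutoff; returns 0 if rho > 0 everywhere."""
    ks = np.linspace(1e-4, k_max, n_scan)
    vals = np.array([rho_mre(k, m, R_fm) for k in ks])
    crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(crossings) == 0:
        return 0.0
    i = crossings[-1]
    return brentq(lambda k: rho_mre(k, m, R_fm), ks[i], ks[i+1])
```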
Thus, within the MRE approximation, the thermodynamic potential including the finite-size effects is expressed as follows, $$\begin{aligned}
\omega = \frac{m^2}{4G}-\nu\int^{\Lambda_{\mbox{\scriptsize UV}}}_{\Lambda_{\mbox{\scriptsize IR}}} dk\rho_{\mbox{\scriptsize MRE}}\left\{E_k + \frac{1}{\beta}\ln\left(\left[1+e^{-\beta(E_k + \mu)}\right] \left[1 + e^{-\beta(E_k - \mu)}\right]\right) \right\}. \label{eqn:TDP2}\end{aligned}$$
To obtain the phase diagram we minimize Eq. (\[eqn:TDP2\]) with respect to $m$. When the value of $m$ at the minimum is zero, the system is in the chirally symmetric phase (QGP), and if the value of $m$ is not zero, the system is in the broken symmetry phase (hadronic). We then find the phase diagram given in Fig. \[fig2\]. The outermost curve corresponds to $R=\infty $ (thermodynamical limit). In this limit our approach of course recovers the known results [@AY]. The chiral phase transition is second order at small chemical potential and, as the chemical potential increases, the order of the phase transition changes from second to first at the tricritical point. In this curve the parts corresponding to the second order and the first order transitions are indicated by the dotted and solid lines, respectively.
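For readers who want to reproduce the minimization, the sketch below (again illustrative, not the authors' code) evaluates the thermodynamic potential by quadrature and scans the dynamical mass for its global minimum; the density of states is passed in as a function, so either the bulk form $k^{2}/2\pi^{2}$ shown here or the MRE form of the previous sketch can be used, together with the corresponding infrared cutoff.

```python
# Mean-field minimization of the thermodynamic potential.  Lambda_UV and G are
# the values quoted in the text; everything is in GeV.  Replace rho_bulk and
# k_ir by rho_mre(..) and lambda_ir(..) from the previous sketch to include
# the finite-size effects.
import numpy as np
from scipy.integrate import quad

LUV, G, NU = 0.65, 5.01, 12.0            # UV cutoff (GeV), coupling (GeV^-2), 2 Nc Nf

def rho_bulk(k):
    return k**2 / (2.0*np.pi**2)

def omega(m, T, mu, rho=rho_bulk, k_ir=0.0):
    beta = 1.0 / T
    def integrand(k):
        E = np.hypot(k, m)
        therm = np.log1p(np.exp(-beta*(E + mu))) + np.log1p(np.exp(-beta*(E - mu)))
        return rho(k) * (E + therm/beta)
    val, _ = quad(integrand, k_ir, LUV, limit=200)
    return m*m/(4.0*G) - NU*val

def dynamical_mass(T, mu, rho=rho_bulk, k_ir=0.0,
                   m_grid=np.linspace(0.0, 0.6, 121)):
    """Global minimum of omega over m; m = 0 signals the chirally symmetric phase."""
    vals = [omega(m, T, mu, rho, k_ir) for m in m_grid]
    return m_grid[int(np.argmin(vals))]

print(dynamical_mass(T=0.05, mu=0.0))   # deep in the broken phase: m well above zero
print(dynamical_mass(T=0.30, mu=0.0))   # well above the transition: m close to zero
```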
To see the effect of finite size in this approach, we show the critical temperature lines for $R=$ $100$, $50$, and $30$ fm (from outside to inside). One can see that the critical temperatures are lowered as $R$ is decreased. This behavior is consistent with the enhancement of large fluctuations induced by the finite-size effects. However, we note that the tricritical point which exists at $R=\infty $ disappears for all finite values of $R.$ At any value of the chemical potential, the phase transition becomes first order and the second order phase transition disappears. This is because, as we will see below, the MRE cannot describe the second order phase transition.
These results obtained by the simple application of the MRE to the thermodynamic potential are physically unreasonable for the following reasons. First, the reduction of the critical temperature seems to be too strong. Second, and more critical from a physical point of view, is the disappearance of the second order phase transition. In general, large fluctuations of the order parameter could change the order of the phase transition, for instance, from first to second. Therefore, it is unnatural that the second order phase transition is converted into a first order one. This unphysical behavior indicates the necessity of some modification of the present simple application of the MRE to the NJL model.
To clarify the problem mentioned above, let us take a closer look on the self-consistency condition (SCC) including the MRE density of states. The derivative of $\omega $ with respect to $m$ is given by $$\begin{aligned}
\frac{\partial \omega }{\partial m} &=&\frac{m}{2G}-\nu \int_{\Lambda _{\mbox{\scriptsize IR}}}^{\Lambda _{\mbox{\scriptsize UV}}}dk\frac{\partial
\rho _{\mbox{\scriptsize MRE}}}{\partial m}\left\{ E_{k}+\frac{1}{\beta }\ln \left(\left[ 1+e^{-\beta (E_{k}+\mu )}\right] \left[ 1+e^{-\beta (E_{k}-\mu )}\right]\right) \right\} \nonumber \\
&&-\nu \int_{\Lambda _{\mbox{\scriptsize IR}}}^{\Lambda _{\mbox{\scriptsize
UV}}}dk\rho _{\mbox{\scriptsize MRE}}\frac{m}{E_{k}}\left[ 1-\frac{1}{e^{\beta (E_{k}+\mu )}+1}-\frac{1}{e^{\beta (E_{k}-\mu )}+1}\right]
\label{eqn:mscc}\end{aligned}$$\[the term proportional to $\partial \Lambda _{\mbox{\scriptsize IR}}/\partial m$ vanishes by virtue of ${\rho_{{\mbox{\scriptsize MRE}}}}(\Lambda _{\mbox{\scriptsize IR}},m,R)=0$\]. For the second order phase transition to exist, it is necessary that this derivative vanishes at $m=0.$ However, as has been pointed out in Ref. [@KH], $m=0$ is not the solution to the SCC because of $$\lim_{m\rightarrow 0}\frac{\partial \rho _{\mbox{\scriptsize MRE}}}{\partial
m}<0,$$ so that $$\left. \frac{\partial \omega }{\partial m}\right\vert _{m=0}>0$$ except for $R=\infty .$ That is, the thermodynamic potential has always positive gradient at $m=0$, hence the thermodynamic potential (\[eqn:TDP2\]) cannot describe the second order phase transition. In Fig. \[fig3\], the thermodynamic potential at $(T,\mu )=(100,0)$ MeV is plotted as a function of $m$ for $R=\infty $, $20$ and $10$ fm.
Modification of the density of states
=====================================
In the preceding section, we examined the $T$-$\mu$ phase diagram, looking for the minimum of the thermodynamic potential. This sort of application of the MRE density of states has been employed in Refs. [@KH; @YHT]. The second order phase transition suddenly disappears as soon as the finite-size effects are taken into account. Thus, the phase diagram at finite size does not converge to that at infinite size even if the radius $R$ is increased. It follows that we cannot take the thermodynamic limit in the MRE approximation. Moreover, fluctuations due to the finite-size effects are expected to change the order of the phase transition from first to second. Hence, the way the finite-size effects are taken into account in the MRE approximation employed in the preceding section is unacceptable. In this section, we reconsider the application of the MRE to resolve this discrepancy.
The reason why the second order chiral phase transition disappeared in the previous section is the nonexistence of the trivial solution to the SCC. The disappearance is due to the mass dependence contained in the MRE density of states. As is shown at the end of the preceding section, the SCC does not have the trivial solution $m=0$ because of this mass dependence.
However, there can be another scenario for the application of the MRE. According to the paper by Balian and Bloch [@BB], the parameter contained in the MRE density of states is not the mass of the particle inside the cavity but a parameter which characterizes the penetration into the hadronic environment of the wave function of the quark matter confined in a finite volume. In the following, we impose specific boundary conditions on the quarks and study the cases in which the MRE density of states does not depend on the dynamical quark mass. In this scenario, the mass parameter contained in the MRE density of states is replaced by the parameter $\kappa$, which is generally independent of the dynamical quark mass. Then, the SCC is given by $$\begin{aligned}
\frac{m}{2G} = \nu\int^{\Lambda_{\mbox{\scriptsize UV}}}_{\Lambda_{\mbox{\scriptsize IR}}}dk \rho_{\mbox{\scriptsize MRE}}(k,\kappa,R) \frac{m}{E_k} \left[1-\frac{1}{e^{\beta(E_k + \mu)}+1} -\frac{1}{e^{\beta(E_k -
\mu)}+1} \right].\end{aligned}$$ The derivative term of the MRE density of states is now absent, and the SCC obviously has the trivial solution $m=0$.
For instance, let us consider the Dirichlet boundary condition by setting $\kappa =\infty $[@BB]. Then, we obtain $$\begin{aligned}
\lim_{\kappa \rightarrow \infty }f_{S}(k/\kappa ) &=&-\frac{1}{8\pi }, \\
\lim_{\kappa \rightarrow \infty }f_{C}(k/\kappa ) &=&\frac{1}{12\pi ^{2}},\end{aligned}$$and the infrared cutoff is given by $$\Lambda _{\mbox{\scriptsize IR}}=\frac{3\pi +\sqrt{9\pi ^{2}-64}}{8R}\simeq
\frac{1.8}{R}.$$
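For completeness, this value follows directly from the vanishing of the limiting MRE density of states, $$1-\frac{3\pi }{4kR}+\frac{1}{(kR)^{2}}=0
\;\;\Longrightarrow\;\;
(kR)^{2}-\frac{3\pi }{4}\,kR+1=0 ,$$ whose larger root is $kR=\left(3\pi +\sqrt{9\pi ^{2}-64}\right)/8\simeq 1.8$.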
In Fig. \[fig4\], we show the corresponding phase diagram. The solid and dotted lines represent the critical lines of the first order and second order phase transitions, respectively. The curves correspond to (a) $R=\infty $, (b) $20$, (c) $10$ and (d) $5.7$ fm from the outermost one, respectively. Compared to the preceding result, the finite-size effects are suppressed.
The dashed line represents the trajectory of the tricritical point as a function of $R$. One can easily see that the temperature and the chemical potential of the tricritical point are reduced as $R$ is decreased. Finally, the tricritical point disappears at $R=12$ fm and hence the region of the first order phase transition vanishes. It should be noted that the phase diagram calculated with the MRE density of states converges to the infinite-size one in the limit $R\longrightarrow \infty$, and that the behavior is consistent with the enhancement of fluctuations due to the finite-size effects. Thus, this approach seems to be more plausible than the preceding approximation.
Note that even the second order phase transition disappears for radii smaller than $R=5.7$ fm. This is because of the existence of the infrared cutoff; the phase transition would be recovered at smaller $R$ by taking into account the higher order terms of the MRE density of states.
If this boundary condition is applicable to the real physical situation created in relativistic heavy-ion collisions, then the equation of state of the matter changes with the expansion of the system. At the early stage of creation of the hot matter at the center, the system size is of the order of $10$ fm and the critical temperature at $\mu =0$ is around $130$ MeV, but after the system expands and reaches the hadronic freeze-out stage, the system size is much larger and the critical temperature almost recovers its thermodynamic-limit value. Therefore, the hydrodynamical description of such a situation requires an equation of state which reflects the above finite-size effect.
The physical environment consistent with the MIT bag model is considered to be realized by setting the Neumann boundary condition, $\kappa=0$, $$\begin{aligned}
\lim_{\kappa \rightarrow 0}f_S (k/\kappa) &=& 0, \\
\lim_{\kappa \rightarrow 0}f_C (k/\kappa) &=& -\frac{1}{24\pi^2},\end{aligned}$$ and the infrared cutoff is $$\begin{aligned}
\Lambda_{\mbox{\scriptsize IR}} = \frac{1}{\sqrt{2}R} .\end{aligned}$$
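This value likewise follows from the vanishing of the limiting density of states, $1-\frac{1}{2(kR)^{2}}=0$, i.e. $kR=1/\sqrt{2}$, since the surface term drops out in this limit.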
In Fig. \[fig5\], we show the corresponding phase diagram. The lines correspond to the critical lines for the cases of (a) $R=\infty$, (b) $20$, (c) $10$, (d) $2$ and (e) $0.9$ fm from the outermost one, respectively. The finite-size effects are further suppressed because the surface term vanishes for this boundary condition, and hence the difference between the critical lines for $R=\infty$ and $R=20$ fm, visible in Fig. \[fig4\], is negligible here. A sizable reduction of the critical temperature is observed only for sizes smaller than $R=2$ fm. The trajectory of the tricritical point, denoted by the dashed line, is steeper than in the case of the Dirichlet boundary condition. For this boundary condition, the tricritical point vanishes at $R=1.2$ fm and the second order phase transition disappears for radii smaller than $R=0.9$ fm.
Contrary to the previous case, results from this boundary condition indicate that the finite size system with $R=20$ fm is large enough to take the thermodynamic limit. In this case, it will be reasonable to apply the usual equation of state which is given by the thermodynamical limit in hydrodynamical models to describe the heavy-ion collision processes.
So far, we have discussed the behavior of the thermodynamic potential. However, it is not easy to see how the second order phase transition is affected by the finite-size effects. For the purpose of looking at characteristic changes of the second order transition, we calculate the specific heat defined by $$\begin{aligned}
C &=& -T\left(\frac{\partial^2 \Omega}{\partial T^2}\right)_{\mu}.\end{aligned}$$ Note that the specific heat shows discontinuity at the critical point of the second order phase transition.
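In practice, the discontinuity can be located by differentiating the minimized thermodynamic potential numerically with respect to $T$. A hedged sketch is given below; `omega` and `dynamical_mass` are the placeholder routines of the earlier illustration (our own names, not the authors' code).

```python
# Central-difference evaluation of C = -T d^2(Omega)/dT^2 from the minimized
# thermodynamic potential.  omega_min is any callable returning min_m omega(m, T, mu),
# e.g. built from the omega() and dynamical_mass() of the earlier sketch.
import numpy as np

def specific_heat(omega_min, T, dT=1e-3):
    d2 = (omega_min(T + dT) - 2.0*omega_min(T) + omega_min(T - dT)) / dT**2
    return -T * d2

# Example usage with the earlier sketch, at mu = 0.1 GeV:
# C = specific_heat(lambda T: omega(dynamical_mass(T, 0.1), T, 0.1), T=0.15)
```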
Figures 6 and 7 show the specific heat of the systems for the cases of $\kappa = 0$ and $\kappa = \infty$, respectively. Here, we chose $\mu=0.1$ GeV for a common quark chemical potential and examined the cases of $R=100$ fm and $R=10$ fm. In both cases, the specific heats jump at the corresponding critical temperatures. The jump of the specific heat still exists after including the finite-size effects. The result indicates that the second order phase transitions remain second order. However, the gap of the specific heat is decreased by the finite-size effects.
summary and concluding remarks
==============================
In this paper, we have discussed the finite-size effects on the chiral phase transition in QCD in the context of the two-flavor Nambu-Jona-Lasinio model in the mean-field approximation. To incorporate finite-size effects into the thermodynamic potential, we employed the multiple reflection expansion (MRE), where finite-size effects are included through a modification of the density of states.
We applied the MRE approximation in two different ways. One is that the mass parameter contained in the MRE density of states is identified with the dynamical quark mass, as is done in [@KH; @YHT]. In this case it is shown that only the first order phase transition can take place for any finite size and the critical point disappears.
The other is that we identify the mass parameter as the inverse of the logarithmic derivative of the wave function at the surface, following the discussion by Balian and Bloch [@BB]. This corresponds to the situation in which our system is confined in a finite volume by an external potential and the boundary condition is fixed independently of the order parameter of the system. Then, the order of the chiral phase transition changes from second to first as the chemical potential increases. The temperature and the chemical potential of the critical point become smaller as the system size decreases. This behavior is consistent with the fact that fluctuations are enhanced by the finite-size effects. However, quantitatively, we found that the two extreme boundary conditions lead to substantially different results. For the Dirichlet boundary condition, the finite-size effect is still large. If this is the case, the critical temperature changes from $T_{c} \sim 130$ MeV to $T_{c} \sim 180$ MeV at zero chemical potential as the size of the system expands from $10$ fm to $50$ fm. Such effects should be reflected in the hydrodynamical collective observables, e.g., the $v_{2}$. On the other hand, for the Neumann boundary condition, the finite-size effects are not so visible and the equation of state is almost the same as that obtained in the thermodynamic limit.
It should be noted that the MRE calculation will lose its validity when the size of our system is very small, since we have ignored the higher order terms of the MRE approximation. Moreover, the only finite-size effect considered in the MRE approximation is the reduction of the density of states. However, in general, we can expect other influences due to the finite-size effects; for example, the energy eigenvalues of the particles will also be affected.
We calculated the thermodynamic potential in the mean-field approximation, although it is known that large fluctuations near phase transitions invalidate the mean-field approximation. However, the QCD critical temperature in the mean-field approximation qualitatively coincides with that of the lattice QCD calculations. Thus, the results obtained in this work should still be qualitatively reliable, unless the fluctuations are extraordinarily enhanced by the finite-size effects.
Finally, we comment on the outlook for future studies. In this paper, we studied the static behavior of chiral symmetry. However, the finite-size effects would also affect the dynamical behaviors. For instance, the lifetime of fluctuations near phase transitions increases because of the critical slowing down (CSD) [@Koide1; @Koide2]. The CSD is due to the large fluctuations, and hence the finite-size effects will enhance the CSD.
The authors thank C. E. Aguiar and T. H. Elze for fruitful discussions and comments. The authors acknowledge the financial support of FAPESP, FAPERJ, CNPq and CAPES/PROBRAL.
[99]{} M. Fukugita, H. Mino, M. Okawa, and A. Ukawa, Phys. Rev. Lett. **65**, 816 (1990).
S. Aoki, et al., Phys. Rev. D**68**, 054502 (2003).
G. S. Bali and K. Schilling, Phys. Rev. D **46**, 2636 (1992).
A. Gopie and M. C. Ogilvie, Phys. Rev. D**59**, 034009 (1999).
A. A. Khan et al., Nucl. Phys. B **689**, 175 (2004).
Y. Koma and M. Koma, Nucl. Phys. B **713**, 575 (2005).
B. Orht, T. Lippert, and K. Schilling, Phys. Rev. D **72**, 014503 (2005).
K. Sasaki and S. Sasaki, Phys. Rev. D**72**, 034502 (2005).
H. Meyer-Ortmanns, Rev. Mod. Phys. **68**, 473 (1996).
J. Engels, S. Holtmann, T. Mendes, and T. Schulze, Phys. Lett. B **514**, 299 (2001).
P. H. Damgaard, P. Hernandez, K. Jansen, M. Laine, and L. Lellouch, Nucl. Phys. B **656**, 226 (2003).
T. L. Trueman, Phys. Rev. D**45**, 2125 (1992).
C. P. Singh, B. K. Patra, and S. Uddin, Phys. Rev. D **49**, 4023 (1994).
T.-H. Elze and W. Greiner, Phys. Lett. B **179**, 385 (1986).
N. J. Davidson, R. M. Quick, H. G. Miller, and A. Plastino, Phys. Lett. B**264**, 243 (1991).
A. Ansari and M. G. Mustafa, Nucl. Phys. A **539**, 752 (1992).
B. Q. Ba, Q. R. Zhang, D. H. Rischke, and W. Greiner, Phys. Lett. B **315**, 29 (1993).
D. K. Kim, Y. D. Han, and I. G. Koh, Phys. Rev. D **49**, 6943 (1994).
B. G. Zakharov, JETP letters, **65**, 615 (1997).
B. G. Zakharov, JETP letters, **80**, 67 (2004).
E. E. Zabrodin, L. V. Bravina, H. Stöcker, and W. Greiner, Phys. Rev. C **59**, 894 (1999).
P. Amore, M. C. Birse, J. A. McGovern, and N. R. Walet, Phys. Rev. D **65**, 074005 (2002).
Y. B. He, W. Q. Chao, C. S. Gao, and X. Q. Li, Phys. Rev. C **54**, 857 (1996).
J. B. Kogut and C. G. Strouthos, Phys. Rev. D **63**, 054502 (2001).
R.C. Wang and C-Y. Wang, Phys. Rev. D**38**, 348 (1988).
E. Fraga and R. Venugopalan, Physica A**345**, 121 (2004).
R. Balian and C. Bloch, Ann. Phys. (N.Y.) **60**, 401 (1970).
M.S. Berger and R.L. Jaffe, Phys. Rev. C **35**, 213 (1987); **44**, 566(E) (1991).
J. Madsen, Phys. Rev. D **50**, 3328 (1994).
See, for example, J. Madsen, astro-ph/9809032.
O. Kiriyama and A. Hosaka, Phys. Rev. D **67**, 085010 (2003).
O. Kiriyama, hep-ph/0401075 (to be published in Int. J. Mod. Phys. A).
S. Yasui, A. Hosaka, and H. Toki, Phys. Rev. D **71**, 4009 (2005).
O. Kiriyama, Phys. Rev. D **72**, 054009 (2005).
M. Asakawa and K. Yazaki, Nucl. Phys. A **504**, 668 (1989).
S. P. Klevansky, Rev. Mod. Phys. **64**, 649 (1992).
T. Hatsuda and T. Kunihiro, Phys. Rep. **247**, 221 (1994).
T. Koide and M. Maruyama, hep-ph/0308025.
T. Koide and M. Maruyama, Nucl. Phys. A**742**, 95 (2004).
|
---
abstract: |
We construct an effective action for Polyakov loops using the eigenvalues of the Polyakov loops as the fundamental variables. We assume $%
Z(N)$ symmetry in the confined phase, a finite difference in energy densities between the confined and deconfined phases as $T\rightarrow 0$, and a smooth connection to perturbation theory for large $T$. The low-temperature phase consists of $N-1$ independent fields fluctuating around an explicitly $Z(N)$ symmetric background. In the low-temperature phase, the effective action yields non-zero string tensions for all representations with non-trivial $N$-ality. Mixing occurs naturally between representations of the same $N$-ality. Sine-law scaling emerges as a special case, associated with nearest-neighbor interactions between Polyakov loop eigenvalues.
author:
- 'Peter N. Meisinger'
- 'Michael C. Ogilvie'
title: 'Polyakov Loop Models, Z(N) Symmetry, and Sine-Law Scaling'
---
Effective actions for the Polyakov loop are directly relevant to the phase diagram and equation of state for QCD and related theories [@Pisarski:2000eq; @Dumitru:2001xa; @Meisinger:2001cq; @Dumitru:2003hp; @Meisinger:2003id]. There is also good reason to believe that Polyakov loop effects play an important role in chiral symmetry restoration [@Gocksch:1984yk; @Dumitru:2000in; @Fukushima:2002ew; @Fukushima:2003fm; @Mocsy:2003qw]. Furthermore, there may be a close relationship between the correct effective action and the underlying mechanisms of confinement [@Meisinger:1997jt; @Meisinger:2002ji; @Kogan:2002yr; @Mocsy:2003tr; @Mocsy:2003un].
In this letter, we construct a general effective action for the Polyakov loop, making as few assumptions as possible. Our goal is a comprehensive model which describes both the confined and deconfined phases of $SU(N)$ pure gauge theories at finite temperature, as well as the properties of the deconfining phase transition. We will describe below the general form of such an effective action, using the Polyakov loop eigenvalues as natural variables. Effective actions of this type have previously been constructed for QCD at high temperature [@Gross:1980br; @Weiss:1980rj; @Weiss:1981ev], QCD in two dimensions[@Minahan:1993np; @Polychronakos:1999sx], and for lattice gauge theories at strong coupling[@Polonyi:1982wz; @Ogilvie:1983ss; @Green:1983sd; @Drouffe:1984hb].
The Polyakov loop $P(x)$ is the natural order parameter for the deconfinement transition. It is defined as a path-ordered exponential $P(x)=\emph{P}\exp \left[ i\int_{0}^{\beta }d\tau
A_{0}\left( \tau ,x\right) \right] $ where $x$ is a $d$-dimensional spatial vector and $\beta$ is the inverse temperature. The confined phase has a $%
Z(N)$ symmetry that implies $\left\langle Tr_{F}\,\,P^{k}\left( x\right)
\right\rangle =0$ for all $k$ not divisible by $N$. For these powers of the Wilson line, the asymptotic behavior $$\left\langle Tr_{F}P^{k}\left( x\right) Tr_{F}P^{+k}\left( y\right)
\right\rangle \propto \exp \left[ -\beta \sigma _{k}\left| x-y\right|
\right]$$ is observed in lattice simulations as $\left| x-y\right| \rightarrow \infty $. The string tension $\sigma _{k}$ is in general temperature-dependent. Similar behavior is also seen for the two-point functions associated with irreducible representations of the gauge group $\left\langle Tr_{R}P\left(
x\right) Tr_{R}P^{+}\left( y\right) \right\rangle $. Every irreducible representation $R$ has an associated $N$-ality $k_{R}$ such that $%
Tr_{R}P\rightarrow z^{k_{R}}Tr_{R}P$ under a global $Z(N)$ transformation, with $k_{R}\in \{0,..,N-1\}$ and $z\in Z(N)$. Any representation with $k_{R}\neq 0$ gives an operator $Tr_{R}P$ which is an order parameter for the spontaneous breaking of the global $Z(N)$ symmetry.
It is widely held that the asymptotic string tension depends only on the $N$-ality of the representation. In fact, lattice data from simulations of four-dimensional $SU(3)$ gauge theory do not yet show this behavior [@Deldar:1999vi; @Bali:2000un]. Instead, the string tension $\sigma _{R}$ associated with a representation $R$ scales approximately as $$\sigma _{R}=\frac{C_{R}}{C_{F}}\sigma _{F}$$ where $C_{R}$ is the quadratic Casimir invariant for the representation $R$. This behavior can be rigorously demonstrated in two-dimensional gauge theories. For simplicity, we will refer to this as Casimir scaling. We use the term $Z(N)$ scaling to describe the asymptotic behavior
$$\sigma _{R}=\frac{k\left( N-k\right) }{N-1}\sigma _{F}$$
where $k$ is the $N$-ality of the representation $R$. As we will show below, $Z(N)$ scaling is obtained from Casimir scaling at large distances if there is mixing between representations of the same $N$-ality. If $Z(N)$ scaling holds, there are $\lbrack N/2\rbrack $ distinct asymptotic string tensions, where $\lbrack N/2\rbrack $ is the largest integer less than or equal to $%
N/2$. For the first $\left[ N/2\right] $ antisymmetric representations made by stacking boxes in Young tableaux, Casimir and $Z(N)$ scaling are identical. Another possible scaling law for the string tensions is sine-law scaling
$$\sigma _{R}=\frac{\sin \left( \frac{\pi k}{N}\right) }{\sin \left( \frac{\pi
}{N}\right) }\sigma _{F}$$
which has been shown to occur in softly broken $\emph{N}=2$ super Yang-Mills theories[@Douglas:1995nw] and in MQCD[@Hanany:1997hr].
In a gauge in which $A_{0}$ is time independent and diagonal, we may write $%
P $ in the fundamental representation as $$P_{jk}=\exp \left( i\theta _{j}\right) \delta _{jk}$$ where we shall refer to the $N\,$ numbers $\theta _{j}$ as the eigenvalues. They are not independent because $\det \left( P\right) =1$ implies $$\sum_{j}\theta _{j}=0\,mod\,2\pi \rm{.}$$ The information in the different representations is redundant. All the information is contained in the $N-1$ independent eigenvalues of $P$.
Without spatial gauge fields, the only gauge-invariant operators we can construct are class functions of $P$, depending solely on its eigenvalues. Thus the effective action should take $P$ to be in the Cartan, or maximally commuting, subgroup $U(1)^{N-1}$. An effective action constructed from $%
SU(N) $ matrices would introduce spurious Goldstone bosons associated with the off-diagonal components of $P$ in the deconfined phase. On the other hand, the simplest Landau-Ginsburg models of the $SU(2)$ and $SU(3)\,$deconfining transitions have used $Tr_{F}P$ as the basic field. These two cases are special because $Tr_{F}P$ specifies the Polyakov loops in all other representations. In $SU(4)$, there are sets of eigenvalues for which $$Tr_{F}P=0 \quad\rm{and}\quad Tr_{F}P^{2}=0$$ consistent with a $Z(4)$ symmetry, and another set for which $$Tr_{F}P=0 \quad\rm{and}\quad Tr_{F}P^{2}\neq 0$$ consistent with a $Z(2)$ symmetry. Thus knowledge of $Tr_{F}P$ alone is insufficient for $N>3$[@Meisinger:2001cq]. From the characteristic polynomial, one may show that the eigenvalues of a special unitary matrix are determined by the set $\left\{
Tr_{F}P^{k}\right\} $ with $k=1..N-1$, and of course *vice versa*.
In both the confined and deconfined phases, we would like our effective theory to proceed from a classical field configuration which has the symmetries of the phase. If we denote that field configuration as $P_{0}$ in the confined phase, $Z(N)\,$symmetry requires that $Tr_{F}P_{0}^{k}=0$ for all $k$ not divisible by $N$. Enforcing this requirement for $k=1$ to $%
N-1$ leads to a unique set of eigenvalues via the characteristic equation $%
z^{N}+\left( -1\right) ^{N}=0$, but it is instructive to derive the set another way. For temperatures $T$ below the deconfinement transition $T_{d}$, center symmetry is unbroken. Unbroken center symmetry implies that $zP_{0}$ is equivalent to $P_{0}$ after an $SU(N)$ transformation:
$$zP_{0}=gP_{0}g^{+}\rm{.}$$
This condition in turn implies $Tr_{R}P_{0}=0$ for all representations $R$ with non-zero $N$-ality, which means that all representations with non-zero $%
N$-ality are confined. The most general form for $P_{0}$ may be given as $%
hdh^{+}$, where $h\in SU(N)$, and $d$ is the diagonal element of $SU(N)$ of the form $$d=w\,\,diag\left[ z,z^{2},..,z^{N}=1\right]$$ where $z$ is henceforth $\exp \left( 2\pi i/N\right) $, the generator of $%
Z(N)$. The phase $w$ ensures that $d$ has determinant $1$, and is given by $%
w=\exp \left[ -\left( N+1\right) \pi i/N\right] $. Strictly speaking, $w$ is required only for $N$ even, but it is convenient to use it consistently. We will henceforth identify $P_{0}$ with $d$. Another useful representation is $%
\left( P_{0}\right) _{jk}=\delta _{jk}\exp \left[ i\theta _{j}^{0}\right] $ where $$\theta _{j}^{0}=\frac{\pi }{N}\left( 2j-N-1\right) \rm{.}$$ Thus the $Z(N)$-symmetric arrangement of eigenvalues is uniform spacing around the unit circle. This is consistent with the known large-$N$ behavior of soluble models in the confined phase[@Aharony:2003sx].
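As a quick numerical check (ours), one can verify directly that this uniform spacing forces $Tr_{F}P_{0}^{k}$ to vanish whenever $N$ does not divide $k$, with $|Tr_{F}P_{0}^{k}|=N$ otherwise:

\begin{verbatim}
# Numerical check (illustrative) of the traces of P_0 for uniformly spaced eigenvalues
import cmath, math

N = 5
theta0 = [math.pi / N * (2 * j - N - 1) for j in range(1, N + 1)]
for k in range(1, 2 * N + 1):
    tr = sum(cmath.exp(1j * k * t) for t in theta0)
    print(k, round(abs(tr), 10))
# |Tr_F P_0^k| vanishes except at k = 5 and k = 10, where it equals N = 5
\end{verbatim}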
Assume that $P_{0}$ is the global minimum of the potential $V$ associated with the effective action for temperatures less than the deconfining temperature $T_{d}$. Because the gauge fields transform as the adjoint representation, $V$ is a class function depending only on representations of zero $N$-ality. It is thus a function only of the differences in eigenvalues $\theta _{j}-\theta _{k}$, with complete permutation symmetry as well. In the low-temperature, confining phase, we consider small fluctuations about $P_{0}$, defining $\theta _{j}=\theta
_{j}^{0}+\delta \theta _{j}$. Although this approximation may not be [*a priori*]{} valid, the assumption that fluctuations are small can be justified in the large-$N$ limit. For small fluctuations $$Tr_{F}P^{k}=\sum_{n=1}^{N}w^{k}z^{kn}e^{ik\delta \theta _{n}}\simeq
\sum_{n=1}^{N}w^{k}z^{kn}ik\,\delta \theta _{n}$$ provided $k$ is not divisible by $N$. Note that $Tr_{F}P^{k}$ takes the form of a discrete Fourier transform in eigenvalue space. We define the Fourier transform of the fields $\phi _{n}$ as $$\phi _{k}=\sum_{n=1}^{N}z^{kn}\delta \theta _{n}$$ so $Tr_{F}P^{k}=ikw^{k}\phi _{k}$. We will show below that the fields $\phi
_{k}$ are the normal modes of the $Z(N)$-symmetric phase in a quadratic approximation. The mode $\phi _{N}\equiv \phi _{0}\,$ is identically zero for $SU(N)$, and from the reality of $\theta $, we have $\phi _{N-n}=\left(
\phi _{n}\right) ^{\ast }$. In the case where $k$ is divisible by $N$, $%
Tr_{F}P^{k}$ has a leading constant behavior of $Nw^{k}$, and the term linear in $\phi $ vanishes. The adjoint Polyakov loop operator is approximately $Tr_{A}P=Tr_{F}P\,\,Tr_{F}P^{+}-1\simeq \phi _{1}\phi
_{1}^{\ast }-1$. The operator $\phi _{1}\phi _{1}^{\ast }$ has a non-zero vacuum expectation value, and should couple to scalar glueball states.
Operators with the same $N$-ality generally give inequivalent expressions when written in terms of the $\phi $ variables. For example, $%
Tr_{F}P^{k}\propto \phi _{k}$ and $\left( Tr_{F}P\right) ^{k}\propto
\left( \phi _{1}\right) ^{k}$. Group characters $\chi _{R}(P)$ are represented as sums of terms with the same $N$-ality, but with different mode content. For example, in $SU(4)$, the $\mathbf{10}$ and $\mathbf{6}$ representations are given by $$\chi _{S,A}=\frac{1}{2}\left[ \left( Tr_{F}P\right) ^{2}\pm
Tr_{F}P^{2}\right] \simeq \frac{1}{2}\left[ \pm 2\phi _{2}+i\phi
_{1}^{2}\right] \rm{.}$$ As we discuss below in detail, these different combinations of fields will in general produce several different excitations, and only the lightest states will dominate at large distances.
The form of the effective action at high temperatures is fixed by perturbation theory, and can be written as [@Bhattacharya:1990hk; @Bhattacharya:1992qb] $$S_{eff}=\beta \int d^{3}x\left[ T^{2}Tr_{F}\,\left( \nabla \theta \right)
^{2}+V_{1L}(\theta )\right].$$ The kinetic term is obtained from the underlying gauge action via $\frac{1}{2}Tr_{F}\,F_{\mu
\nu }^{2}\rightarrow \frac{1}{2}2Tr_{F}\,\left( \nabla A_{0}\right)
^{2}\rightarrow T^{2}Tr_{F}\,\left( \nabla \theta \right) ^{2}$. The potential $V_{1L}(\theta )$ is obtained from one-loop perturbation theory. For our purposes, it is conveniently expressed as [@Gross:1980br; @Weiss:1980rj; @Weiss:1981ev] $$\begin{aligned}
V_{1L}(\theta ) &=&-\sum_{n=1}^{\infty }\frac{2}{\pi ^{2}}\frac{T^{4}}{n^{4}}%
\left[ \left| TrP^{n}\right| ^{2}-1\right] \\
&=&-\sum_{n=1}^{\infty }\frac{2}{\pi ^{2}}\frac{T^{4}}{n^{4}}\left[
N-1+\sum_{j\neq k}\cos \left( n\left( \theta _{j}-\theta _{k}\right) \right)
\right].\end{aligned}$$ This series can be summed to a closed form in terms of the 4th Bernoulli polynomial. The complete one-loop expression has been obtained recently [@Diakonov:2003yy; @Diakonov:2003qb; @Diakonov:2004kc]; the complete kinetic term has a $\theta $-dependent factor in front of the derivatives.
There are $N$ equivalent solutions of the form $$\theta _{j}^{(p)}=\frac{2\pi p}{N}$$ related by $Z(N)$ transformations. Each of these solutions breaks $Z(N)$ symmetry, with $Tr_{F}P=N\exp \left( 2\pi ip/N\right) $. For these values of $\theta $, we recover the standard black-body result for the free energy.
A sufficiently general form of the action at all temperatures has the form $$S_{eff}=\beta \int d^{3}x\left[ \kappa T^{2}Tr_{F}\,\left( \nabla \theta
\right) ^{2}+V(\theta )\right]$$ where $\kappa $ is a temperature dependent correction to the kinetic term, and $V$ is a function only of the adjoint eigenvalues $\theta _{j}-\theta
_{k}$. More complicated derivative terms can be added as necessary.
We assume that there is a finite free energy density difference associated with different values of $P$ as $T\rightarrow 0$. Because the eigenvalues are dimensionless, this requires terms in the potential with coefficients proportional to $\left( mass\right) ^{4}\,$as $T\rightarrow 0$. We can expand the potential to quadratic order around $P_{0}$$$V\left( \theta \right) \simeq V(\theta ^{0})+\sum_{j,k}\frac{1}{2}\left[
\frac{\partial ^{2}V}{\partial \theta _{j}\partial \theta _{k}}\right]
_{\theta ^{0}}\delta \theta _{j}\delta \theta _{k}$$ where the coefficient in the expansion depends only on $\left| j-k\right| $. The quadratic piece is thus diagonalized by Fourier transform, and we can write $$V\left( \theta \right) \simeq V(\theta ^{0})+\sum_{n=1}^{N-1}M_{n}^{4}\phi
_{n}\phi _{N-n}\rm{.}$$ Similarly, the kinetic term becomes $$\kappa T^{2}Tr_{F}\,\left( \nabla \theta \right) ^{2}=\frac{\kappa T^{2}}{N}%
\sum_{n=1}^{N-1}\left( \nabla \phi _{n}\right) \left( \nabla \phi
_{N-n}\right) \rm{.}$$ Once an ordering of eigenvalues is chosen, $Z(N)$ symmetry is expressed as a discrete translation symmetry in eigenvalue space. If we write the higher-order parts of $S_{eff}$ in terms of the Fourier modes $\phi _{n}$, each interaction will respect global conservation of $N$-ality. For example, in $SU(4)$, an interaction of the form $\phi _{1}^{2}\phi _{2}$ is allowed, but not $\phi _{1}^{2}\phi _{2}^{2}$.
The confining behavior of Polyakov loop two-point functions at low temperatures, for all representations of non-zero $N$-ality, is natural in the effective model. If the interactions are neglected, we can calculate the behavior of Polyakov loop two-point functions at low temperatures from the quadratic part of $S_{eff}$. We have for large distances $$\left\langle Tr_{F}P^{n}\left( x\right) Tr_{F}P^{+n}\left( y\right)
\right\rangle \propto \left\langle \phi _{n}\left( x\right) \phi _{n}^{\ast
}\left( y\right) \right\rangle \propto \exp \left[ -\frac{\sigma _{n}}{T}%
\left| x-y\right| \right]$$ where $\sigma _{n}\left( T\right) =\sqrt{NM_{n}^{4}(T)/\kappa (T)}$ is identified as the string tension for the $n$’th mode at temperature $T$. Interactions may cause the physical string tensions to be significantly different from the tree-level result, but the general field-theoretic framework remains in any case. Of course, $\phi _{N-n}=\left( \phi
_{n}\right) ^{\ast }$ implies $\sigma _{n}\left( T\right) =\sigma _{N-n}(T)$. The number of different string tensions is $\left[ N/2\right] $, the greatest integer less than or equal to $N/2$. The zero-temperature string tension is given at tree level by $$\sigma _{n}^{2}\left( 0\right) =NM_{n}^{4}(0)/\kappa (0)$$
This formalism gives a natural mechanism for the transition from Casimir scaling to scaling based on $N$-ality at large distances. As we have noted above, there are many composite operators with the same $N$-ality. For example, in $SU(8)$, the operators $\phi _{1}^{4}$, $\phi _{1}^{2}\phi _{2}$, $\phi _{2}^{2}$, $\phi _{1}\phi _{3}$, and $\phi _{4}$ all have $N$-ality $%
4$. Naively, the associated string tensions are $4\sigma _{1}$, $2\sigma
_{1}+\sigma _{2}$, $2\sigma _{2}$, $\sigma _{1}+\sigma _{3}$, and $\sigma
_{4}$, respectively. However, there is no symmetry principle prohibiting mixing of operators of the same $N$-ality. If there is mixing, then only the lightest string tension will be observed at large distances. Interactions between modes will lead to such mixing, and the operators $Tr_{F}P^{k}$ will exhibit a more complicated behavior.
Mixing between Polyakov loop operators of the same $N$-ality is easily understood using character expansion techniques applied to lattice gauge theories. For $d>2$, there are strong-coupling diagrams in which the sheet between two Polyakov loops in a representation $R$ splits into a bubble with sheets in representations $R_{1}$ and $R_{2}$. Such strong-coupling graphs are non-zero when $R\subset R_{1}\otimes R_{2}$; a necessary but not sufficient condition is that $R$ and $R_{1}\otimes R_{2}$ have the same $N$-ality. The same graph couples two representations $R$ and $R^{\prime }$ of the same $N$-ality via the process $R\rightarrow R_{1}+R_{2}\rightarrow
R^{\prime }$. The case $d=2$ is special: in the continuum, there are no gauge vector boson degrees of freedom, and the theory can be solved exactly, yielding Casimir scaling. The lattice theory is also exactly solvable; it reduces to a $d=1$ spin chain model, and there are no bubble diagrams as in higher dimensions. Although the string tensions associated with a given lattice action do not in general give Casimir scaling, the fixed-point lattice action in two dimensions does yield results identical to the continuum[@Menotti:1981ry].
It is easy to see that the $\left[ N/2\right] $ string tensions $%
\sigma _{1},\sigma _{2},..,\sigma _{\left[ N/2\right] }$ are all set independently within the class of effective models. A **minimal** model for the confined phase exhibiting this behavior is $$V=\sum_{k=1}^{\left[ N/2\right] }\frac{M_{k}^{4}}{k^{2}}%
Tr_{F}P^{k}Tr_{F}P^{+k}$$ $\,$where the $M_{k}$ are arbitrary. The $k$’th term in the sum forces $%
Tr_{F}P^{k}=0$, and gives rise to a mass for the mode $\phi _{k}$.
Sine-law scaling arises naturally from a nearest-neighbor interaction in the space of Polyakov loop eigenvalues. Consider the class of potentials with pairwise interactions between the eigenvalues $$V_{2}=\sum_{j,k}v\left( \theta _{j}-\theta _{k}\right)$$ as obtained, for example, by two-loop perturbation theory[@KorthalsAltes:1993ca]. An elementary calculation shows that at tree level $$\sigma _{n}=\sqrt{\frac{2}{\kappa }\sum_{j=0}^{N-1}v^{\left( 2\right)
}\left( \frac{2\pi j}{N}\right) \sin ^{2}\left( \frac{\pi nj}{N}\right) }
\label{eq:dispersion}$$ where $v^{(2)}$ is the second derivative of $v$. This master formula relates the string tensions to the underlying potential. It is essentially the dispersion relation for a linear chain with arbitrary translation-invariant quadratic couplings: nearest-neighbor, next-nearest neighbor, *et cetera*. If the sum is dominated by the $j=1$ and $j=N-1\,$terms, representing a nearest-neighbor interaction in the space of eigenvalues, then we recover sine-law scaling $$\sigma _{n}\simeq \sqrt{\frac{4}{\kappa }v^{\left( 2\right) }\left( \frac{%
2\pi }{N}\right) }\sin \left( \frac{\pi n}{N}\right) \rm{.}$$ There is a large class of potentials which will give this behavior. $Z(N)$ scaling can be obtained by a very small admixture of other components of $%
v^{\left( 2\right) }$, as we show below.
The string tensions associated with different $N$-alities have been measured in $d=3$ and $4$ dimensions for $N=4$ and $6$, and in $d=4$ for $N=8$ [@Lucini:2001nv; @Lucini:2002wg; @DelDebbio:2001sj; @DelDebbio:2003tk; @Lucini:2004my]. We examine the simulation data by inverting equation (\[eq:dispersion\]) above to give a measure of the relative strength of the couplings $v^{\left( 2\right)
}\left( 2\pi j/N\right) $. We normalize the result of this inversion such that the independent couplings sum to one, and the results are shown in Table \[Table1\]. Sine-law scaling corresponds to a value of $1\,$for $j=1$, and $0$ for the other $\left[ N/2\right] -1$ independent couplings. For $%
Z(N)$ scaling, the large-$N$ limit gives $v^{\left( 2\right) }\left( 2\pi
j/N\right) \propto 1/j^{4}$, yielding the result shown in the table. Note that the difference between sine-law scaling and $Z(N)$ scaling remains small but finite, even as $N$ goes to infinity. The three-dimensional simulation results clearly favor $Z(N)$ scaling. The two sets of four-dimensional simulation results for $SU(4)$ and $SU(6)$ agree within errors, but one set lies systematically closer to the sine-law predictions, while the other set of results does not really favor either theoretical prediction. The $SU(8)$ results nominally favor the sine-law prediction. However, smaller error bars will be necessary to differentiate between sine-law and $Z(N)$ scaling in four dimensions.
$d$ $j=1$ $j=2$ $j=3$ $j=4$
----------------------------------------------- ----- ----------- ------------- ------------ ------------
$SU(4)$[@Lucini:2001nv; @Lucini:2002wg] 3 0.957(6) 0.043(2)
$SU(4)$[@Lucini:2004my] 4 0.968(16) 0.032(7)
$SU(4)$[@DelDebbio:2001sj; @DelDebbio:2003tk] 4 0.992(25) 0.008(5)
$SU(4)\,\,Z(N)$ any 0.9412 0.0588
$SU(6)$[@Lucini:2001nv; @Lucini:2002wg] 3 0.930(5) 0.065(8) 0.004(5)
$SU(6)$[@Lucini:2004my] 4 0.960(16) 0.045(18) -0.005(12)
$SU(6)$[@DelDebbio:2001sj; @DelDebbio:2003tk] 4 0.996(40) 0.0003(218) 0.004(26)
$SU(6)\,\,Z(N)$ any 0.9266 0.0618 0.0116
$SU(8)$[@Lucini:2004my] 4 1.028(22) -0.067(24) 0.047(32) -0.009(22)
$SU(8)\,\,Z(N)$ any 0.9249 0.0583 0.0130 0.0037
$Z(N)$ $N\rightarrow \infty $ any 0.9239 0.0577 0.0114 0.0036
sine Law any 1 0 0 0
: [\[Table1\]]{}Relative strength of couplings $v^{\left( 2\right)}\left( 2\pi j/N\right) $
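The inversion behind Table \[Table1\] is easy to reproduce. The following sketch (our reconstruction of the procedure described above, not the authors' code) solves the tree-level dispersion relation (\[eq:dispersion\]) for the couplings given a trial scaling law and normalizes the independent couplings to unit sum.

\begin{verbatim}
# Sketch (our reconstruction) of the coupling extraction behind Table 1:
# solve sigma_n^2 = const * sum_j v''(2 pi j/N) sin^2(pi n j/N), using
# v''(2 pi j/N) = v''(2 pi (N-j)/N), then normalize the couplings.
import numpy as np

def couplings(sigma, N):
    """sigma(n): trial string tension in units of sigma_1; returns couplings j=1..[N/2]."""
    half = N // 2
    A = np.zeros((half, half))
    rhs = np.array([sigma(n) ** 2 for n in range(1, half + 1)])
    for n in range(1, half + 1):
        for j in range(1, half + 1):
            w = 1.0 if 2 * j == N else 2.0       # j and N-j carry the same coupling
            A[n - 1, j - 1] = w * np.sin(np.pi * n * j / N) ** 2
    c = np.linalg.solve(A, rhs)                  # couplings up to an overall factor
    weighted = np.where(2 * np.arange(1, half + 1) == N, 1.0, 2.0) * c
    return weighted / weighted.sum()

N = 4
zn   = lambda n: n * (N - n) / (N - 1)
sine = lambda n: np.sin(np.pi * n / N) / np.sin(np.pi / N)
print(couplings(zn, N))     # ~ [0.9412, 0.0588], the SU(4) Z(N) row
print(couplings(sine, N))   # ~ [1.0, 0.0], sine-law scaling
\end{verbatim}

Run with $N=6$, the same routine returns $0.9266$, $0.0618$, $0.0116$ for $Z(N)$ scaling, in agreement with the corresponding row of the table.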
The ultimate origin of confinement influences the form of the potential $V$, and the behavior of the different string tensions is derived in turn from $V$. In previous work on phenomenological models of the gluon equation of state[@Meisinger:2001cq; @Meisinger:2003id], we considered models of the form $$f=-p=V(\theta )-2\int \frac{d^{3}k}{\left( 2\pi \right) ^{3}}Tr_{A}\ln
\left[ 1-Pe^{-\beta \omega _{k}}\right]$$ where $V$ is a phenomenologically chosen potential whose role is to favor confinement at low temperature. We studied two physically motivated potentials which reproduce $SU(3)$ thermodynamics well. Both give a second-order deconfining transition for $SU(2)$, and a first-order transition for $SU(N)$ with $N\geq 3$ in accord with simulation results. One potential is a quadratic function of the eigenvalues $$V_{A}=v_{A}\sum_{\alpha =2}^{N}\sum_{\beta =1}^{\alpha -1}\left( \theta
_{\alpha }-\theta _{\beta }\right) \left( \theta _{\alpha }-\theta _{\beta
}-2\pi \right)$$ which appears as an $O(m^{2}T^{2})$ term in the high temperature expansions at one-loop. This potential leads to $\sigma _{k}^{A}=\sigma _{1}$ for every $N$-ality. The other potential is the logarithm of Haar measure $$V_{B}=v_{B}\sum_{\alpha =2}^{N}\sum_{\beta =1}^{\alpha -1}\ln \left[ 1-\cos
\left( \theta _{\alpha }-\theta _{\beta }\right) \right] \rm{.}$$ which is motivated by the appearance of Haar measure in the functional integral. This term is cancelled out in perturbation theory for flat space[@Weiss:1980rj], but not in other geometries[@Aharony:2003sx]. The potential $V_{B}$ was first studied by Dyson in his fundamental work on random matrices[@Dyson:1962], and leads to $$\sigma _{k}^{B}=\sqrt{\frac{k\left( N-k\right) }{N-1}}\sigma _{1}$$ which one might call ”square root of Z(N) scaling”. Although these two models are not consistent with the lattice simulation data for $\sigma _{k}$ for $N>3$, they remain viable phenomenological forms for $N=2$ and $3$. It is interesting to note that there is a well-studied potential which gives $%
Z(N)$ scaling[@Calogero:1978]. It is the integrable Calogero-Sutherland-Moser potential $$V_{Z}\left( \theta \right) =\sum_{j\neq k}\frac{\lambda }{\sin ^{2}\left(
\frac{\theta _{j}-\theta _{k}}{2}\right) }$$ which has been associated with two-dimensional gauge theory[@Minahan:1993np; @Polychronakos:1999sx].
The effective action has domain wall solutions in the deconfined phase, generalizing the behavior seen in the perturbative, high-temperature form of the effective action[@Bhattacharya:1990hk; @Bhattacharya:1992qb]. In this limit, the $Z(N)$ symmetry breaks spontaneously, with $N$ equivalent vacua characterized by $P=z^{n}I$. There is a surface tension associated with one-dimensional kink solutions of the effective equations of motion which interpolate between different phases. There are $\left[ \frac{N}{2}\right]$ different surface tensions $\rho _{k}$, each associated with the kink solution connecting the $n=0$ phase with the $n=k$ phase. Giovannangeli and Korthals Altes[@Giovannangeli:2001bh] have given an argument valid at high temperature indicating that the surface tensions $\rho _{k}$ obey $Z(N)$ scaling. This argument can be extended with minor modifications to arbitrary potentials of the form $V_2$, giving semiclassically $$\rho _{k}=\frac{k\left( N-k\right) }{N-1}\rho _{1}.$$ in the entire deconfined phase. A similar argument has been given for the string tension in the confined phase of $2+1$-dimensional Polyakov models, which consist of gauge fields coupled to scalars in the adjoint representation[@Kogan:2001vz]. In this case, the effective potential has the form $V_{2}$ when written in terms of dual variables. It is natural to speculate that there is a class of self-dual effective models in two spatial dimensions with identical string-tension scaling laws in both phases.
We have constructed the most general class of effective actions which can be built from the eigenvalues of the Polyakov loop. $Z(N)$ symmetry requires a symmetric distribution of eigenvalues in the confined phase, leading to a special role for $Z(N)$ Fourier modes. Mode mixing provides a natural crossover from Casimir scaling to scaling based on $N$-ality. There is a natural association between sine-law scaling and strong nearest-neighbor interactions between eigenvalues, and sine-law scaling and $Z(N)$ scaling are in some sense quite close. Nevertheless, string tensions in different $N$-ality sectors are determined by the parameters of the effective action, and are *a priori* arbitrary. It is clear from studies of different lattice gauge actions in two dimensions that the ratios of lattice string tensions need not be universal. Only near the continuum limit are two-dimensional string tension ratios universal. It remains mysterious why the wide class of models so far studied has not provided us with a wider variety of string tension scaling laws. We do not know if additional fields of zero $N$-ality can change string tension scaling in the continuum, and there has not been much exploration of the possible effect of fields that preserve a non-trivial subgroup of $Z(N)$. Studies of such models might give insight into the connection between confinement mechanisms and string-tension scaling laws.
MCO is grateful to the U.S. Dept. of Energy for financial support.
[10]{}
R. D. Pisarski, Phys. Rev. D [**62**]{}, 111501 (2000) \[arXiv:hep-ph/0006205\]. A. Dumitru and R. D. Pisarski, Phys. Lett. B [**525**]{}, 95 (2002) \[arXiv:hep-ph/0106176\]. P. N. Meisinger, T. R. Miller and M. C. Ogilvie, Phys. Rev. D [**65**]{}, 034009 (2002) \[arXiv:hep-ph/0108009\]. A. Dumitru, Y. Hatta, J. Lenaghan, K. Orginos and R. D. Pisarski, Phys. Rev. D [**70**]{}, 034511 (2004) \[arXiv:hep-th/0311223\]. P. N. Meisinger, M. C. Ogilvie and T. R. Miller, Phys. Lett. B [**585**]{}, 149 (2004) \[arXiv:hep-ph/0312272\].
A. Gocksch and M. Ogilvie, Phys. Rev. D [**31**]{}, 877 (1985). A. Dumitru and R. D. Pisarski, Phys. Lett. B [**504**]{}, 282 (2001) \[arXiv:hep-ph/0010083\]. K. Fukushima, Phys. Lett. B [**553**]{}, 38 (2003) \[arXiv:hep-ph/0209311\]. K. Fukushima, Phys. Rev. D [**68**]{}, 045004 (2003) \[arXiv:hep-ph/0303225\]. A. Mocsy, F. Sannino and K. Tuominen, Phys. Rev. Lett. [**92**]{}, 182302 (2004) \[arXiv:hep-ph/0308135\].
P. N. Meisinger and M. C. Ogilvie, Phys. Lett. B [**407**]{}, 297 (1997) \[arXiv:hep-lat/9703009\]. P. N. Meisinger and M. C. Ogilvie, Phys. Rev. D [**66**]{}, 105006 (2002) \[arXiv:hep-ph/0206181\]. I. I. Kogan, A. Kovner and J. G. Milhano, JHEP [**0212**]{}, 017 (2002) \[arXiv:hep-ph/0208053\]. A. Mocsy, F. Sannino and K. Tuominen, Phys. Rev. Lett. [**91**]{}, 092004 (2003) \[arXiv:hep-ph/0301229\]. A. Mocsy, F. Sannino and K. Tuominen, JHEP [**0403**]{}, 044 (2004) \[arXiv:hep-ph/0306069\].
D. J. Gross, R. D. Pisarski and L. G. Yaffe, Rev. Mod. Phys. [**53**]{}, 43 (1981). N. Weiss, Phys. Rev. D [**24**]{}, 475 (1981). N. Weiss, Phys. Rev. D [**25**]{}, 2667 (1982). J. A. Minahan and A. P. Polychronakos, Phys. Lett. B [**312**]{}, 155 (1993) \[arXiv:hep-th/9303153\]. A. P. Polychronakos, arXiv:hep-th/9902157.
J. Polonyi and K. Szlachanyi, Phys. Lett. B [**110**]{}, 395 (1982). M. Ogilvie, Phys. Rev. Lett. [**52**]{}, 1369 (1984). F. Green and F. Karsch, Nucl. Phys. B [**238**]{}, 297 (1984). J. M. Drouffe, J. Jurkiewicz and A. Krzywicki, Phys. Rev. D [**29**]{}, 2982 (1984). S. Deldar, Phys. Rev. D [**62**]{}, 034509 (2000) \[arXiv:hep-lat/9911008\]. G. S. Bali, Phys. Rev. D [**62**]{}, 114503 (2000) \[arXiv:hep-lat/0006022\]. M. R. Douglas and S. H. Shenker, Nucl. Phys. B [**447**]{}, 271 (1995) \[arXiv:hep-th/9503163\]. A. Hanany, M. J. Strassler and A. Zaffaroni, Nucl. Phys. B [**513**]{}, 87 (1998) \[arXiv:hep-th/9707244\].
O. Aharony, J. Marsano, S. Minwalla, K. Papadodimas and M. Van Raamsdonk, arXiv:hep-th/0310285. T. Bhattacharya, A. Gocksch, C. Korthals Altes and R. D. Pisarski, Phys. Rev. Lett. [**66**]{}, 998 (1991). T. Bhattacharya, A. Gocksch, C. Korthals Altes and R. D. Pisarski, Nucl. Phys. B [**383**]{}, 497 (1992) \[arXiv:hep-ph/9205231\]. D. Diakonov and M. Oswald, Phys. Rev. D [**68**]{}, 025012 (2003) \[arXiv:hep-ph/0303129\]. D. Diakonov and M. Oswald, Phys. Rev. D [**70**]{}, 016006 (2004) \[arXiv:hep-ph/0312126\]. D. Diakonov and M. Oswald, arXiv:hep-ph/0403108. P. Menotti and E. Onofri, Nucl. Phys. B [**190**]{}, 288 (1981). C. P. Korthals Altes, Nucl. Phys. B [**420**]{}, 637 (1994) \[arXiv:hep-th/9310195\]. B. Lucini and M. Teper, Phys. Rev. D [**64**]{}, 105019 (2001) \[arXiv:hep-lat/0107007\]. B. Lucini and M. Teper, Phys. Rev. D [**66**]{}, 097502 (2002) \[arXiv:hep-lat/0206027\].
L. Del Debbio, H. Panagopoulos, P. Rossi and E. Vicari, JHEP [**0201**]{}, 009 (2002) \[arXiv:hep-th/0111090\]. L. Del Debbio, H. Panagopoulos and E. Vicari, JHEP [**0309**]{}, 034 (2003) \[arXiv:hep-lat/0308012\]. B. Lucini, M. Teper and U. Wenger, JHEP [**0406**]{}, 012 (2004) \[arXiv:hep-lat/0404008\]. F. L. Dyson, J. Math. Phys. [**3**]{}, 140 (1962).
F. Calogero and A. Perelomov, Commun. Math. Phys. [**59**]{}, 109 (1978).
P. Giovannangeli and C. P. Korthals Altes, Nucl. Phys. B [**608**]{}, 203 (2001) \[arXiv:hep-ph/0102022\]. I. I. Kogan, A. Kovner and B. Tekin, JHEP [**0105**]{}, 062 (2001) \[arXiv:hep-th/0104047\].
|
---
abstract: 'We prove a universal property of Deligne’s category $\uRep^{ab}(S_d)$. Along the way, we classify tensor ideals in the category $\uRep(S_d)$.'
address:
- 'J.C.: Department of Mathematics, University of Oregon, Eugene, OR 97403, USA'
- 'V.O.: Department of Mathematics, University of Oregon, Eugene, OR 97403, USA'
author:
- Jonathan Comes
- Victor Ostrik
title: 'On Deligne’s category $\uRep^{ab}(S_d)$.'
---
Introduction
============
Let $F$ be a field of characteristic zero and let $I$ be a finite set. Let $S_I$ be the symmetric group of the permutations of $I$ and let ${\operatorname{Rep}}(S_I)$ be the category of finite dimensional $F-$linear representations of $S_I$ considered as a symmetric tensor category. Let $X_I\in {\operatorname{Rep}}(S_I)$ be the space of $F-$valued functions on $I$ with an obvious action of $S_I$. The object $X_I$ with pointwise operations has a natural structure of associative commutative algebra with unit $1_{X_I}$ in the category ${\operatorname{Rep}}(S_I)$. We have a morphism ${\text{Tr}}: X_I \to F$ defined as a trace of the operator of left multiplication; clearly the map $X_I {\otimes}X_I \to F$ given by $x\otimes y\mapsto {\text{Tr}}(xy)$ is a non-degenerate pairing. Finally, ${\text{Tr}}(1_{X_I})=\dim (X_I)=|I|$ where $|I|\ge 0$ is the cardinality of $I$.
Now let $G$ be a finite group acting on a $d-$dimensional associative commutative unital algebra $T$ over $F$ such that the pairing ${\text{Tr}}(xy)$ is non-degenerate. It is easy to see[^1] that there exists a finite set $I$ with $|I|=d$ and an essentially unique tensor functor $F: {\operatorname{Rep}}(S_I)\to {\operatorname{Rep}}(G)$ such that $F(X_I)\simeq T$ (isomorphism of $G-$algebras); in this sense the category ${\operatorname{Rep}}(S_I)$ is the universal category (in the realm of representation categories of finite groups) with an object $X_I$ as above.
{#Tabc}
Now for an arbitrary symmetric tensor category ${\mathcal{T}}$ one can consider objects $T\in {\mathcal{T}}$ satisfying the following:
\(a) $T$ has a structure of associative commutative algebra (given by the multiplication map $\mu_T : T{\otimes}T\to T$) with unit (given by the map $1_T: {\mathbf{1}}\to T$);
\(b) The object $T$ is rigid. Moreover if we define the map ${\text{Tr}}: T\to {\mathbf{1}}$ as a composition $$T{\stackrel{{\text{id}}_T{\otimes}coev_T}{\longrightarrow}} T{\otimes}T{\otimes}T^*{\stackrel{\mu_T{\otimes}{\text{id}}_{T^\ast}}{\longrightarrow}} T{\otimes}T^*\simeq T^*{\otimes}T{\stackrel{ev_T}{\longrightarrow}} {\mathbf{1}},$$ then the pairing $T\otimes T{\stackrel{\mu_T}{\longrightarrow}} T{\stackrel{{\text{Tr}}}{\longrightarrow}} {\mathbf{1}}$ is non-degenerate, that is it corresponds to an isomorphism $T\simeq T^*$ under the identification ${\operatorname{Hom}}(T\otimes T,{\mathbf{1}})={\operatorname{Hom}}(T,T^*)$;
\(c) We have $\dim(T)=t\in F$ (equivalently ${\text{Tr}}(1_T)=t$).
For an arbitrary $t\in F$ Deligne defined in [@Del07] a symmetric tensor category $\uRep(S_t)$ with a distinguished object $X$ which is universal in the following sense:
\[uRepuni\] [*([@Del07 Proposition 8.3])*]{} Let ${\mathcal{T}}$ be a Karoubian symmetric tensor category over $F$. The functor ${\mathcal}{F}\mapsto {\mathcal}{F}(X)$ is an equivalence of the category of braided tensor functors $\uRep(S_t)\to {\mathcal{T}}$ with the category of objects $T\in {\mathcal{T}}$ satisfying [*(a), (b), (c)*]{} above and their isomorphisms.
Note that for $t=d\in {{\mathbb Z}}_{\ge 0}$ Proposition \[uRepuni\] applied to $T=X_I$ (with $|I|=d$) produces a canonical functor $\uRep(S_d)\to {\operatorname{Rep}}(S_d)$ (where $S_d:=S_I$). It is known (see [@Del07 Théorème 6.2]) that this functor is surjective on Hom’s. Moreover, the morphisms sent to zero by this functor are precisely the so-called negligible morphisms (see [@Del07 §6.1]).
{#section-1}
The category $\uRep(S_t)$ is a Karoubian category; it is not abelian for $t=d\in {{\mathbb Z}}_{\ge 0}$. Remarkably, in [@Del07 Proposition 8.19] Deligne defined an [*abelian*]{} symmetric tensor category $\uRep^{ab}(S_d)$ and a fully faithful braided tensor functor $\uRep(S_d)\to \uRep^{ab}(S_d)$.[^2] The main goal of this paper is to prove a certain universal property of the category $\uRep^{ab}(S_d)$ conjectured in [@Del07 Conjecture 8.21].
To state this property we need to use the language of algebraic geometry [*within*]{} an abelian symmetric tensor category ${\mathcal{T}}$ (see [@Del90]). Namely, for an object $T\in {\mathcal{T}}$ satisfying (a), (b), (c) above we can talk about the (affine) ${\mathcal{T}}-$scheme ${{\bf I}}:={\text{Spec}}(T)$ and the affine group scheme $S_{{\bf I}}$ of its automorphisms, see [@Del07 §8.10]. Furthermore, assume that the category ${\mathcal{T}}$ is [*pre-Tannakian*]{} (see §\[preT\] below), that is it satisfies finiteness conditions from [@Del90 2.12.1]. Recall that in this case a [*fundamental group*]{} of ${\mathcal{T}}$ is defined in [@Del90 §8.13]. This is an affine group scheme $\pi \in {\mathcal{T}}$ which acts functorially on any object of ${\mathcal{T}}$ and this action is compatible with a formation of tensor products. In particular, the action of $\pi$ on $T$ gives a homomorphism ${\varepsilon}: \pi \to S_{{\bf I}}$. Let ${\operatorname{Rep}}(S_{{\bf I}})$ be the category of representations of $S_{{\bf I}}$ (see [@Del07 §8.10]) and let ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ be the full subcategory of ${\operatorname{Rep}}(S_{{\bf I}})$ consisting of such representations $\rho : S_{{\bf I}}\to GL(V)$ that the action $\rho \circ {\varepsilon}$ of $\pi$ on $V$ coincides with the canonical action (see [@Del07 §8.20]). ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ is an abelian symmetric tensor category and $T$ is one of its objects. It follows that the functor ${\mathcal}{F}: \uRep(S_t)\to {\mathcal{T}}$ constructed in Proposition \[uRepuni\] factorizes as $\uRep(S_t)\xrightarrow{{\mathcal}{F}_T} {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})\to {\mathcal{T}}$ where the functor ${\mathcal}{F}_T$ is constructed by applying Proposition \[uRepuni\] to $T\in {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ and ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})\to {\mathcal{T}}$ is the forgetful functor. Here is the main result of this paper:
\[main\] [*(cf. [@Del07 8.21.2])*]{} Let ${\mathcal{T}}$ be a pre-Tannakian category and $T\in {\mathcal{T}}$ be an object satisfying [*(a), (b), (c)*]{} from §\[Tabc\] with $t=d\in {{\mathbb Z}}_{\ge 0}\subset F$. Then the category ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ endowed with the functor ${\mathcal}{F}_T: \uRep(S_d)\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ is equivalent to one of the following:
[*(a)*]{} ${\operatorname{Rep}}(S_d)$ together with the functor $\uRep(S_d)\to {\operatorname{Rep}}(S_d)$ from §\[Tabc\];
[*(b)*]{} $\uRep^{ab}(S_d)$ together with the fully faithful functor $\uRep(S_d)\to \uRep^{ab}(S_d)$ above.
We note that a similar (and easier) statement holds true for $t\not \in {{\mathbb Z}}_{\ge 0}$, see [@Del07 Corollary B2].
{#section-2}
The forgetful functor ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})\to {\mathcal{T}}$ above is an exact braided tensor functor. Thus Theorem \[main\] implies that for a pre-Tannakian category ${\mathcal{T}}$ a braided tensor functor ${\mathcal}{F}: \uRep(S_d)\to {\mathcal{T}}$ either factorizes through $\uRep(S_d)\to {\operatorname{Rep}}(S_d)$ or extends to an exact tensor functor $\uRep^{ab}(S_d)\to {\mathcal{T}}$. A crucial step in our proof of Theorem \[main\] is a construction of pre-Tannakian category ${\mathcal{K}}_d^0$ and fully faithful embedding $\uRep(S_d)\subset {\mathcal{K}}_d^0$ such that we have the following [*extension property*]{}: a tensor (not necessarily braided) functor $\uRep(S_d)\to {\mathcal{T}}$ either factorizes through $\uRep(S_d)\to {\operatorname{Rep}}(S_d)$ or extends to an exact tensor functor ${\mathcal{K}}_d^0\to {\mathcal{T}}$, see §\[extension\]. Then we use general properties of the fundamental groups from [@Del90 §8] in order to prove that ${\mathcal{K}}_d^0$ satisfies the universal property as in Theorem \[main\] and, in fact, is equivalent to $\uRep^{ab}(S_d)$.
The following analogy plays a significant role in the proof of Theorem \[main\]. Let $TL(q)$ be the [*Temperley-Lieb category*]{}, see e.g. [@GW §A1]. Assume that $q$ is a nontrivial root of unity. It is well known that the category $TL(q)$ is tensor equivalent to the category of [*tilting modules*]{} over quantum $SL(2)$, see e.g. [@O proof of Theorem 2.4]. Thus $TL(q)$ is a Karoubian tensor category (braided but not symmetric) endowed with a fully faithful functor to the abelian tensor category ${\mathcal{C}}_q$ of finite dimensional representations of quantum $SL(2)$. On the other hand there exists a well known semisimple tensor category $\bar {\mathcal{C}}_q$ and a full tensor functor $TL(q)\twoheadrightarrow \bar {\mathcal{C}}_q$, see e.g. [@A §4]. We consider the diagram $\bar {\mathcal{C}}_q\twoheadleftarrow TL(q)\subset {\mathcal{C}}_q$ as a counterpart of the diagram ${\operatorname{Rep}}(S_d)\twoheadleftarrow \uRep(S_d)\subset \uRep^{ab}(S_d)$.
The main technical result of [@O] states that tensor functors $TL(q)\to {\mathcal{D}}$ to certain abelian tensor categories ${\mathcal{D}}$ factorize either through $TL(q)\to \bar {\mathcal{C}}_q$ or through $TL(q)\subset {\mathcal{C}}_q$ (see [@O §2.6]) which is reminiscent of the extension property of the category ${\mathcal{K}}_d^0$ above, see also [@O Remark 2.10]. Thus in the construction of ${\mathcal{K}}_d^0$ we follow the strategy from [@O] with crucial use of information from [@CO]. Namely, we find ${\mathcal{K}}_d^0$ inside the homotopy category of $\uRep(S_d)$ as a heart of a suitable [*$t-$structure*]{} (see §\[deft\]). The definition of the $t-$structure is based on Lemma \[Delemma\] (due to P. Deligne) and almost immediately implies the extension property of the category ${\mathcal{K}}_d^0$ mentioned above. However, the verification of the axioms of a $t-$structure is quite nontrivial. To do this we use a decomposition of the category $\uRep(S_d)$ into [*blocks*]{} described in [@CO Theorem 5.3]. We provide a blockwise description of the $t-$structure above in §\[tblock\]. We then observe that the description above coincides with the description of a well known $t-$structure on the blocks of the Temperley-Lieb category.
Acknowledgments
---------------
This paper owes its existence to Pierre Deligne who explained a proof of Lemma \[Delemma\], which is crucial to this paper, to the second named author when he was visiting the Institute for Advanced Study. Both authors are happy to express their deep gratitude to him and to the Institute for Advanced Study which made this interaction possible. The authors are also very grateful to Alexander Kleshchev who initiated this project. We also thank Michael Finkelberg and Friedrich Knop for their interest in this work and Darij Grinberg for his detailed comments. The work of the second named author was partially supported by the NSF grant DMS-0602263.
Preliminaries
=============
Tensor categories terminology {#preT}
-----------------------------
In this paper a [*tensor (or monoidal) category*]{} is a category with a tensor product functor endowed with an associativity constraint and a unit object ${\mathbf{1}}$, see e.g. [@BK Definition 1.1.7]. Recall that a tensor category is called [*rigid*]{} if any object admits both a left and right dual, see [@BK Definition 2.1.1]. A [*braided tensor category*]{} is a tensor category equipped with a braiding, see [@BK Definition 1.2.3]. A [*symmetric tensor category*]{} is a braided tensor category such that the square of the braiding is the identity.
Recall that $F$ is a fixed field of characteristic zero. All categories and functors considered in this paper are going to be $F-$linear. So, an [*$F-$linear tensor category*]{} (or [*tensor category over $F$*]{}) is a tensor category which is $F-$linear (but not necessarily additive) and such that the tensor product functor is $F-$bilinear. A [*Karoubian tensor category*]{} over $F$ is an $F-$linear tensor category which is Karoubian as an $F-$linear category (i.e. it is additive and every idempotent endomorphism is a projection to a direct summand). A *tensor ideal* ${\mathcal}{I}$ in a tensor category ${\mathcal}{T}$ consists of subspaces ${\mathcal}{I}(X,Y)\subset{\operatorname{Hom}}_{\mathcal}{T}(X,Y)$ for every $X,Y\in{\mathcal}{T}$ such that (i) $h\circ g\circ f\in{\mathcal}{I}(X, W)$ whenever $f\in{\operatorname{Hom}}_{\mathcal}{T}(X, Y), g\in{\mathcal}{I}(Y,Z), h\in{\operatorname{Hom}}_{\mathcal}{T}(Z,W)$, and (ii) $f\otimes{\text{id}}_Z\in{\mathcal}{I}(X\otimes Z, Y\otimes Z)$ whenever $f\in{\mathcal}{I}(X, Y)$. For example, if the category ${\mathcal}{T}$ has a well defined trace the collection of *negligible morphisms*[^3] forms a tensor ideal, see [@GW §A1.3].
Finally we say that an $F-$linear symmetric tensor category ${\mathcal{T}}$ is [*pre-Tannakian*]{} if the following conditions are satisfied:
\(a) all Hom’s are finite dimensional vector spaces over $F$ and ${\text{End}}({\mathbf{1}})=F$;
\(b) ${\mathcal{T}}$ is an abelian category and all objects have finite length;
\(c) ${\mathcal{T}}$ is rigid.
In the terminology of [@Del90] a pre-Tannakian category is the same as a “catégorie tensorielle" (see [@Del90 §2.1]) satisfying a finiteness assumption [@Del90 2.12.1]. This is precisely the class of tensor categories over $F$ for which a [*fundamental group*]{} (see [@Del90 §8]) is defined.
The category $\uRep(S_t)$
-------------------------
We recall here briefly the construction of the category $\uRep(S_t)$ following [@CO §2]. We refer the reader to [*loc. cit.*]{} and [@Del07 §8] for much more detailed exposition.
### The category $\uRep_0(S_t)$
Let $A$ be a finite set. A [*partition*]{} $\pi$ of $A$ is a collection of nonempty subsets $\pi_i\subset A$ such that $A=\sqcup_i\pi_i$ (disjoint union); the subsets $\pi_i$ are called [*parts*]{} of the partition $\pi$. We say that partition $\pi$ is [*finer*]{} than partition $\mu$ of the same set if any part of $\pi$ is a subset of some part of $\mu$. For three finite sets $A, B, C$ and the partitions $\pi$ of $A\sqcup B$ and $\mu$ of $B\sqcup C$ we define the partition $\mu \star \pi$ of $A\sqcup B\sqcup C$ as the finest partition such that parts of $\pi$ and $\mu$ are subsets of its parts. The partition $\mu \star \pi$ induces a partition $\mu \cdot \pi$ of $A\sqcup C$ such that parts of $\mu \cdot \pi$ are nonempty intersections of parts of $\mu \star \pi$ with $A\sqcup C\subset A\sqcup B\sqcup C$; we also define an integer $\ell(\mu,\pi)$ which is the number of parts of $\mu \star \pi$ contained in $B$.
Given $t\in F$, we define the $F-$linear symmetric tensor category $\uRep_0(S_t)$ as follows:
Objects: finite sets; object corresponding to a finite set $A$ is denoted $[A]$.
Morphisms: ${\operatorname{Hom}}([A],[B])$ is the $F-$linear span of partitions of $A\sqcup B$; composition of morphisms represented by partitions $\pi \in {\operatorname{Hom}}([A],[B])$ and $\mu \in {\operatorname{Hom}}([B],[C])$ is $t^{\ell(\mu,\pi)}\mu \cdot \pi \in {\operatorname{Hom}}([A],[C])$.
Tensor product: disjoint union (see [@CO Definition 2.15]); unit object is $[\varnothing]$; tensor product of morphisms, associativity and commutativity constraints are the obvious ones (see [@CO §2.2]).
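The composition just defined is purely combinatorial and is easy to implement. The following sketch (ours, for illustration; the encoding is not taken from [@CO] or [@Del07]) computes $\mu \cdot \pi$ and $\ell(\mu,\pi)$, and hence the composite $t^{\ell(\mu,\pi)}\mu \cdot \pi$, by building $\mu \star \pi$ with a union-find structure.

\begin{verbatim}
# Sketch (ours) of composition in uRep_0(S_t).  Elements of A, B, C are tagged
# tuples such as ('A', 1); a partition is a list of frozensets (its parts).
from itertools import chain

def compose(mu, pi, A, B, C):
    """pi: partition of A|B, mu: partition of B|C.  Returns (ell, mu.pi)."""
    parent = {x: x for x in chain(A, B, C)}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # mu * pi: finest partition of A|B|C whose parts contain the parts of pi and mu
    for part in chain(pi, mu):
        part = list(part)
        for x in part[1:]:
            parent[find(x)] = find(part[0])
    classes = {}
    for x in chain(A, B, C):
        classes.setdefault(find(x), set()).add(x)
    Bset, AC = set(B), set(A) | set(C)
    ell = sum(1 for cl in classes.values() if cl <= Bset)   # parts lying inside B
    mu_dot_pi = [frozenset(cl & AC) for cl in classes.values() if cl & AC]
    return ell, mu_dot_pi

# The composite morphism is t**ell * (mu . pi).  Composing the one-part partition
# of a single point of B with itself (A = C = empty) gives ell = 1, i.e. a factor t:
print(compose([frozenset({('B', 1)})], [frozenset({('B', 1)})], [], [('B', 1)], []))
\end{verbatim}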
The category $\uRep_0(S_t)$ has a distinguished object $[pt]$ where $pt$ is a one-element set. The object $[pt]$ has a natural structure of commutative associative algebra in $\uRep_0(S_t)$ where the multiplication (resp. unit) map is given by the partition of $pt \sqcup pt \sqcup pt$ (resp. $pt$) consisting of one part. It is immediate to check that the object $[pt]$ satisfies conditions (a), (b), (c) from §\[Tabc\]. Moreover, we have the following universal property:
\[uni0\] Let ${\mathcal{T}}$ be an $F-$linear symmetric tensor category. The functor from the category of braided tensor functors ${\mathcal}{F}: \uRep_0(S_t)\to {\mathcal{T}}$ to the category of objects $T\in {\mathcal{T}}$ satisfying (a), (b), (c) from §\[Tabc\] and their isomorphisms, which sends ${\mathcal}{F}\mapsto {\mathcal}{F}([pt])$ and sends natural transformations $(\eta:{\mathcal}{F}\to{\mathcal}{F'})\mapsto\eta_{[pt]}$ is an equivalence of categories.
*Sketch of proof.* We restrict ourselves by a description of the inverse functor on objects; for more details see [@Del07 §8]. So assume that $T\in {\mathcal{T}}$ satisfies (a), (b), (c) from \[Tabc\]. We define ${\mathcal}{F}([A])=T^{{\otimes}A}$ (here $T^{{\otimes}A}$ is a tensor product of copies of $T$ labeled by elements of $A$; since the category ${\mathcal{T}}$ is symmetric this is well defined). The tensor structure on the functor ${\mathcal}{F}$ will be given by the obvious isomorphisms $T^{{\otimes}A\sqcup B}=T^{{\otimes}A}{\otimes}T^{{\otimes}B}$. It remains to define ${\mathcal}{F}$ on the morphisms. Observe that a morphism from ${\operatorname{Hom}}([A],[B])$ represented by a partition $\pi$ of $A\sqcup B$ is a tensor product of morphisms corresponding to partitions with precisely one part $\pi ={\otimes}_i \pi_i$. Thus it is sufficient to define ${\mathcal}{F}(\pi)$ only for $\pi$ consisting of one part $A\sqcup B$. In this case we set ${\mathcal}{F}(\pi)=T^{{\otimes}A}\to T\to T^{{\otimes}B}$ where the first map is the multiplication morphism $T^{{\otimes}A}\to T$ and the second one is the dual to the multiplication morphism $T^{{\otimes}B}\to T$ where $T$ and $T^*$ are identified via (b) from §\[Tabc\]. One verifies that the assumptions (a), (b), (c) from §\[Tabc\] ensure that the tensor functor ${\mathcal}{F}$ is well defined. $\square$
### The categories $\uRep(S_t)$ and $\uRep^{ab}(S_d)$ {#222}
(cf. [@Del07 Définition 2.17] or [@CO Definition 2.19]) The category $\uRep(S_t)$ is the Karoubian (or pseudo-abelian) envelope[^4] of the category $\uRep_0(S_t)$.
It follows immediately from Proposition \[uni0\] that the category $\uRep(S_t)$ has universal property from Proposition \[uRepuni\]. We now use this universal property to construct Deligne’s category $\uRep^{ab}(S_d)$ from the introduction.
It is known (see [@Del07 Théorème 2.18] or [@CO Corollary 5.21]) that the category $\uRep(S_t)$ is semisimple (and hence pre-Tannakian) for $t\not \in {{\mathbb Z}}_{\ge 0}$. In particular, the category $\uRep(S_{-1})$ is pre-Tannakian, so its fundamental group $\pi$ is defined. For any $d\in {{\mathbb Z}}_{\ge 0}$ we can consider the commutative associative algebra with non-degenerate trace pairing $T_d\in \uRep(S_{-1})$ which is a direct sum of $[pt]$ and $d+1$ copies of the algebra ${\mathbf{1}}=[\varnothing]$. Clearly, $\dim(T_d)=d$, so we can use Proposition \[uRepuni\] to construct a symmetric tensor functor $\uRep(S_d)\to \uRep(S_{-1})$. Using the general properties of the fundamental group we get a factorization of this functor as $\uRep(S_d)\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})\to \uRep(S_{-1})$ (here ${{\bf I}}={\text{Spec}}(T_d)$ and ${\varepsilon}:\pi \to S_{{\bf I}}$ is the canonical homomorphism). It is clear that the category ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ is pre-Tannakian; it is proved in [@Del07 Proposition 8.19] that the functor $\uRep(S_d)\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ is fully faithful. We set $\uRep^{ab}(S_d):=
{\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$; as explained above this is a pre-Tannakian category and we have a fully faithful braided tensor functor $\uRep(S_d)\to \uRep^{ab}(S_d)$.
\[tensornonzero\] The existence of the embedding $\uRep(S_t)\subset \uRep^{ab}(S_t)$ implies that $Y_1{\otimes}Y_2\ne 0$ for nonzero objects $Y_1, Y_2\in \uRep(S_t)$ (this is true in any abelian rigid tensor category with simple unit object). The same result can be proved directly as follows. Given finite sets $A$ and $B$, it follows from the definition of tensor products that the obvious map ${\text{End}}([A]){\otimes}{\text{End}}([B])\to {\text{End}}([A]{\otimes}[B])={\text{End}}([A\sqcup B])$ is injective. Since any indecomposable object of $\uRep(S_t)$ is the image of a primitive idempotent $e\in {\text{End}}([A])$ for some finite set $A$, see e.g. [@CO Proposition 2.20], it follows that the tensor product of two nonzero morphisms in $\uRep(S_t)$ is nonzero. The statement for objects follows by considering their identity morphisms.
### Indecomposable objects of the category $\uRep(S_t)$
The indecomposable objects of the category $\uRep(S_t)$ are classified up to isomorphism in [@CO Theorem 3.3]. The isomorphism classes are labeled by the Young diagrams of all sizes in the following way. Let $\lambda$ be a Young diagram of size $n=|\lambda|$ and let $y_\lambda$ be the corresponding primitive idempotent in $FS_n$, the group algebra of the symmetric group[^5]. The symmetric braiding gives rise to an action of $S_n$ on $[pt]^{\otimes n}$; let $[pt]^\lambda$ denote the image of $y_\lambda\in{\text{End}}([pt]^{\otimes n})$. For any Young diagram $\lambda$ of size $|\lambda|$ there is a unique indecomposable object $L(\lambda)\in \uRep(S_t)$ characterized by the following properties:
\(a) $L(\lambda)$ is not a direct summand of $[pt]^{{\otimes}k}$ for $k<|\lambda|$;
\(b) $L(\lambda)$ is a direct summand (with multiplicity 1) of $[pt]^\lambda$.
It is proved in [@CO Theorem 3.3] that the indecomposable objects $L(\lambda)$ are well defined up to isomorphism, and any indecomposable object of $\uRep(S_t)$ is isomorphic to precisely one $L(\lambda)$.
### Blocks of the category $\uRep(S_t)$ {#StBlocks}
Let ${\mathcal{A}}$ be a Karoubian category such that any object decomposes into a finite direct sum of indecomposable objects. The set of isomorphism classes of indecomposable objects of ${\mathcal{A}}$ splits into [*blocks*]{} which are equivalence classes of the weakest equivalence relation for which two indecomposable objects are equivalent whenever there exists a nonzero morphism between them. We will also use the term block to refer to a full subcategory of ${\mathcal{A}}$ generated by the indecomposable objects in a single block.
The main result of [@CO] is the description of blocks of the category $\uRep(S_t)$. We describe the results of [*loc. cit.*]{} here. We will represent a Young diagram $\lambda$ as an infinite non-increasing sequence $(\lambda_1,\lambda_2,\ldots)$ of nonnegative integers such that $\lambda_k=0$ for some $k>0$, see [@CO §1.1]. For a Young diagram $\lambda$ and $t\in F$ we define a sequence $\mu_\lambda (t)=(t-|\lambda|, \lambda_1-1, \lambda_2-2,\ldots)$.
\[CObl\] [*([@CO Theorem 5.3])*]{} The objects $L(\lambda)$ and $L(\lambda')$ of $\uRep(S_t)$ are in the same block if and only if $\mu_\lambda(t)$ is a permutation of $\mu_{\lambda'}(t)$.
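The criterion is easy to test mechanically. The following sketch (ours) compares the sequences $\mu_\lambda(t)$ after truncating both at a common index beyond which their entries coincide.

\begin{verbatim}
# Sketch (ours) of the block criterion quoted above from [CO Theorem 5.3].
def mu(lam, t, K):
    """First K+1 entries of mu_lambda(t) = (t - |lam|, lam_1 - 1, lam_2 - 2, ...)."""
    entries = [t - sum(lam)]
    for k in range(1, K + 1):
        part = lam[k - 1] if k <= len(lam) else 0
        entries.append(part - k)
    return entries

def same_block(lam1, lam2, t):
    # beyond index K both sequences have identical entries (-K-1, -K-2, ...),
    # so comparing the truncated multisets decides the permutation condition
    K = max(len(lam1), len(lam2), sum(lam1), sum(lam2)) + 1
    return sorted(mu(lam1, t, K)) == sorted(mu(lam2, t, K))

# for t = d = 2: L((1)) and L((2)) lie in one block, L((2)) and L(empty) do not
print(same_block([1], [2], 2), same_block([], [2], 2), same_block([], [3], 2))
# True False True
\end{verbatim}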
Let ${\mathcal{B}}$ be the set of blocks of the category $\uRep(S_t)$; for any $\mathsf{b}\in {\mathcal{B}}$ let us denote by $\uRep_{\mathsf{b}}(S_t)$ the corresponding subcategory of $\uRep(S_t)$; we have a decomposition $\uRep(S_t)=\oplus_{\mathsf{b}\in {\mathcal{B}}}\uRep_{\mathsf{b}}(S_t)$.
\[StBlockProp\] Let $\mathsf{b}\in {\mathcal{B}}$. One of the following holds:
\(i) $\mathsf{b}$ is semisimple (or trivial): the category $\uRep_{\mathsf{b}}(S_t)$ is equivalent to the category ${\operatorname{Vec}}_F$ of finite dimensional $F-$vector spaces as an additive category. We will denote by $L=L(\mathsf{b})$ the unique indecomposable object of this block. Then $\dim(L)=0$, or, equivalently, ${\text{id}}_L$ is negligible.
\(ii) $\mathsf{b}$ is non-semisimple (or infinite). In this case the additive category $\uRep_{\mathsf{b}}(S_t)$ is described in [@CO §6] (in particular, it does not depend on a choice of non-semisimple block $\mathsf{b}$). There is a natural labeling of indecomposable objects of the category $\uRep_{\mathsf{b}}(S_t)$ by nonnegative integers; we will denote these objects by $L_0, L_1, \ldots$. Then $\dim(L_i)=0$ for $i>0$ and $\dim(L_0)\ne 0$, that is ${\text{id}}_{L_i}$ is negligible if and only if $i>0$.
Further, it is shown in [@CO] that for any $t\in F$ there are infinitely many semisimple blocks and finitely many (precisely the number of Young diagrams of size $t$) non-semisimple blocks. In particular, for $t\not \in {{\mathbb Z}}_{\ge 0}$ all blocks are semisimple (hence the category $\uRep(S_t)$ is semisimple).
Temperley-Lieb category {#TLcat}
-----------------------
The results on the category $\uRep(S_t)$ in many respects are parallel to the results on the Temperley-Lieb category $TL(q)$. We recall the definition and some properties of this category here.
\[TLdef\] (see e.g. [@GW §A1.2]) Let $q$ be a nonzero element of an algebraic closure of $F$ such that $q+q^{-1}\in F$. We define the $F-$linear tensor category $TL_0(q)$ as follows:
Objects: finite subsets of ${{\mathbb R}}$ considered up to isotopy; we will denote the object corresponding to the set $A$ by $\langle A\rangle$.
Morphisms: ${\operatorname{Hom}}(\langle A\rangle ,\langle B\rangle )$ is the $F-$linear span of one dimensional submanifolds of ${{\mathbb R}}\times [0,1]$ with boundary $A\sqcup B$ where $A\subset {{\mathbb R}}\times 0$ and $B\subset {{\mathbb R}}\times 1$ (such submanifolds are called embedded unoriented [*bordisms*]{} from $A$ to $B$) modulo the relation $[bordism \sqcup circle]=(q+q^{-1})[bordism]$; composition is given by juxtaposition.
Tensor product: disjoint union (write ${{\mathbb R}}={{\mathbb R}}_{<0}\sqcup 0 \sqcup {{\mathbb R}}_{>0}$ and identify ${{\mathbb R}}_{<0}$ and ${{\mathbb R}}_{>0}$ with ${{\mathbb R}}$); the unit object is $\langle \varnothing\rangle$; tensor product of morphisms and associativity constraint are the obvious ones.
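Composition in $TL_0(q)$ is likewise combinatorial once a bordism without closed components is encoded as a perfect matching of its boundary points. The following sketch (ours; planarity of the input matchings is assumed rather than checked) glues two such diagrams and counts the closed loops produced, each contributing a factor $q+q^{-1}$.

\begin{verbatim}
# Sketch (ours) of composition in TL_0(q).  A diagram from n to m points is a
# perfect matching (list of pairs) of the points ('b', i), i < n, and ('t', j), j < m.
def compose(g, f, n, m, k):
    """Compose g: m -> k after f: n -> m; returns (closed loops, matching)."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent.setdefault(x, x); parent.setdefault(y, y)
        parent[find(x)] = find(y)
    for a, b in f:
        union(('f', a), ('f', b))
    for a, b in g:
        union(('g', a), ('g', b))
    for j in range(m):                              # glue f's top to g's bottom
        union(('f', ('t', j)), ('g', ('b', j)))
    outer = [('f', ('b', i)) for i in range(n)] + [('g', ('t', l)) for l in range(k)]
    comps = {}
    for x in outer:
        comps.setdefault(find(x), []).append(x)
    matching = [tuple(v) for v in comps.values()]   # strands joining outer points
    loops = len({find(x) for x in parent}) - len(comps)
    return loops, matching

cup = [(('t', 0), ('t', 1))]                        # the diagram 0 -> 2
cap = [(('b', 0), ('b', 1))]                        # the diagram 2 -> 0
print(compose(cap, cup, 0, 2, 0))                   # (1, []): one circle, factor q+q^{-1}
\end{verbatim}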
Next we define the category $TL(q)$ as the Karoubian envelope of the category $TL_0(q)$. The category $TL(q)$ has a universal property (see e.g. [@O Theorem 2.4]) but we don’t need it here. The indecomposable objects of the category $TL(q)$ are labeled by nonnegative integers: for any $i\in {{\mathbb Z}}_{\ge 0}$ there is a unique indecomposable object $V_i$ which is a direct summand (with multiplicity 1) of $\langle pt\rangle^{\otimes i}$ but is not a direct summand of $\langle pt\rangle^{\otimes k}$ whenever $k<i$.
The category $TL(q)$ is semisimple for generic values of $q$; more precisely, the category $TL(q)$ fails to be semisimple exactly when there exists a positive integer $l$ such that $1+q^2+\ldots +q^{2l}=0$ (we will denote the smallest such integer by $l_q$). Assume that the category $TL(q)$ is not semisimple. Then we have a full tensor functor $TL(q)\to \bar {\mathcal{C}}_q$ and a fully faithful tensor functor $TL(q)\to {\mathcal{C}}_q$ where $\bar {\mathcal{C}}_q$ is a semisimple tensor category (sometimes called the “Verlinde category") and ${\mathcal{C}}_q$ is the abelian tensor category of finite dimensional representations of quantum $SL(2)$, see e.g. [@O Theorem 2.4].
The blocks of the category $TL(q)$ are well known. Similarly to the case of the category $\uRep(S_d)$ there are infinitely many semisimple blocks (which are equivalent to the category ${\operatorname{Vec}}_F$ as an additive category) and finitely many (precisely $l_q$) non-semisimple blocks. The following observation is very important for this paper:
\[TLblocks\] [*([@CO Remark 6.5])*]{} All non-semisimple blocks of the category $TL(q)$ are equivalent as additive categories. Moreover, they are equivalent to the category $\uRep_{\mathsf{b}}(S_d)$ where $\mathsf{b}$ is any non-semisimple block of the category $\uRep(S_d)$.
\[TLlabel\] We can transport a labeling of indecomposable objects of $\uRep_{\mathsf{b}}(S_d)$ (see Proposition \[StBlockProp\] (ii)) to a non-semisimple block of the category $TL(q)$ via the equivalence of Proposition \[TLblocks\] (it is easy to see that the resulting labeling does not depend on a choice of the equivalence).
Recall that the category $TL(q)$ has a natural spherical structure and so the dimensions $\dim_{TL(q)}(Y)$ of objects $Y\in TL(q)$ are defined, see e.g. [@GW §A1.3]. The following result is well known, see e.g. [@A (1.6) and Proposition 3.5]:
\[TLneglig\] Let $L$ be a unique indecomposable object in a semisimple block of $TL(q)$. Then $\dim_{TL(q)}(L)=0$. For a non-semisimple block we have $\dim_{TL(q)}(L_i)=0$ for $i>0$ and $\dim_{TL(q)}(L_0)\ne 0$ where $L_i$ are indecomposable objects in this block labeled as in Remark \[TLlabel\] $\square$
Tensor ideals and the object $\Delta \in \uRep(S_d)$
====================================================
In this section we define objects $\Delta_n\in\uRep(S_t)$ for $n\in{\mathbb{Z}}_{\geq0}$ and $t\in F$. We then give $\Delta_n$ the structure of a commutative associative algebra in $\uRep(S_t)$ and study many $\Delta_n$-modules. Finally, using our results on the objects $\Delta_n$, we classify tensor ideals in $\uRep(S_d)$ when $d$ is a nonnegative integer. Before defining the objects $\Delta_n$ we prove the following easy observation which will be used throughout this section.
\[polyt\] Suppose $A_0,\ldots, A_n$ and $B_0,\ldots, B_m$ are finite sets with $A_0=B_0$ and $A_n=B_m$. Suppose further that $f_i$ (resp. $g_i$) is an $F$-linear combination of partitions of $A_{i-1}\sqcup A_i$ (resp. $B_{i-1}\sqcup B_i$) whose coefficients do not depend on $t$ for all $1\leq i\leq n$ (resp. $1\leq i\leq m$). If $f_n\cdots f_1=g_m\cdots g_1$ in $\uRep_0(S_t)$ for infinitely many values of $t\in F$, then $f_n\cdots f_1=g_m\cdots g_1$ in $\uRep_0(S_t)$ for all $t\in F$.
For each $t\in F$ and partition $\pi$ of $A_0\sqcup A_n=B_0\sqcup B_m$, let $a_\pi(t)\in F$ (resp. $b_\pi(t)\in F$) be such that $f_n\cdots f_1=\sum_\pi a_\pi(t) \pi$ (resp. $g_m\cdots g_1=\sum_\pi b_\pi(t)\pi$) in $\uRep_0(S_t)$ where the sum is taken over all partitions $\pi$ of $A_0\sqcup A_n=B_0\sqcup B_m$. Then $f_n\cdots f_1=g_m\cdots g_1$ in $\uRep_0(S_t)$ if and only if $a_\pi(t)=b_\pi(t)$ for all $\pi$. By the definition of composition in $\uRep_0(S_t)$, both $a_\pi(t)$ and $b_\pi(t)$ are polynomials in $t$ for each $\pi$. The result follows since two polynomials in $t$ which agree at infinitely many values of $t$ must be equal.
The objects $\Delta_n\in \uRep(S_t)$
------------------------------------
Suppose $n$ is a nonnegative integer and let $A_n=\{i~|~1\leq i\leq n\}$. Consider the endomorphism $x_n=x_{{\text{id}}_n}:[A_n]\to[A_n]$ in $\uRep_0(S_t)$ (see [@CO Equation (2.1)]).
\[xidemp\] $x_n$ is an idempotent which is equal to its dual for all $n\geq 0$.
The fact that $x_n^\ast=x_n$ follows from the definition of $x_n$. By Proposition \[polyt\], it suffices to show $x_n$ is an idempotent in $\uRep_0(S_t)$ for infinitely many values of $t$. It follows from [@CO Theorem 2.6 and Equation (2.2)] that $x_n$ is an idempotent in $\uRep_0(S_t)$ whenever $t$ is an integer greater than $2n$.
Since $\uRep(S_t)$ is a Karoubian category (i.e. $\uRep(S_t)$ contains images of idempotents) the following definition is valid.
Let $\Delta_n\in\uRep(S_t)$ denote the image of the idempotent $x_n$.[^6]
Note that the commutative associative algebra structure on $[pt]$ extends in an obvious way to a commutative associative algebra structure on $[A_n]\cong[pt]^{\otimes n}$. Let $\mu_n:[A_n]\otimes[A_n]\to[A_n]$ and $1_n:{\bf 1}\to[A_n]$ denote the multiplication and unit maps respectively.
\[deltalg\] The multiplication map $x_n\mu_n(x_n\otimes x_n):\Delta_n\otimes\Delta_n\to\Delta_n$ gives $\Delta_n$ the structure of a commutative associative algebra in $\uRep(S_t)$ with unit given by $x_n1_n:{\bf 1}\to\Delta_n$.
We are required to show the following equalities hold in $\uRep_0(S_t)$: $$\label{D1}
x_n\mu_n(x_n\mu_n(x_n\otimes x_n)\otimes x_n)=x_n\mu_n(x_n\otimes(x_n\mu_n(x_n\otimes x_n)),$$ $$\label{D2}
x_n\mu_n(x_n1_n\otimes x_n)=x_n=x_n\mu_n(x_n\otimes x_n1_n),$$ $$\label{D3}
x_n\mu_n(x_n\otimes x_n)\beta_{n,n}(x_n\otimes x_n)=x_n\mu_n(x_n\otimes x_n),$$ where $\beta_{n,n}:A_n\otimes A_n\to A_n\otimes A_n$ is the braiding morphism (see for example [@CO §2.2]). By Proposition \[polyt\], it suffices to show (\[D1\]), (\[D2\]), (\[D3\]) hold for infinitely many values of $t$.[^7] Using [@CO Theorem 2.6 and Equation (2.2)] it is easy to show (\[D1\]), (\[D2\]), (\[D3\]) hold whenever $t$ is a sufficiently large integer.
By Proposition \[deltalg\] we can consider the category $\Delta_n$-mod of all left $\Delta_n$-modules.
Some $\Delta_n$-modules
-----------------------
Suppose $j$ is a nonnegative integer with $1\leq j\leq n$. Given a finite set $X$, let $\Theta_X^j:{\operatorname{Hom}}_{\uRep(S_t)}( A_n, X)\to{\operatorname{Hom}}_{\uRep(S_t)}(A_{n+1}, X)$ and $\Theta^X_j:{\operatorname{Hom}}_{\uRep(S_t)}(X, A_n)\to{\operatorname{Hom}}_{\uRep(S_t)}(X, A_{n+1})$ be the $F$-linear maps defined on partitions as follows: if $\pi$ is a partition of $X\sqcup A_n$, then $\Theta_X^j(\pi)=\Theta^X_j(\pi)$ is the unique partition of $X\sqcup A_{n+1}$ which restricts to $\pi$ and has $j$ and $n+1$ in the same part. Now let $\Theta_j:{\text{End}}_{\uRep(S_t)}(A_n)\to{\text{End}}_{\uRep(S_t)}(A_{n+1})$ be the $F$-linear map $\Theta_j=\Theta_{A_n}^j\circ\Theta^{A_n}_j$. It is easy to check that $\Theta_j$ is an injective (non-unital) $F$-algebra homomorphism for each $1\leq j\leq n$. In particular, by Proposition \[xidemp\], $x_{n,j}:=\Theta_j(x_n)$ is an idempotent for each $j$.
Let $\Delta_n(j)\in\uRep(S_t)$ denote the image of $x_{n, j}$.
Next we give $\Delta_n(j)$ the structure of a $\Delta_n$-module. Let $\alpha=x_{n,j}\Theta^{A_n}_j(x_n)x_n:\Delta_n\to\Delta_n(j)$ and $\beta=x_{n,j}\Theta_j(\mu_n)(x_{n,j}\otimes x_{n,j}):\Delta_n(j)\otimes\Delta_n(j)\to\Delta_n(j)$. Finally, let $\phi=\beta(\alpha\otimes x_{n,j}):\Delta_n\otimes\Delta_n(j)\to\Delta_n(j)$.
\[Deltaj\]
\(1) The map $\phi$ gives $\Delta_n(j)$ the structure of a $\Delta_n$-module.
\(2) The map $ x_{n,j}\Theta_j^{A_n}({\text{id}}_{A_n})x_n:\Delta_n\to\Delta_n(j)$ is an isomorphism of $\Delta_n$-modules with inverse $x_n\Theta^j_{A_n}({\text{id}}_{A_n}) x_{n,j}$.
For part (1) we are required to show the following equation holds in $\uRep_0(S_t)$: $$\label{Dj1}
\begin{array}{l}
x_{n,j}\Theta_j(\mu_n)(( x_{n,j}\Theta^{A_n}_j(x_n)x_n\mu_n(x_n\otimes x_n))\otimes x_{n,j})\\
= x_{n,j}\Theta_j(\mu_n)( x_{n,j}\Theta^{A_n}_j(x_n)x_n\otimes( x_{n,j}\Theta_j(\mu_n)(( x_{n,j}\Theta^{A_n}_j(x_n)x_n)\otimes x_{n,j}))).
\end{array}$$ For part (2) we are required to show the following equations hold in $\uRep_0(S_t)$: $$\label{Dj2}
x_{n,j}\Theta_j^{A_n}({\text{id}}_{A_n})x_n\Theta^j_{A_n}({\text{id}}_{A_n}) x_{n,j}=x_{n,j},$$ $$\label{Dj3}
x_n\Theta^j_{A_n}({\text{id}}_{A_n}) x_{n,j}\Theta_j^{A_n}({\text{id}}_{A_n})x_{n}=x_{n}.$$ Now use Proposition \[polyt\] and [@CO Theorem 2.6 and Equation (2.2)].
Next, we give the object $\Delta_{n+1}$ the structure of a $\Delta_n$-module. To do so, set $\psi=x_{n+1}(\mu_n\otimes{\text{id}}_{[pt]})(x_n\otimes x_{n+1}):\Delta_n\otimes\Delta_{n+1}\to\Delta_{n+1}$.
The map $\psi$ gives $\Delta_{n+1}$ the structure of a $\Delta_n$-module.
We are required to show the following equation holds in $\uRep_0(S_t)$: $$\label{Dplus1}\begin{array}{l}
x_{n+1}(\mu_n\otimes{\text{id}}_{[pt]})((x_n\mu_n(x_n\otimes x_n))\otimes x_{n+1})\\
=x_{n+1}(\mu_n\otimes{\text{id}}_{[pt]})(x_n\otimes(x_{n+1}(\mu_n\otimes{\text{id}}_{[pt]})(x_n\otimes x_{n+1})).
\end{array}$$ Now use Proposition \[polyt\] and [@CO Theorem 2.6 and Equation (2.6)].
The following lemma will be important for us later.
\[Deltapt\] $\Delta_n\otimes[pt]\cong\Delta_{n+1}\oplus\Delta_n(1)\oplus\cdots\oplus\Delta_n(n)$ in the category $\Delta_n$-mod.
First, using Proposition \[polyt\] and [@CO Theorem 2.6 and Equation (2.6)] it is easy to show that the following identities hold in $\uRep_0(S_t)$: $$\label{ortho}
\begin{array}{ll}
x_n\otimes{\text{id}}_{[pt]}=x_{n+1}+\sum\limits_{1\leq j\leq n}x_{n, j},\\
x_{n,j}x_{n+1}=0=x_{n+1}x_{n,j} & (1\leq j\leq n),\\
x_{n,j}x_{n,k}=\delta_{j,k}x_{n,j} & (1\leq j,k\leq n).
\end{array}$$ Next, define $\Psi:\Delta_n\otimes[pt]\to\Delta_{n+1}\oplus\Delta_n(1)\oplus\cdots\oplus\Delta_n(n)$ by $$\Psi=\left[
\begin{array}{c}
x_{n+1}(x_n\otimes {\text{id}}_{[pt]})\\
x_{n,1}(x_n\otimes {\text{id}}_{[pt]})\\
\vdots\\
x_{n,n}(x_n\otimes {\text{id}}_{[pt]})
\end{array}
\right].$$ Using (\[ortho\]) it is easy to check that $\Psi$ is an isomorphism in $\uRep(S_t)$ with inverse $$\Psi^{-1}=\left[
\begin{array}{cccc}
(x_n\otimes {\text{id}}_{[pt]})x_{n+1} &
(x_n\otimes {\text{id}}_{[pt]})x_{n,1}&
\cdots&
(x_n\otimes {\text{id}}_{[pt]})x_{n,n}
\end{array}
\right].$$ It remains to show that $\Psi$ and $\Psi^{-1}$ are morphisms in the category $\Delta_n$-mod. Showing $\Psi$ is a morphism in $\Delta_n$-mod amounts to showing the following equations hold in $\uRep_0(S_t)$: $$\label{Psi}
\begin{array}{l}
x_{n+1}(\mu_n\otimes{\text{id}}_{[pt]})(x_n\otimes(x_{n+1}(x_n\otimes{\text{id}}_{[pt]})))=x_{n+1}((x_n\mu_n(x_n\otimes x_n))\otimes{\text{id}}_{[pt]}),\\
x_{n,j}\Theta_j(\mu_n)((x_{n,j}\Theta^{A_n}_j(x_n)x_n)\otimes(x_{n,j}(x_n\otimes{\text{id}}_{[pt]})))\\
~\hspace{1.65in}=x_{n,j}((x_n\mu_n(x_n\otimes x_n))\otimes{\text{id}}_{[pt]})\qquad(1\leq j\leq n).
\end{array}$$ To show the equations in (\[Psi\]) hold, use Proposition \[polyt\] and [@CO Theorem 2.6 and Equation (2.6)]. The proof for $\Psi^{-1}$ is similar.
The category $\uRep^{\Delta_n}(S_t)$
------------------------------------
Let $\Delta_n$-mod$_0$ denote the full subcategory of $\Delta_n$-mod such that a $\Delta_n$-module $M$ is in $\Delta_n$-mod$_0$ if and only if $M\cong\Delta_n\otimes Y$ in $\Delta_n$-mod for some $Y\in\uRep(S_t)$. Let $\uRep^{\Delta_n}(S_t)$ denote the Karoubian envelope of $\Delta_n\text{-mod}_0$. The advantage of working in $\uRep^{\Delta_n}(S_t)$ rather than in the category $\Delta_n$-mod is that we can give $\uRep^{\Delta_n}(S_t)$ the structure of a tensor category with relative ease. Indeed, given $M, M'\in\Delta_n$-mod$_0$ we know $M\cong \Delta_n\otimes Y$ and $M'\cong\Delta_n\otimes Y'$ as $\Delta_n$-modules for some $Y, Y'\in\uRep(S_t)$. Set $M\otimes_{\Delta_n}M':=\Delta_n\otimes Y\otimes Y'$. Given $N, N'\in\Delta_n$-mod$_0$ with $N\cong \Delta_n\otimes Z$ and $N'\cong \Delta_n\otimes Z'$ and morphisms $f\in{\operatorname{Hom}}_{\Delta_n\text{-mod}_0}(M, N)$ and $g\in{\operatorname{Hom}}_{\Delta_n\text{-mod}_0}(M', N')$, write $\tilde{f}:\Delta_n\otimes Y{\stackrel{\cong}{\longrightarrow}} M{\stackrel{f}{\longrightarrow}}N{\stackrel{\cong}{\longrightarrow}}\Delta_n\otimes Z$ and $\tilde{g}:\Delta_n\otimes Y'{\stackrel{\cong}{\longrightarrow}} M'{\stackrel{g}{\longrightarrow}}N'{\stackrel{\cong}{\longrightarrow}}\Delta_n\otimes Z'$. Define $f\otimes_{\Delta_n}g:M\otimes_{\Delta_n} M'\to N\otimes_{\Delta_n} N'$ to be the composition $M\otimes_{\Delta_n} M'=\Delta_n\otimes Y\otimes Y'{\stackrel{\tilde{f}\otimes{\text{id}}_{Y'}}{\longrightarrow}}\Delta_n\otimes Z\otimes Y'{\stackrel{\sim}{\longrightarrow}}\Delta_n\otimes Y'\otimes Z{\stackrel{\tilde{g}\otimes{\text{id}}_Z}{\longrightarrow}}\Delta_n\otimes Z'\otimes Z{\stackrel{\sim}{\longrightarrow}}\Delta_n\otimes Z\otimes Z'=N\otimes_{\Delta_n}N'$. It is easy to check that $\otimes_{\Delta_n}:\Delta_n\text{-mod}_0\times\Delta_n\text{-mod}_0\to\Delta_n\text{-mod}_0$ is a bifunctor which (with the obvious choice of constraints) makes $\Delta_n$-mod$_0$ into a rigid symmetric tensor category. The tensor structure on $\Delta_n$-mod$_0$ extends in an obvious way to make $\uRep^{\Delta_n}(S_t)$ a rigid symmetric tensor category too.
Notice that $\Delta_{n+1}$ is an object in $\uRep^{\Delta_n}(S_t)$. Indeed, by Lemma \[Deltapt\], the $\Delta_n$-module $\Delta_{n+1}$ is the image of an idempotent of the form $\Delta_n\otimes[pt]\to\Delta_n\otimes[pt]$. This idempotent is an element of ${\text{End}}_{\Delta_n\text{-mod}_0}(\Delta_n\otimes[pt])$; hence its image is an object in the Karoubian category $\uRep^{\Delta_n}(S_t)$. The next two propositions concern the structure of $\Delta_{n+1}\in\uRep^{\Delta_n}(S_t)$. We start by computing its dimension:
\[Deltadim\] $\dim_{\uRep^{\Delta_n}(S_t)}(\Delta_{n+1})=t-n$.
First, by Lemma \[Deltapt\] and Proposition \[Deltaj\](2), $$\dim_{\uRep^{\Delta_n}(S_t)}(\Delta_{n+1})=\dim_{\uRep^{\Delta_n}(S_t)}(\Delta_n\otimes[pt])-n\dim_{\uRep^{\Delta_n}(S_t)}(\Delta_n).$$ Now, consider the tensor functor $\Delta_n\otimes -:\uRep(S_t)\to\uRep^{\Delta_n}(S_t)$. Since tensor functors preserve dimension, $\dim_{\uRep^{\Delta_n}(S_t)}(\Delta_n)=\dim_{\uRep(S_t)}([\varnothing])=1$ and $\dim_{\uRep^{\Delta_n}(S_t)}(\Delta_n\otimes[pt])=\dim_{\uRep(S_t)}([pt])=t$.
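Although we will not need it below, the same bookkeeping carried out in $\uRep(S_t)$ itself yields a closed formula for the dimension of $\Delta_n$: by Lemma \[Deltapt\] and Proposition \[Deltaj\](2) we have $t\dim_{\uRep(S_t)}(\Delta_n)=\dim_{\uRep(S_t)}(\Delta_{n+1})+n\dim_{\uRep(S_t)}(\Delta_n)$, and since $x_0$ is the identity of ${\bf 1}$, so that $\Delta_0\cong{\bf 1}$, it follows by induction that $$\dim_{\uRep(S_t)}(\Delta_n)=t(t-1)\cdots(t-n+1)\qquad\text{for all }n\geq0.$$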
Our next aim is to show $\Delta_{n+1}\in\uRep^{\Delta_n}(S_t)$ satisfies (a) and (b) from §\[Tabc\]. To do so, let $inc:\Delta_{n+1}\to\Delta_n\otimes[pt]$ and $proj:\Delta_n\otimes[pt]\to\Delta_{n+1}$ denote the morphisms in $\uRep^{\Delta_n}(S_t)$ determined by Lemma \[Deltapt\]. Moreover, let $m:(\Delta_n\otimes[pt])\otimes_{\Delta_n}(\Delta_n\otimes[pt])\to\Delta_n\otimes[pt]$ denote the morphism $\Delta_n\otimes[pt]\otimes[pt]{\stackrel{{\text{id}}_{\Delta_n}\otimes\mu_1}{\longrightarrow}}\Delta_n\otimes[pt]$. Now consider the following morphisms: $$\label{multD}
\Delta_{n+1}\otimes_{\Delta_n}\Delta_{n+1}{\stackrel{inc\otimes_{\Delta_n}inc}{\longrightarrow}}(\Delta_n\otimes[pt])\otimes_{\Delta_n}(\Delta_n\otimes[pt]){\stackrel{m}{\longrightarrow}}\Delta_n\otimes[pt]{\stackrel{proj}{\longrightarrow}}\Delta_{n+1},$$ $$\label{unitD}
\Delta_n{\stackrel{{\text{id}}_{\Delta_n}\otimes 1_1}{\longrightarrow}}\Delta_n\otimes[pt]{\stackrel{proj}{\longrightarrow}}\Delta_{n+1}.$$
\[abDelta\] With the multiplication and unit maps given by (\[multD\]) and (\[unitD\]) respectively, $\Delta_{n+1}\in\uRep^{\Delta_n}(S_t)$ satisfies (a) and (b) from §\[Tabc\].
Write $\mu_{\Delta_{n+1}}$ and $1_{\Delta_{n+1}}$ for the morphisms given by (\[multD\]) and (\[unitD\]) respectively. First, it is easy to see that $m$ (resp. ${\text{id}}_{\Delta_n}\otimes 1_1$) is a morphism of $\Delta_n$-modules. Hence, $\mu_{\Delta_{n+1}}$ (resp. $1_{\Delta_{n+1}}$) is a morphism of $\Delta_n$-modules too. Now, to show $\Delta_{n+1}$ satisfies (a) from §\[Tabc\] we must show the following equations hold in $\uRep^{\Delta_n}(S_t)$: $$\label{a}\begin{array}{c}
\mu_{\Delta_{n+1}}(\mu_{\Delta_{n+1}}\otimes_{\Delta_{n}}{\text{id}}_{\Delta_{n+1}})=\mu_{\Delta_{n+1}}({\text{id}}_{\Delta_{n+1}}\otimes_{\Delta_{n}}\mu_{\Delta_{n+1}}),\\
\mu_{\Delta_{n+1}}(1_{\Delta_{n+1}}\otimes_{\Delta_{n}}{\text{id}}_{\Delta_{n+1}})={\text{id}}_{\Delta_{n+1}}=\mu_{\Delta_{n+1}}({\text{id}}_{\Delta_{n+1}}\otimes_{\Delta_{n}}1_{\Delta_{n+1}}),\\
\mu_{\Delta_{n+1}}\beta_{\Delta_{n+1},\Delta_{n+1}}=\mu_{\Delta_{n+1}},
\end{array}$$ where $\beta_{\Delta_{n+1},\Delta_{n+1}}:\Delta_{n+1}\otimes_{\Delta_n}\Delta_{n+1}\to\Delta_{n+1}\otimes_{\Delta_n}\Delta_{n+1}$ denotes the braiding morphism. To do so, first notice that by (\[ortho\]) the morphisms $proj, inc$, and ${\text{id}}_{\Delta_{n+1}}$ are all given by $x_{n+1}$. Let $\tau$ (resp. $\nu$) denote the identity morphism on $\Delta_{n+1}\otimes_{\Delta_n}\Delta_{n+1}$ (resp. $\Delta_{n+1}\otimes_{\Delta_n}\Delta_{n+1}\otimes_{\Delta_n}\Delta_{n+1}$). Then, by the definition of $\otimes_{\Delta_n}$, we have the following realizations of $\tau$ and $\nu$ as morphisms in $\uRep_0(S_t)$: $$\label{taunu}
\begin{array}{l}
\tau=(x_n\otimes\beta_{1,1})(x_{n+1}\otimes{\text{id}}_{[pt]})(x_n\otimes\beta_{1,1})(x_{n+1}\otimes{\text{id}}_{[pt]}),\\
\nu=(x_n\otimes\beta_{1,2})(x_{n+1}\otimes{\text{id}}_{[pt]\otimes[pt]})(x_n\otimes\beta_{2,1})(\tau\otimes{\text{id}}_{[pt]}),
\end{array}$$ where $\beta_{n,m}:A_n\otimes A_m\to A_m\otimes A_n$ denotes the braiding morphism in $\uRep_0(S_t)$ for each $n,m\geq 0$. Moreover, $$\label{1mb}
1_{\Delta_{n+1}}=x_{n+1}(x_n\otimes 1_1),\quad\mu_{\Delta_{n+1}}=x_{n+1}(x_n\otimes\mu_1)\tau,\quad \beta_{\Delta_{n+1},\Delta_{n+1}}=\tau(x_n\otimes\beta_{1,1})\tau.$$ Thus, showing the equations in (\[a\]) hold in $\uRep^{\Delta_n}(S_t)$ amounts to showing the following equations hold in $\uRep_0(S_t)$: $$\label{azero}
\begin{array}{l}
x_{n+1}(x_n\otimes\mu_1)\tau(x_n\otimes\beta_{1,1})(x_{n+1}\otimes{\text{id}}_{[pt]})(x_n\otimes\beta_{1,1})((x_{n+1}(x_n\otimes\mu_1)\tau)\otimes{\text{id}}_{[pt]})\nu=\\
x_{n+1}(x_n\otimes\mu_1)\tau(x_n\otimes\beta_{1,1})((x_{n+1}(x_n\otimes\mu_1)\tau)\otimes{\text{id}}_{[pt]})(x_n\otimes\beta_{1,2})(x_{n+1}\otimes{\text{id}}_{[pt]\otimes[pt]})\nu,\\
x_{n+1}(x_n\otimes\mu_1)\tau(x_n\otimes\beta_{1,1})(x_{n+1}\otimes{\text{id}}_{[pt]})(x_n\otimes\beta_{1,1})(x_{n+1}(x_n\otimes 1_1)\otimes{\text{id}}_{[pt]})\\
=x_{n+1}=
x_{n+1}(x_n\otimes\mu_1)\tau(x_{n+1}\otimes\beta_{1,1})((x_{n+1}(x_n\otimes 1_1))\otimes{\text{id}}_{[pt]})x_{n+1},\\
x_{n+1}(x_n\otimes\mu_1)\tau(x_n\otimes\beta_{1,1})\tau=x_{n+1}(x_n\otimes\mu_1)\tau.
\end{array}$$ All equations above are straightforward to check using Proposition \[polyt\] and [@CO Theorem 2.6 and Equation (2.6)]. Thus $\Delta_{n+1}$ satisfies part (a) from §\[Tabc\].
To show $\Delta_{n+1}$ satisfies part (b) from §\[Tabc\], first notice that $\Delta_{n+1}\in\uRep^{\Delta_n}(S_t)$ is self dual (because the morphism $x_{n+1}$ is self dual). Hence, we are required to show that the following morphism is invertible in $\uRep^{\Delta_n}(S_t)$: $$\label{b}
(({\text{Tr}}~\mu_{\Delta_{n+1}})\otimes_{\Delta_n}{\text{id}}_{\Delta_{n+1}})({\text{id}}_{\Delta_{n+1}}\otimes_{\Delta_n}coev_{\Delta_{n+1}}):\Delta_{n+1}\to\Delta_{n+1},$$ where the morphism ${\text{Tr}}:\Delta_{n+1}\to\Delta_n$ is defined in §\[Tabc\](b). In fact, we claim the morphism in (\[b\]) is equal to the identity morphism ${\text{id}}_{\Delta_{n+1}}$. To prove this claim, first notice that $${\text{Tr}}=ev_{\Delta_{n+1}}\beta_{\Delta_{n+1}, \Delta_{n+1}}(\mu_{\Delta_{n+1}}\otimes_{\Delta_{n}}{\text{id}}_{\Delta_{n+1}})({\text{id}}_{\Delta_{n+1}}\otimes_{\Delta_{n}}coev_{\Delta_{n+1}}).$$ Also, $ev_{\Delta_{n+1}}=x_n(x_n\otimes ev_{[pt]})\tau$ and $coev_{\Delta_{n+1}}=\tau(x_n\otimes coev_{[pt]})x_n$. Hence, using (\[taunu\]), (\[1mb\]), and the definition of $\otimes_{\Delta_n}$, we can realize the morphism in (\[b\]) as a morphism in $\uRep_0(S_t)$. Now use Proposition \[polyt\] and [@CO Theorem 2.6 and Equation (2.6)] to show that this morphism is equal to $x_{n+1}$.
Deligne’s lemma {#DLsec}
---------------
Fix an integer $d\geq0$. Set $\Delta=\Delta_{d+1}\in\uRep(S_d)$ and $\Delta^+=\Delta_{d+2}\in\uRep^\Delta(S_d)$. By Proposition \[Deltadim\], $\dim_{\uRep^\Delta(S_d)}(\Delta^+)=-1$. Hence, by Propositions \[uRepuni\] and \[abDelta\], there exists a tensor functor ${\mathcal}{F}_\Delta:\uRep(S_{-1})\to\uRep^\Delta(S_d)$ with ${\mathcal}{F}_\Delta([pt])=\Delta^+$. Let ${\underline{\operatorname{Res}}}^{S_d}_{S_{-1}}$ denote the tensor functor $\uRep(S_d)\to\uRep(S_{-1})$ described in §\[222\], i.e. the functor prescribed by Proposition \[uRepuni\] with ${\underline{\operatorname{Res}}}_{S_{-1}}^{S_d}([pt])=[pt]\oplus[\varnothing]^{\oplus d+1}$. Then we have the following
\[Delemma\] The functor $\Delta\otimes-:\uRep(S_d)\to\uRep^\Delta(S_d)$ is isomorphic to the composition ${\mathcal}{F}_\Delta\circ{\underline{\operatorname{Res}}}^{S_d}_{S_{-1}}$.
Both $\Delta\otimes-$ and ${\mathcal}{F}_\Delta\circ{\underline{\operatorname{Res}}}^{S_d}_{S_{-1}}$ are tensor functors which map $[pt]\in\uRep(S_d)$ to an object isomorphic to $\Delta^+\oplus\Delta^{\oplus d+1}\in\uRep^\Delta(S_d)$ (see Propositions \[Deltaj\](2) and \[Deltapt\]). Hence, by Proposition \[uRepuni\], they are isomorphic.
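As a quick dimension check on this lemma: the tensor functor ${\mathcal}{F}_\Delta$ sends ${\bf 1}$ to the unit object $\Delta$ of $\uRep^\Delta(S_d)$, so both functors send $[pt]$, which has dimension $d$, to an object of dimension $$\dim_{\uRep^\Delta(S_d)}\big(\Delta^+\oplus\Delta^{\oplus d+1}\big)=-1+(d+1)=d,$$ as required of tensor functors.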
The following corollary to Deligne’s lemma will be used in the next section to classify tensor ideals in $\uRep(S_d)$.
\[idid\] Every nonzero tensor ideal in $\uRep(S_d)$ contains a nonzero identity morphism.
Suppose ${\mathcal}{I}$ is a nonzero tensor ideal in $\uRep(S_d)$. Since tensor ideals are closed under composition, it suffices to show that ${\mathcal}{I}$ contains a morphism which has a nonzero isomorphism as a direct summand. Let $f$ be a nonzero morphism in ${\mathcal}{I}$. Then, by Remark \[tensornonzero\], ${\text{id}}_\Delta\otimes f$ is also a nonzero morphism in ${\mathcal}{I}$. By Lemma \[Delemma\] ${\text{id}}_\Delta\otimes f={\mathcal}{F}_\Delta(f')$ for some nonzero morphism $f'$ in $\uRep(S_{-1})$. Since $\uRep(S_{-1})$ is semisimple (see [@Del07 Théorème 2.18] or [@CO Corollary 5.21]) it follows that $f'$ (and therefore ${\mathcal}{F}_\Delta(f')$) is a direct sum of isomorphisms and zero morphisms. In particular, since $f'$ is nonzero, at least one of these summands is a nonzero isomorphism, so ${\text{id}}_\Delta\otimes f\in{\mathcal}{I}$ has a nonzero isomorphism as a direct summand.
Tensor ideals in $\uRep(S_d)$
-----------------------------
In this section we use results from [@CO] along with Corollary \[idid\] to classify tensor ideals in $\uRep(S_d)$ for arbitrary $d\in{\mathbb{Z}}_{\geq0}$.[^8] We begin by introducing an equivalence relation on Young diagrams:
Consider the weakest equivalence relation on the set of all Young diagrams such that $\lambda$ and $\mu$ are equivalent whenever the indecomposable object $L(\lambda)$ is a direct summand of $L(\mu)\otimes[pt]$ in $\uRep(S_d)$. When $\lambda$ and $\mu$ are in the same equivalence class we write $\lambda\stackrel{d}{\approx}\mu$.
The following proposition contains enough information on the equivalence relation $\stackrel{d}{\approx}$ for us to classify tensor ideals in $\uRep(S_d)$.
\[green\] Assume $d$ is a nonnegative integer and $\lambda, \mu$ are Young diagrams.
\(1) A nonzero morphism of the form $L(\lambda)\to L(\mu)$ is a negligible morphism in $\uRep(S_d)$ if and only if $L(\lambda)$ or $L(\mu)$ is not the minimal indecomposable object in an infinite block of $\uRep(S_d)$.
\(2) $\lambda\stackrel{d}{\approx}\mu$ whenever $L(\lambda)$ and $L(\mu)$ are in trivial blocks of $\uRep(S_d)$.
\(3) $\lambda\stackrel{d}{\approx}\mu$ whenever $L(\lambda)$ is a non-minimal indecomposable object in an infinite block and $L(\mu)$ is in a trivial block of $\uRep(S_d)$.
\(4) $\lambda\stackrel{d}{\approx}\mu$ whenever neither $L(\lambda)$ nor $L(\mu)$ is a minimal indecomposable object in an infinite block of $\uRep(S_d)$.
\(5) Suppose $\lambda\stackrel{d}{\approx}\mu$ and ${\mathcal}{I}$ is a tensor ideal in $\uRep(S_d)$ containing ${\text{id}}_{L(\lambda)}$. Then ${\text{id}}_{L(\mu)}$ is also in ${\mathcal}{I}$.
Part (1) follows from [@CO Proposition 3.25, Corollary 5.9, and Theorem 6.10]. Part (2) is easy to check using [@CO Propositions 3.12, 5.15 and Lemma 5.20(1)]. Part (4) follows from parts (2) and (3). Part (5) is easy to check. Hence, it suffices to prove part (3). To do so, let $\mathsf{b}$ denote the infinite block of $\uRep(S_d)$ containing $L(\lambda)$. We will proceed by induction on $\mathsf{b}$ with respect to $\prec$ (see [@CO Definition 5.12]).
If $\mathsf{b}$ is minimal with respect to $\prec$, then using [@CO Proposition 3.12 and Lemmas 5.18(1) and 5.20(1)] we can find a Young diagram $\rho$ with $L(\rho)$ in a trivial block of $\uRep(S_d)$ such that $\lambda\stackrel{d}{\approx}\rho$. By part (2) $\rho\stackrel{d}{\approx}\mu$ and we are done. Now suppose $\mathsf{b}$ is not minimal with respect to $\prec$. Then, using [@CO Proposition 3.12 and Lemmas 5.18(2) and 5.20(2)], we can find a Young diagram $\rho'$ with $\lambda\stackrel{d}{\approx}\rho'$ such that $L(\rho')$ is in an infinite block $\mathsf{b}'$ of $\uRep(S_d)$ with $\mathsf{b}'\precneqq \mathsf{b}$. By induction $\rho'\stackrel{d}{\approx}\mu$ and we are done.
We are now ready to classify tensor ideals in $\uRep(S_d)$.
\[idealclassif\] If $d$ is a nonnegative integer, then the only nonzero proper tensor ideal in $\uRep(S_d)$ is the ideal of negligible morphisms.
Assume ${\mathcal}{I}$ is a nonzero proper tensor ideal of $\uRep(S_d)$. Then ${\mathcal}{I}$ is contained in the ideal of negligible morphisms (see [@GW Proposition 3.1]), hence we must show that ${\mathcal}{I}$ contains all negligible morphisms. Suppose $\lambda$ is a Young diagram such that $L(\lambda)$ is not the minimal indecomposable object in an infinite block of $\uRep(S_d)$. By Proposition \[green\](1), it suffices to show ${\text{id}}_{L(\lambda)}$ is contained in ${\mathcal}{I}$. By Corollary \[idid\], there exists a nonzero identity morphism in ${\mathcal}{I}$. It follows that ${\mathcal}{I}$ contains ${\text{id}}_{L(\mu)}$ for some Young diagram $\mu$. In particular, ${\text{id}}_{L(\mu)}$ is a negligible morphism. Hence, by Proposition \[green\](1), $L(\mu)$ is not the minimal indecomposable object in an infinite block of $\uRep(S_d)$. Thus, by Proposition \[green\](4), $\lambda\stackrel{d}{\approx}\mu$. Finally, by Proposition \[green\](5), ${\text{id}}_{L(\lambda)}$ is contained in ${\mathcal}{I}$.
\[DeltaGen\] The tensor ideal in $\uRep(S_d)$ generated by ${\text{id}}_\Delta$ is the ideal of all negligible morphisms.
${\text{id}}_\Delta=x_{d+1}$ is a nonzero negligible morphism in $\uRep(S_d)$ (see [@CO Remark 3.22]). Hence, the result follows from Theorem \[idealclassif\].
$t-$structure on $K^b(\uRep(S_d))$
==================================
Homotopy category {#homgen}
-----------------
Let ${\mathcal{A}}$ be an additive category. Let $K^b({\mathcal{A}})$ be the bounded homotopy category of ${\mathcal{A}}$, see e.g. [@KS §11]. Thus the objects of $K^b({\mathcal{A}})$ are finite complexes of objects in ${\mathcal{A}}$ and the morphisms are morphisms of complexes up to homotopy. The category $K^b({\mathcal{A}})$ has a natural structure of a triangulated category, see [*loc. cit.*]{} In particular, for each integer $n$ we have a translation functor $[n]: K^b({\mathcal{A}})\to K^b({\mathcal{A}})$.
Any object $A\in {\mathcal{A}}$ can be considered as a complex $A[0]$ concentrated in degree 0 or, more generally, as a complex $A[n]$ concentrated in degree $-n$. Thus we have a fully faithful functor ${\mathcal{A}}\to K^b({\mathcal{A}})$, $A\mapsto A[0]$. We will say that an object $K\in K^b({\mathcal{A}})$ is [*split*]{} if it is isomorphic to an object of the form $\oplus_iA_i[n_i]$ with $A_i\in {\mathcal{A}}$, $n_i\in {{\mathbb Z}}$.
Now assume that ${\mathcal{A}}$ is an additive tensor category. The category $K^b({\mathcal{A}})$ has a natural structure of an additive tensor category. If the category ${\mathcal{A}}$ is braided or symmetric then so is the category $K^b({\mathcal{A}})$. The functor ${\mathcal{A}}\to K^b({\mathcal{A}})$, $A\mapsto A[0]$ has an obvious structure of a (braided) tensor functor. If the category ${\mathcal{A}}$ is rigid so is the category $K^b({\mathcal{A}})$.
Definition of $t-$structure {#deft}
---------------------------
We can apply the construction from §\[homgen\] to the case ${\mathcal{A}}=\uRep(S_d)$. We obtain a triangulated tensor category ${\mathcal{K}}_d:=K^b(\uRep(S_d))$. We have
\[split\] For any $K\in {\mathcal{K}}_d$ the object $\Delta {\otimes}K$ is split.
By Lemma \[Delemma\], the functor $\Delta {\otimes}-: \uRep(S_d)\to \uRep(S_d)$ is naturally isomorphic to a composition $\uRep(S_d)\to \uRep(S_{-1})\to \uRep(S_d)$. The category $\uRep(S_{-1})$ is semisimple ([@Del07 Théorème 2.18] or [@CO Corollary 5.21]), so every object of $K^b(\uRep(S_{-1}))$ is split. The result follows.
We define ${\mathcal{K}}_d^{\le 0}$ as the full subcategory of ${\mathcal{K}}_d$ consisting of objects $K$ such that $\Delta{\otimes}K$ is concentrated in non-positive degrees (that is isomorphic to $\oplus_iA_i[n_i]$ with $A_i\in {\mathcal{A}}$ and $n_i\in {{\mathbb Z}}_{\ge 0}$). Similarly, we define ${\mathcal{K}}_d^{\ge 0}$ as the full subcategory of ${\mathcal{K}}_d$ consisting of objects $K$ such that $\Delta{\otimes}K$ is concentrated in non-negative degrees. The following result will be proved in §\[axiom\].
\[tstrth\] The pair $({\mathcal{K}}_d^{\le 0}, {\mathcal{K}}_d^{\ge 0})$ is a $t-$structure [*(see [@BBD Définition 1.3.1])*]{} on the category ${\mathcal{K}}_d$.
Recall that the core of this $t-$structure is the subcategory ${\mathcal{K}}_d^0={\mathcal{K}}_d^{\le 0}\cap {\mathcal{K}}_d^{\ge 0}$. By definition this means that $K\in {\mathcal{K}}_d^0$ if and only if $\Delta{\otimes}K$ is concentrated in degree zero. In particular, for any $A\in \uRep(S_d)$ the object $A[0]\in {\mathcal{K}}_d^0$. We have
\[Kdabtens\] (a) The category ${\mathcal{K}}_d^0$ is abelian.
\(b) The category ${\mathcal{K}}_d^0$ is a tensor subcategory of ${\mathcal{K}}_d$.
\(a) follows from Theorem \[tstrth\] and [@BBD Théorème 1.3.6]. For (b) we need to check that for $K,K' \in {\mathcal{K}}_d^0$ we have $K{\otimes}K'\in {\mathcal{K}}_d^0$. Assume this is not the case. This means that the split complex $\Delta {\otimes}K{\otimes}K'$ is not concentrated in degree zero. Since $\Delta {\otimes}X\ne 0$ for any $0\ne X\in \uRep(S_d)$ (see Remark \[tensornonzero\]) we get that $\Delta {\otimes}\Delta {\otimes}K{\otimes}K'$ is split and not concentrated in degree zero. But this is not the case since $\Delta {\otimes}\Delta {\otimes}K{\otimes}K'\simeq
(\Delta {\otimes}K){\otimes}(\Delta {\otimes}K')$ and both $\Delta {\otimes}K$ and $\Delta {\otimes}K'$ are split and concentrated in degree zero.
We will show in §\[axiom\] that the category ${\mathcal{K}}_d^0$ is actually pre-Tannakian. Thus we constructed a fully faithful tensor functor $\uRep(S_d)\to {\mathcal{K}}_d^0$ where ${\mathcal{K}}_d^0$ is a pre-Tannakian category. Of course a priori this might be quite different from Deligne’s functor $\uRep(S_d)\to \uRep^{ab}(S_d)$.
Verification of $t-$structure axioms {#axiom}
------------------------------------
The main goal of this Section is to prove Theorem \[tstrth\].
### {#tneg}
We start by reformulating the definition of ${\mathcal{K}}_d^{\le 0}$ and ${\mathcal{K}}_d^{\ge 0}$ in terms of negligible objects, i.e. objects whose identity morphisms are negligible.
\[vanish\] Let $K\in {\mathcal{K}}_d$. Then $K\in {\mathcal{K}}_d^{\le 0}$ if and only if ${\operatorname{Hom}}(K,A[n])=0$ for any negligible $A\in\uRep(S_d)$ and $n\in {{\mathbb Z}}_{<0}$. Similarly, $K\in {\mathcal{K}}_d^{\ge 0}$ if and only if ${\operatorname{Hom}}(K,A[n])=0$ for any negligible $A\in\uRep(S_d)$ and $n\in {{\mathbb Z}}_{>0}$.
We prove only the characterization of ${\mathcal{K}}_d^{\le 0}$ (the case of ${\mathcal{K}}_d^{\ge 0}$ is similar). Assume first that ${\operatorname{Hom}}(K,A[n])=0$ for any negligible $A$ and $n\in {{\mathbb Z}}_{<0}$. By Proposition \[xidemp\] $\Delta^\ast=\Delta$, thus by Corollary \[DeltaGen\] $\Delta\otimes B=\Delta^\ast\otimes B$ is negligible for all $B\in\uRep(S_d)$. Hence, ${\operatorname{Hom}}(\Delta {\otimes}K, B[n])={\operatorname{Hom}}(K, \Delta^*{\otimes}B[n])=0$ for any $B\in\uRep(S_d)$ and $n\in {{\mathbb Z}}_{<0}$. Since by Proposition \[split\] the object $\Delta {\otimes}K\in {\mathcal{K}}_d$ is split we get immediately that $K\in {\mathcal{K}}_d^{\le 0}$.
Conversely, assume that $K\in {\mathcal{K}}_d^{\le 0}$. Then by definition ${\operatorname{Hom}}(\Delta {\otimes}K, B[n])=0$ for any $B\in \uRep(S_d)$ and $n\in {{\mathbb Z}}_{<0}$. Hence ${\operatorname{Hom}}(K, \Delta^* {\otimes}B[n])=0$. Since, by Corollary \[DeltaGen\], any negligible object is a direct summand of an object of the form $\Delta{\otimes}B=\Delta^*{\otimes}B$ we are done.
### Blockwise description of $({\mathcal{K}}_d^{\le 0}, {\mathcal{K}}_d^{\ge 0})$ {#tblock}
Recall that the category $\uRep(S_d)$ decomposes into a direct sum of blocks $\uRep(S_d)=\oplus_\mathsf{b} \uRep_{\mathsf{b}}(S_d)$, see §\[StBlocks\]. Similarly, we have a decomposition ${\mathcal{K}}_d=\oplus_\mathsf{b}({\mathcal{K}}_d)_\mathsf{b}$ (in other words, for any object $K\in {\mathcal{K}}_d$ we have a canonical decomposition $K=\oplus_\mathsf{b}K_\mathsf{b}$ where all the terms of the complex $K_\mathsf{b}\in ({\mathcal{K}}_d)_\mathsf{b}$ are in the block $\uRep_\mathsf{b}(S_d)$). Since $\Delta {\otimes}(\oplus_\mathsf{b}K_\mathsf{b})=\oplus_\mathsf{b}\Delta {\otimes}K_\mathsf{b}$ we see that $K=\oplus_\mathsf{b}K_\mathsf{b}\in {\mathcal{K}}_d^{\le 0}$ if and only if $K_\mathsf{b}\in {\mathcal{K}}_d^{\le 0}$ for any $\mathsf{b}$ (and similarly for ${\mathcal{K}}_d^{\ge 0}$). In other words ${\mathcal{K}}_d^{\le 0}=\oplus_\mathsf{b}({\mathcal{K}}_d^{\le 0})_\mathsf{b}$ where $({\mathcal{K}}_d^{\le 0})_\mathsf{b}={\mathcal{K}}_d^{\le 0}\cap ({\mathcal{K}}_d)_\mathsf{b}$, that is the subcategory ${\mathcal{K}}_d^{\le 0}$ is compatible with the block decomposition (and similarly for ${\mathcal{K}}_d^{\ge 0}=\oplus_\mathsf{b}({\mathcal{K}}_d^{\ge 0})_\mathsf{b}$). Thus in order to verify that $({\mathcal{K}}_d^{\le 0}, {\mathcal{K}}_d^{\ge 0})$ is a $t-$structure on ${\mathcal{K}}_d$ it is sufficient to verify that $(({\mathcal{K}}_d^{\le 0})_\mathsf{b}, ({\mathcal{K}}_d^{\ge 0})_\mathsf{b})$ is a $t-$structure on $({\mathcal{K}}_d)_\mathsf{b}$ for every block $\mathsf{b}$. Fortunately, Proposition \[vanish\] gives rise to an easy description of $({\mathcal{K}}_d^{\le 0})_\mathsf{b}$ and $({\mathcal{K}}_d^{\ge 0})_\mathsf{b}$.
\[blockbyblock\] Let $K\in ({\mathcal{K}}_d)_\mathsf{b}$.
\(a) Assume that $\mathsf{b}$ is a semisimple block and let $L$ be the unique indecomposable object in $\mathsf{b}$. Then $K\in ({\mathcal{K}}_d^{\le 0})_\mathsf{b}$ (resp. $K\in ({\mathcal{K}}_d^{\ge 0})_\mathsf{b}$) if and only if $K\in ({\mathcal{K}}_d)_\mathsf{b}$ and ${\operatorname{Hom}}(K,L[n])=0$ for any $n\in {{\mathbb Z}}_{<0}$ (resp. for $n\in {{\mathbb Z}}_{>0}$).
\(b) Assume that $\mathsf{b}$ is a non-semisimple block with indecomposable objects $L_i$ for $i\in {{\mathbb Z}}_{\ge 0}$ labeled as in Proposition \[StBlockProp\](ii). Then $K\in ({\mathcal{K}}_d^{\le 0})_\mathsf{b}$ (resp. $K\in ({\mathcal{K}}_d^{\ge 0})_\mathsf{b}$) if and only if $K\in ({\mathcal{K}}_d)_\mathsf{b}$ and ${\operatorname{Hom}}(K,L_i[n])=0$ for all $i>0$ and any $n\in {{\mathbb Z}}_{<0}$ (resp. for $n\in {{\mathbb Z}}_{>0}$).
Combine Proposition \[vanish\] and Proposition \[StBlockProp\].
### Analogy with Temperley-Lieb category
The definition of the $t-$structure in §\[deft\] was motivated by the following analogy. Pick a nontrivial root of unity $q$ such that $q+q^{-1}\in F$ and recall the Temperley-Lieb category $TL(q)$ from §\[TLcat\]. Consider the category $K^b(TL(q))$. It is well known (see e.g. [@O Proposition 2.7]) that the embedding $TL(q)\subset {\mathcal{C}}_q$ induces an equivalence of triangulated categories $K^b(TL(q))\simeq D^b({\mathcal{C}}_q)$ where $D^b({\mathcal{C}}_q)$ is the derived category of the abelian category ${\mathcal{C}}_q$. In particular the category ${\mathcal{D}}_q:=K^b(TL(q))$ inherits a natural $t-$structure $({\mathcal{D}}_q^{\le 0},{\mathcal{D}}_q^{\ge 0})$ from the category $D^b({\mathcal{C}}_q)$, see e.g. [@BBD Exemple 1.3.2(i)][^9]. This $t-$structure can be characterized as follows.
Let $St:=V_{l-1}\in TL(q)$ be the so called [*Steinberg module*]{}. It is known (see [@APW Theorem 9.8]) that $St$ is a projective object of the category ${\mathcal{C}}_q$. Thus $St{\otimes}Y$ is a projective object of ${\mathcal{C}}_q$ for any $Y\in {\mathcal{C}}_q$, see [@APW Lemma 9.10]. In particular, for any $K\in {\mathcal{D}}_q$ the object $St{\otimes}K\in {\mathcal{D}}_q$ is isomorphic to its cohomology (as a finite complex consisting of projective modules and with projective cohomology). It is well known that each projective object of ${\mathcal{C}}_q$ is contained in $TL(q)\subset {\mathcal{C}}_q$, see [@A (5.7)]. Thus in the language of §\[homgen\] for any $K\in K^b(TL(q))$ the complex $St{\otimes}K$ is split (analogous to Proposition \[split\]). It is clear that $K\in {\mathcal{D}}_q^{\le 0}$ if and only if $St{\otimes}K$ is concentrated in non-positive degrees and similarly for ${\mathcal{D}}_q^{\ge 0}$. This is a counterpart of the definition of the $t-$structure $({\mathcal{K}}_d^{\le 0},{\mathcal{K}}_d^{\ge 0})$.
Furthermore, it is known that each direct summand of $St{\otimes}Y$ for $Y\in TL(q)$ is negligible (see [@A Proposition 3.5 and Lemma 3.6]) and that each negligible object of $TL(q)$ is a direct summand of $St{\otimes}Y$ with $Y\in TL(q)$, see [@A p. 158]. Thus we have the following counterpart of Proposition \[vanish\] (with a similar proof):
\(a) [*Let $K\in {\mathcal{D}}_q$. Then $K\in {\mathcal{D}}_q^{\le 0}$ (resp. $K\in {\mathcal{D}}_q^{\ge 0}$) if and only if ${\operatorname{Hom}}(K,A[n])=0$ for any negligible $A\in TL(q)$ and $n\in {{\mathbb Z}}_{<0}$ (resp. $n\in {{\mathbb Z}}_{>0}$).*]{}
Hence, following §\[tblock\], we can give a blockwise description of the $t-$structure $({\mathcal{D}}_q^{\le 0},{\mathcal{D}}_q^{\ge 0})$. For a block $\mathsf{b}$ let $({\mathcal{D}}_q)_{\mathsf{b}}$ denote the full subcategory of ${\mathcal{D}}_q=K^b(TL(q))$ consisting of complexes with all terms from the block $\mathsf{b}$. Using Lemma \[TLneglig\] we obtain the following counterpart of Proposition \[blockbyblock\]:
\(b) [*Let $\mathsf{b}$ be a non-semisimple block of $TL(q)$ with indecomposable objects $L_i$ for $i\in {{\mathbb Z}}_{\ge 0}$ labeled as in Remark \[TLlabel\]. Let $K\in ({\mathcal{D}}_q)_{\mathsf{b}}$. Then $K\in {\mathcal{D}}_q^{\le 0}$ (resp. $K\in {\mathcal{D}}_q^{\ge 0}$) if and only if ${\operatorname{Hom}}(K,L_i[n])=0$ for all $i>0$ and any $n\in {{\mathbb Z}}_{<0}$ (resp. for $n\in {{\mathbb Z}}_{>0}$).*]{}
From this description it is clear that the pair $({\mathcal{D}}_q^{\le 0}\cap ({\mathcal{D}}_q)_{\mathsf{b}}, {\mathcal{D}}_q^{\ge 0}\cap ({\mathcal{D}}_q)_{\mathsf{b}})$ of subcategories of $({\mathcal{D}}_q)_{\mathsf{b}}$ corresponds to the pair $(({\mathcal{K}}_d^{\le 0})_{\mathsf{b}'}, ({\mathcal{K}}_d^{\ge 0})_{\mathsf{b}'})$ under the equivalence $({\mathcal{D}}_q)_{\mathsf{b}}\simeq ({\mathcal{K}}_d)_{\mathsf{b}'}$ induced by the equivalence of blocks from Proposition \[TLblocks\]. Since $({\mathcal{D}}_q^{\le 0}\cap ({\mathcal{D}}_q)_{\mathsf{b}}, {\mathcal{D}}_q^{\ge 0}\cap ({\mathcal{D}}_q)_{\mathsf{b}})$ is a $t-$structure on the category $({\mathcal{D}}_q)_{\mathsf{b}}$ we have the following
\[pokus\] Let $\mathsf{b}$ be a non-semisimple block of the category $TL(q)$ and let $\mathsf{b}'$ be an equivalent block in the category $\uRep(S_d)$ as in Proposition \[TLblocks\]. Then $(({\mathcal{K}}_d^{\le 0})_{\mathsf{b}'}, ({\mathcal{K}}_d^{\ge 0})_{\mathsf{b}'})$ is a $t-$structure on the category $({\mathcal{K}}_d)_{\mathsf{b}'}$. $\square$
### Proof of Theorem \[tstrth\] {#prtstr}
It suffices to show $(({\mathcal{K}}_d^{\le 0})_\mathsf{b}, ({\mathcal{K}}_d^{\ge 0})_\mathsf{b})$ is a $t-$structure on $({\mathcal{K}}_d)_\mathsf{b}$ for every block $\mathsf{b}$. If the block $\mathsf{b}$ is semisimple then the category $({\mathcal{K}}_d)_\mathsf{b}$ can be identified with $K^b({\operatorname{Vec}}_F)$ and Proposition \[blockbyblock\] (a) shows that $(({\mathcal{K}}_d^{\le 0})_\mathsf{b}, ({\mathcal{K}}_d^{\ge 0})_\mathsf{b})$ is the standard $t-$structure on $K^b({\operatorname{Vec}}_F)$.
It remains to consider the case when $\mathsf{b}$ is a non-semisimple block. Choose a nontrivial root of unity $q$ such that $q+q^{-1}\in F$ (for example a primitive cubic root of unity $\zeta$ will work for any $F$ since $\zeta+\zeta^{-1}=-1\in F$). Then there is a non-semisimple block in $TL(q)$, which is equivalent to $\mathsf{b}$ (Proposition \[TLblocks\]). Hence, by Corollary \[pokus\], $(({\mathcal{K}}_d^{\le 0})_\mathsf{b}, ({\mathcal{K}}_d^{\ge 0})_\mathsf{b})$ is a $t-$structure on $({\mathcal{K}}_d)_\mathsf{b}$. $\square$
### Complements
The proof in §\[prtstr\] implies the following
\[KpreT\] (a) The category ${\mathcal{K}}_d^0$ is pre-Tannakian.
\(b) Any object of the category ${\mathcal{K}}_d^0$ is isomorphic to a subquotient of a direct sum of tensor powers of $[pt]$.
We already know that the category ${\mathcal{K}}_d^0$ is an abelian tensor category (see Corollary \[Kdabtens\]). It is obvious that ${\operatorname{Hom}}$’s are finite dimensional and ${\text{End}}({\mathbf{1}})=F$ since this is true in the category ${\mathcal{K}}_d$. The category ${\mathcal{K}}_d^0$ is rigid: if $\Delta {\otimes}K$ is concentrated in degree zero then the same is true for $\Delta {\otimes}K^*\simeq (\Delta {\otimes}K)^*$. It remains to check that any object of ${\mathcal{K}}_d^0$ has finite length. It is clear that we can verify this block by block. The result is clear for semisimple blocks since by Proposition \[blockbyblock\](a) the core of the corresponding $t-$structure identifies with ${\operatorname{Vec}}_F$. This is also clear for non-semisimple blocks since the corresponding $t-$structure (described in Proposition \[blockbyblock\]) identifies with the $t-$structure on a block of the Temperley-Lieb category and the corresponding core has all objects of finite length since this is true for the category ${\mathcal{C}}_q$. This proves (a).
For (b) we use the same argument as above: it is sufficient to verify the statement block by block. Here the result is trivial for semisimple blocks and is known for non-semisimple ones since it is known to hold for the category ${\mathcal{C}}_q$.
Using similar techniques of importing known results about the category ${\mathcal{C}}_q$ to the category ${\mathcal{K}}_d^0$ we can obtain detailed information about this category. In particular, we see that the category ${\mathcal{K}}_d^0$ has enough projective objects; all indecomposable projective objects are direct summands of tensor powers of $[pt]$ (but powers of $[pt]$ are not projective in general; for example $[pt]^{\otimes 0}={\mathbf{1}}$ is not projective). Thus Corollary \[KpreT\](b) can be improved: any object of the category ${\mathcal{K}}_d^0$ is isomorphic to a quotient of a direct sum of tensor powers of $[pt]$.
Universal property
==================
Extension property of the category ${\mathcal{K}}_d^0$ {#extension}
------------------------------------------------------
We start with the following
\[ext\] Let ${\mathcal{T}}$ be a pre-Tannakian category and let ${\mathcal}{F}: \uRep(S_d)\to {\mathcal{T}}$ be a tensor functor. Assume that ${\mathcal}{F}(\Delta)\ne 0$. Then the functor ${\mathcal}{F}$ (uniquely) factorizes as $\uRep(S_d)\to {\mathcal{K}}^0_d\to {\mathcal{T}}$ where ${\mathcal{K}}^0_d\to {\mathcal{T}}$ is an exact tensor functor.
Let $K\in {\mathcal{K}}_d^0$. We can consider ${\mathcal}{F}(K)\in K^b({\mathcal{T}})$. Since the category ${\mathcal{T}}$ is abelian we can talk about cohomology of ${\mathcal}{F}(K)$.
$H^i({\mathcal}{F}(K))=0$ for $i\ne 0$.
Notice that for any $0\ne X\in {\mathcal{T}}$ we have $X{\otimes}{\mathcal}{F}(\Delta)\ne 0$. Since the endofunctor $-{\otimes}{\mathcal}{F}(\Delta)$ of the category ${\mathcal{T}}$ is exact (see e.g. [@BK Proposition 2.1.8]) we see that $H^i({\mathcal}{F}(K {\otimes}\Delta))=H^i({\mathcal}{F}(K){\otimes}{\mathcal}{F}(\Delta))=H^i({\mathcal}{F}(K)){\otimes}{\mathcal}{F}(\Delta)$. By definition of ${\mathcal{K}}_d^0$ the cohomology of ${\mathcal}{F}(K{\otimes}\Delta)$ is concentrated in degree zero and we are done.
We now define the functor ${\mathcal{K}}^0_d\to {\mathcal{T}}$ as $K\mapsto H^0({\mathcal}{F}(K))$ with the tensor structure induced by the one on ${\mathcal}{F}$ (or rather its extension to $K^b(\uRep(S_d))\to K^b({\mathcal{T}})$).
Here is an example of tensor functor between abelian rigid tensor categories which is not exact. Let $k$ be a field of characteristic 2 and consider the category ${\operatorname{Rep}}_k({{\mathbb Z}}/2{{\mathbb Z}})$ of finite dimensional $k-$representations of ${{\mathbb Z}}/2{{\mathbb Z}}$. This category has precisely 2 indecomposable objects: one is simple and 1-dimensional; the other is projective and has categorical dimension 0. Thus the quotient of ${\operatorname{Rep}}_k({{\mathbb Z}}/2{{\mathbb Z}})$ by the negligible morphisms is equivalent to the category ${\operatorname{Vec}}_k$ of finite dimensional vector spaces over $k$. Clearly the quotient functor ${\operatorname{Rep}}_k({{\mathbb Z}}/2{{\mathbb Z}})\to {\operatorname{Vec}}_k$ is not exact since it sends the projective object to zero. One can also construct a similar example over a field of characteristic zero using the representation category of the additive supergroup of a 1-dimensional odd space.
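To spell out why the quotient functor kills the projective object in this example (a short sketch): the projective indecomposable is the regular representation, which can be written as $k[x]/(x^2)$ with $x=g-1$ for the generator $g$ of ${{\mathbb Z}}/2{{\mathbb Z}}$. Every endomorphism of it has the form $a\,{\text{id}}+b\,x$ with $a,b\in k$, and $${\text{Tr}}(a\,{\text{id}}+b\,x)=2a+0=0\in k,$$ since ${\text{char}}(k)=2$ and $x$ acts nilpotently. Hence the identity of this object is negligible, so the object is sent to zero in the quotient ${\operatorname{Vec}}_k$.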
Fundamental groups of ${\mathcal{K}}_d^0$ and ${\operatorname{Rep}}(S_d)$
-------------------------------------------------------------------------
Let $\pi$ be the fundamental group of the pre-Tannakian category ${\mathcal{K}}_d^0$. The action of $\pi$ on $[pt]\in \uRep(S_d)\subset {\mathcal{K}}_d^0$ defines a homomorphism $\pi \to S_{{\bf I}}$ where ${{\bf I}}={\text{Spec}}([pt])$.
\[fgusd\] The homomorphism ${\varepsilon}: \pi \to S_{{\bf I}}$ is in fact an isomorphism.
Since the object $[pt]$ generates ${\mathcal{K}}_d^0$ (see Corollary \[KpreT\](b)) the homomorphism ${\varepsilon}: \pi \to S_{{\bf I}}$ is an embedding.
Consider the category ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$. It is shown in (the proof of) [@Del07 Proposition B1] that its fundamental group is precisely the group $S_{{\bf I}}^{\varepsilon}={\text{Aut}}_{{\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})}({{\bf I}})$. We have an obvious tensor functor $\uRep(S_d)\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$; by Proposition \[ext\] it extends to a tensor functor ${\mathcal}{F}: {\mathcal{K}}_d^0\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$. Thus we have a homomorphism $S_{{\bf I}}^{\varepsilon}\to {\mathcal}{F}(\pi)$. It is clear that the composition $S_{{\bf I}}^{\varepsilon}\to {\mathcal}{F}(\pi)\subset {\mathcal}{F}(S_{{\bf I}})=S_{{\bf I}}^{\varepsilon}$ is the identity map. The result follows.
We also recall here the following result from [@Del90 8.14(ii)]:
\[fgsd\] The fundamental group of the category ${\operatorname{Rep}}(S_d)$ is the group $S_d$ acting on itself by conjugation. $\square$
It is explained in [@Del90 8.14(ii)] that we can replace $S_d$ with any affine algebraic group $G$ in the statement of the previous proposition.
Proof of Theorem \[main\]
-------------------------
We start with the following result:
\[mainly\] Let ${\mathcal{T}}$ be a pre-Tannakian category and let ${\mathcal}{F}: \uRep(S_d)\to {\mathcal{T}}$ be a tensor functor with $T={\mathcal}{F}([pt])$.
\(a) If ${\mathcal}{F}(\Delta)=0$ then the category ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ endowed with the functor ${\mathcal}{F}_T: \uRep(S_d)\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ is equivalent to ${\operatorname{Rep}}(S_d)$ equipped with the functor $\uRep(S_d)\to {\operatorname{Rep}}(S_d)$.
\(b) If ${\mathcal}{F}(\Delta)\ne 0$ then the category ${\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ endowed with the functor ${\mathcal}{F}_T: \uRep(S_d)\to {\operatorname{Rep}}(S_{{\bf I}},{\varepsilon})$ is equivalent to ${\mathcal{K}}_d^0$ equipped with the functor $\uRep(S_d)\to {\mathcal{K}}_d^0$.
\(a) In this case ${\mathcal}{F}$ factorizes as $\uRep(S_d)\to {\operatorname{Rep}}(S_d)\to {\mathcal{T}}$ (see Corollary \[DeltaGen\]). The result follows from [@Del90 Théorème 8.17] and Proposition \[fgsd\].
\(b) In this case ${\mathcal}{F}$ extends to a functor $\uRep(S_d)\to {\mathcal{K}}_d^0\to {\mathcal{T}}$ by Proposition \[ext\]. The result follows from [@Del90 Théorème 8.17] and Proposition \[fgusd\].
If we apply Theorem \[mainly\](b) to the category ${\mathcal{T}}=\uRep(S_{-1})$ and the functor ${\underline{\operatorname{Res}}}^{S_d}_{S_{-1}}:\uRep(S_d)\to {\mathcal{T}}$ described in §\[222\] and §\[DLsec\] we obtain the following
\[abD\] The category $\uRep^{ab}(S_d)$ endowed with the functor $\uRep(S_d)\to
\uRep^{ab}(S_d)$ is equivalent to the category ${\mathcal{K}}_d^0$ with the functor $\uRep(S_d)\to {\mathcal{K}}_d^0$. $\square$
Clearly Theorem \[mainly\] and Corollary \[abD\] together imply Theorem \[main\].
[A]{}
H. H. Andersen, *Tensor products of quantized tilting modules,* Comm. Math. Phys. [**149**]{} (1992), no. 1, 149-159.
H. H. Andersen, P. Polo, K. Wen, *Representations of quantum algebras,* Invent. Math. [**104**]{} (1991), 1-59.
B. Bakalov, A. Kirillov Jr., *Lectures on Tensor categories and modular functors*, AMS, (2001).
A. Beilinson, J. Bernstein, P. Deligne, *Faisceaux pervers*, Astérisque [**100**]{} (1982).
J. Comes, V. Ostrik, *On blocks of Deligne’s category $\uRep(S_t)$,* Adv. in Math. [**226**]{} (2011), no. 2, 1331-1377.
P. Deligne, *Catégories tannakiennes,* The Grothendieck Festschrift, Vol. II, 111-195, Progr. Math., 87, Birkhäuser Boston, Boston, MA, 1990.
P. Deligne, *La catégorie des représentations du groupe symétrique $S_t$, lorsque $t$ n’est pas un entier naturel,* in: Algebraic Groups and Homogeneous Spaces, in: Tata Inst. Fund. Res. Stud. Math., Tata Inst. Fund. Res., Mumbai, 2007, 209-273.
W. Fulton, J. Harris, *Representation theory. A first course,* Graduate Texts in Mathematics, 129. Readings in Mathematics. Springer-Verlag, New York, 1991. xvi+551 pp.
F. Goodman, H. Wenzl, *Ideals in the Temperley-Lieb category,* Appendix to M. H. Freedman, *A magnetic model with a possible Chern-Simons phase,* Comm. Math. Phys. [**234**]{} (2003) 129-183.
M. Kashiwara, P. Schapira, *Categories and sheaves,* Grundlehren der Mathematischen Wissenschaften, 332. Springer-Verlag, Berlin, 2006. x+497 pp.
V. Ostrik, *Module categories over representations of $SL_q(2)$ in the non-semisimple case,* Geom. Funct. Anal. [**17**]{} (2008), no. 6, 2005-2017.
[^1]: Set $I$ to be the set of $F-$algebra homomorphisms $T\to \bar F$ where $\bar F$ is an algebraic closure of $F$ and use an obvious homomorphism $G\to S_I$.
[^2]: We refer the reader to [@Del07 §5.8] for an example of Karoubian symmetric tensor category which admits no braided tensor functor to an abelian symmetric tensor category.
[^3]: Recall that a morphism $f\in {\operatorname{Hom}}_{\mathcal}{T}(X,Y)$ is negligible if ${\text{Tr}}(fg)=0$ for any $g\in {\operatorname{Hom}}_{\mathcal}{T}(Y,X)$. We will call an object negligible if its identity morphism is negligible.
[^4]: we refer the reader to [@Del07 §1.7-1.8] for the discussion of this notion.
[^5]: Here $y_\lambda$ is a scalar multiple of the so-called Young symmetrizer (see for instance [@FH]).
[^6]: In the notation of [@CO], $\Delta_n=([n], x_n)$.
[^7]: In fact, (\[D1\]), (\[D2\]), (\[D3\]) do not depend on $t$, so we only need to verify they hold for some $t$.
[^8]: If $t\not\in{\mathbb{Z}}_{\geq0}$ then $\uRep(S_t)$ is semisimple (see [@Del07 Théorème 2.18] or [@CO Corollary 5.21]). Hence there are no nonzero proper tensor ideals in $\uRep(S_t)$ when $t\not\in{\mathbb{Z}}_{\geq0}$.
[^9]: Thus the category ${\mathcal{D}}_q^{\le 0}$ consists of objects of $D^b({\mathcal{C}}_q)$ with nontrivial cohomology only in non-positive degrees and similarly for ${\mathcal{D}}_q^{\ge 0}$.
---
abstract: |
This paper proposes a paradigm shift in the valuation of long term annuities, away from classical no-arbitrage valuation towards valuation under the real world probability measure. Furthermore, we apply this valuation method to two examples of annuity products, one having annual payments linked to a mortality index and the savings account and the other having annual payments linked to a mortality index and an equity index with a guarantee that is linked to the same mortality index and the savings account. Out-of-sample hedge simulations demonstrate the effectiveness of real world valuation.
In contrast to risk neutral valuation, which is a form of relative valuation, the long term average excess return of the equity market comes into play. Instead of the savings account, the numéraire portfolio is employed as the fundamental unit of value in the analysis. The numéraire portfolio is the strictly positive, tradable portfolio that when used as benchmark makes all benchmarked nonnegative portfolios supermartingales. The benchmarked real world value of a benchmarked contingent claim equals its real world conditional expectation. This yields the minimal possible value for its hedgeable part and minimizes the fluctuations for its benchmarked hedge error. Under classical assumptions, actuarial and risk neutral valuation emerge as special cases of the proposed real world valuation. In long term liability and asset valuation, the proposed real world valuation can lead to significantly lower values than suggested by classical approaches when an equivalent risk neutral probability measure does not exist.
author:
- Kevin Fergusson
- Eckhard Platen
bibliography:
- 'HedgingCLBsRevised.bib'
title: 'Less-Expensive Valuation of Long Term Annuities Linked to Mortality, Cash and Equity'
---
[^1]
Introduction
============
Long dated contingent claims are relevant in insurance, pension fund management and derivative valuation. This paper proposes a paradigm shift in the valuation of long term contracts, away from classical no-arbitrage valuation, towards valuation under the real world probability measure. In contrast to risk neutral valuation, which is a form of relative valuation, the long term average trend of the equity market above the fixed income money market, known as the equity premium, comes into play in the proposed real world valuation. A benchmark, the numéraire portfolio, is employed as the fundamental unit of value in the analysis, replacing the savings account. The numéraire portfolio is the strictly positive, tradable portfolio that, when used as benchmark, makes all benchmarked nonnegative portfolios supermartingales. This means that their current benchmarked values are greater than, or equal to, their expected future benchmarked values. Furthermore, the benchmarked real world value of a benchmarked contingent claim is proposed to equal its real world conditional expectation. This yields the minimal possible value for the hedgeable part of the benchmarked contingent claim and minimizes the fluctuation of its benchmarked hedge error. It turns out that the pooled total benchmarked hedge error of a well diversified book of contracts issued by an insurance company can practically vanish due to diversification when the number of contracts becomes large. Classical actuarial and risk neutral valuation emerge as special cases of the proposed real world valuation methodology when classical modeling assumptions are imposed. In long term asset and liability valuation, real world valuation can lead to significantly lower values than suggested by classical valuation arguments when the existence of some equivalent risk neutral probability measure is not required. A wider and more realistic modeling framework then becomes available, which allows this phenomenon to be exploited.
The [*benchmark approach*]{}, described in [@PlHe2010], proposes such a framework. Instead of relying on the domestic savings account as the reference unit, a benchmark in the form of the best performing, tradable, strictly positive portfolio is chosen as numéraire. More precisely, it is proposed to employ the [*numéraire portfolio*]{} as benchmark, whose origin can be traced back to [@Long90], and which is, in general, equal to the growth optimal portfolio; see [@Kelly56].
In recent years the problem of accurately valuing long term assets and liabilities, held by insurance companies, banks and pension funds, has become increasingly important. How these institutions perform such valuations often remains unclear. However, the recent experience with low interest rate environments suggests that some major changes are due in these industries. One possible explanation, to be explored in this article, is that the risk neutral valuation paradigm itself may be inherently flawed, especially when it is applied to the valuation of long term contracts. It leads to a more expensive production method than necessary, as will be explained in the current paper.
The structure of the paper is as follows: Section \[sec:val2\] gives a brief survey on the literature about valuation methods in insurance and finance. Section \[sec:val4\] introduces the benchmark approach. Real world valuation is described in Section \[sec:val5\]. Two examples on real world valuation and hedging of long term annuities, with annual payments linked to a mortality index and either a savings account or an equity index, are illustrated in Section \[sec:val8\]. Section \[sec:val10\] concludes.
Valuation Methods for Long Term Contracts {#sec:val2}
=========================================
One of the most dynamic areas in the current risk management literature is the valuation of long term contracts, including variable annuities. The latter represent long term contracts with payoffs that depend on insured events and on underlying assets that are traded in financial markets. The valuation methods can be categorised into three main types: actuarial valuation or expected present value, risk neutral valuation and utility maximization valuation.
[*Actuarial Valuation (Expected Present Value)*]{}
One of the pioneers of calculating present values of contingent claims for life insurance companies was James Dodson, whose work is described in the historical accounts of [@Camp2016] and [@Dods1756]. The application of such methods to with-profits policies ensued. In the late 1960s US life insurers entered the variable annuity market, as mentioned in [@Sloane70], where such products required assumptions on the long term behaviour of equity markets. Many authors have analysed various actuarial models of the long term evolution of stochastic equity markets, such as [@Wise84] and [@Wilkie85; @Wilkie87; @Wilkie95].
Since the work of [@Redi1952], the matching of well-defined cash flows with liquidly traded ones, while minimizing the risk of reserves, has been a widely used valuation method in insurance. For instance, [@Wise84b; @Wise84; @Wise87a; @Wise87b; @Wise89], [@Wilkie85] and [@KeelMu95] study contracts when a perfect match is not possible.
[*Risk Neutral Valuation*]{}
The main stream of research, however, follows the concept of no-arbitrage valuation in the sense of [@Ross76] and [@HarrisonKr79]. This approach has been widely used in finance, where it appears in the guise of risk neutral valuation. The earliest applications of no-arbitrage valuation to variable annuities are in the papers by [@BrennanSc76; @BrennanSc79] and [@BoyleSc77], which extend the Black-Scholes-Merton option valuation (see [@BlackSc73] and [@Merton73b]) to the case of equity-linked insurance contracts.
The Fundamental Theorem of Asset Pricing, in its most general form formulated by [@DelbaenSc98], establishes a correspondence between the “no free lunch with vanishing risk" no-arbitrage concept and the existence of an equivalent risk neutral probability measure. This important result demonstrates that theoretically founded classical no-arbitrage pricing requires the restrictive assumption that an equivalent risk neutral probability measure must exist. In such a setting, the effect of stochastic interest rates on the risk neutral value of a guarantee has been discussed by many authors, such as [@BacinelloOr93; @BacinelloOr96], [@AasePe94] and [@HuangCa04; @HuangCa05].
[*Risk Neutral Valuation for Incomplete Markets*]{}
In reality, one has to deal with the fact that markets are incomplete and insurance payments are not fully hedgeable. The choice of a risk neutral pricing measure is, therefore, not unique, as pointed out by [@FollmerSo86] and [@FollmerSc91], for example; see also [@HofmannPlSc92], [@GerberSh94], [@Gerber97] and [@JaimungalYo05]. [@DuffieRi91] and [@Schweizer92] address this issue by suggesting certain mean-variance hedging methods based on a form of variance- or risk-minimizing objective, assuming the existence of a particular risk neutral measure. In the latter case, the so-called minimal equivalent martingale measure, due to [@FollmerSc91], emerges as the pricing measure. This valuation method is also known as local risk minimization and was considered by [@Moller98; @Moller01], [@Schweizer01] and [@DahlMo06] for the valuation of insurance products.
[*Expected Utility Maximization*]{}
Another approach involves the maximization of expected terminal utility, see [@KaratzasShLeXu91], [@KramkovSc99] and [@Delbaen_6_02]. In this case the valuation is based on a particular form of utility indifference pricing. This form of valuation has been applied by [@HodgesNe89] and later by [@Davis97]. It has been used to value equity-linked insurance products by [@YoungZa02d; @YoungZa02i], [@Young03] and [@MooreYo03].
Typically in the context of some expected utility maximization, there is an ongoing debate on the links between the valuation of insurance liabilities and financial economics, for which the reader can be referred to [@Reitano97], [@Longley98], [@BabbelMe98], [@Moller98; @Moller02], [@PhillipsCuAl98], [@Girard00], [@Lane00] and [@Wang00; @Wang02]. Equilibrium modeling from a macro-economic perspective has been the focus of a line of research that can be traced back to [@Debreu82], [@Starr97] and [@Duffie92].
[*Stochastic Mortality Rates*]{}
Note that stochastic mortality rates are easily incorporated in the pricing of insurance products as demonstrated by [@MilevskyPr01], [@Dahl04], [@KirchMe05], [@CairnsBlDo06a; @CairnsBlDo06p; @CairnsBlDo08], [@Biffis05], [@MelnikovRo06; @MelnikovRo08] and [@JalenMa08]. Most of these authors assume that the market is complete with respect to mortality risk, which means that it can be removed by diversification.
[*Stochastic Discount Factors*]{}
Several no-arbitrage pricing concepts that are equivalent to the risk neutral approach have been popular in finance. For instance, [@Cochrane01] employs the notion of a stochastic discount factor. The use of a state-price density, a deflator or a pricing kernel has been considered by [@Constantinides92], [@Cochrane01] and [@Duffie92], respectively. Another way of describing classical no-arbitrage pricing was pioneered by [@Long90] and further developed in [@BajeuxPo97] and [@Becherer01], who use the numéraire portfolio as numéraire instead of the savings account, and employ the real world probability measure as pricing measure to recover risk neutral prices.
[*Real World Pricing under the Benchmark Approach*]{}
The previous line of research involving the numéraire portfolio comes closest to the form of real world valuation proposed under the benchmark approach in [@Plat2002b] and [@PlHe2010]. The primary difference is that the benchmark approach no longer assumes the existence of an equivalent risk neutral probability measure. In so doing, it makes a much richer class of models available for consideration and permits several self-financing portfolios to replicate the same contingent claim, selecting the least expensive one as the corresponding value process.
The benchmark approach employs the best performing, strictly positive, tradable portfolio as benchmark and makes it the central reference unit for modeling, pricing and hedging.
All valuations are performed under the real world probability measure and are, therefore, labelled “real world pricing". In a complete market where an equivalent risk neutral probability measure exists, real world pricing yields the same price as risk neutral pricing. When there is no equivalent risk neutral probability measure in the market model, risk neutral prices can still be employed without generating any economically meaningful arbitrage, but these may be more expensive than the respective real world prices.
In [@DuPl16] the concept of benchmarked risk minimization has been introduced, which yields, via the real world price, the minimal value for the hedgeable part of a not fully hedgeable contingent claim and minimizes the fluctuations of the profits and losses when denominated in units of the numéraire portfolio. Risk minimization that is close to the previously mentioned concept of local risk minimization of [@FollmerSc91] and [@FollmerSo86] was studied under the benchmark approach in [@BiCrPl2014].
Benchmark Approach {#sec:val4}
==================
Within this and the following section we give a brief survey about the benchmark approach, which goes beyond results presented in [@PlHe2010] and underpins our findings. Consider a market comprising a finite number $J+1$ of primary security accounts. An example of such a security could be an account containing shares of a company with all dividends reinvested in that stock. A savings account held in some currency is another example of a primary security account. In reality, time is continuous and this paper considers continuous time models. These can provide compact and elegant mathematical descriptions of asset value dynamics. We work on a filtered probability space $(\Omega , {\mathcal{F}},{\underline{\mathcal{F}}} , P )$ with filtration ${\underline{\mathcal{F}}} = ({\mathcal{F}}_t)_{t\ge 0}$ satisfying the usual conditions, as in [@KaratzasSh91].
This section introduces the benchmark approach with its concept of real world pricing. The key assumption is that there exists a best performing, strictly positive, tradable portfolio in the given investment universe, which we specify later on as the numéraire portfolio. This benchmark portfolio can be interpreted as a universal currency. Its existence turns out to be sufficient for the formulation of powerful results concerning diversification, portfolio optimization and valuation.
The [*benchmarked value*]{} of a security represents its value denominated in units of the benchmark portfolio. Denote by ${\hat{S}}^j_t$ the benchmarked value of the $j$th [*primary security account*]{}, $j \in \{ 0,1,\ldots ,J\}$, at time $t \geq 0$. The $0$-th primary security account is chosen to be the savings account of the domestic currency. The particular dynamics of the primary security accounts are not important for the formulation of several statements presented below. For simplicity, taxes and transaction costs are neglected in the paper.
The market participants can form self-financing portfolios with primary security accounts as constituents. A portfolio at time $t$ is characterized by the number $\delta^j_t$ of units held in the $j$th primary security account, $j \in \{ 0,1,2,\ldots ,J \}$, $t \geq 0$. Assume for any given [*strategy*]{} $\delta=\{\delta_t=(\delta^0_t,
\delta^1_t,\ldots , \delta^J_t )^\top, t \geq 0 \}$ that the values $\delta^0_t,\delta^1_t,\ldots ,\delta^J_t $ depend only on information available at the time $t$. The value of the benchmarked portfolio, which means its value denominated in units of the benchmark, is given at time $t$ by the sum $$\label{dt2.5'}
{\hat{S}}^\delta_t = \sum_{j=0}^J \delta^j_t\,{\hat{S}}^j_t,$$ for $t \geq 0$. Since there is only finite total wealth available in the market, the paper considers only strategies whose associated benchmarked portfolio values remain finite at all times.
Let $E_t(X)=E(X|{\mathcal{F}}_t)$ denote the expectation of a random variable $X$ under the real world probability measure $P$, conditioned on the information available at time $t$ captured by ${\mathcal{F}}_t$ (for example, see Section 8 of Chapter 1 of [@Shiryaev84]). This allows us to formulate the main assumption of the benchmark approach as follows:
\[ass:dt3.2b\] There exists a strictly positive benchmark portfolio, called the numéraire portfolio, such that each benchmarked nonnegative portfolio ${\hat{S}}^\delta_t$ forms a supermartingale, which means that $$\label{bval2}
{\hat{S}}^\delta_t \geq E_t({\hat{S}}^\delta_s)$$ for all $0 \leq t \leq s < \infty$.
Inequality (\[bval2\]) can be referred to as the [*supermartingale property*]{} of benchmarked securities. It is obvious that the benchmark represents, in the sense of Inequality (\[bval2\]), the best performing portfolio, forcing all benchmarked nonnegative portfolios in the mean downward or leaving them with no trend. In general, the assumed numéraire portfolio coincides with the growth optimal portfolio, which is the portfolio that maximizes expected logarithmic utility, see [@Kelly56]. Since only the existence of the numéraire portfolio is required, the benchmark approach reaches beyond the classical no-arbitrage modeling world.
According to Assumption \[ass:dt3.2b\] the current benchmarked value of a nonnegative portfolio is greater than or equal to its expected future benchmarked values. Assumption \[ass:dt3.2b\] guarantees several essential properties of a financial market model without assuming a particular dynamics for the asset values. For example, it implies the absence of economically meaningful arbitrage by ensuring that any strictly positive portfolio remains finite at any finite time because the best performing portfolio has this property. As a consequence of the supermartingale property and because a nonnegative supermartingale that reaches zero is absorbed at zero, no wealth can be created from zero initial capital under limited liability.
For the classical risk neutral valuation, the corresponding no-arbitrage concept is formalised as “no free lunch with vanishing risk" (NFLVR), see [@DeSc1994b]. The benchmark approach assumes that the benchmark portfolio cannot explode, which is equivalent to the “no unbounded profits with bounded risk" (NUPBR) concept, see [@KaratzasKa07]. An equivalent martingale measure is not required to exist and, therefore, benchmarked portfolio strategies are permitted to form strict supermartingales, a phenomenon which we exploit in the current paper.
Another fundamental property that follows directly from the supermartingale property is that the benchmark portfolio is unique. To see this, consider two strictly positive portfolios that are supposed to represent the benchmark. The first portfolio, when expressed in units of the second one, must satisfy the supermartingale property (\[bval2\]). By the same argument, the second portfolio, when expressed in units of the first one, must also satisfy the supermartingale property. Consequently, by Jensen’s inequality both portfolios must be identical. Thus, the value process of the benchmark that starts with given strictly positive initial capital is unique. Due to possible redundancies in the set of primary security accounts, this does not imply uniqueness for the trading strategy generating the benchmark portfolio.
Assumption \[ass:dt3.2b\] is satisfied for most reasonable financial market models. It simply asserts the existence of a best performing portfolio that does not “explode". This requirement can be interpreted as the absence of economically meaningful arbitrage. In Theorem 14.1.7 of Chapter 14 of [@PlHe2010], Assumption \[ass:dt3.2b\] has been verified for jump diffusion markets, which cover a wide range of possible market dynamics. [@KaratzasKa07] show that Assumption \[ass:dt3.2b\] is satisfied for any reasonable semimartingale model. Note that Assumption \[ass:dt3.2b\] permits us to model benchmarked primary security accounts that are not martingales. This is necessary for realistic long term market modeling, as will be demonstrated in Section \[sec:val8\].
By referring to results in [@Platen05dp], [@LePl06a], [@PlatenRe12] and [@PlRe17], one can say that the benchmark portfolio is not only a theoretical construct, but can be approximated by well diversified portfolios, e.g. by the MSCI world stock index for the global equity market or the S$\&$P500 total return index for the US equity market.
A special type of security emerges when equality holds in relation (\[bval2\]).
\[def:bval4\] A security is called [*fair*]{} if its benchmarked value ${\hat{V}}_t$ forms a martingale, that is, the current value of the process $\hat{V}$ is the best forecast of its future values, which means that, $$\label{bval6}
{\hat{V}}_t = E_t\left( {\hat{V}}_s\right)$$ for all $0 \leq t \leq s < \infty$.
Note that the above notion of a fair security is employing the best performing portfolio, the benchmark. The benchmark approach allows us to consider securities that are not fair. This important flexibility is missing in the classical no-arbitrage approach and will be required when modeling the market realistically over long time periods.
Real World Pricing {#sec:val5}
==================
As stated earlier, the most obvious difference between the benchmark approach and the classical risk neutral approach is the choice of pricing measure. The former uses the real world probability measure with the numéraire portfolio as reference unit for valuation, while the savings account is the chosen numéraire under the risk neutral approach, which assumes the existence of an equivalent risk neutral probability measure. This assumption is imposed in addition to our Assumption \[ass:dt3.2b\] and, therefore, significantly reduces the class of models and phenomena that can be considered. The supermartingale property (\[bval2\]) ensures that the expected return of a benchmarked nonnegative portfolio can be at most zero. In the case of a fair benchmarked portfolio, the expected return is precisely zero. The current benchmarked value of such a portfolio is, therefore, the best forecast of its benchmarked future values. The risk neutral approach assumes that the savings account is fair, which seems to be at odds with evidence; see for example [@BaGrPl2015] and [@BaIgPl2016].
Under the benchmark approach, there can be many supermartingales that approach the same future random value. Within a family of nonnegative supermartingales, the supermartingale with the smallest initial value turns out to be the corresponding martingale; see Proposition 3.3 in [@DuPl16]. This basic fact allows us to deduce directly the following [*Law of the Minimal Price*]{}:
\[the:dt4’.4\] If a fair portfolio replicates a given nonnegative payoff at some future time, then this portfolio represents the minimal replicating portfolio among all nonnegative portfolios that replicate this payoff.
For a given payoff there may exist self-financing hedge portfolios that are not fair. Consequently, the classical Law of One Price (see, for example, [@Tayl02]) no longer holds under the benchmark approach. However, the above Law of the Minimal Price instead provides a consistent, unique value system for all hedgeable contracts with finite expected benchmarked payoffs.
It follows for a given hedgeable payoff that the corresponding fair hedge portfolio represents the least expensive hedge portfolio. From an economic point of view investors prefer more to less and this is, therefore, also the correct value in a liquid, competitive market. As will be demonstrated in Section \[sec:val8\], there may exist several self-financing portfolios that hedge one and the same payoff. It is the fair portfolio that hedges the payoff at minimal cost. We emphasize that risk neutral valuation based purely on hedging via classical no-arbitrage arguments, see [@Ross76] and [@HarrisonKr79], may lead to more expensive values than those given by the corresponding fair value.
Now, consider the problem of valuing a given payoff to be delivered at a maturity date $T \in (0,\infty)$. Define a benchmarked [*contingent claim*]{} ${\hat{H}}_T$ as a nonnegative payoff denominated in units of the benchmark portfolio with finite expectation $$\label{dt4'.a}
E_0 \left( {\hat{H}}_T\right) < \infty.$$
If for a benchmarked contingent claim ${\hat{H}}_T$, $T \in
(0, \infty)$, there exists a benchmarked fair portfolio ${\hat{S}}^{\delta_{\hat{H}_T}}$, which replicates this claim at maturity $T$, that is ${\hat{H}}_T =
{\hat{S}}^{\delta_{\hat{H}_T}}_T$, then, by the above Law of the Minimal Price, its minimal replicating value process is at time $t \in
[0,T]$ given by the real world conditional expectation $$\label{dt4'.63}
{\hat{S}}^{\delta_{\hat{H}_T}}_t = E_t
\left( {\hat{H}}_T \right) .$$ Multiplying both sides of equation (\[dt4'.63\]) by the value of the benchmark portfolio in domestic currency at time $t$, denoted by $S^*_t$, one obtains the [*real world valuation formula*]{} $$\label{val3.3}
S^{\delta_{\hat{H}_T}}_t = \hat{S}^{\delta_{\hat{H}_T}}_t \, S^*_t = S^*_t\,
E_t \left( \frac{H_T}{S^*_T} \right) ,$$ where $H_T={\hat{H}}_T\,S^*_T$ is the payoff denominated in domestic currency and $S^{\delta_{\hat{H}_T}}_t$ the fair value at time $t \in [0,T]$ denominated in domestic currency. Note that the benchmark portfolio can be obtained by the product $S^*_t=(\hat{S}^0_t)^{-1}\, S^0_t$ of the inverse of the benchmarked savings account $\hat{S}^0_t$ and the value $S^0_t$ of this savings account denominated in domestic currency.
Formula (\[val3.3\]) is called the real world valuation formula because it involves the conditional expectation $E_t$ with respect to the real world probability measure $P$. It only requires the existence of the numéraire portfolio and the finiteness of the expectation in (\[dt4'.a\]). These two conditions can hardly be weakened. By introducing the concept of benchmarked risk minimization in [@DuPl16] it has been shown that the above real world valuation formula also provides the natural valuation for nonhedgeable contingent claims when one aims to diversify the nonhedgeable parts of contingent claims as much as possible.
An important application for the real world pricing formula arises when $H_T$ is independent of $S^*_T$, which leads to the [*actuarial valuation formula*]{} $$\label{dt4'.63a}
S^{\delta_{{\hat{H}}_T}}_t =P(t,T)\,E_t(H_T).$$ The derivation of (\[dt4'.63a\]) from (\[val3.3\]) exploits the simple fact that the expectation of a product of independent random variables equals the product of their expectations. One discounts in (\[dt4'.63a\]) by multiplying the real world expectation $E_t(H_T)$ with the fair zero coupon bond value $$\label{dt4'.63b}
P(t,T) = S^*_t\,E_t\left( (S^*_T)^{-1}\right) .$$ The actuarial valuation formula (\[dt4'.63a\]) has been used as a valuation rule by actuaries for centuries to determine the net present value of a claim. This important formula follows here as a direct consequence of real world pricing, confirming actuarial intuition and experience.
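To indicate how the real world valuation formula (\[val3.3\]) and the actuarial valuation formula (\[dt4'.63a\]) are evaluated in practice, the following minimal Monte Carlo sketch prices a payoff by averaging its benchmarked value under the real world measure. The geometric Brownian motion for the benchmark, the drift, volatility, horizon and toy payoff are purely illustrative assumptions of ours and not part of the model employed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely illustrative (assumed) inputs
S_star_t = 100.0                 # current value of the benchmark (numeraire portfolio)
mu, sigma = 0.08, 0.20           # assumed drift and volatility of the benchmark under P
T = 10.0                         # years until the payoff
n_paths = 200_000

# Simulate terminal benchmark values S*_T under the real world measure P
Z = rng.standard_normal(n_paths)
S_star_T = S_star_t * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Real world valuation formula: value_t = S*_t E_t( H_T / S*_T )
H_T = 1.0                                            # e.g. one dollar at maturity
fair_zcb = S_star_t * np.mean(H_T / S_star_T)        # fair zero coupon bond P(t,T)

# Actuarial valuation formula for a payoff independent of S*_T:
# value_t = P(t,T) E_t(H_T), discounting with the fair zero coupon bond
H_indep = rng.exponential(scale=1.0, size=n_paths)   # toy independent payoff
actuarial_value = fair_zcb * np.mean(H_indep)

print(f"fair zero coupon bond P(t,T): {fair_zcb:.4f}")
print(f"actuarial value of the independent payoff: {actuarial_value:.4f}")
```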
The following discussion aims to highlight the link between real world valuation and risk neutral valuation. Risk neutral valuation uses as its numéraire the domestic savings account process $S^0=\{S^0_t \, , \, t \geq 0 \}$, denominated in units of the domestic currency. Under certain assumptions, which will be described below, one can derive risk neutral values from real world values by rewriting the real world valuation formula (\[val3.3\]) in the form $$\label{dt4'.b7}
S^{\delta_{{\hat{H}}_T}}_t = E_t \left( \frac{\Lambda_T}{\Lambda_t}
\,\frac{S^0_t}{S^0_T} \,H_T \right) ,$$ employing the benchmarked, normalized savings account $\Lambda_t = \frac{S^0_t\,S^*_0}{S^*_t\,S^0_0}$ for $t \in [0,T]$. Note that $\Lambda_0=1$ and that when assuming that the putative risk neutral measure $Q$ is an equivalent probability measure we get $$\label{dt4'.b7a}
E_t \left( \frac{\Lambda_T}{\Lambda_t}
\,\frac{S^0_t}{S^0_T} \,H_T \right) = E_t^Q \left( \frac{S^0_t}{S^0_T} \,H_T \right) ,$$ where $\Lambda_t$ represents in a complete market the respective (Radon-Nikodym derivative) density at time $t$ and $E^Q_t$ denotes conditional expectation under $Q$. We remark that $Q$ is an equivalent probability measure if and only if $\Lambda_t$ forms a true martingale, which means that the benchmarked savings account ${\hat{S}}^0_t$ forms a true martingale.
For illustration, let us interpret throughout this paper the S$\&$P500 total return index as the benchmark and numéraire portfolio for the US equity market. Its monthly observations in units of the US dollar savings account are displayed in Figure \[fig:val41\] for the period from January 1871 until March 2017. The logarithms of the S$\&$P500 and the US dollar savings account are exhibited in Figure \[fig:val411\]. One clearly notes the higher long term growth rate of the S$\&$P500 when compared with that of the savings account, a stylized empirical fact which is essential for the existence of the stock market.
![Discounted S$\&$P500 total return index.[]{data-label="fig:val41"}](Figure1.pdf){width="\textwidth"}
![Logarithms of S$\&$P500 and savings account.[]{data-label="fig:val411"}](Figure2.pdf){width="\textwidth"}
![Radon-Nikodym derivative of the putative risk neutral measure in dependence on time $T$.[]{data-label="fig:val42"}](Figure3.pdf){width="\textwidth"}
The normalized inverse of the discounted S$\&$P500 allows us to plot in Figure \[fig:val42\] the resulting density process $\Lambda=\{\Lambda_t \, , \, t \in [0,T]\}$ of the putative risk neutral measure $Q$ as it appears in (\[dt4'.b7\]). Although one only has one sample path to work with, it seems unlikely that the path displayed in Figure \[fig:val42\] is the realization of a true martingale. Due to its obvious systematic downward trend it seems more likely to be the trajectory of a strict supermartingale. In this case the density process would [*not*]{} describe a probability measure and one could expect substantial overpricing to occur in risk neutral valuations of long term contracts when simply assuming that the density process is a true martingale. Note that this martingale condition is the key assumption of the theoretical foundation of classical risk neutral valuation, see [@DelbaenSc98]. In current industry practice and most theoretical work this assumption is typically made. Instead of working on the filtered probability space $(\Omega , {\mathcal{F}}, {\underline{\mathcal{F}}}, P)$ one works on the filtered probability space $(\Omega , {\mathcal{F}}, {\underline{\mathcal{F}}}, Q)$ assuming that there exists an equivalent risk neutral probability measure $Q$ without ensuring that this is indeed the case. In the case of a complete market we have seen that the benchmarked savings account has to be a martingale to ensure that risk neutral prices are theoretically founded as intended. We observed in (\[dt4'.b7\]) and (\[dt4'.b7a\]) that in this case the real world and the risk neutral valuation coincide. In the case when the benchmarked savings account is not a true martingale one can still perform risk neutral pricing formally. For a hedgeable nonnegative contingent claim one then obtains a self-financing hedge portfolio whose values represent the formally obtained risk neutral value. This portfolio, when benchmarked, is a supermartingale as a consequence of Assumption \[ass:dt3.2b\]. Therefore, if the market employs formally obtained risk neutral prices, this does not generate any economically meaningful arbitrage in the sense previously discussed. However, this may generate some classical form of arbitrage, as discussed in [@LoewensteinWi00] and [@Plat2002b]. Under the benchmark approach such classical forms of arbitrage are allowed to exist and can be systematically exploited, as we will demonstrate later on for the case of long dated bonds and similar contracts. The best performing portfolio is then still the benchmark or numéraire portfolio, which remains finite at any finite time.
Finally, consider the valuation of nonhedgeable contingent claims. Recall that the conditional expectation of a square integrable random variable can be interpreted as a least squares projection; see [@Shiryaev84]. Consequently, the real world valuation formula (\[val3.3\]) provides, with its conditional expectation, the least squares projection of a given square integrable benchmarked payoff into the set of possible current benchmarked values. It is well-known that in a least squares projection the forecasting error has mean zero and minimal variance, see [@Shiryaev84]. Therefore, the benchmarked hedge error has mean zero and minimal variance. More precisely, as shown in [@DuPl16], under benchmarked risk minimization the Law of the Minimal Price ensures through its real world valuation that the value of the contingent claim is the minimal possible value and that the benchmarked profit and loss has minimal fluctuations and is a local martingale orthogonal to all benchmarked traded wealth.
In an insurance company the benchmarked profits and losses of diversified benchmarked contingent claims are pooled. If these benchmarked profits and losses are generated by sufficiently independent sources of uncertainty, then it follows intuitively via the Law of Large Numbers that the total benchmarked profit and loss for an increasing number of benchmarked contingent claims is not only a local martingale starting at zero, but also a process with an asymptotically vanishing quadratic variation or variance. In this manner, insurance companies can theoretically complete the market asymptotically by pooling benchmarked profits and losses. This shows that real world valuation makes perfect sense from the perspective of a financial institution with a large pool of sufficiently different contingent claims.
Valuation of Long Term Annuities {#sec:val8}
================================
This section illustrates the real world valuation methodology in the context of simple long term contracts, which we call here basic annuities. More complicated annuities, life insurance products, pensions and also equity linked long term contracts can be treated similarly. All show, in general, a similar effect where real world prices become significantly lower than prices formed under classical risk neutral valuation. The most important building blocks of annuities, and also many other contracts, are zero coupon bonds. This section will, therefore, study first in detail the valuation and hedging of zero coupon bonds. It will then apply these findings to some basic annuity and compare its real world price with its classical risk neutral price.
Savings Bond {#subsec:val51}
------------
To make the illustrations reasonably realistic, the following study considers the US equity market as investment universe. It uses the US one-year cash deposit rate as short rate when constructing the savings account. The S$\&$P500 total return index is chosen as proxy for the numéraire portfolio, the benchmark. Monthly S$\&$P500 total return data is sourced from Robert Shiller’s website (http://www.econ.yale.edu/~shiller/data.htm) for the period from January 1871 until March 2017. The savings account discounted S$\&$P500 total return index has already been displayed in Figure \[fig:val41\].
For simplicity, and to make the core effect very clear, assume that the short rate is deterministic. By making the short rate random one would complicate the exposition, and would obtain very similar and even slightly more pronounced differences between real world and risk neutral valuation, due to the effect of stochastic interest rates on bond prices as a consequence of Jensen’s inequality. A similar comment applies to the choice of the S$\&$P500 total return index as proxy for the numéraire portfolio or benchmark. Very likely there exist better proxies for the numéraire portfolio, see e.g. [@LePl06a] or [@PlRe17]. As will become clear, their larger long term growth rates would make the effects to be demonstrated even more pronounced.
The first aim of this section is to illustrate the fact that under the benchmark approach there may exist several self-financing portfolios that replicate the payoff of one dollar of a zero coupon bond. Let $$\label{subsec:val511}
D(t,T)=\frac{S^0_t}{S^0_{T}}$$ denote the price at time $t \in [0,T]$ of the so-called [*savings bond*]{} with maturity $T$, where $S^0_t$ denotes the value of the savings account at time $t$. Consequently, the savings bond price is the price of a zero coupon bond under risk neutral valuation and other classical valuation approaches. The upper graph in Figure \[fig:val51\] exhibits the logarithm of the savings bond price, with maturity in March 2017, valued at the time shown on the x-axis. The benchmarked value of this savings bond, which equals its value denominated in units of the S$\&$P500, is displayed as the upper graph in Figure \[fig:val52\].
![Logarithms of values of savings bond and fair zero coupon bond.[]{data-label="fig:val51"}](Figure4.pdf){width="\textwidth"}
![Values of benchmarked savings bond and benchmarked fair zero coupon bond.[]{data-label="fig:val52"}](Figure5.pdf){width="\textwidth"}
As pointed out previously, an equivalent risk neutral probability measure is unlikely to exist for any realistic complete market model. Financial planning recommends investing at a young age in the equity market and shifting wealth into fixed income securities closer to retirement. This type of strategy is widely acknowledged to be more efficient than investing all wealth in a savings bond, which represents the classical risk neutral strategy for obtaining bond payoffs at maturity. Based on the absence of an equivalent risk neutral probability measure, this paper provides a theoretical reasoning for such a long term investment strategy. Moreover, it will rigorously quantify such a strategy under the assumption of a stylized model for the benchmark dynamics.
Fair Zero Coupon Bond
---------------------
Under real world valuation, the time $t$ value of the [*fair zero coupon bond price*]{} with maturity date $T$ is denoted by $$\label{lm2.8}
P(t,T) = S^*_t\,E_t\left( \frac{1}{S^*_T}\right) ,$$ and results from the real world valuation formula (\[val3.3\]), see also (\[dt4'.63b\]). It provides the minimal possible price for a self-financing portfolio that replicates $\$1$ at maturity $T$. Note that under real world valuation the fair zero coupon bond becomes an index derivative, with the index as benchmark. The underlying assets involved are the benchmark (here the S$\&$P500 total return index) and the savings account of the domestic currency, here the US dollar. Both securities will appear in the corresponding hedge portfolio, which shall replicate at maturity the payoff of one dollar.
To calculate the price of a fair zero coupon bond, one has to compute the real world conditional expectation in (\[lm2.8\]). For this calculation one needs to employ a model for the real world distribution of the random variable $(S^*_T)^{-1}$. To be realistic and different from the formally obtained risk neutral valuation, such a model must reflect the fact that the benchmarked savings account should be a strict supermartingale. Any model that describes the benchmarked savings account as a strict supermartingale will value the above benchmarked fair zero coupon bond less expensively than the corresponding benchmarked savings bond. Figure \[fig:val52\] displays, in addition to the benchmarked savings bond, the value of a benchmarked fair zero coupon bond, which will be derived below, under a respective model. One notes the significantly lower initial value of the benchmarked fair zero coupon bond. Visually, its benchmarked value also appears to be the best forecast of its future benchmarked values. In Figure \[fig:val51\] the logarithm of the fair zero coupon bond is shown together with the logarithm of the savings bond value. One notes that the fair zero coupon bond appears to follow essentially the benchmark for many years. The strategy that delivers this hedging portfolio will be discussed in detail below.
We interpret the value of the savings bond as the one obtained by formal risk neutral valuation. As a consequence of the supermartingale property (\[bval2\]), this value is greater than or equal to that of the fair zero coupon bond. For readers who want to have some economic explanation for the observed value difference one could argue that the savings bond gives the holder the right to liquidate the contract at any time without costs. On the other hand, a fair zero coupon bond is akin to a term deposit without the right to access the assets before maturity. One could say that the savings bond carries a “liquidity premium" on top of the value for the fair zero coupon bond. Under the classical no-arbitrage paradigm, with its Law of One Price (see e.g. [@Tayl02]), there is only one and the same price process possible for both instruments, which is that of the savings bond. The benchmark approach, with its real world valuation concept, opens the possibility of modeling costs for early liquidation of financial instruments. The fair zero coupon bond is the least liquid instrument that delivers the bond payoff and, therefore, the least expensive zero coupon bond. The savings bond is more liquid and, therefore, more expensive.
Fair Zero Coupon Bond for the Minimal Market Model
--------------------------------------------------
The benchmarked fair zero coupon value at time $t\in [0,T]$ is the best forecast of its benchmarked payoff ${\hat{S}}^0_T = (S^*_T)^{-1}$. It provides the minimal self-financing portfolio value process that hedges this benchmarked contingent claim. To facilitate a tractable evaluation of a fair zero coupon bond, one has to employ a continuous time model for the benchmarked savings account that represents a strict supermartingale. The inverse of the benchmarked savings account is the discounted numéraire portfolio ${\bar{S}}^*_t=\frac{S^*_t}{S^0_t} = ({\hat{S}}^0_t)^{-1}$. In the illustrative example we present, it is the discounted S$\&$P500, which, as discounted numéraire portfolio, satisfies in a continuous market model the stochastic differential equation (SDE) $$\label{lm2.8'}
d\, {\bar{S}}^*_t =\alpha_t\,dt + \sqrt{{\bar{S}}^*_t\,\alpha_t}
\,dW_t,$$ for $t\ge 0$ with ${\bar{S}}^*_0 >0$, see [@PlHe2010] Formula (13.1.6). Here $W=\{W_t, t\ge 0\}$ is a Wiener process, and $\alpha=\{\alpha_t,
t\ge 0\}$ is a strictly positive process, which models the trend $\alpha_t={\bar{S}}^*_t \theta^2_t$ of ${\bar{S}}^*_t$, with $\theta_t$ denoting the market price of risk. For constant market price of risk one would obtain the Black-Scholes model, which has been the standard market model. In the long term it yields benchmarked savings accounts that are martingales and is, therefore, not suitable for our study. Since $\alpha_t$ in (\[lm2.8'\]) can be a rather general stochastic process, the parametrization of the SDE (\[lm2.8'\]) does not yet constitute a model.
The trend or drift in the SDE (\[lm2.8'\]) can be interpreted economically as a measure of the discounted average value of wealth generated per unit of time by the underlying economy. To construct, in a first approximation, a respective model, we now assume that the drift of the discounted S$\&$P500 total return index grows exponentially with a net growth rate $\eta > 0$. At time $t$ the drift of the discounted S$\&$P500 total return index ${\bar{S}}^*_t$ is then modeled by the exponential function $$\alpha_t= {\alpha}\,\exp\{\eta\,t\}.$$ This yields the stylized version of the [*minimal market model*]{} (MMM), see [@Platen01a; @Plat2002b], which emerges from (\[lm2.8'\]) and is the ‘workhorse’ of the benchmark approach. We know explicitly the transition density of the resulting time-transformed squared Bessel process of dimension four, ${\bar{S}}^*$, see [@RevuzYo99].
The transition density function of the discounted numéraire portfolio ${\bar{S}}^*$ equals $$\label{Eqn:transitiondensityMMMdiscountedGOP}
p_{{\bar{S}}^*}(t, x_t , {\bar{T}} , x_{\bar{T}} ) = \frac{1}{2(\varphi_{\bar{T}} - \varphi_t )} \sqrt{\frac{x_{\bar{T}}}{x_t}} \exp \left( -\frac{x_t + x_{\bar{T}}}{2(\varphi_{\bar{T}} - \varphi_t )} \right) I_1 \left( \frac{\sqrt{x_t x_{\bar{T}}}}{\varphi_{\bar{T}} - \varphi_t} \right) ,$$ where $\varphi_t = \frac{1}{4\eta }\alpha (\exp(\eta t)-1)$ is also the quadratic variation of $ \sqrt{{\bar{S}}^*} $. The corresponding distribution is that of $\varphi_{\bar{T}} - \varphi_t$ times a non-central chi-squared random variable with four degrees of freedom and non-centrality parameter $x_t/(\varphi_{\bar{T}} - \varphi_t)$. Using the above transition density function we apply standard maximum likelihood estimation to monthly data for the discounted S$\&$P500 total return index over the period from January 1871 to January 1932, giving the following estimates of the parameters $\alpha$ and $\eta$, $$\begin{aligned}
\label{Eqn:ParaEstMMM}
\alpha &= \MMMalphabar\, (\MMMalphabarSE ), \\ \notag
\eta &= \MMMeta \, (\MMMetaSE) ,\end{aligned}$$ where the standard errors are shown in brackets. In Appendix \[App:A\] we explain the estimation used in this paper.
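To indicate how the transition density (\[Eqn:transitiondensityMMMdiscountedGOP\]) can be turned into a likelihood for monthly observations, the following sketch evaluates the log-density with an exponentially scaled Bessel function for numerical stability and maximizes it numerically. The data file name, the starting values and the use of a simplex optimizer are our own assumptions; the estimation actually used in this paper is the one described in Appendix \[App:A\].

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import ive            # exponentially scaled modified Bessel function

def varphi(t, alpha, eta):
    """Time change varphi_t = alpha (exp(eta t) - 1) / (4 eta)."""
    return alpha * (np.exp(eta * t) - 1.0) / (4.0 * eta)

def log_transition_density(x_t, x_T, t, T, alpha, eta):
    """Log of the squared Bessel(4) transition density of the discounted numeraire portfolio."""
    dphi = varphi(T, alpha, eta) - varphi(t, alpha, eta)
    z = np.sqrt(x_t * x_T) / dphi
    log_I1 = np.log(ive(1, z)) + z       # log I_1(z), computed without overflow
    return (-np.log(2.0 * dphi) + 0.5 * (np.log(x_T) - np.log(x_t))
            - (x_t + x_T) / (2.0 * dphi) + log_I1)

def neg_log_likelihood(params, s_bar, times):
    alpha, eta = np.exp(params)          # log-parametrization keeps both parameters positive
    ll = log_transition_density(s_bar[:-1], s_bar[1:], times[:-1], times[1:], alpha, eta)
    return -np.sum(ll)

# s_bar: monthly discounted S&P500 total return index values (hypothetical input file),
# times: observation times in years since the start of the sample
s_bar = np.loadtxt("discounted_sp500.csv")
times = np.arange(len(s_bar)) / 12.0

res = minimize(neg_log_likelihood, x0=np.log([0.05, 0.05]),
               args=(s_bar, times), method="Nelder-Mead")
alpha_hat, eta_hat = np.exp(res.x)
print(f"alpha = {alpha_hat:.5f}, eta = {eta_hat:.5f}")
```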
These estimates for the net growth rate $\eta$ are consistent with estimates from various other sources in the literature, where the net growth rate of the US equity market during the last century has been estimated at about $5\%$, see for instance [@DimsonMaSt02].
Under the stylized MMM the explicitly known transition density of the discounted numéraire portfolio ${\bar{S}}^*_t$ yields for the fair zero coupon bond price (\[lm2.8\]) the explicit formula $$\label{dt5'.4}
P(t,T) =
D(t,T)\left( 1-\exp\left\{-\frac{2\,\eta\,{\bar{S}}^*_t}
{\alpha\,(\exp\{\eta\,T\} - \exp\{\eta\,t\})} \right\}\right)$$ for $t \in [0,T)$, which was first pointed out in Section 13.3 of [@PlHe2010]. Figure \[fig:val52\] displays with its lower graph the trajectory of the benchmarked fair zero coupon bond price with maturity $T$ in March 2017. By (\[dt5'.4\]) the price of the benchmarked fair zero coupon bond always remains below that of the benchmarked savings bond, where we interpret the latter as the formally obtained risk neutral zero coupon bond price. The fair zero coupon bond value provides the minimal portfolio process for hedging the given payoff under the assumed MMM. Other benchmarked replicating portfolios need to form strict supermartingales and, therefore, yield higher price processes. One such example is given by the benchmarked savings bond, which pays one dollar at maturity. Recall that the benchmarked fair zero coupon bond is a martingale. It is minimal among the supermartingales that represent benchmarked replicating self-financing portfolios and pay one dollar at maturity.
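Formula (\[dt5'.4\]) is straightforward to implement. The following sketch computes the fair zero coupon bond price together with the corresponding savings bond price for comparison; the parameter values and the constant short rate are illustrative placeholders of ours rather than the estimates in (\[Eqn:ParaEstMMM\]).

```python
import numpy as np

def fair_zcb(t, T, s_bar_star_t, D_tT, alpha, eta):
    """Fair zero coupon bond price P(t,T) under the stylized MMM, formula (dt5'.4).

    s_bar_star_t : discounted numeraire portfolio value at time t
    D_tT         : savings bond price S^0_t / S^0_T
    """
    expo = -2.0 * eta * s_bar_star_t / (alpha * (np.exp(eta * T) - np.exp(eta * t)))
    return D_tT * (1.0 - np.exp(expo))

# Illustrative (assumed) inputs
alpha, eta = 0.05, 0.05
t, T = 0.0, 25.0                  # valuation time and maturity in years
r = 0.04                          # assumed constant short rate
s_bar_star_t = 1.0                # discounted index level at time t
D_tT = np.exp(-r * (T - t))       # savings bond price

P_tT = fair_zcb(t, T, s_bar_star_t, D_tT, alpha, eta)
print(f"savings bond D(t,T) = {D_tT:.4f}, fair zero coupon bond P(t,T) = {P_tT:.4f}")
```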
![Values of savings bond and fair zero coupon bond.[]{data-label="fig:val53"}](Figure6.pdf){width="\textwidth"}
Figure \[fig:val53\] exhibits with its upper graph the trajectory of the savings bond and with its lower graph that of the fair zero coupon bond in US dollar denomination. Closer to maturity the fair zero coupon bond price merges with the savings bond price. Both self-financing portfolios replicate the payoff at maturity. Most important is the observation that they start with significantly different initial prices. The fair zero coupon bond exploits the presence of the strict supermartingale property of the benchmarked savings account, whereas the savings bond ignores it totally. Two self-financing replicating portfolios are displayed in Figure \[fig:val53\]. Such a situation, where two self-financing portfolios replicate the same contingent claim, is impossible to model under the classical no-arbitrage paradigm. However, under the benchmark approach this is a natural situation.
In the above example for the parameters estimated during the period from January 1871 to January 1932, the savings bond with maturity in March 2017 has in January 1932 a price of $D(0,T) \approx \$0.026596$. The fair zero coupon bond is far less expensive and priced at only $P(0,T)
\approx \$0.000709$. The fair zero coupon bond with term to maturity of more than 80 years costs here less than $3\%$ of the savings bond. This reveals a substantial premium in the value of the savings bond.
We repeated, with the estimated parameters, the study for all possible zero coupon bonds that cover a period of 10, 15, 20 and 25 years from initiation until maturity and fall into the period starting in January 1932 and ending in March 2017. Table \[Tab:1\] displays the average difference in US dollars between the risk neutral and the fair bond. One clearly notes that for a 25-year bond, a typical time to maturity for many pension products, one saves about 27% of the risk neutral price.
Years to Maturity Mean of D(t,T) Mean of P(t,T) Mean of {D(t,T)-P(t,T)}
------------------- ---------------- ---------------- -------------------------
10 0.6534 0.6468 0.0066
15 0.5219 0.4931 0.0287
20 0.4084 0.3479 0.0604
25 0.3130 0.2294 0.0836
: Mean values of zero coupon bonds of prescribed terms to maturity whose start and end dates lie between January 1932 and March 2017.[]{data-label="Tab:1"}
By demonstrating explicitly the valuation methodology we suggest a realistic way of changing from the classical risk neutral production strategy to the less expensive benchmark production strategy.
Hedging of a Fair Zero Coupon Bond
----------------------------------
The benefits of the proposed benchmark production methodology can only be harvested if the respective hedging strategy allows one to replicate the hedge portfolio values as theoretically predicted.
The hedging strategy by which this is theoretically achieved follows under the MMM from the explicit fair zero coupon bond pricing formula (\[dt5'.4\]). At the time $t \in [0,T)$ the corresponding theoretical number of units of the S$\&$P500 to be held in the hedge portfolio follows, similarly to the well-known Black-Scholes delta hedge ratio, as a partial derivative with respect to the underlying, and is given by the formula $$\begin{aligned}
\label{dt5'.5}
\delta^*_t &=& \frac{\partial {\bar{P}}(t,T)}{\partial
{\bar{S}}^*_t} \nonumber \\
&=& D(0,T)\, \exp\left\{\frac{- 2\,\eta\,{\bar{S}}^*_t}
{\alpha\,(\exp\{\eta\,T\} - \exp\{\eta\,t\})} \right\} \,
\frac{2\,\eta}
{\alpha\,(\exp\{\eta\,T\} - \exp\{\eta\,t\})}.
\end{aligned}$$ Here $D(0,T)$ is the respective value of the formally obtained risk neutral bond, the savings bond.
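The hedge ratio (\[dt5'.5\]) and the resulting fraction of the hedge portfolio invested in the index can be computed as in the following sketch. The inputs are illustrative assumptions of ours; the savings bond price at initiation is merely set close to the value quoted in the example above.

```python
import numpy as np

def mmm_exponent(t, T, s_bar, alpha, eta):
    """Exponent appearing in the fair bond formula (dt5'.4) and the delta (dt5'.5)."""
    return -2.0 * eta * s_bar / (alpha * (np.exp(eta * T) - np.exp(eta * t)))

def delta_units(t, T, s_bar, D_0T, alpha, eta):
    """Hedge ratio (dt5'.5): units of the index to be held in the fair bond hedge."""
    denom = alpha * (np.exp(eta * T) - np.exp(eta * t))
    return D_0T * np.exp(mmm_exponent(t, T, s_bar, alpha, eta)) * 2.0 * eta / denom

def fraction_in_index(t, T, s_bar, D_0T, alpha, eta):
    """Fraction of the fair zero coupon bond hedge portfolio invested in the index."""
    P_bar = D_0T * (1.0 - np.exp(mmm_exponent(t, T, s_bar, alpha, eta)))  # discounted fair bond
    return delta_units(t, T, s_bar, D_0T, alpha, eta) * s_bar / P_bar

# Illustrative (assumed) inputs
alpha, eta, T = 0.05, 0.05, 85.0
D_0T = 0.0266                      # savings bond price at initiation, an assumed input
for t in (0.0, 40.0, 80.0):
    s_bar = np.exp(0.05 * t)       # a stylized path of the discounted index
    print(t, round(fraction_in_index(t, T, s_bar, D_0T, alpha, eta), 4))
```

For inputs of this kind the computed fraction always lies strictly between zero and one, which is consistent with the long only hedge in the index and the savings account discussed below.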
The resulting fraction of wealth to be held at time $t$ in the S$\&$P500, as it evolves for the given example of zero coupon bond valuation from January 1932 to maturity, is shown in Figure \[fig:val54\]. The remaining wealth is always invested in the savings account.
To demonstrate how realistic the hedge of the fair zero coupon bond payoff is for the given delta under the stylized MMM with monthly reallocation, a self-financing hedge portfolio is formed. The delta hedge is performed similarly to the well-known one for options under the Black-Scholes model. The self-financing hedge portfolio starts in January 1932, which ensures that the hedge simulations employing the fitted parameters in (\[Eqn:ParaEstMMM\]) are out-of-sample. Each month the fraction invested in the S$\&$P500 is adjusted in a self-financing manner according to the above prescription. The resulting benchmarked profit and loss for this delta hedge turns out to be very small and is visualized in Figure \[fig:val55\]. The maximum absolute benchmarked profit and loss amounts only to about 0.00000061. This benchmarked profit and loss is so small that the resulting hedge portfolio, when plotted additionally in Figure \[fig:val53\], would be visually indistinguishable from the path of the already displayed fair zero coupon bond price process. Dollar values of the self-financing hedge portfolio, the fair zero coupon bond and the savings bond are shown in Figure \[fig:val55a\], where it is evident that the self-financing portfolio replicates 95% of the face value of the bond but employs less than 3% of the initial capital. One may argue that this represents only one hedge simulation. Therefore, we employed the same 10, 15, 20 and 25 year fair bonds that fit from initiation until maturity into the period from January 1932 until March 2017 and performed analogous hedge simulations. In Table \[Tab:2\] we report in US dollars the average profits and losses with respective standard deviations, where for a 25-year zero coupon bond the hedge losses at maturity average 2.26% of the face value while the initial hedge portfolio is 27% less expensive than the savings bond.
Years to Maturity Mean P&L Std. Dev. P&L
------------------- ---------- ---------------
10 -0.0004 0.0078
15 -0.0057 0.0108
20 -0.0144 0.0138
25 -0.0226 0.0168
: Means and standard deviations of profits and losses on hedge at maturities of zero coupon bonds having various terms to maturity. (Negative P&Ls indicate losses).[]{data-label="Tab:2"}
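The monthly self-financing delta hedge described above can be sketched, in discounted terms, as the following loop. The helper computations repeat the formulas (\[dt5'.4\]) and (\[dt5'.5\]); the simulated index path, its Euler discretization of the SDE (\[lm2.8'\]) and all parameter values are illustrative assumptions and not a reproduction of the reported results.

```python
import numpy as np

def expo(t, T, s_bar, alpha, eta):
    return -2.0 * eta * s_bar / (alpha * (np.exp(eta * T) - np.exp(eta * t)))

def hedge_pnl(s_bar_path, D_0T, alpha, eta):
    """Benchmarked profit and loss of a monthly delta hedge of a fair zero coupon bond.

    s_bar_path : monthly discounted index values from initiation to maturity (assumed input)
    D_0T       : savings bond price at initiation (savings account starting at one)
    """
    n = len(s_bar_path) - 1
    dt = 1.0 / 12.0
    T, t = n * dt, 0.0
    # initial capital equals the discounted fair zero coupon bond value, formula (dt5'.4)
    V_bar = D_0T * (1.0 - np.exp(expo(t, T, s_bar_path[0], alpha, eta)))
    for k in range(n):
        denom = alpha * (np.exp(eta * T) - np.exp(eta * t))
        delta = D_0T * np.exp(expo(t, T, s_bar_path[k], alpha, eta)) * 2.0 * eta / denom
        cash = V_bar - delta * s_bar_path[k]        # discounted savings account holding
        t += dt
        V_bar = delta * s_bar_path[k + 1] + cash    # self-financing revaluation
    # at maturity the bond pays one dollar, i.e. D_0T in discounted terms
    return (V_bar - D_0T) / s_bar_path[-1]          # benchmarked profit and loss

# Illustrative run on a discounted index path simulated by an Euler scheme of SDE (lm2.8')
rng = np.random.default_rng(3)
alpha, eta, months = 0.05, 0.05, 25 * 12
dt = 1.0 / 12.0
s = np.empty(months + 1)
s[0] = 1.0
for k in range(months):
    a = alpha * np.exp(eta * k * dt)
    s[k + 1] = max(s[k] + a * dt + np.sqrt(s[k] * a * dt) * rng.standard_normal(), 1e-8)
print(hedge_pnl(s, D_0T=0.35, alpha=alpha, eta=eta))
```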
The above example and hedge simulations demonstrate the principle that a hedge portfolio can generate a fair zero coupon bond value process by dynamically investing, long only, in the S$\&$P500 and the savings account. The resulting hedge portfolio is, for long term maturities, significantly less expensive than the corresponding savings bond. Moreover, as can be seen in Figure \[fig:val51\], the hedge portfolio can initially fluctuate significantly, as it did around 1930 during the Great Depression. However, close to maturity, the hedge portfolio cannot be significantly affected by any equity market meltdown, as was the case in our illustrative example during the 2007-2008 financial crisis.
![Fraction of wealth invested in the S$\&$P500.[]{data-label="fig:val54"}](Figure7.pdf){width="\textwidth"}
![Benchmarked profit and loss.[]{data-label="fig:val55"}](Figure8.pdf){width="\textwidth"}
![Comparison of values of zero coupon bonds with hedge portfolio, also showing the profit and loss.[]{data-label="fig:val55a"}](Figure8a.pdf){width="\textwidth"}
As we will describe later on, one can, in a similar manner as described above, value and produce less expensively other long term contingent claims, including equity-linked payoffs that are typical for variable annuities or life insurance and pension payoffs.
In summary, one can say that by shifting the valuation paradigm from classical risk neutral to real world valuation, we suggest replicating long term payoffs more cost efficiently. In particular, it follows from the above analysis that one can expect significant savings to arise for payoffs that do not vanish when the benchmark approaches zero. This production methodology applies to a range of payoffs embedded in various insurance and pension contracts, for which we aim to provide some indications below.
Long Term Mortality and Cash-Linked Annuities
---------------------------------------------
Consider now a stylized example that aims to illustrate valuation and hedging under the benchmark approach in the context of basic annuities. Consider annuities sold to $K$ policy holders, each of which pays an indexed number of units of the savings account at the beginning of each year on which a payoff is due. Here we assume that the savings account and total return index account commence with \$1 at the date $t_0$ of purchasing the annuity and the indexed number of units at time $T$ is prescribed as $$MI_T = \frac{1}{1+\sum_{k=1}^K 1_{\tau^{(k)}>T }},$$ where $\tau^{(k)}$ is a random variable equal to the time at which the $k$-th policy holder dies, for $k=1,2,\ldots , K$. Therefore, the payoff at time $T$ in respect of the $k$-th policy holder is $$MI_T\times \frac{S^0_T}{S^0_{t_0}} \times 1_{\tau^{(k)}>T }.$$ Additionally, we assume that there is an asset management fee payable as an annuity whose payoff at time $T$ equals $MI_T\times \frac{S^0_T}{S^0_{t_0}}$. This type of payoff is likely to account well in the long run for the effect of inflation because the average US interest rate during the last century was about 1% above the average US inflation rate, see [@DimsonMaSt02]. Since in our evaluation the interest rate will not play any role, we can now assume that the interest rate is stochastic. Also, since the portfolio of annuities has at time $T$ the aggregate payoff $$MI_T\times \frac{S^0_T}{S^0_{t_0}} + \sum_{k=1}^K \bigg\{ MI_T\times \frac{S^0_T}{S^0_{t_0}} \times 1_{\tau^{(k)}>T } \bigg\} = \frac{S^0_T}{S^0_{t_0}} ,$$ the mortality rate plays no role and we can assume that the mortality rate is stochastic.
To use the available historical data efficiently, let us place our discussion in the past and consider a person who reached the age of 25 in January 1932. The subset of the monthly S$\&$P500 time series used for fitting the parameters in (\[Eqn:ParaEstMMM\]) starts in January 1871 and ends at this date. The person may have considered purchasing some annuity which pays, from the age of 65 to the beginning of the age of 110, at the beginning of the year $T$ an indexed number of units $MI_T$ of the savings account. In the case that the person passes away before reaching the age of 110 in 2017, the payments that would otherwise have been made revert to the asset pool backing the annuity portfolio. Since in this setting there is no mortality risk or interest rate risk involved in the given payoff stream, classical risk neutral pricing would value this annuity portfolio as $$V^{\textrm{RN}}_t = \sum_{T\in G} S^0_t E_t \left( \frac{S^0_T/S^0_{t_0}}{S^0_T} \right) = 45\times \frac{S^0_t}{S^0_{t_0}}$$ for the set of payment dates $G = \{\textrm{Jan }1972, \textrm{Jan }1973,\dots, \textrm{Jan }2016\}$. Thus for any date $t$ during the period from January 1932 until December 1971 classical risk neutral pricing would always value this annuity as being equal to 45 units of the savings account. This is exactly the number of units of the savings account that have to be paid out by the annuity over the 45 years from 1972 until 2017. This means that the annuity has the discounted risk neutral value $$\label{vrn}
\bar{V}^{\textrm{RN}}_t=\frac{V^{\textrm{RN}}_t}{S^0_t}=45$$ for all $t \in \{\textrm{Jan }1932, \textrm{Feb }1932,\dots, \textrm{Dec }1971\}$.
As shown in the previous section, when valuing such an annuity under the benchmark approach it will be less expensive than suggested by the above classical risk neutral price. This example aims to illustrate that significant amounts can be saved. The real world valuation formula, given in (\[val3.3\]), captures at time $t$ the fair value of one unit of the savings account at time $T$, see also (\[lm2.8\]) and (\[dt5'.4\]), via the expression $$\label{val532}
S^*_t E_t\Big(\frac{S^0_T}{S^*_T}\Big)=
S^0_t E_t\Big(\frac{{\bar S}^*_t}{{\bar S}^*_T}\Big) = S^0_t\left( 1-\exp\left\{-\frac{2\,\eta\,{\bar{S}}^*_t}
{\alpha\,(\exp\{\eta\,T\} - \exp\{\eta\,t\})} \right\}\right) .$$ Consequently, the discounted real world value of the annuity at the time $t \in \{\textrm{Jan }1932, \textrm{Feb }$ $1932, \dots, \textrm{Dec }1971\}$ equals $$\label{val532a}
{\bar{V}}^{\textrm{RW}}_t=\sum_{T \in G}\frac{S^*_t}{S^0_t} E_t\Big(\frac{S^0_T}{S^*_T}\Big)=
\sum_{T\in G}\left( 1-\exp\left\{-\frac{2\,\eta\,{\bar{S}}^*_t}
{\alpha\,(\exp\{\eta\,T\} - \exp\{\eta\,t\})} \right\}\right)$$ for the set of payment dates $G=\{\textrm{Jan }1972,\textrm{Jan }1973,\dots,\textrm{Jan }2016\}$. For the previously fitted parameters of the MMM, Figure \[fig:val56\] shows the discounted price of the fair annuity (denominated in units of the savings account) according to formula (\[val532a\]), as a function of the purchasing time $t$. One notes that the time of purchase plays a significant role. Over the years the discounted fair annuity becomes more expensive. The value of the fair annuity always remains below the corresponding risk neutral value. It ranges from about 10% of the risk neutral value in January 1932 to about 89% of the risk neutral value in December 1971. This means that someone who starts at the beginning of her or his working life to prepare for retirement can enjoy benefits about eight times greater than those delivered from a later start close to retirement. Note that the typical compounding effect in a savings account does not matter in this example, because the value of the annuity and its payments are denominated in units of the savings account. It is the exploitation of the strict supermartingale property of the benchmarked savings account that creates the remarkable effect. The payoff stream of the fair annuity needs to be hedged, generating only a small benchmarked profit and loss or hedge error of similar size as demonstrated in the previous section, where we considered a single fair zero coupon bond. Now we have a portfolio of bonds that pays units of the savings account at their maturities. To demonstrate that the above effect holds independently of the period considered, we repeat, with the same MMM parameters and similar contracts, the calculations for all possible start dates within the period from January 1871 until January 1932. We show in Table \[Tab:3\] the mean percentage saving from using the proposed benchmark methodology and the standard deviation of this estimate.
                     Discounted Risk Neutral Value   Discounted Real World Value   Percentage Saving
-------------------- ------------------------------- ----------------------------- -------------------
Mean                 45                              13.672484                     69.616702
Standard Deviation   0                               5.540818                      12.312929
: Comparison of discounted values of deferred annuities paying one dollar per annum for forty five years, commencing in 40 years’ time, over all monthly start dates in the period January 1871 to January 1932.[]{data-label="Tab:3"}
![Savings account discounted price of the annuity during the period from January 1932 until December 1971 which provides annual payments from January 1972 until January 2016.[]{data-label="fig:val56"}](Figure9.pdf){width="\textwidth"}
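A sketch of the evaluation of the discounted fair annuity value (\[val532a\]) is given below; the parameter values and the payment schedule are illustrative assumptions of ours, not the fitted values used for Figure \[fig:val56\] and Table \[Tab:3\].

```python
import numpy as np

def discounted_fair_annuity(t, payment_times, s_bar_star_t, alpha, eta):
    """Discounted real world annuity value, formula (val532a): one unit of the
    savings account paid at each date in payment_times, valued at time t."""
    terms = 1.0 - np.exp(-2.0 * eta * s_bar_star_t
                         / (alpha * (np.exp(eta * payment_times) - np.exp(eta * t))))
    return float(np.sum(terms))

# Illustrative (assumed) inputs: valuation at t = 0, payments in years 40, 41, ..., 84
payment_times = np.arange(40.0, 85.0)
alpha, eta, s_bar_star_t = 0.05, 0.05, 1.0
print(discounted_fair_annuity(0.0, payment_times, s_bar_star_t, alpha, eta))
# for comparison, the discounted risk neutral value is simply len(payment_times) = 45
```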
Long Term Mortality and Equity-Linked Annuities with Mortality and Cash-Linked Guarantees
-----------------------------------------------------------------------------------------
Consider now a stylized example that aims to illustrate valuation and hedging under the benchmark approach in the context of annuities that offer optional guarantees. We may also assume here that interest rates are stochastic because this will not affect our valuation of the product. Consider an annuity that pays to a policy holder, at the beginning of each year $T\in G$ until maturity and while the policy holder is alive, the greater of $MI_T\times \exp(\eta(T-t_0))$ units of the savings account and $MI_T$ units of the S$\&$P500 total return index account. Here we assume that the potential savings account or total return index account payments commence with \$1 at the date $t_0$ of purchasing the annuity. As in the previous example, we assume that there is an asset management fee payable as an annuity whose payoff at time $T$ is the same as for a living policy holder. This type of payoff is likely to account well in the long run for the effect of inflation and equity returns.
The real world value at time $t$ of a portfolio of payments at the future time $T$ is given by $$V^{\textrm{RW}}_{t,T} = S_t^* E_t\left( \frac{1}{S_T^*}\max \Big( \frac{S^*_T}{S^*_{t_0}}, \frac{S^0_T\exp(\eta(T-t_0))}{S^0_{t_0}} \Big) \right)$$ which rearranges as $$\begin{aligned}
\label{Eqn:ValueEquityLinkedAnnuity}
V^{\textrm{RW}}_{t,T} &= \frac{S^0_t}{S^0_{t_0}} E_t\left(
\max \Big( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}} , \frac{{\bar{S}}^*_{t}\exp(\eta(T-t_0))}{{\bar{S}}^*_T} \Big)
\right) \\ \notag
&= \frac{{{S}}^*_{t}}{{{S}}^*_{t_0}}
+ \frac{S^0_t}{S^0_{t_0}} \exp(\eta(T-t_0)) E_t
\left(
\Big( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \Big)^{+}
\right) .\end{aligned}$$ This rearrangement shows that today’s value is the sum of an equity-linked component and an equity index put option component, where the discounted index value is non-central chi-squared distributed. As in the previous example, neither the interest rate nor the mortality rate plays a role in the valuation formula.
We use the following lemma to compute the price of a guarantee.
\[Lem:ncx2\] Let $U$ be a non-central chi-squared random variable with four degrees of freedom and non-centrality parameter $\lambda>0$. Then the following expectations hold: $$\begin{aligned}
E\left( \frac{\lambda}{U} \right) &= 1-\exp (-\lambda /2)\\
E\left( \frac{\lambda}{U} 1_{U< x}\right) &= \chi^2_{0,\lambda }(x) - \exp (-\lambda /2)\\
E\left( \frac{\lambda}{U} 1_{U\ge x}\right) &= 1-\chi^2_{0,\lambda }(x) ,\end{aligned}$$ where $\chi^2_{\nu,\lambda}(x)$ denotes the cumulative distribution function of a non-central chi-squared random variable having $\nu$ degrees of freedom and non-centrality parameter $\lambda$.
The proof of this result is given in Appendix \[App:B\].
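The identities in Lemma \[Lem:ncx2\] can also be checked numerically. The following sketch compares Monte Carlo estimates with the closed-form right-hand sides, evaluating the 0-degrees-of-freedom non-central chi-squared distribution function through its Poisson mixture representation; the parameter values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import chi2, ncx2, poisson

def ncx2_cdf_df0(x, lam, jmax=200):
    """CDF of a non-central chi-squared variable with 0 degrees of freedom and
    non-centrality lam, via its Poisson mixture (point mass exp(-lam/2) at zero)."""
    j = np.arange(1, jmax + 1)
    return float(np.exp(-lam / 2.0)
                 + np.sum(poisson.pmf(j, lam / 2.0) * chi2.cdf(x, 2 * j)))

rng = np.random.default_rng(1)
lam, x, n = 3.0, 2.5, 1_000_000                      # illustrative parameter choices
U = ncx2.rvs(df=4, nc=lam, size=n, random_state=rng)

print(np.mean(lam / U),            1.0 - np.exp(-lam / 2.0))
print(np.mean(lam / U * (U < x)),  ncx2_cdf_df0(x, lam) - np.exp(-lam / 2.0))
print(np.mean(lam / U * (U >= x)), 1.0 - ncx2_cdf_df0(x, lam))
```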
Furthermore, we mention the following result which we prove in Appendix \[App:C\].
\[Cor:expn\] For a discounted numéraire portfolio ${\bar{S}}^*_{t}$ obeying the SDE (\[lm2.8'\]) we have the expectations $$\begin{aligned}
E_t\left( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} \right) &= 1-\exp (-\lambda /2)\\
E_t\left( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} 1_{{\bar{S}}^*_T< x}\right) &= \chi^2_{0,\lambda }\left(\frac{x}{\varphi_T - \varphi_t}\right) - \exp (-\lambda /2)\\
E_t\left( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} 1_{{\bar{S}}^*_T\ge x}\right) &= 1-\chi^2_{0,\lambda }\left(\frac{x}{\varphi_T - \varphi_t}\right) ,\end{aligned}$$ where $\chi^2_{\nu,\lambda}(x)$ denotes the cumulative distribution function of a non-central chi-squared random variable having $\nu$ degrees of freedom and non-centrality parameter $\lambda$ given by $$\lambda = \frac{{\bar{S}}^*_{t}}{\varphi_T - \varphi_t}$$ and $\varphi_t = \frac{1}{4\eta}\alpha(\exp(\eta t)-1)$.
This leads us to the following result, which we prove in Appendix \[App:D\].
\[Thm:D\] For a discounted numéraire portfolio ${\bar{S}}^*_{t}$, obeying the SDE (\[lm2.8'\]), we have $$\begin{aligned}
&E_t \left(
\Big( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \Big)^{+}
\right) \\ \notag
&= \chi^2_{0,\lambda }\left(\kappa \right) - \exp (-\lambda /2)
- \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \chi^2_{4,\lambda }\left(\kappa \right) ,\end{aligned}$$ where $\chi^2_{\nu,\lambda}(x)$ denotes the cumulative distribution function of a non-central chi-squared random variable having $\nu$ degrees of freedom and non-centrality parameter $\lambda$ given by $$\lambda = \frac{{\bar{S}}^*_{t}}{\varphi_T - \varphi_t}$$ and $$\begin{aligned}
\kappa &= \frac{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))}{\varphi_T - \varphi_t} \\
\varphi_t &= \frac{1}{4\eta}\alpha(\exp(\eta t)-1).\end{aligned}$$
Therefore, we can calculate $V^{\textrm{RW}}_{t,T}$ in (\[Eqn:ValueEquityLinkedAnnuity\]) as $$\begin{aligned}
V^{\textrm{RW}}_{t,T} =& \frac{{{S}}^*_{t}}{{{S}}^*_{t_0}} \left( 1 - \chi^2_{4,\lambda(t,T)} (\kappa(t,T)) \right) \\ \notag
&+ \frac{S^0_t\exp (\eta (T-t_0))}{S^0_{t_0}} \left( \chi^2_{0,\lambda(t,T)} (\kappa(t,T)) -\exp(-\lambda(t,T) /2) \right) ,\end{aligned}$$ where $$\begin{aligned}
\lambda (t,T) &= \frac{{\bar{S}}^*_{t} }{\varphi_T - \varphi_t} \\ \notag
\kappa(t,T) &= \frac{{\bar{S}}^*_{t_0}\exp(\eta (T-t_0)) }{ \varphi_T - \varphi_t} \\ \notag
\varphi_t &= \frac{1}{4\eta}\alpha(\exp(\eta t)-1) .\end{aligned}$$ Summing over all payment dates $T\in G$ gives the value of the annuity $$\begin{aligned}
V^{\textrm{RW}}_{t} =& \frac{{{S}}^*_{t}}{{{S}}^*_{t_0}} \sum_{T\in G}\left( 1 - \chi^2_{4,\lambda(t,T)} (\kappa(t,T)) \right) \\ \notag
&+ \frac{S^0_t}{S^0_{t_0}} \sum_{T\in G}\exp (\eta (T-t_0))\left( \chi^2_{0,\lambda(t,T)} (\kappa(t,T)) -\exp(-\lambda(t,T) /2) \right).\end{aligned}$$
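The resulting closed-form value of the annuity can be implemented directly. The sketch below evaluates it under the MMM for illustrative inputs, with the 0-degrees-of-freedom non-central chi-squared distribution function computed from its Poisson mixture representation; the parameter values are placeholders rather than the fitted estimates.

```python
import numpy as np
from scipy.stats import chi2, ncx2, poisson

def ncx2_cdf_df0(x, lam, jmax=200):
    """Non-central chi-squared CDF with 0 degrees of freedom (Poisson mixture)."""
    j = np.arange(1, jmax + 1)
    return np.exp(-lam / 2.0) + np.sum(poisson.pmf(j, lam / 2.0) * chi2.cdf(x, 2 * j))

def varphi(t, alpha, eta):
    return alpha * (np.exp(eta * t) - 1.0) / (4.0 * eta)

def annuity_value_mmm(t, t0, payment_times, s_bar_t, s_bar_t0, S0_t, S0_t0, alpha, eta):
    """Real world value of the equity-linked annuity with cash-linked guarantee."""
    value = 0.0
    for T in payment_times:
        dphi = varphi(T, alpha, eta) - varphi(t, alpha, eta)
        lam = s_bar_t / dphi
        kappa = s_bar_t0 * np.exp(eta * (T - t0)) / dphi
        equity_leg = (s_bar_t * S0_t) / (s_bar_t0 * S0_t0) \
            * (1.0 - ncx2.cdf(kappa, df=4, nc=lam))
        guarantee_leg = (S0_t / S0_t0) * np.exp(eta * (T - t0)) \
            * (ncx2_cdf_df0(kappa, lam) - np.exp(-lam / 2.0))
        value += equity_leg + guarantee_leg
    return value

# Illustrative (assumed) inputs: purchase at t0 = 0, valuation at t = 0,
# annual payments in years 40, 41, ..., 84, all accounts normalized to one at t0
alpha, eta = 0.05, 0.05
payment_times = np.arange(40.0, 85.0)
print(annuity_value_mmm(0.0, 0.0, payment_times, s_bar_t=1.0, s_bar_t0=1.0,
                        S0_t=1.0, S0_t0=1.0, alpha=alpha, eta=eta))
```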
For the previously fitted parameters of the MMM, Figure \[fig:val57\] shows the discounted price of the fair annuity (denominated in units of the savings account) according to the formula $$\label{val535b}
\bar{V}^{\textrm{RW}}_t=\frac{V^{\textrm{RW}}_t}{S^0_t}$$ as a function of the purchasing time $t$.
Also, for the sake of comparison, the discounted price of the annuity is shown in Figure \[fig:val57\] under the assumption of geometric Brownian motion of the discounted numéraire portfolio $\bar{S}^*$, that is, a Black-Scholes dynamics with SDE $$\label{ass:GBM}
d\bar{S}^*_t = \theta^2 \bar{S}^*_t dt + \theta \bar{S}^*_t dW_t,$$ where $\theta $ is estimated using maximum likelihood estimation (MLE) as $\BStheta$ with standard error $\BSthetaSE$ and log-likelihood $\BSloglkhd$ (see, for example, [@Fergusson17b]). We can apply risk neutral pricing to the annuity in this case because, under these dynamics, the Radon-Nikodym derivative $\Lambda_t = \frac{S^0_t\,S^*_{t_0}}{S^*_t\,S^0_{t_0}}$ is a martingale. We have the following theorem.
\[Thm:E\] For a discounted numéraire portfolio ${\bar{S}}^*_{t}$ obeying the SDE we have $$\begin{aligned}
&E_t \left(
\Big( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \Big)^{+}
\right) \\ \notag
&= N(d_1) - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta (T-t_0))} N(d_2) ,\end{aligned}$$ where $N(x)$ denotes the cumulative distribution function of a standard normal random variable and $d_1$ and $d_2$ are given by $$\begin{aligned}
d_1 &= \frac{\log\left( \frac{{\bar{S}}^*_{t_0}\exp(\eta (T-t_0))}{{\bar{S}}^*_{t}}\right) + \frac{1}{2}\theta^2(T-t)}{\theta\sqrt{T-t}} \\
d_2 &= d_1 -\theta\sqrt{T-t}.\end{aligned}$$
See Appendix \[App:E\] for a proof of this theorem.
Thus the value of the annuity under Black-Scholes dynamics has the formula $${V}^{\textrm{RN}}_t = \sum_{T\in G} \left( \frac{{{S}}^*_{t}}{{{S}}^*_{t_0}} (1-N(d_2(t,T)))
+ \frac{S^0_t}{S^0_{t_0}} \exp(\eta(T-t_0)) N(d_1(t,T))
\right)$$ and the discounted value of the annuity has the formula $$\bar{V}^{\textrm{RN}}_t = \frac{1}{S^0_{t_0}} \sum_{T\in G} \left( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}} (1-N(d_2(t,T)))
+ \exp(\eta(T-t_0)) N(d_1(t,T))
\right)$$ where $$\begin{aligned}
d_1(t,T) &= (\log (\bar{S}^*_{t_0}\exp(\eta(T-t_0))/\bar{S}^*_{t}) + \frac{1}{2}\theta^2 (T-t))/\sqrt{\theta^2 (T-t)}\\ \notag
d_2(t,T) &= d_1(t,T) - \sqrt{\theta^2 (T-t)} .\end{aligned}$$ Here $N(x)$ is the cumulative distribution function of the standard normal distribution. The discounted real world price of the annuity under the MMM has the initial value 88.853 which makes it significantly less expensive than the initial value 1181.076 for the discounted price of the annuity when assuming the Black-Scholes model for the index dynamics.
![Savings account discounted values under the MMM and the Black-Scholes model of the mortality and equity-linked annuity with mortality and cash-linked guarantee during the period from 1932 until 1971, which provides annual payments from 1972 until 2016.[]{data-label="fig:val57"}](Figure10.pdf){width="\textwidth"}
The above examples are deliberately designed to illustrate the cost effectiveness of real world valuation of long term contracts under the MMM compared to valuation under the classical market model and paradigm. It is obvious that the introduction of mortality risk and more refined models for the index dynamics would not materially change the principal message provided by these preceding examples: There are less expensive ways to transfer wealth into payoff streams than classical modeling and valuation approaches can provide.
Conclusion {#sec:val10}
==========
The paper proposes to move away from classical risk neutral valuation towards a more general real world valuation methodology under the benchmark approach. The resulting production methodology does not assume the existence of an equivalent risk neutral probability measure and offers, therefore, a much wider modeling world. As a consequence, the better long term performance of the equity market compared to that of the fixed income market can be systematically exploited to produce pension and insurance products less expensively. Real world valuation allows one to generate hedge portfolios with prices for long term contracts that can be significantly lower than those obtained under classical pricing paradigms. The proposed real world valuation methodology uses the best performing portfolio, the numéraire portfolio, as numéraire and the real world probability measure as pricing measure when taking expectations. Real world valuation identifies the minimal possible value for a replicable contingent claim. Real world pricing generalizes classical risk neutral pricing and also actuarial pricing.
Maximum Likelihood Estimation of MMM Parameters {#App:A}
===============================================
Given a series of observations of the discounted index $${\bar{S}}^*_{t_0} , {\bar{S}}^*_{t_1} , \ldots , {\bar{S}}^*_{t_n}$$ at the times $t_0<t_1<\ldots <t_n$ we seek the values of the parameters $\alpha$ and $\eta$ of the SDE $$ d\, {\bar{S}}^*_t =\alpha_t\,dt + \sqrt{{\bar{S}}^*_t\,\alpha_t}
\,dW_t,$$ which maximize the likelihood of the occurrence of the observations under the hypothesis that the stylized version of the MMM holds. Here we have $t\ge 0$, ${\bar{S}}^*_0 >0$, $W=\{W_t, t\ge 0\}$ being a Wiener process and $\alpha_t$ modeled by the exponential function $$\alpha_t= {\alpha}\,\exp\{\eta\,t\}.$$ The transition density of the discounted numéraire portfolio ${\bar{S}}^*$ is given in . Using this transition density function the logarithm of the likelihood function, which we seek to maximize, is found to be $$\begin{aligned}
&\ell (\alpha, \eta) \\ \notag
&= \sum_{i=1}^n \log \bigg( \frac{1}{2(\varphi_{t_i} - \varphi_{t_{i-1}} )} \sqrt{\frac{{\bar{S}}^*_{t_i}}{{\bar{S}}^*_{t_{i-1}}}} \exp \left( -\frac{{\bar{S}}^*_{t_{i-1}} + {\bar{S}}^*_{t_i}}{2(\varphi_{t_i} - \varphi_{t_{i-1}} )} \right) I_1 \left( \frac{\sqrt{{\bar{S}}^*_{t_{i-1}} {\bar{S}}^*_{t_i}}}{\varphi_{t_i} - \varphi_{t_{i-1}}} \right) \bigg) \\ \notag
&= \sum_{i=1}^n \bigg\{ \log\bigg(\frac{1}{2(\varphi_{t_i} - \varphi_{t_{i-1}} )}\bigg) +\frac{1}{2}\log \bigg(\frac{{\bar{S}}^*_{t_i}}{{\bar{S}}^*_{t_{i-1}}}\bigg) + \left( -\frac{{\bar{S}}^*_{t_{i-1}} + {\bar{S}}^*_{t_i}}{2(\varphi_{t_i} - \varphi_{t_{i-1}} )} \right) \\ \notag
&+ \log \bigg( I_1 \left( \frac{\sqrt{{\bar{S}}^*_{t_{i-1}} {\bar{S}}^*_{t_i}}}{\varphi_{t_i} - \varphi_{t_{i-1}}} \right) \bigg) \bigg\}.\end{aligned}$$ Initial estimates of $\alpha $ and $\eta$ can be found by equating the empirically calculated quadratic variation of $\sqrt{{\bar{S}}^*}$, that is $$\langle \sqrt{{\bar{S}}^*} \rangle_{t_j} = \sum_{i=1}^j (\sqrt{{\bar{S}}^*_{t_i}} - \sqrt{{\bar{S}}^*_{t_{i-1}}})^2,$$ to the theoretical quadratic variation of $\sqrt{{\bar{S}}^*}$, that is $$\langle \sqrt{{\bar{S}}^*} \rangle_{t_j} = \frac{\alpha}{4\eta}(\exp (\eta t_j) - 1),$$ at the times $t=t_k$ and $t=t_{2k}$ where $k=\lfloor n/2\rfloor$. The initial estimates are found straightforwardly to be $$\begin{aligned}
\alpha_0 &= \langle \sqrt{{\bar{S}}^*} \rangle_{t_k} \frac{4\eta}{\exp(\eta t_k)-1} \\ \notag
\eta_0 &= \frac{1}{t_k-t_0}\,\log \left( \frac{\langle \sqrt{{\bar{S}}^*} \rangle_{t_{2k}}}{\langle \sqrt{{\bar{S}}^*} \rangle_{t_k}} - 1 \right).\end{aligned}$$ With these initial estimates we calculate the logarithm of the likelihood function at the points $\alpha_0 + i \delta \alpha$ and $\eta_0 + j\delta \eta$ for $i,j=-2,-1,0,1,2$, with $\delta\alpha = \alpha_0/4$ and $\delta\eta = \eta_0 / 4$, and fit a quadratic form $$Q(x) = x^T A x - 2b^T x + c ,$$ where $x = \left( \begin{array}{c} \alpha \\ \eta \end{array}\right) $ is a 2-by-1 vector, $A$ is a negative definite 2-by-2 matrix, $b$ is a 2-by-1 vector and $c$ is a scalar. Subsequent estimates $\alpha_1 $ and $\eta_1$ are obtained from the matrix expression $A^{-1} b$ corresponding to the maximum of the quadratic form $Q$. Iteratively applying this estimation method gives the maximum likelihood estimates of the parameters $\alpha$ and $\eta$: $$\begin{aligned}
\alpha &= \MMMalphabar \, (\MMMalphabarSE ), \\ \notag
\eta &= \MMMetabar\, (\MMMetaSE ) ,\end{aligned}$$ where the standard errors are shown in brackets. The maximum log likelihood is $\ell (\alpha, \eta) = \MMMloglkhd $. The Cramér-Rao inequality for the covariance matrix of the parameter estimates is $$\mathrm{Var}((\alpha,\eta)) \ge -\left(\nabla^2 \ell(\alpha,\eta)\right)^{-1}$$ and as the number of observations becomes large the covariance matrix approaches this lower bound, which we use to calculate the standard errors.
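As a rough illustration of this estimation, one may also maximize the log-likelihood directly with a general-purpose optimizer; the sketch below is such a minimal alternative (it is not the quadratic-form iteration described above), assuming SciPy is available and writing $\log I_1(z)$ via the exponentially scaled Bessel function to avoid overflow.

``` python
import numpy as np
from scipy.special import ive
from scipy.optimize import minimize

def neg_log_likelihood(params, times, s_bar):
    """Negative log-likelihood of the discounted index observations under the
    stylized MMM transition density."""
    alpha, eta = params
    phi = alpha * (np.exp(eta * times) - 1.0) / (4.0 * eta)
    dphi = np.diff(phi)
    s0, s1 = s_bar[:-1], s_bar[1:]
    z = np.sqrt(s0 * s1) / dphi
    # log I_1(z) = log ive(1, z) + z for z > 0, since ive is exponentially scaled
    log_bessel = np.log(ive(1, z)) + z
    ll = np.sum(-np.log(2.0 * dphi) + 0.5 * np.log(s1 / s0)
                - (s0 + s1) / (2.0 * dphi) + log_bessel)
    return -ll

# e.g. minimize(neg_log_likelihood, x0=[alpha0, eta0], args=(times, s_bar),
#               method="Nelder-Mead")
```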
Proof of Lemma \[Lem:ncx2\] {#App:B}
===========================
If $U$ is a non-central chi-squared random variable having $\nu$ degrees of freedom and non-centrality parameter $\lambda$, then it can be written as a chi-squared random variable $X_N$ having a random number of degrees of freedom $N=\nu+2P$ with $P$ being a Poisson random variable with mean $\lambda /2$, see e.g. [@JohnsonKoBa95]. It follows that $$\begin{aligned}
E\left( \frac{\lambda}{U}1_{U<x}\right) &= E\left( \frac{\lambda}{X_N}1_{X_N<x}\right) \\ \notag
&= E\left( E\left( \frac{\lambda}{X_N}1_{X_N<x}\bigg| N\right) \right) .\end{aligned}$$ Because $X_N $ is conditionally chi-squared distributed we have that $$E\left(\frac{1}{X_N} 1_{X_N<x} \bigg|N\right) = \frac{1}{N-2} E(1_{X_{N-2}<x}|N)$$ for a conditionally chi-squared random variable $X_{N-2}$ with $N-2$ degrees of freedom. Therefore $$E\left( \frac{\lambda}{U}1_{U<x}\right) = E\left( \frac{\lambda }{N-2} E(1_{X_{N-2}<x}|N) \right)$$ and substituting $\nu + 2P$, with $\nu=4$, for $N$ gives $$\label{Eqn:expn}
E\left( \frac{\lambda}{U}1_{U<x}\right) = \frac{1}{2} E\left( \frac{\lambda }{P+1} E(1_{X_{2+2P}<x}|P) \right) .$$ We observe that for any Poisson random variable $P$ with mean $\mu$, $$E\left( \frac{1}{1+P} f(P)\right) = \frac{1}{\mu} E\left( f(P-1)\right) - \frac{1}{\mu}\exp(-\mu) f(-1)$$ and making use of this, becomes, with $\mu = \lambda/2$ and $f(P) = E(1_{X_{2+2P}<x}|P)$, $$\begin{aligned}
\label{Eqn:expn2}
E\left( \frac{\lambda}{U}1_{U<x}\right) &= \frac{\lambda}{2} \left(\frac{1}{\mu} E\left( f(P-1)\right) - \frac{1}{\mu}\exp(-\mu) f(-1)\right) \\ \notag
&= E\left( E(1_{X_{2P}<x}|P)\right) - \exp(-\lambda /2) \\ \notag
&= \chi^2_{0,\lambda}(x) - \exp(-\lambda /2),\end{aligned}$$ which is the second expectation formula. Letting $x\to\infty$ in gives the first expectation formula. The third expectation formula follows straightforwardly by subtracting the second formula from the first formula.
Proof of Corollary \[Cor:expn\] {#App:C}
===============================
Letting $$\lambda = \frac{{\bar{S}}^*_{t}}{\varphi_T - \varphi_t}$$ we know that the distribution of the random variable $$U = \frac{{\bar{S}}^*_{T}}{\varphi_T - \varphi_t}$$ conditional upon ${\bar{S}}^*_{t}$ is a non-central chi-squared distribution with four degrees of freedom and non-centrality parameter $\lambda$. Applying Lemma \[Lem:ncx2\] gives the result.
Proof of Theorem \[Thm:D\] {#App:D}
==========================
Since $$\frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} > 0$$ if and only if $${\bar{S}}^*_T < {\bar{S}}^*_{t_0}\exp(\eta(T-t_0))$$ we can write $$\begin{aligned}
&E_t \left(
\Big( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \Big)^{+}
\right) \\ \notag
&= E_t \left(
\frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} 1_{{\bar{S}}^*_T < {\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \right)
- E_t \left( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))}1_{{\bar{S}}^*_T < {\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \right) .\end{aligned}$$ Applying Corollary \[Cor:expn\] and the fact that ${\bar{S}}^*_T$ is non-central chi-squared distributed with four degrees of freedom gives $$\begin{aligned}
&E_t \left(
\Big( \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \Big)^{+}
\right) \\ \notag
&= \chi^2_{0,\lambda }\left(\kappa\right) - \exp (-\lambda /2) - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} \chi^2_{4,\lambda }\left(\kappa \right)\end{aligned}$$ where $$\kappa = \frac{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))}{\varphi_T - \varphi_t},$$ which is the requested result.
Proof of Theorem \[Thm:E\] {#App:E}
==========================
The proof is analogous to that of the Black-Scholes European call option formula. We can write $\frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T}$ as $$\frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} = \exp( -\frac{1}{2}\theta^2(T-t)+\theta\sqrt{T-t}Z)$$ for a standard normally distributed random variable $Z$. Thus the condition $$\frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_T} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} > 0$$ is equivalent to the condition $$Z > \frac{\log\left( \frac{{\bar{S}}^*_{t}\exp(\frac{1}{2}\theta^2(T-t))}{{\bar{S}}^*_{t_0}\exp(\eta (T-t_0))}\right) }{\theta\sqrt{T-t}} = -d_2.$$ We can rewrite the expectation as $$\begin{aligned}
&E_t\left( \exp( -\frac{1}{2}\theta^2(T-t)+\theta\sqrt{T-t}Z) 1_{Z>-d_2} - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} 1_{Z>-d_2} \right) \\ \notag
&= \exp( -\frac{1}{2}\theta^2(T-t)) E_t\left( \exp( \theta\sqrt{T-t}Z) 1_{Z>-d_2}\right) \\ \notag
&- \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} E_t\left( 1_{Z>-d_2} \right) .\end{aligned}$$ We note that $E_t(\exp(\alpha Z)1_{Z>-d_2}) = \exp(\frac{1}{2}\alpha^2)(1-N(-d_2-\alpha))$ so that the expectation becomes $$(1-N(-d_2-\theta\sqrt{T-t})) - \frac{{\bar{S}}^*_{t}}{{\bar{S}}^*_{t_0}\exp(\eta(T-t_0))} (1-N(-d_2)) ,$$ which simplifies to the result.
[^1]: This research is supported by an Australian Government Research Training Program Scholarship.
|
---
abstract: 'Long-range magnetostatic interaction between wires strongly depends on their spatial position. This interaction, combined with applied tensile stress, influences the hysteresis loop of the system of wires through the stress dependence of their coercive fields. As a result, we obtain a set of stable magnetic states of the system, dependent on the applied field, applied stress and mutual positions of the wires. These states can be used to encode the system history.'
---
[**Stable states of systems of bistable magnetostrictive wires against applied field, applied stress and spatial geometry** ]{}\
[ P. Gawroński$^{*1}$, A. Chizhik$^{**2}$, J. M. Blanco$^3$ and K. Ku[ł]{}akowski$^{***1}$\
*[$^1$ Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland\
$^2$ Departamento Fisica de Materiales, Facultad de Quimica, UPV, 1072, 20080 San Sebastian, Spain\
$^3$ Departamento Fisica Aplicada I, EUPDS, UPV/EHU, Plaza Europa, 1, 20018 San Sebastian, Spain\
$^*$E-mail: [email protected]\
$^{**}$E-mail: [email protected]\
$^{***}$E-mail: [email protected]\
]{}* ]{}
[*PACS numbers:*]{} 75.50.Kj, 75.60.Ej, 75.80.+q
[*Keywords:*]{} micromagnetism, amorphous wires, hysteresis, magnetostatics
Introduction
============
Amorphous magnetic wires are of interest for their actual and potential use, e. g. in sensors [@vah; @vaz; @moh]. As is almost usual in micromagnetism, their properties depend on the technique of sample preparation and thermal treatment. Then, they are a good subject for computational science. The problem of the stray field and/or magnetization distribution of the wire has been treated either by means of purely computational methods, where the wire volume is divided into magnetically polarized units [@her; @for], or within the dipolar or similar model approximation [@pir; @kac]. The latter approach is particularly suitable for magnetic arrays, formed as sets of parallel microwires perpendicular to the array [@vel]. Other authors used an approximation of a homogeneous interaction field [@sam; @sav]. As was pointed out by Velazquez et al. [@pir], the homogeneous field approximation does not work for short interwire distances. Moreover, the spatial distribution of the magnetization near the wire ends remains in fact unknown, and it is precisely in this area that the remagnetization process starts. As a consequence, our knowledge of the subtle process of the switching of the magnetic moment in the wire advances with difficulty, despite an obvious technological interest. Recent progress in the field is due to the application of phenomenological theories and to the agreement of the obtained results with experimental data [@var]. Although this procedure has much improved our understanding of the underlying physical mechanisms, questions still seem to be not much less valuable than answers.
Here we are interested in the bistable hysteresis loops for various orientations of the wires with respect to each other and to the applied field. The motivation is the potential flexibility of the hysteresis loops, which we expect to result when we modify mutual positions and orientations of the wires. On the other hand, it is desirable to confront the experimental data with the model calculations. Even simplified, the model provides a reference point, and the departures between model and reality tell us where an established picture fails.
Our method here is to use a simplified model of the interaction, where the wires are magnetized homogeneously and the wire diameter is set to zero. This leaves the stray field of a wire reduced to the field created by two magnetic charges, placed at the wire ends. The approximation of an infinitely thin wire is justified as long as the wire-wire distance is much larger than the wire diameter. The resulting formula is more refined than the approximation of a homogeneous field, and much easier to use than a micromagnetic simulation. This advantage allows us to calculate the interaction field for any mutual position and/or orientation of the wires.
Despite its practical ease, there is an additional methodological argument for the assumption of a homogeneous distribution of the magnetization. Let us consider the case of two parallel wires which form two edges of a rectangle. This system has been investigated by several authors [@vaz; @vel; @sam]. The experimental results on the hysteresis clearly prove that the stray field of the wires acts so as to favour antiparallel orientations of their magnetic moments. It is reasonable to expect that the stray field of one wire acting on another wire should be evaluated near the end of the latter wire; it is at this point that the remagnetization process starts. Then, every deviation from the homogeneous distribution of the magnetization weakens the interaction field, which makes the model ineffective. Let us add that in our numerical simulations, reported from Section 3 on, we evaluate the interaction field not near the wire ends, but at the wire ends. This simplification is expected not to change the results remarkably as long as the wires are not parallel.
We are interested in two questions: i) which geometrical configurations of the wires allow for bistability, ii) what kinds of hysteresis loops appear for various wire configurations? The former question appears as a consequence of the fact that, for some mutual positions and orientations of the wires, the effective (external + interaction) field causing the large Barkhausen jump at one wire end changes its sign at a point along the wire; then the domain wall is stopped at this point and the bistability is lost. In this case some more complex behaviour of the hysteresis loop can be expected; for clarity we limit the subject of this text to the bistable case. The second question is a necessary introduction to the study of the magnetic properties of sets of magnetostatically interacting elements. Such systems are of potential interest e.g. for submicrometer magnetic dots [@cov]. We hope that the answer, even simplified, will motivate the search for spatial configurations of the wires which can be useful for other new applications.
The condition applied here may seem too restrictive, because in fact bistability can still be observed if the point on the wire where the axial field is zero vanishes, as the applied field increases, before the domain wall reaches it. Here, however, we are not going to rely on these dynamical effects. We limit our interest to the cases where bistability is observed even in the limit of small frequency, with the field amplitude only as large as needed to activate the switching process. On the other hand, it is known that at fields below the order of 1 A/m the domain wall motion is much slower. However, this effect does not influence the shape of the hysteresis loop as long as the period 1/$f$ of the applied sinusoidal field is larger than the time of the remagnetization. Our discussion is limited to this case.
It is known that the wire properties depend on the applied tensile stress [@mit]. For the purposes of this work, the most important factor is the stress dependence of the switching (coercive) field. We should repeat here that the bistability condition considered in this text is applied to the effective field (applied field + interaction) which triggers the domain wall motion. The enhancement of the switching field by the tensile stress demands an increase of the external field, which influences the bistability condition. We therefore consider the tensile stress as an important and nontrivial factor which can change the system properties.
The layout of this text is as follows. In the next section data are provided on the experimental dependence of the switching field of the wire on the applied tensile stress. In Section 3 we outline the question of the bistability of interacting wires, and we illustrate this question by an analytical solution in the simplest case of parallel wires. In Section 4, conditions of the bistability are investigated numerically. This section also contains some examples of the hysteresis loops of various bistable systems, including the case with applied tensile stress. Final conclusions close the text.
Experiment
==========
In order to capture the dependence of the switching field on the applied tensile stress, the axial hysteresis loop of an as-cast wire of nominal composition $Fe_{77.5}B_{15}Si_{7.5}$, diameter $125$ $\mu m$ and length $100$ $mm$ was measured by means of the conventional induction method [@vzq]. The frequency of the external sine-like field was equal to $50$ $Hz$ and its amplitude was set to $40$ $A/m$. One end of the wire was fixed to the sample holder, while to the other end a mechanical load was attached in order to apply an additional tensile stress $\sigma $ during the magnetic measurement. The mechanical load was changed from zero to $600$ $g$. The obtained values of the switching field $H^*(\sigma)$ are shown in Fig. 1.
The bistability of parallel wires
=================================
The wires considered here are iron-based straight pieces of amorphous alloy, $L=2a$=10 cm long, with a diameter of 2$R$=120 micrometers. Their magnetization is $M=0.7T/ \mu _0$=5.6$\times 10^5$ A/m. Wires of this kind are well characterized in the literature [@che]. The magnetic pole at each end carries the magnetic charge $Q=\pm M\pi R^2$= $\pm $0.0063 Am. The switching (coercive) field is about 10 A/m. Suppose that the center of such a wire is placed at the origin of coordinates, and the wire is parallel to the OZ-axis. In the surrounding space, there are surfaces where the axial component of the stray field is equal to the applied field, equal in this case to the switching field. If a system of two wires is to be bistable, the second wire cannot cross such a surface. If it does, the domain wall in the second wire is stopped.
In the (r-z) plane, the condition that the field created by one magnetic pole plus the external field $H$ cancels the field created by the other magnetic pole is
$$F(z+a)=F(z-a)+ \frac{4\pi H}{Q}$$
where $$F(y)=\frac{y}{(r^2+y^2)^{3/2}}$$
For $H$=0, the equation can be solved explicitly. Expressing all lengths in units of $a$, which is half of the wire length, after some simple manipulation we get the equation
$$r^2 = \frac{(z+1)^2(z-1)^{2/3} - (z-1)^2(z+1)^{2/3}}{(z+1)^{2/3} - (z-1)^{2/3}}$$
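For completeness, a short numerical sketch of this zero-field surface is given below (Python, with lengths measured in units of $a$ and the region $z>1$ beyond the near wire end); the parameter range is illustrative only.

``` python
import numpy as np

def boundary_r(z):
    """Radius r(z) of the zero-field surface (lengths in units of a), for z > 1."""
    zp, zm = z + 1.0, z - 1.0
    num = zp**2 * zm**(2.0 / 3.0) - zm**2 * zp**(2.0 / 3.0)
    den = zp**(2.0 / 3.0) - zm**(2.0 / 3.0)
    return np.sqrt(num / den)

z = np.linspace(1.01, 3.0, 50)
r = boundary_r(z)   # a second parallel wire crossing this surface loses its bistability
```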
One of the consequences of this condition is that parallel wires of exactly the same length and very close to each other lose their bistability if they are mutually shifted along their length. Let us consider the case when they are not shifted; then they form edges of a rectangle. The axial component of the stray field created by the end of one wire at a point of the other wire, at distance $x$ from its closer end, is
$$H = \frac{Q}{4\pi}\frac{x}{(x^2+d^2)^{3/2}}$$
where $d$ is the interwire distance. This component is maximal at the point $x$ where $2x^2=d^2$, and its value at the maximum varies with $d$ approximately as $Q/(32.6d^2)$. If $d<<L$, the contribution to the stray field from the other wire end is negligible. However, this evaluation relies on the assumption that the remagnetization starts at the point on the wire where the stray field from the other wire has its maximum. This assumption is not justified. Also, the obtained values of the interaction field much exceed the observed value, which is of the order of 10 A/m for a distance $d$ of 1 mm [@gwr]. This can mean that in fact the switching starts at a smaller value of $x$, i.e. closer to the wire end. Anyway, the evaluation $H \propto d^{-2}$ seems to reflect the experimental data of Ref. [@pir] better than the formula used in Ref. [@sam] for microwires $$H_{int}=\frac{QL}{4\pi}(d^2+L^2)^{-3/2}$$ at least for small interwire distance $d$. The latter formula can be derived as the interaction between the opposite ends of the wires. We should add that the authors of Ref. [@sam] used it to test the dependence of the stray field on the wire length. The stress dependence of the point $x$ where the remagnetization starts could explain the observed variation of the interaction field with stress [@gwr]. The values of the interaction field observed in this experiment are as large as 500 A/m. This order of magnitude can be reproduced by the formula $Q/(32.6d^2)$ given above. Then, the observed difference between the conventional and the cold-drawn wires can be discussed in terms of the position of the point where the remagnetization starts.
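The estimate $Q/(32.6 d^2)$ quoted above follows from maximizing the single-pole axial field; the short sketch below reproduces it numerically for the wire parameters given in this section (the distance $d$ = 1 mm is taken as an example).

``` python
import numpy as np

Q = 0.0063      # magnetic charge of one pole [A m]
d = 1.0e-3      # interwire distance [m], example value

def axial_field(x, d, Q):
    """Axial stray-field component at axial distance x and transverse distance d
    from a single magnetic pole of charge Q."""
    return Q / (4.0 * np.pi) * x / (x**2 + d**2) ** 1.5

x_max = d / np.sqrt(2.0)             # maximum at 2 x^2 = d^2
h_max = axial_field(x_max, d, Q)     # about 1.9e2 A/m here, close to Q/(32.6 d^2)
print(h_max, Q / (32.6 * d**2))
```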
Conditions of the bistability - numerical solutions
===================================================
In Figs. 2-8 we show numerical solutions of the same problem for various mutual orientations of the wires. The value of the applied field is chosen so as to trigger the remagnetization. This means that the effective field at one end of the wire is equal to the switching field. Here again, the question is: will the domain wall reach the other wire end? Marked areas in the figures give the values of the parameters where the answer is “no”.
Let us assume that the magnetic field is along the OZ axis. The spatial configurations are: wire A along the field, wire B in the same plane XZ forming an angle $\pi /4$ (Fig. 2); one wire along the field, another perpendicular to it, also in the plane XZ but with Y differing by 2 mm (Fig. 3); two wires in parallel planes XZ at a distance of 2 mm from one another, both wires forming angles $\pm \pi /4$ with the field (Fig. 4); two wires in the same plane XZ, one perpendicular to the field, the other forming an angle $\phi$ with the former, as clock hands (Fig. 5); two wires in the same plane XZ, one parallel to the field, the other forming an angle $\phi$ with the former, as clock hands (Fig. 6); two wires in the same plane XZ forming the same angle with the field, rotating as bird wings (Fig. 7); three wires along three edges of a tetrahedron, forming the same angles with the field (Fig. 8). In the case shown in Fig. 6 we also include the tensile stress, large enough to enhance the switching field $H^*$ from 10 to 15 A/m. This is shown to change remarkably the area where the bistability is absent. We also show some parameters of the obtained hysteresis loops. The width of the horizontal part of the loops, which reflects the wire-wire interaction [@vel], is denoted as $\Delta$. The right-shift of the loop center along the field axis is denoted as $s$, and the width of the loop - as $w$. In several cases, the results reflect obvious properties of the systems. For example, for the wires almost perpendicular to the applied field, the hysteresis loop must be very wide. We note that when the wires are not parallel to the applied field, the measured and calculated $\Delta$ loses its simple interpretation as the interaction field and depends also on the orientation of the wires with respect to the external field.
Conclusions
===========
To summarize the main shortcomings of our approach: [*i)*]{} the wire-wire interaction is approximated under the assumption of uniform wire magnetization, [*ii)*]{} our considerations are limited to the case of strict bistability, where the domain wall moves to the end of the wire at the same external field at which it started to move. As remarked above, the former approximation can be justified if the distance between the wires is much larger (by at least one order of magnitude) than the wire diameter. The second limitation excludes many cases which can be of interest. In these cases the dynamics of the domain walls depends on the field amplitude and frequency [@kru], and the numerical solution of the equation of motion of the domain wall seems to be the only proper method.
Even with the drawbacks listed above, the results of the model calculations point out that spatial configurations of bistable wires offer a rich variety of hysteresis loops. The parameters of the loops can be controlled by modifying the mutual positions of the wires, the angles formed by the wires with the applied magnetic field, and the applied tensile stress. The obtained hysteresis loops reveal stable states, which persist until the external field is raised above some critical values. Then, these states encode the information on the previous conditions applied to the wires.\
K. K. and P. G. are grateful to Julian Gonz[á]{}lez and Arcady Zhukov for helpful discussions and kind hospitality.
[00]{} M. V[á]{}zquez and A. Hernando, J. Phys. D [**29**]{} (1996) 939. M. V[á]{}zquez, Physica B [**299**]{} (2001) 302. K. Mohri, T. Uchiyama, L. P. Shen, C. M. Cai and L. V. Panina, J. Magn. Magn. Mater. [**249**]{} (2002) 351. R. Hertel, J. Magn. Magn. Mater. [**249**]{} (2002) 251. H. Forster, T. Schrefl, W. Scholz, D. Suess, V. Tsiantos and J. Fidler, J. Magn. Magn. Mater. [**249**]{} (2002) 181. J. Vel[á]{}zquez, K. R. Pirota and M. V[á]{}zquez, IEEE Trans. Magn. [**39**]{} (2003) 3049. R. Varga, K. L. Garcia, M. V[á]{}zquez and P. Vojtanik, Phys. Rev. Lett. [**94**]{} (2005) 17201. A. Kaczanowski and K. Ku[ł]{}akowski, Micr. Eng. [**81**]{} (2005) 317. J. Vel[á]{}zquez, C. Garcia, M. V[á]{}zquez and A. Hernando, Phys. Rev. B [**54**]{} (1996) 9903. L. C. Sampaio, E. H. C. P. Sinnecker, G. R. C. Cernicchiaro, M. Knobel, M. Vazquez and M. Velazquez, Phys. Rev. B [**61**]{} (2000) 8976. T. A. Savas, M. Farhoud, H. I. Smith, M. Hwang and C. A. Ross, J. Appl. Phys. [**85**]{} (1999) 6160. R. P. Cowburn and M. E. Welland, Science [**287**]{} (2000) 1466. A. Mitra and M. V[á]{}zquez, J. Phys. D: Appl. Phys. [**23**]{} (1990) 228. M. V[á]{}zquez, J. Gonz[á]{}lez and A. Hernando, J. Magn. Magn. Mater. [**53**]{} (1986) 323. D.-X. Chen, N. M. Dempsey, A. Hernando and M. V[á]{}zquez, J. Phys. D: Appl. Phys. [**28**]{} (1995) 1022. C. Gomez-Polo, M. V[á]{}zquez and D.-X. Chen, Appl. Phys. Lett. [**62**]{} (1993) 108. P. Gawro[ń]{}ski, Z. Zhukov, J. M. Blanco, J. Gonz[á]{}lez and K. Ku[ł]{}akowski, J. Magn. Magn. Mater. [**290-291**]{} (2005) 595. G. Krupi[ń]{}ska, A. Zhukov, J. Gonz[á]{}lez and K. Ku[ł]{}akowski, Comp. Mater. Sci. [**36**]{} (2006) 268.
|
---
abstract: |
We analyze a few of the commonly used statistics based and machine learning algorithms for natural language disambiguation tasks and observe that they can be re-cast as learning linear separators in the feature space. Each of the methods makes a priori assumptions, which it employs, given the data, when searching for its hypothesis. Nevertheless, as we show, it searches a space that is as rich as the space of [*all*]{} linear separators. We use this to build an argument for a data driven approach which merely searches for a [*good*]{} linear separator in the feature space, without further assumptions on the domain or a specific problem.
We present such an approach - a sparse network of linear separators, utilizing the Winnow learning algorithm - and show how to use it in a variety of ambiguity resolution problems. The learning approach presented is attribute-efficient and, therefore, appropriate for domains having a very large number of attributes.
In particular, we present an extensive experimental comparison of our approach with other methods on several well studied lexical disambiguation tasks such as context-sensitive spelling correction, prepositional phrase attachment and part of speech tagging. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.
author:
- |
[*Dan Roth*]{}\
[Department of Computer Science]{}\
[University of Illinois at Urbana-Champaign]{}\
[Urbana, IL 61801]{}\
[[email protected]]{}
title: |
\
Learning to Resolve Natural Language Ambiguities:\
A Unified Approach
---
Introduction
============
Many important natural language inferences can be viewed as problems of resolving ambiguity, either semantic or syntactic, based on properties of the surrounding context. Examples include part-of speech tagging, word-sense disambiguation, accent restoration, word choice selection in machine translation, context-sensitive spelling correction, word selection in speech recognition and identifying discourse markers. In each of these problems it is necessary to disambiguate two or more \[semantically, syntactically or structurally\]-distinct forms which have been fused together into the same representation in some medium. In a prototypical instance of this problem, word sense disambiguation, distinct semantic concepts such as [interest rate]{} and [has interest in Math]{} are conflated in ordinary text. The surrounding context - word associations and syntactic patterns in this case - are sufficient to identify the correct form.
Many of these are important stand-alone problems but even more important is their role in many applications including speech recognition, machine translation, information extraction and intelligent human-machine interaction. Most of the ambiguity resolution problems are at the lower level of the natural language inference chain; a wide range and a large number of ambiguities are to be resolved simultaneously in performing any higher level natural language inference.
Developing learning techniques for language disambiguation has been an active field in recent years and a number of statistics based and machine learning techniques have been proposed. A partial list consists of Bayesian classifiers [@gale-word-sense], decision lists [@Yarowsky94], Bayesian hybrids [@Golding95], HMMs [@Charniak93], inductive logic methods [@ZelleMo96], memory-based methods [@zdv-97] and transformation-based learning [@Brill95]. Most of these have been developed in the context of a specific task although claims have been made as to their applicability to others.
In this paper we cast the disambiguation problem as a learning problem and use tools from computational learning theory to gain some understanding of the assumptions and restrictions made by different learning methods in shaping their search space.
The learning theory setting helps in making a few interesting observations. We observe that many algorithms, including naive Bayes, Brill’s transformation based method, Decision Lists and the Back-off estimation method can be re-cast as learning linear separators in their feature space. As learning techniques for linear separators these techniques are limited in that, in general, they cannot learn [*all*]{} linearly separable functions. Nevertheless, we find, they still search a space that is as complex, in terms of its VC dimension, as the space of all linear separators. This has implications to the generalization ability of their hypotheses. Together with the fact that different methods seem to use different a priori assumptions in guiding their search for the linear separator, it raises the question of whether there is an alternative - search for the best linear separator in the feature space, without resorting to assumptions about the domain or any specific problem.
Partly motivated by these insights, we present a new algorithm, and show how to use it in a variety of disambiguation tasks. The architecture proposed, [[*SNOW*]{}]{}, is a Sparse Network Of linear separators which utilizes the Winnow learning algorithm. A target node in the network corresponds to a candidate in the disambiguation task; all subnetworks learn autonomously from the same data, in an on-line fashion, and at run time, they compete for assigning the correct meaning. The architecture is data-driven (in that its nodes are allocated as part of the learning process and depend on the observed data) and supports efficient on-line learning. Moreover, the learning approach presented is attribute-efficient and, therefore, appropriate for domains having a very large number of attributes. Altogether, we believe that this approach has the potential to support, within a single architecture, a large number of simultaneously occurring and interacting language related tasks.
To start validating these claims we present experimental results on three disambiguation tasks. [*Prepositional phrase attachment ([PPA]{})*]{} is the task of deciding whether the Prepositional Phrase (PP) attaches to the noun phrase (NP), as in [Buy the car with the steering wheel]{} or to the verb phrase (VP), as in [Buy the car with his money]{}. [*Context-sensitive Spelling correction ([Spell]{})*]{} is the task of fixing spelling errors that result in valid words, such as [It’s not to late]{}, where [too]{} was mistakenly typed as [to]{}. [*Part of speech tagging ([POS]{})*]{} is the task of assigning each word in a given sentence the part of speech it assumes in this sentence. For example, assign N or V to [talk]{} in the following pair of sentences: [Have you listened to his (him) talk ?]{}. In all cases we show that our approach either outperforms other methods tried for these tasks or performs comparably to the best.
This paper focuses on analyzing the learning problem and on motivating and developing the learning approach; therefore we can only present the bottom line of the experimental studies and the details are deferred to companion reports.
The Learning Problem
====================
Disambiguation tasks can be viewed as general classification problems. Given an input sentence we would like to assign it a single property out of a set of potential properties. Formally, given a sentence $s$ and a predicate $p$ defined on the sentence, we let $C=\{c_1,c_2,\ldots c_m\}$ be the collection of possible values this predicate can assume in $s$. It is assumed that one of the elements in $C$ is the [*correct*]{} assignment, $c(s,p)$. $c_i$ can take values from a given confusion set if the predicate $p$ is the correct spelling of any occurrence of a word from this set in the sentence; it can take the values [v]{} or [n]{} if the predicate $p$ is the attachment of the PP to the preceding VP ([v]{}) or the preceding NP ([n]{}), or it can take values from the set of possible senses of the word if the predicate is the meaning of the word [plant]{} in the sentence. In some cases, such as part of speech tagging, we may apply a collection $P$ of different predicates to the same sentence, when tagging the first, second, $k$th word in the sentence, respectively. Thus, we may perform a classification operation on the sentence multiple times. However, in the following definitions it would suffice to assume that there is a single pre-defined predicate operating on the sentence $s$; moreover, since the predicate studied will be clear from the context we omit it and denote the correct classification simply by $c(s)$.
A [*classifier*]{} $h$ is a function that maps the set $S$ of all sentences[^1], given the task defined by the predicate $p$, to a single value in $C$, $h:S {\rightarrow}C$.
In the setting considered here the classifier $h$ is selected by a training procedure. That is, we assume[^2] a class of functions ${{\cal H}}$, and use the training data to select a member of this class. Specifically, given a [*training corpus*]{} $S_{tr}$ consisting of [*labeled examples*]{} $(s,c(s))$, a learning algorithm selects a hypothesis $h \in {{\cal H}}$, the classifier.
The performance of the classifier is measured empirically, as the fraction of correct classifications it performs on a set $S_{ts}$ of test examples. Formally, $$\label{eq:perf}
Perf(h) = |\{s \in S_{ts} | h(s) = c(s) \}| / |S_{ts}|.
$$
A sentence $s$ is represented as a collection of [*features*]{}, and various kinds of feature representation can be used. For example, typical features used in correcting context-sensitive spelling are [*context words*]{} – which test for the presence of a particular word within $\pm k$ words of the target word, and [*collocations*]{} – which test for a pattern of up to $\ell$ contiguous words and/or part-of-speech tags around the target word. It is useful to consider features as sequences of [*tokens*]{} (e.g., words in the sentence, or pos tags of the words). In many applications (e.g., $n$-gram language models), there is a clear ordering on the features. We define here a natural partial order ${\prec}$ as follows: for features $f, g$ define $f {\prec}g \equiv f
\subseteq g$, where on the right hand side features are viewed simply as sets of tokens[^3]. A feature $f$ is of [*order*]{} $k$ if it consists of $k$ tokens.
A definition of a disambiguation problem consists of the task predicate $p$, the set ${{\cal C}}$ of possible classifications and the set ${{\cal F}}$ of features. ${{\cal F}}^{(k)}$ denotes the features of order $k$. Let $|{{\cal F}}|=n$, and $x_i$ be the $i$th feature. $x_i$ can either be present ([*active*]{}) in a sentence $s$ (we then say that $x_i=1$), or absent from it ($x_i=0$). Given that, a sentence $s$ can be represented as the set of all [*active*]{} features in it $s=(x_{i_1},x_{i_2}, \ldots
x_{i_m})$.
From the standpoint of the general framework the exact mapping of a sentence to a feature set will not matter, although it is crucially important in the specific applications studied later in the paper. At this point it is sufficient to notice that a sentence can be mapped into a binary feature vector. Moreover, w.l.o.g. we assume that $|C|=2$; moving to the general case is straightforward. From now on we will therefore treat classifiers as Boolean functions, $h:\{0,1\}^n {\rightarrow}\{0,1\}$.
Approaches to Disambiguation
============================
Learning approaches are usually categorized as statistical (or probabilistic) methods and symbolic methods. However, all learning methods are statistical in the sense that they attempt to make inductive generalizations from observed data and use them to make inferences with respect to previously unseen data; as such, the statistically based theories of learning [@Vapnik95] apply equally to both. The difference may be that symbolic methods do not explicitly use probabilities in the hypothesis. To stress the equivalence of the approaches further, in the following discussion we analyze two “statistical” and two “symbolic” approaches.
In this section we present four widely used disambiguation methods. Each method is first presented in its usual form and is then re-cast as a problem of learning a linear separator. That is, we show that there is a linear condition $\sum_{x_i \in {{\cal F}}} w_i x_i > \theta$ such that, given a sentence $s=(x_{i_1},x_{i_2}, \ldots x_{i_m})$, the method predicts $c=1$ if the condition holds for it, and $c=0$ otherwise.
Given an example $s=(x_1,x_2 \ldots x_m)$ a probabilistic classifier $h$ works by choosing the element of $C$ that is most probable, that is $h(s) = argmax_{c_i \in C} Pr(c_i | x_1,x_2,\ldots x_m)\footnote{As
usual, we use the notation $Pr(c_i | x_1,x_2,\ldots x_m)$ as a
shortcut for $Pr(c=c_i|x_1=a_1,x_2=a_2,\ldots x_m=a_m).$},$ where the probability is the empirical probability estimated from the labeled training data. In general, it is unlikely that one can estimate the probability of the event of interest $(c_i | x_1,x_2,\ldots x_m)$ directly from the training data. There is a need to make some probabilistic assumptions in order to evaluate the probability of this event indirectly, as a function of “more frequent” events whose probabilities can be estimated more robustly. Different probabilistic assumptions give rise to difference learning methods and we describe two popular methods below.
### The naive Bayes estimation ([NB]{})
The naive Bayes estimation (e.g., [@DudaHa73]) assumes that given the class value $c \in C$ the feature values are statistically independent. With this assumption and using Bayes rule the Bayes optimal prediction is given by: $ h(s) = argmax_{c_i \in C} \Pi_{j=1}^{m} Pr(x_j | c_i) P(c_i).$
The prior probabilities $p(c_i)$ (i.e., the fraction of training examples labeled with $c_i$) and the conditional probabilities $Pr(x_j
| c_i)$ (the fraction of the training examples labeled $c_i$ in which the $j$th feature has value $x_j$) can be estimated from the training data fairly robustly[^4], giving rise to the naive Bayes predictor. According to it, the optimal decision is $c=1$ when $$P(c=1) \Pi_{i} P(x_i|c=1) / P(c=0) \Pi_{i} P(x_i|c=0) > 1.$$ Denoting $p_i \equiv P(x_i=1|c=1), q_i \equiv P(x_i=1|c=0)$, $P(c=r) \equiv P(r)$, we can write this condition as $$\frac{P(1) \Pi_{i} p_i^{x_i} (1-p_i)^{1-x_i}}
{P(0) \Pi_{i} q_i^{x_i} (1-q_i)^{1-x_i}} =
\frac{P(1) \Pi_{i} (1-p_i)(\frac{p_i}{1-p_i})^{x_i}}
{P(0) \Pi_{i} (1-q_i)(\frac{q_i}{1-q_i})^{x_i}} > 1,$$ and by taking log we get that using naive Bayes estimation we predict $c=1$ if and only if
$$\log{\frac{P(1)}{P(0)}} + \sum_{i} \log{ \frac{1-p_i}{1-q_i} } +
\sum_{i}\log\left(\frac{p_i}{1-p_i}\, \frac{1-q_i}{q_i}\right) x_i > 0.$$ We conclude that the decision surface of the naive Bayes algorithm is given by a linear function in the feature space. Points which reside on one side of the hyper-plane are more likely to be labeled $1$ and points on the other side are more likely to be labeled $0$.
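To make the correspondence explicit, the following sketch (our own illustration, in Python) computes the naive Bayes weights and threshold directly from a 0/1 feature matrix; the smoothing by `eps` is an implementation convenience, not part of the derivation.

``` python
import numpy as np

def naive_bayes_linear(X, y, eps=1e-9):
    """Return the weights w and bias b of the linear separator equivalent to
    naive Bayes over binary features: predict c=1 iff  w.x + b > 0.
    X is an (examples x features) 0/1 matrix and y is a 0/1 label vector."""
    p_c1 = y.mean()
    p = X[y == 1].mean(axis=0).clip(eps, 1 - eps)   # P(x_i = 1 | c = 1)
    q = X[y == 0].mean(axis=0).clip(eps, 1 - eps)   # P(x_i = 1 | c = 0)
    w = np.log(p / (1 - p)) + np.log((1 - q) / q)
    b = np.log(p_c1 / (1 - p_c1)) + np.log((1 - p) / (1 - q)).sum()
    return w, b

def nb_predict(w, b, X):
    return (X @ w + b > 0).astype(int)
```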
This representation immediately implies that this predictor is optimal also in situations in which the conditional independence assumption does not hold. However, a more important consequence for our discussion here is the fact that not all linearly separable functions can be represented using this predictor [@Roth98].
### The back-off estimation ([BO]{})
Back-off estimation is another method for estimating the conditional probabilities $Pr(c_i|s)$. It has been used in many disambiguation tasks and in learning models for speech recognition [@Katz87; @ChenGo96; @CollinsBr95]. The back-off method suggests to estimate $Pr(c_i|x_1,x_2,\ldots,x_m)$ by interpolating the more robust estimates that can be attained for the conditional probabilities of more general events. Many variation of the method exist; we describe a fairly general one and then present the version used in [@CollinsBr95], which we compare with experimentally.
When applied to a disambiguation task, [BO]{} assumes that the sentence itself (the basic unit processed) is a feature[^5] of maximal order $f = f^{(k)} \in{{\cal F}}$. We estimate $$Pr(c_i|s) = Pr(c_i|f^{(k)}) =
\sum_{\{f \in {{\cal F}}| f {\prec}f^{(k)} \}} \lambda_{f} Pr(c_i | f).$$ The sum is over all features $f$ which are more general (and thus occur more frequently) than $f^{(k)}$. The conditional probabilities on the right are empirical estimates measured on the training data, and the coefficients $\lambda_f$ are also estimated given the training data. (Usually, these are maximum likelihood estimates evaluated using iterative methods, e.g. [@Samuelsson96]).
Thus, given an example $s=(x_1,x_2 \ldots x_m)$ the [BO]{} method predicts $c=1$ if and only if $$\sum_{i=1}^{|{{\cal F}}|} \lambda_{i} (Pr(c=1 | x_i) - Pr(c=0 | x_i)) x_i
> 0,$$ a linear function over the feature space.
For computational reasons, various simplifying assumptions are made in order to estimate the coefficients $\lambda_f$; we describe here the method used in [@CollinsBr95][^6]. We denote by ${{\cal N}}(f^{(j)})$ the number of occurrences of the $j$th order feature $f^{(j)}$ in the training data. Then [BO]{}estimates $P= Pr(c_i |f^{(k)})$ as follows:
[If ${{\cal N}}(f^{(k)}) > 0$, $P = {{Pr}}(c_i |f^{(k)})$\
Else if $\sum_{f \in {{\cal F}}^{(k-1)}} {{\cal N}}(f) > 0$, $P = \frac{1}{|{{\cal F}}^{(k-1)}|} \sum {{Pr}}(c_i |f^{(k-1)})$\
Else if $\ldots$\
Else if $\sum_{f \in {{\cal F}}^{(1)}} {{\cal N}}(f) > 0$, $P = \frac{1}{|{{\cal F}}^{(1)}|} \sum {{Pr}}(c_i |f^{(1)})$ ]{}
In this case, it is easy to write down the linear separator defining the estimate in an explicit way. Notice that with this estimation, given a sentence $s$, only the highest order features active in it are considered. Therefore, one can define the weights of the $j$th order feature in an inductive way, making sure that it is larger than the sum of the weights of the smaller order features. Leaving out details, it is clear that we get a simple representation of a linear separator over the feature space, that coincides with the [BO]{} algorithm.
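A minimal sketch of the estimation cases listed above is given below; the data structures (`counts`, `cond_probs`, `features_by_order`) and the convention of assigning probability zero to unseen lower-order features are our own simplifications of the procedure described in the text.

``` python
def backoff_estimate(features_by_order, counts, cond_probs, prior=0.5):
    """Back-off estimate of Pr(c=1 | s): use the highest feature order at which
    at least one feature of the sentence was observed in training, and average
    the empirical conditional probabilities over the features of that order.
    features_by_order: {order k: [features of order k active in s]}
    counts[f]:         training count of feature f
    cond_probs[f]:     empirical Pr(c=1 | f) estimated from training."""
    for k in sorted(features_by_order, reverse=True):
        feats = features_by_order[k]
        if sum(counts.get(f, 0) for f in feats) > 0:
            return sum(cond_probs.get(f, 0.0) for f in feats) / len(feats)
    return prior   # no feature of any order was observed in training
```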
It is important to notice that the assumptions made in the [BO]{} estimation method result in a linear decision surface that is, in general, [*different*]{} from the one derived in the [NB]{} method.
### Transformation Based Learning ([TBL]{})
Transformation based learning [@Brill95] is a machine learning approach for rule learning. It has been applied to a number of natural language disambiguation tasks, often achieving state-of-the-art accuracy.
The learning procedure is a mistake-driven algorithm that produces a set of rules. Irrespective of the learning procedure used to derive the [TBL]{} representation, we focus here on the final hypothesis used by [TBL]{} and how it is evaluated, given an input sentence, to produce a prediction. We assume, w.l.o.g, $|C|=2$.
The hypothesis of [TBL]{} is an ordered list of transformations. A [*transformation*]{} is a rule with an antecedent $t$ and a consequent[^7] $c \in C$. The antecedent $t$ is a condition on the input sentence. For example, in [Spell]{}, a condition might be [word W occurs within $\pm k$ of the target word]{}. That is, applying the condition to a sentence $s$ defines a feature $t(s) \in {{\cal F}}$. Phrased differently, the application of the condition to a given sentence $s$, checks whether the corresponding feature is active in this sentence. The condition holds if and only if the feature is active in the sentence.
An ordered list of transformations (the [TBL]{} hypothesis), is evaluated as follows: given a sentence $s$, an initial label $c \in C$ is assigned to it. Then, each rule is applied, in order, to the sentence. If the feature defined by the condition of the rule applies, the current label is replaced by the label in the consequent. This process goes on until the last rule in the list is evaluated. The last label is the output of the hypothesis.
In its most general setting, the [TBL]{} hypothesis is not a classifier [@Brill95]. The reason is that the truth value of the condition of the $i$th rule may change while evaluating one of the preceding rules. However, in many applications and, in particular, in [Spell]{} [@ManguBr97] and [PPA]{} [@BrillRe94] which we discuss later, this is not the case. There, the conditions do not depend on the labels, and therefore the output hypothesis of the [TBL]{} method can be viewed as a classifier. The following analysis applies only for this case. Using the terminology introduced above, let $(x_{i_1},c_{i_1}),(x_{i_2},c_{i_2}), \ldots (x_{i_k},c_{i_k})$ be the ordered sequence of rules defining the output hypothesis of [TBL]{}. (Notice that it is quite possible, and happens often in practice, for a feature to appear more than once in this sequence, even with different consequents). While the above description calls for evaluating the hypothesis by sequentially evaluating the conditions, it is easy to see that the following simpler procedure is sufficient:
Search the ordered sequence in reverse order. Let $x_{i_j}$ be the first active feature found in this way (i.e., the one with the largest $j$). Then the hypothesis predicts $c_{i_j}$.
Alternatively, the [TBL]{} hypothesis can be represented as a (positive) $1$-Decision-List ([p$1$-DL]{}) [@Rivest87], over the set ${{\cal F}}$ of features[^8].
If $x_{i_k}$ is active then predict $c_k$.\
Else If $x_{i_{k-1}}$ is active then predict $c_{k-1}$.\
Else $\ldots$\
Else If $x_{1}$ is active then predict $c_1$.\
Else Predict the initial value
Given the [p$1$-DL]{} representation (Fig \[fig:tbl\]), we can now represent the hypothesis as a linear separator over the set ${{\cal F}}$ of features. For simplicity, we now name the class labels $\{-1,+1\}$ rather than $\{0,1\}$. Then, the hypothesis predicts $c=1$ if and only if $\sum_{j=1}^k 2^{j} \cdot c_{i_j} \cdot x_{i_j} >0.$ Clearly, with this representation the active feature with the highest index dominates the prediction, and the representations are equivalent[^9].
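The equivalence can be written down directly; the sketch below (an illustration under the restriction, discussed above, that the conditions do not depend on the labels) turns an ordered rule list with consequents in $\{-1,+1\}$ into the weights of the corresponding linear separator, in which the last active rule dominates.

``` python
def tbl_to_linear(rules):
    """rules: ordered list [(feature, label), ...] with label in {-1, +1}.
    Returns feature weights w such that the sign of the summed weights of the
    active features reproduces the TBL prediction; the j-th rule contributes
    label * 2**j, so the highest-index active rule dominates."""
    weights = {}
    for j, (feature, label) in enumerate(rules, start=1):
        weights[feature] = weights.get(feature, 0) + label * 2 ** j
    return weights

def tbl_predict(weights, active_features, initial_label):
    score = sum(weights.get(f, 0) for f in active_features)
    if score == 0:                 # no listed feature is active
        return initial_label
    return 1 if score > 0 else -1
```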
### Decision Lists ([p$1$-DL]{})
It is easy to see (details omitted) that the above analysis applies to [p$1$-DL]{}, a method used, for example, in [@Yarowsky95]. The [TBL]{} and [p$1$-DL]{} hypotheses differ only in that they keep the rules in reversed order, due to the different evaluation methods.
The Linear Separator Representation {#sec:ls}
===================================
To summarize, we have shown:
This is not to say that these methods assume that the data is linearly separable. Rather, all the methods assume that the feature space is divided by a linear condition (i.e., a function of the form $\sum_{x_i
\in {{\cal F}}} w_i x_i > \theta$) into two regions, with the property that in one of the defined regions the more likely prediction is $0$ and in the other, the more likely prediction is $1$.
As pointed out, it is also instructive to see that these methods yield [*different*]{} decision surfaces and that they cannot represent every linearly separable function.
Theoretical Support for the Linear Separator Framework {#sec:theory}
======================================================
In this section we discuss the implications these observations have from the learning theory point of view. In order to do that we need to resort to some of the basic ideas that justify inductive learning. Why do we hope that a classifier learned from the training corpus will perform well (on the test data) ? Informally, the basic theorem of learning theory [@Valiant84; @Vapnik95] guarantees that, if the training data and the test data are sampled from the same distribution[^10], good performance on the training corpus guarantees good performance on the test corpus.
If one knows something about the model that generates the data, then estimating this model may yield good performance on future examples. However, in the problems considered here, no reasonable model is known, or is likely to exist. (The fact that the assumptions discussed above disagree with each other, in general, may be viewed as a support for this claim.)
In the absence of this knowledge a learning method merely attempts to make correct predictions. Under these conditions, it can be shown that the error of a classifier selected from class ${\cal H}$ on (previously unseen) test data, is bounded by the sum of its training error and a function that depends linearly on the complexity of ${\cal
H}$. This complexity is measured in terms of a combinatorial parameter - the VC-dimension of the class ${\cal H}$ [@Vapnik82] - which measures the richness of the function class. (See [@Vapnik95; @KearnsVa94]) for details).
We have shown that all the methods considered here look for a linear decision surface. However, they do make further assumptions which seem to restrict the function space they search in. To quantify this line of argument we ask whether the assumptions made by the different algorithms significantly reduce the complexity of the hypothesis space. The following claims show that this is not the case; the VC dimension of the function classes considered by all methods is as large as that of the full class of linear separators.
Fact $1$ is well known; $2$ and $3$ can be derived directly from the definition [@Roth98].
The implication is that a method that merely searches for the optimal linear decision surface given the training data may, in general, outperform all these methods also on the test data. This argument can be made formal by appealing to a result of [@KearnsSc94], which shows that even when there is no perfect classifier, the optimal linear separator on a polynomial size set of training examples is optimal (in a precise sense) also on the test data. The optimality criterion we seek is the one described in Eq. $1$: a linear classifier that minimizes the number of disagreements (the sum of the false positive and false negative classifications). Finding this classifier, however, is known to be NP-hard [@HoffgenSi92], so we need to resort to heuristics. In searching for good heuristics we are guided by computational issues that are relevant to the natural language domain. An essential property of such an algorithm is that it be feature-efficient. Consequently, the approach described in the next section makes use of the [*Winnow*]{} algorithm, which is known to produce good results when a linear separator exists, as well as under certain more relaxed assumptions [@Littlestone91].
The [[*SNOW*]{}]{} Approach
===========================
The [[*SNOW*]{}]{} architecture is a network of threshold gates. Nodes in the first layer of the network are allocated to input features in a data-driven way, given the input sentences. Target nodes (i.e., the elements $c \in C$) are represented by nodes in the second layer. Links from the first to the second layer have weights; each target node is thus defined as a (linear) function of the lower-level nodes. (A similar architecture which consists of an additional layer is described in [@GoldingRo96]. Here we do not use the “cloud” level described there.)
For example, in [Spell]{}, target nodes represent members of the confusion sets; in [POS]{}, target nodes correspond to different pos tags. Each target node can be thought of as an autonomous network, although they all feed from the same input. The network is [*sparse*]{} in that a target node need not be connected to all nodes in the input layer. For example, it is not connected to input nodes (features) that were never active with it in the same sentence, or it may decide, during training to disconnect itself from some of the irrelevant inputs.
Learning in [[*SNOW*]{}]{} proceeds in an on-line fashion[^11]. Every example is treated autonomously by each target subnetwork. Every labeled example is treated as positive for the target node corresponding to its label, and as negative for all others. Thus, every example is used once by all the nodes to refine their definition in terms of the others and is then discarded. At prediction time, given an input sentence which activates a subset of the input nodes, the information propagates through all the subnetworks; the one which produces the highest activity gets to determine the prediction.
A local learning algorithm, Winnow [@Littlestone88], is used at each target node to learn its dependence on other nodes. Winnow is a mistake driven on-line algorithm, which updates its weights in a multiplicative fashion. Its key feature is that the number of examples it requires to learn the target function grows linearly with the number of [*relevant*]{} attributes and only logarithmically with the total number of attributes. Winnow was shown to learn efficiently any linear threshold function and to be robust in the presence of various kinds of noise, and in cases where no linear-threshold function can make perfect classifications and still maintain its abovementioned dependence on the number of total and relevant attributes [@Littlestone91; @KivinenWa95].
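The following minimal Python sketch (our illustration, not the authors' implementation) conveys the flavor of a positive-Winnow target node together with the winner-take-all prediction across target nodes; the promotion/demotion constants, the threshold, and the sparse representation of examples as sets of active feature indices are illustrative assumptions.

```python
class WinnowNode:
    """One target node: a linear threshold unit with multiplicative (Winnow) updates."""

    def __init__(self, n_features, threshold=1.0, promotion=2.0, demotion=0.5):
        self.w = [1.0] * n_features   # positive weights, initialized to 1
        self.theta = threshold
        self.alpha = promotion        # promotion factor (> 1)
        self.beta = demotion          # demotion factor (in (0, 1))

    def activation(self, active):
        # 'active' is the set of feature indices present in the example
        return sum(self.w[i] for i in active)

    def predict(self, active):
        return self.activation(active) >= self.theta

    def update(self, active, label):
        """Mistake-driven: promote active weights on false negatives, demote on false positives."""
        if self.predict(active) == label:
            return
        factor = self.alpha if label else self.beta
        for i in active:
            self.w[i] *= factor


def snow_train(nodes, examples):
    """Each labeled example is positive for its own target node and negative for all others."""
    for active, label in examples:
        for target, node in nodes.items():
            node.update(active, target == label)


def snow_predict(nodes, active):
    """Winner-take-all: the target node with the highest activation determines the prediction."""
    return max(nodes, key=lambda target: nodes[target].activation(active))
```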
Notice that even when there are only two target nodes and the cloud size [@GoldingRo96] is $1$ [[*SNOW*]{}]{} behaves differently than pure Winnow. While each of the target nodes is learned using a positive Winnow algorithm, a winner-take-all policy is used to determine the prediction. Thus, we do not use the learning algorithm here simply as a discriminator. One reason is that the [[*SNOW*]{}]{} architecture, influenced by the Neuroidal system [@Valiant94], is being used in a system developed for the purpose of learning knowledge representations for natural language understanding tasks, and is being evaluated on a variety of tasks for which the node allocation process is of importance.
Experimental Evidence
=====================
In this section we present experimental results for three of the most well studied disambiguation problems, [Spell]{}, [PPA]{} and [POS]{}. We present here only the bottom-line results of an extensive study that appears in companion reports [@GoldingRo98; @KrymolovskyRo98; @RothZe98].
### Context Sensitive Spelling Correction
Context-sensitive spelling correction is the task of fixing spelling errors that result in valid words, such as [*It’s not to late*]{}, where [*too*]{} was mistakenly typed as [*to*]{}.
We model the ambiguity among words by [*confusion sets*]{}. A confusion set $C = \{c_1, \ldots, c_n\}$ means that each word $c_i$ in the set is ambiguous with each other word. All the results reported here use the same pre-defined set of confusion sets [@GoldingRo96].
We compare [[*SNOW*]{}]{} against [TBL]{} [@ManguBr97] and a naive-Bayes based system ([NB]{}). The latter system presents a few augmentations over the simple naive Bayes (but still shares the same basic assumptions) and is among the most successful methods tried for the problem [@Golding95]. An indication that a Winnow-based algorithm performs well on this problem was presented in [@GoldingRo96]. However, the system presented there was more involved than [[*SNOW*]{}]{} and allows more expressive output representation than we allow here. The output representation of all the approaches compared is a linear separator.
The results presented in Table \[tab:spell\] for [NB]{} and [[*SNOW*]{}]{} are the (weighted) average results of $21$ confusion sets, 19 of them are of size $2$, and two of size $3$. The results presented for the [TBL]{}[^12] method are taken from [@ManguBr97] and represent an average on a subset of $14$ of these, all of size $2$.
Sets Cases Baseline [NB]{} [TBL]{} [[*SNOW*]{}]{}
------------ ------- ---------- -------- --------- ----------------
[**14**]{} 1503 71.1 89.9 88.5 93.5
[**21**]{} 4336 74.8 93.8 96.4
: [**[Spell]{} System comparison**]{}. The second column gives the number of test cases. All algorithms were trained on 80% of Brown and tested on the other 20%; Baseline simply identifies the most common member of the confusion set during training, and guesses it every time during testing.
\[tab:spell\]
### Prepositional Phrase Attachment
The problem is to decide whether the Prepositional Phrase (PP) attaches to the noun phrase, as in [Buy the car with the steering wheel]{}, or to the verb phrase, as in [Buy the car with his money]{}. Earlier works on this problem [@RRR94; @BrillRe94; @CollinsBr95] consider as input the four head words involved in the attachment - the VP head, the first NP head, the preposition and the second NP head (in this case, [buy, car, with]{} and [steering wheel]{}, respectively). These four-tuples, along with the attachment decision, constitute the labeled input sentence and are used to generate the feature set. The features recorded are all sub-sequences of the $4$-tuple, a total of $15$ for every input sentence. The data set used by all the systems in this comparison was extracted from the Penn Treebank WSJ corpus by [@RRR94]. It consists of 20801 training examples and 3097 separate test examples. In a companion paper we describe an extensive set of experiments with this and other data sets, under various conditions. Here we present only the bottom line results that provide direct comparison with those available in the literature[^13]. The results presented in Table \[tab:pp\] for [NB]{} and [[*SNOW*]{}]{} are the results of our system on the 3097 test examples. The results presented for the [TBL]{} and [BO]{} are on the same data set, taken from [@CollinsBr95].
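As a small illustration (our code, not the authors'), the $15$ features of an input sentence can be generated as all non-empty, order-preserving sub-sequences of the head-word $4$-tuple:

```python
from itertools import combinations

def ppa_features(v, n1, p, n2):
    """All non-empty sub-sequences of the (verb, noun1, preposition, noun2) 4-tuple."""
    tup = (v, n1, p, n2)
    feats = []
    for k in range(1, len(tup) + 1):
        for idx in combinations(range(len(tup)), k):  # combinations preserve the original order
            feats.append(tuple(tup[i] for i in idx))
    return feats

print(len(ppa_features("buy", "car", "with", "money")))  # -> 15
```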
------- ---------- -------- --------- -------- ----------------
Test Baseline [NB]{} [TBL]{} [BO]{} [[*SNOW*]{}]{}
cases
3097 59.0 83.0 81.9 84.1 83.9
------- ---------- -------- --------- -------- ----------------
: [**[PPA]{} System comparison**]{}. All algorithms were trained on 20801 training examples from the WSJ corpus and tested on 3097 previously unseen examples from this corpus; all the systems use the same feature set.
\[tab:pp\]
### Part of Speech Tagging
A part of speech tagger assigns each word in a sentence the part of speech that it assumes in that sentence. See [@Brill95] for a survey of much of the work that has been done on [POS]{} in the past few years. Typically, in English there will be between $30$ and $150$ different parts of speech depending on the tagging scheme. In the study presented here, following [@Brill95] and many other studies, there are $47$ different tags. Part-of-speech tagging poses a special challenge to our approach, as the problem is a multi-class prediction problem [@RothZe98]. In the [[*SNOW*]{}]{} architecture, we devote one linear separator to each pos tag and each subnetwork learns to separate its corresponding pos tag from all others. At run time, all class nodes process the given sentence, applying many classifiers simultaneously. The classifiers then compete for deciding the pos of this word, and the node that records the highest activity for a given word in a sentence determines its pos. The methods compared use context and collocation features as in [@Brill95].
Given a sentence, each word in the sentence is assigned an initial tag, based on the most common part of speech in the training corpus. Then, for each word in the sentence, the network processes the sentence, and makes a suggestion for the pos of this word. Thus, the input for the predictor is noisy, since the initial assignment is not accurate for many of the words. This process can repeat a few times, where after predicting the pos of a word in the sentence we re-compute the new feature-based representation of the sentence and predict again. Each time the input to the predictors is expected to be slightly less noisy. In the results presented here, however, we present the performance without the re-cycling process, so that we maintain the linear function expressivity (see [@RothZe98] for details).
The results presented in Table \[tab:pos\] are based on experiments using $800,000$ words of the Penn Treebank Tagged WSJ corpus. About $550,000$ words were used for training and $250,000$ for testing. [[*SNOW*]{}]{} and [TBL]{} were trained and tested on the same data.
--------- ---------- --------- ----------------
Test Baseline [TBL]{} [[*SNOW*]{}]{}
cases
250,000 94.4 96.9 96.8
--------- ---------- --------- ----------------
: [**[POS]{} System comparison**]{}. The first column gives the number of test cases. All algorithms were trained on $550,000$ words of the tagged WSJ corpus. Baseline simply predicts according to the most common pos tag for the word in the training corpus.
\[tab:pos\]
Conclusion
==========
We presented an analysis of a few of the commonly used statistics-based and machine learning algorithms for ambiguity resolution tasks. We showed that all the algorithms investigated can be re-cast as learning linear separators in the feature space. We analyzed the complexity of the function space in which each of these methods searches, and showed that they all search a space that is as complex as the space of all linear separators. We used these observations to motivate our approach of learning a sparse network of linear separators ([[*SNOW*]{}]{}), which learns a network of linear separators by utilizing the Winnow learning algorithm. We then presented an extensive experimental study comparing the [[*SNOW*]{}]{}-based algorithms to other methods studied in the literature on several well-studied disambiguation tasks. We presented experimental results on [Spell]{}, [PPA]{} and [POS]{}. In all cases we showed that our approach either outperforms other methods tried for these tasks or performs comparably to the best. We view this as strong evidence that this approach provides a unified framework for the study of natural language disambiguation tasks.
The importance of providing a unified framework stems from the fact that essentially all the ambiguity resolution problems addressed here are at the lower levels of the natural language inference chain. A large number of different kinds of ambiguities are to be resolved simultaneously in performing any higher level natural language inference [@Cardie96]. Naturally, these processes, acting on the same input and using the same “memory”, will interact. A unified view of ambiguity resolution within a single architecture is valuable if one wants to understand how to put together a large number of these inferences, study the interactions among them, and make progress towards using them in performing higher level inferences.
Blum, A. 1995. Empirical support for [W]{}innow and weighted-majority based algorithms: results on a calendar scheduling domain. In [*Proc. 12th International Conference on Machine Learning*]{}, 64–72. Morgan Kaufmann.
Brill, E., and Resnik, P. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In [*Proc. of COLING*]{}.
Brill, E. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. 21(4):543–565.
Cardie, C. 1996. . Springer. 315–328.
Charniak, E. 1993. . MIT Press.
Chen, S., and Goodman, J. 1996. An empirical study of smoothing techniques for language modeling. In [*Proc. of the Annual Meeting of the ACL*]{}.
Collins, M., and Brooks, J. 1995. Prepositional phrase attachment through a backed-off model. In [*Proceedings of the Third Workshop on Very Large Corpora*]{}.
Duda, R. O., and Hart, P. E. 1973. . Wiley.
Gale, W. A.; Church, K. W.; and Yarowsky, D. 1993. A method for disambiguating word senses in a large corpus. 26:415–439.
Golding, A. R., and Roth, D. 1996. Applying winnow to context-sensitive spelling correction. In [*Proc. of the International Conference on Machine Learning*]{}, 182–190.
Golding, A. R., and Roth, D. 1998. A winnow-based approach to word correction. . Special issue on Machine Learning and Natural Language; to appear.
Golding, A. R. 1995. A bayesian hybrid method for context-sensitive spelling correction. In [*Proceedings of the 3rd workshop on very large corpora, ACL-95*]{}.
H[ö]{}ffgen, K., and Simon, H. 1992. Robust trainability of single neurons. In [*Proc. 5th Annu. Workshop on Comput. Learning Theory*]{}, 428–439. New York, New York: ACM Press.
Katz, S. M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. 35(3):400–401.
Kearns, M., and Schapire, R. 1994. Efficient distribution-free learning of probabilistic concepts. 48:464–497.
Kearns, M., and Vazirani, U. 1994. . MIT Press.
Kivinen, J., and Warmuth, M. K. 1995. Exponentiated gradient versus gradient descent for linear predictors. In [*Proceedings of the Annual ACM Symp. on the Theory of Computing*]{}.
Krymolovsky, Y., and Roth, D. 1998. Prepositional phrase attachment. In Preparation.
Littlestone, N. 1988. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. 2:285–318.
Littlestone, N. 1991. Redundant noisy attributes, attribute errors, and linear threshold learning using [W]{}innow. In [*Proc. 4th Annu. Workshop on Comput. Learning Theory*]{}, 147–156. San Mateo, CA: Morgan Kaufmann.
Mangu, L., and Brill, E. 1997. Automatic rule acquisition for spelling correction. In [*Proc. of the International Conference on Machine Learning*]{}, 734–741.
Ratnaparkhi, A.; Reynar, J.; and Roukos, S. 1994. A maximum entropy model for prepositional phrase attachment. In [*ARPA*]{}.
Rivest, R. L. 1987. Learning decision lists. 2(3):229–246.
Roth, D., and Zelenko, D. 1998. Part of speech tagging using a network of linear separators. Submitted.
Roth, D. 1998. Learning to resolve natural language ambiguities:a unified approach. Long Version, in Preparation.
Samuelsson, C. 1996. Handling sparse data by successive abstraction. In [*Proc. of COLING*]{}.
Valiant, L. G. 1984. A theory of the learnable. 27(11):1134–1142.
Valiant, L. G. 1994. . Oxford University Press.
Valiant, L. G. 1998. Projection learning. In [*Proc. of the Annual ACM Workshop on Computational Learning Theory*]{}, xxx–xxx.
Vapnik, V. N. 1982. . New York: Springer-Verlag.
Vapnik, V. N. 1995. . New York: Springer-Verlag.
Yarowsky, D. 1994. Decision lists for lexical ambiguity resolution: application to accent restoration in [Spanish]{} and [French]{}. In [*Proc. of the Annual Meeting of the ACL*]{}, 88–95.
Yarowsky, D. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In [*Proceedings of ACL-95*]{}.
Zavrel, J.; Daelemans, W.; and Veenstra, J. 1997. Resolving pp attachment ambiguities with memory based learning.
Zelle, J. M., and Mooney, R. J. 1996. Learning to parse database queries using inductive logic programming. In [*Proc. National Conference on Artificial Intelligence*]{}, 1050–1055.
[^1]: The basic unit studied can be a paragraph or any other unit, but for simplicity we will always call it a sentence.
[^2]: This is usually not made explicit in statistical learning procedures, but is assumed there too.
[^3]: There are many ways to define features and order relations among them (e.g., restricting the number of tokens in a feature, enforcing sequential order among them, etc.). The following discussion does not depend on the details; one option is presented to make the discussion more concrete.
[^4]: Problems of sparse data may arise, though, when a specific value of $x_i$ observed in testing has occurred infrequently in the training, in conjunction with $c_j$. Various [*smoothing*]{} techniques can be employed to get more robust estimations but these considerations will not affect our discussion and we disregard them.
[^5]: The assumption that the maximal order feature is the classified sentence is made, for example, in [@CollinsBr95]. In general, the method deals with multiple features of the maximal order by assuming their conditional independence, and superimposing the [NB]{} approach.
[^6]: There, the empirical ratios are smoothed; experimentally, however, this yields only a slight improvement, going from 83.7% to 84.1%, so we present it here in the pure form.
[^7]: The consequent is sometimes described as a [*transformation*]{} $c_i {\rightarrow}c_j$, with the semantics – if the current label is $c_i$, relabel it $c_j$. When $|C|=2$ it is equivalent to simply using $c_j$ as the consequent.
[^8]: Notice, the order of the features is reversed. Also, multiple occurrences of features can be discarded, leaving only the last rule in which this feature occurs. By “positive” we mean that we never condition on the absence of a feature, only on its presence.
[^9]: In practice, there is no need to use this representation, given the efficient way suggested above to evaluate the classifier. In addition, very few of the features in ${{\cal F}}$ are active in every example, yielding more efficient evaluation techniques (e.g., [@Valiant98]).
[^10]: This is hard to define in the context of natural language; typically, this is understood as texts of similar nature; see a discussion of this issue in [@GoldingRo96].
[^11]: Although for the purpose of the experimental study we do not update the network while testing.
[^12]: Systems are compared on the same feature set. [TBL]{} was also used with an enhanced feature set [@ManguBr97] with improved results of 93.3% but we have not run the other systems with this set of features.
[^13]: [[*SNOW*]{}]{} was evaluated with an enhanced feature set [@KrymolovskyRo98] with improved results of 84.8%. [@CollinsBr95] reports results of 84.4% on a different enhanced set of features, but other systems were not evaluated on these sets.
|
---
abstract: 'This paper describes the subjective experiments and subsequent analysis carried out to validate the application of one of the most robust and influential video quality metrics, Video Multimethod Assessment Fusion (VMAF), to 360VR contents. VMAF is a full reference metric initially designed to work with traditional 2D contents. Hence, at first, it cannot be assumed to be compatible with the particularities of the scenario where omnidirectional content is visualized using a Head-Mounted Display (HMD). Therefore, through a complete set of tests, we prove that this metric can be successfully used without any specific training or adjustments to obtain the quality of 360VR sequences actually perceived by users.'
author:
- 'Marta Orduna, César Díaz, Lara Muñoz, Pablo Pérez, Ignacio Benito, and Narciso García[^1][^2][^3][^4]'
bibliography:
- 'bibliografia.bib'
title: 'Video Multimethod Assessment Fusion (VMAF) on 360VR contents'
---
VMAF, 360VR content, video quality, subjective experiments.
Introduction
============
Virtual Reality (VR) applications try to provide an immersive experience to the user by creating a realistic-looking world, which can be static or responsive to the user’s actions [@burdea2003virtual]. Among the available sensory feedbacks, visual information is clearly the most important one to help the perception of being physically present in a non-physical world [@mirror]. However, the rendering of high quality video imposes critical technical restrictions. On the one hand, its synthesis demands important computational resources and, on the other hand, its transmission requires very high bit rates. While local computing power seems to be widely available, video delivery assuring the suppression of incompatible sensory input does not [@schubert]. So, many VR applications have been restricted to operate with local video information, although synthesized video could be generated online from delivered abstract representations. Moreover, the synchronized presentation of all multimedia data streams should not be forgotten [@yuan].
Recently, 360VR content has emerged as one of the most relevant scenarios related to VR. Specifically, its visualization by a Head-Mounted Display (HMD) allows a 3 Degrees-of-Freedom (DoF) scenario, used by a wide variety of applications in very different areas like education, medicine, or entertainment. Although some applications consider locally hosted content, leading-edge proposals require content located elsewhere, either stored or live recorded, and streamed to the client whenever required. Adaptive Bit Rate (ABR) streaming techniques are widely used [@abr], but the delivery of omnidirectional content with an acceptable quality is still a challenge in this scenario due to the amount of resources required. Typically, contents with at least 4K resolution and 60 fps are required to provide good Quality of Experience (QoE), guaranteeing an immersive and engaging experience [@sreedhar2016viewport; @el2016streaming; @highframerate]. However, these high requirements in terms of image resolution and encoding quality lead to very high bit rates when they are encoded and delivered in the same way as traditional 2D content. This fact presents a serious problem considering the bandwidth required to stream this kind of content [@diaz2018viability].
Therefore, to relax these strict conditions, different approaches can be considered. First, the design of new quality ladders leading to different perceptible levels of quality in 360VR contents. Second, efficient delivery schemes that take advantage of the intrinsic characteristics and nature of 360VR visualization, in the form of HMDs. In particular, existing schemes are typically based on the fact that only a portion of the received 360VR sequence, called Field Of View (FOV), is viewed by the user, and the specific portion depends on his/her point of view with respect to the scene at that particular moment [@willemsen2009effects; @gaddam2016tiling]. Therefore, only the area that is viewed by the user needs to be provided with high quality, decreasing the required overall bit rate. Moreover, other approaches take into account the users’ behavior assuming that users tend to look at certain orientations or elements in the scene with higher probability than others. In this case, the content is prepared considering saliency or attention maps, leading to a better use of the bit rate [@aladagli2017predicting], [@rai2017dataset]. Additionally, other proposals exploit the peculiarities of the type of projection used to map the spherical image onto before the encoding and transmission processes: equirectangular, cubemap, pyramidal, equiangular cubemap... [@el2016streaming; @corbillon2017viewport; @brown2017bringing]. Indeed, each projection impacts in a different way the quality of the different areas of the omnidirectional image. These proposals then aim at smartly differentiating and handling the information, mostly spatially, in terms of coding and/or transmission, so as to provide QoE to users and save bit rate simultaneously.
All these approaches require a quality metric that offers reliable results in the sense that it should be able to capture the quality actually perceived by users when these strategies are put into practice with several targets: test the strategy itself and properly select and adjust the parameters that influence its performance. Thus, a significant effort has been made to adapt some of the most popular and useful quality metrics of the traditional 2D world to 360VR scenarios.
Indeed, there exist several works in the literature referring to modifications of the Peak Signal-to-Noise Ratio (PSNR) metric to fit the specific features of 360VR content. Specifically, Lakshman et al. [@yu2015framework] proposed a method called Sphere based PSNR computation (S-PSNR) where the distorted frame is projected onto a sphere before computing its distortion. In this way, for each projected point on the sphere, the associated pixels in the plane domain are calculated to compute the PSNR. Based on the S-PSNR, other methods have targeted the approximation of the average quality over all possible user points of view, which are related to different viewports, weighting the values obtained for a given viewport taking into account the probability of the users looking in that direction. For instance, Sun et al. [@wspsnr] proposed the use of the Weighted to Spherically PSNR (WS-PSNR) metric, where the weights assigned to an area decrease as this area gets away from the equator and closer to the poles. Similarly, Zakharchenko et al. [@cpp] proposed the Craster Parabolic Projection PSNR (CPP-PSNR) metric, where the weights are assigned to different areas based on the craster parabolic projection. In contrast, Ghaznavi et al. [@ghaznavi2016360] introduced the Uniformly Sampled Spherical PSNR (USS-PSNR) metric, where a uniform and equal-weight sampling of the decoded video on the sphere is implemented. Hence, the sample density changes based on latitude and longitude. Anyhow, these metrics still share the main problem of the original PSNR: they do not take into account any Human Visual System (HVS) characteristics.
With the aim of including subjective aspects in the way video quality is measured, a Multi-Scale SSIM (MS-SSIM) extension was proposed by Wang et al. [@wang2003multiscale]. Starting from the comparison of the three traditional SSIM terms (luminance, contrast, and structure) between the original and the distorted sequences, this extension incorporated information regarding image details at different resolutions and several viewing conditions that some subsequent works have adjusted to be used with omnidirectional content. Specifically, the version by Corbillon et al. [@corbillon2017viewport] uses different encoded versions of the same viewport, whereas the proposal by Tran et al. [@tran2018study] is based on different encoded versions of the whole 360VR scene. Nevertheless, although the approximation of the perceived quality carried out by MS-SSIM is in general acceptable and outperforms the results of PSNR, the complexity of applying this index to omnidirectional contents of high resolutions complicates its use [@tran2018study].
Based on this overview, none of the modifications of traditional objective metrics offers useful enough evaluations in terms of reliability and resource consumption. For this reason, we have focused our work on the extension to omnidirectional video of one of the most influential metrics used today for traditional contents: the Video Multimethod Assessment Fusion (VMAF) metric developed by Netflix [@liu2013visual; @lin2014fusion; @vmafblog]. VMAF is a Full-Reference (FR) metric based on different elementary metrics combined by a machine-learning algorithm, offering a good prediction of the human quality perception [@vmafblog]. Recent studies have validated its direct use in environments different from the one it was intended for, without any specific training in this sense. Concretely, Rassol et al. [@vmafvalidation4k] carried out subjective quality tests to validate the application of VMAF to traditional contents with 4K resolution, a resolution for which the metric is not trained, obtaining good results when trying to predict the VMAF score. Bampis et al. [@bampis2017learning] used the dataset created for VMAF to implement their quality predictor and compare the results obtained by VMAF with other typical metrics. Likewise, Bampis et al. [@vmafextensionSTVMAF] proposed the SpatioTemporal-VMAF (ST-VMAF), an extension to the VMAF metric that expands the analysis of temporal features in video sequences to enhance the metric results. The significantly good results provided by VMAF with different types of non-immersive contents and viewing conditions led to considering its application without making any specific adjustments to assess omnidirectional content, thus avoiding generating a large and rich specific 360VR video dataset, carrying out numerous subjective quality assessments and performing the corresponding training and testing stages, hence saving time and resources.
The rest of the paper is structured as follows. Section \[sec:work\_approach\] describes in detail the approach we have taken to validate the use of VMAF to assess the quality of 360VR content. Section \[sec:application\_vmaf\_360vr\] introduces the first stage of our procedure. In Section \[sec:validation\_vmaf\_360vr\], we present the preparation, carrying out and analysis of the subjective quality assessment to validate the use of VMAF for 360VR contents. At the end of this paper, Section \[sec:conclusions\] summarizes general conclusions.
Work approach {#sec:work_approach}
=============
The objective of this work is the validation of the direct application of the FR VMAF metric to omnidirectional content without any specific training or adaptations in this sense. To do so, we presume that there is a monotonic relationship between the well-known application of VMAF on traditional 2D contents and its proposed new application on 360VR contents. Therefore, the validation can be carried out on a reduced set of adequately selected values.
The validation is performed in two steps. First, we encode a number of 360VR Source Sequences (SRCs) with constant Quantization Parameter (QP) covering the whole range of possible values. Later on, we simply apply the original VMAF metric to these Processed Video Sequences (PVS’s) to obtain the variation of the score with the encoding parameter. It is posed in this way considering the high impact of QP on the QoE of the users. Indeed, in general, the higher the QP value is, the less detail is retained in the encoded image. This usually translates into lower QoE but also lower bit rate usage [@qp]. Secondly, we verify through subjective tests that the users’ perception matches, with minimum discrepancy, the VMAF scores obtained in the first step. Instead of performing a sweep over the whole range of QP values, i.e., showing to the user all the PVS’s created in the first step, to search for Just-Noticeable Differences (JND), we present a subset of them and analyze their responses in terms of average rate and tendency. We do so assuming that the VMAF-vs-QP curve is monotonically decreasing by the nature of the encoding. This fact enables the possibility of adjusting it with a finite number of key operating points. These points correspond to anchor VMAF scores in the curve for all the used contents.
We have focused on the equirectangular projection for our study, as it is the one most commonly used today. In addition, attention maps obtained from ordinary 360VR content visualization sessions show that users tend to look at the areas near the equator with a higher probability [@maugey2017saliency]. Since the distortion introduced by the equirectangular projection is far lower in these areas, the characteristics of the zone are closer to those of traditional 2D contents. Thus, we can assume that a robust metric designed for 2D content can offer acceptable results for common 360VR content with homogeneous encoding.
The following sections describe in depth both steps.
Application of VMAF to 360VR contents {#sec:application_vmaf_360vr}
=====================================
Here, we present and show the reasons why we use this process through which we obtain the reference VMAF-vs-QP curve for 360VR contents. It is divided into two main parts: the test material subsection, where the created database and the main features of the SRCs are presented, and the experimental results subsection, where the VMAF scores are presented and analyzed.
![Spatial and Temporal Information indicators for all contents.\[fig:SI\_TI\_DATA\]](images/si-ti.png){width="\columnwidth"}
Test material
-------------
The first step for this analysis was to prepare a wide range of 360VR contents selected with different features in terms of color, texture, camera motion, composition, and type of content in the scenes [@machajdik2010affective], in accordance to Recommendation ITU-R BT.500-13 [@bt2012methodology]. It is important to note the relevance of the dataset content selection because a varied material with an absence of defects must be assured to obtain stable results in this analysis and in the following subjective analysis (Section \[sec:validation\_vmaf\_360vr\]). Additionally, the selected clips should not present any relevant changes between frames, avoiding the need for an accurate temporal pooling mechanism. Furthermore, a minimum level of visual comfort should also be guaranteed. To that end, we did not consider clips that included abrupt movements in the scene, a poor stitching, or unbearable effects that could disturb subjects and affect their rates.
We used nine SRCs from as many immersive VR video sources in equirectangular format. Seven of them were obtained from a database made publicly available by the Virtual Human Interaction Lab from Stanford University [@databaseclips] and one from the dataset for exploring user behaviors in VR spherical video streaming created by Wu et al. [@femaledataset]. The last one came from a private source. Figure \[fig:frames\_Example\] depicts descriptive screenshots of the first eight sequences. All nine clips had a duration of 10 seconds. This length is justified by previously conducted studies conclusions, where we detected that this is the average time that it usually takes users to properly explore a 360VR scene, that is, to find and check the anchor points in the 360 scene that he/she uses as reference to compare the quality between different versions. Furthermore, this duration matches the suggestions of Recommendation ITU-R BT.500-13 [@bt2012methodology]. Moreover, the original resolution of all the sequences is 4K (3840x1920), which was kept constant for all tested qualities throughout the experiment. As the original sources had different framerates, all clips were changed to 25 fps to build a homogeneous dataset. We next describe the main characteristics of the selected contents:
1. “AbandonedBuilding”: it is mainly a static content with notable texture. The only motion in the scene is related to a moving curtain.
2. “Alaska”: this content’s main feature relies on the motion of the camera, since it is on a sailing boat.
3. “Beach”: this content presents a typical beach landscape. The most relevant feature is the appearance of some titles, which can capture the user’s attention.
4. “CaribbeanVacation”: this content shows a cruise with people in a cafeteria. A video is shown to the public, trying to capture the attention of the user.
5. “FemaleBasket”: this content presents a basketball game with people cheering.
6. “Happyland”: this content is characterized by the proximity of some children moving around the camera.
7. “Sunset”: this content can be considered an exploratory content. The camera is on a sailing cruise. However, the camera motion is not perceptible because of the height of the cruise.
8. “Waterfall”: this content shows a landscape with a quite large waterfall that is rather close to the camera.
9. “Lions”: this content shows a lion moving very close around the camera.
All SRCs were characterized in terms of their spatial and temporal complexity, using the Spatial Information (SI) and Temporal Information (TI) indicators, respectively, as expressed in Recommendation ITU-T P.910 [@itutp910]. Their values are presented in Figure \[fig:SI\_TI\_DATA\].
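A minimal sketch of how these indicators can be computed (our own implementation of the usual reading of Recommendation ITU-T P.910, operating on luminance frames given as 2-D float arrays) is:

```python
import numpy as np
from scipy import ndimage

def si_ti(frames):
    """SI: max over time of the spatial std of the Sobel-filtered frame.
    TI: max over time of the spatial std of the frame difference (needs >= 2 frames)."""
    si_vals, ti_vals = [], []
    prev = None
    for f in frames:
        gx = ndimage.sobel(f, axis=1)
        gy = ndimage.sobel(f, axis=0)
        si_vals.append(np.sqrt(gx ** 2 + gy ** 2).std())
        if prev is not None:
            ti_vals.append((f - prev).std())
        prev = f
    return max(si_vals), max(ti_vals)
```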
To obtain the full range of scores, all SRCs were encoded with ITU-T H.265/High Efficiency Video Coding (HEVC) using fixed QPs ranging from 1 to 51 [@ituh265]. As a result, we obtained 51 PVSs per SRC with bit rates ranging from 310 Mbps to 370 kbps. A summary of the created dataset is presented in Table \[tab:dataset\_qptest\]. This set of 459 (51 times 9) sequences were the inputs to the VMAF computing algorithm.
---------------------------------------- -----------------
Number of reference videos               9
Duration                                 10 seconds
Encoding                                 H.265/HEVC
Resolution                               4K (3840x1920)
Hypothetical Reference Circuits (HRCs)   QP range (1-51)
Framerate                                25 fps
---------------------------------------- -----------------

\[tab:dataset\_qptest\]
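A sketch of how the fixed-QP PVS’s could be generated (our illustration, assuming an ffmpeg build with libx265; the file names are placeholders and the exact flags may need adjusting for a particular ffmpeg version) is:

```python
import subprocess

def encode_fixed_qp(src, qp, out):
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-an",                        # discard audio
        "-r", "25",                   # 25 fps, as in the dataset
        "-c:v", "libx265",
        "-x265-params", f"qp={qp}",   # constant-QP HEVC encoding
        out,
    ]
    subprocess.run(cmd, check=True)

# 51 PVS's per SRC, QP = 1..51
for qp in range(1, 52):
    encode_fixed_qp("Alaska.mp4", qp, f"Alaska_qp{qp:02d}.mp4")
```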
VMAF results\[sec:objresults\]
------------------------------
In this subsection, we present the results of computing the VMAF metric on the whole set of PVSs. To that end, we used the VMAF Development Kit (VDK), which is available in a public repository [@vmafgithub]. In particular, we employed VDK version 1.3.3 and VMAF version 0.6.1. As justified previously, due to the absence of scene changes in the selected clips, the arithmetic mean was used as the temporal pooling mechanism, since it is a representative value for those sequences.
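For instance, the sequence-level score can be obtained by arithmetic-mean pooling of the per-frame scores reported by the VDK (e.g., by its run_vmaf script); the sketch below assumes a JSON report with a per-frame list under a "frames" key and a per-frame score field whose exact name depends on the VDK version.

```python
import json

def pooled_vmaf(json_path, frame_key="VMAF_score"):
    # frame_key is an assumption: different VDK versions name the per-frame field differently
    with open(json_path) as fh:
        report = json.load(fh)
    scores = [frame[frame_key] for frame in report["frames"]]
    return sum(scores) / len(scores)   # arithmetic-mean temporal pooling
```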
Figure \[fig:vmaf\_values\] shows the VMAF final scores for all contents in the whole range of tested QP values. It can be seen that the quality measured by this metric decreases monotonically with QP. Furthermore, the curve decreases slightly for the highest qualities (low QP values), more sharply for medium qualities (medium QP values), and dramatically for low qualities (high QP values). Besides, as already mentioned, the effect of changing the QP value varies with the characteristics of the content, resulting in a different VMAF curve for each of the SRCs.
Validation of VMAF for 360VR contents through subjective quality assessment\[sec:validation\_vmaf\_360vr\]
==========================================================================================================
We describe in this section the subjective quality test conducted to validate the results obtained with VMAF. As mentioned above, VMAF is a metric prepared to work with traditional 2D contents. In this work, we evaluate to what extent it can be used with omnidirectional contents. To that end, we designed an experiment consisting of presenting to a number of subjects the subset of the PVS’s used in the previous step that are located closest to several strategic VMAF scores. For each version, subjects were asked to evaluate the perceived quality. In this way, we obtained subjective quality rates for those strategic points along the QP range. These evaluations are used to check how close the given rates are to the objectively computed VMAF scores for 360VR contents.
Furthermore, it is worth mentioning that, to date, there are no official recommendations on subjective methodologies to measure the QoE in 360VR scenarios, where HMDs are used as displays. Indeed, the first draft created by ITU-T Study Group 12 [@G360VR] was published in early 2018, but the final document is still under development. Therefore, the subjective assessment carried out in this work is based on the information obtained from recommendation documents related to traditional contents, which have been extensively tested: ITU-R BT.500-13 [@bt2012methodology], ITU-T P.910 [@itutp910], and ITU-T P.913 [@p913].
Test Material
-------------
As mentioned, the test material for the subjective quality assessment is a subset of the PVS’s generated and used in the previous step. In particular, we used the PVS’s corresponding to six different quality levels: five distorted sequences and one reference sequence. So, a total of 54 PVS’s (six qualities times nine SRCs) are presented to each subject. Concretely, considering the VMAF curve in Figure \[fig:vmaf\_values\], the five distorted PVS’s selected in the validation step are those closest to the following key VMAF scores:
- VMAF equal to 90. This value is located where the curve begins to decrease slightly.
- VMAF equal to 80 and 70. These values are located where the curve decreases more sharply.
- VMAF equal to 50 and 30. These values are located where the curve decreases more dramatically.
Additionally, with respect to the reference sequences, on the one hand, we have no access to the original raw videos, but to encoded, and therefore degraded to some extent, sequences. On the other hand, references must comply with the same restrictions as the rest of the sequences in the experiment, namely, that they are encoded using a fixed uniform QP value. Therefore, we cannot directly use the available SRCs, but clips picked from the already generated PVS’s database. So, for each content, we have selected a reference that scores higher than 90 on the VMAF scale, since the reference clip needs to offer the best quality presented to the user during the test, and which is encoded, when possible, with a similar bit rate to that of the original video. In this way, all selected sequences provide VMAF scores that range between 92 and 95.
The six qualities are denoted from A to F, where A is the reference (best quality version), and B to F are the five distorted versions associated with the VMAF scores 90, 80, 70, 50, and 30, respectively.
Equipment
---------
The tests have been carried out on a Samsung Galaxy S8 smartphone with the latest model of the Samsung Gear VR glasses. This decision is based on the fact that consumer electronics devices are the most widely used for 360VR content visualization [@device].
Environment
-----------
The test area is set according to ITU-R BT.500-13 [@bt2012methodology], creating an immersive space around the subject. In the set environment, we use a common HMD which only tracks the rotational movements [@el2016streaming]. That is, it provides the 3 DoF that characterize this scenario. The selected location is in the middle of a room where the subject has no limitations to spin around.
The position is an important component of these subjective tests. For that, a swivel chair is used, as this kind of chair allows subjects to move rather freely to see around them, facilitating the exploration of content. Naturally, the same chair is used for all subjects.
Observers
---------
A total of 24 observers (8 females, 16 males) participated in this experiment. All of them had normal or corrected vision (glasses or contact lenses are compatible with the equipment). The age of the subjects ranges from 21 to 36, with an average age of 26. Furthermore, based on the Pearson correlation between the scores provided by each subject and the average over all subjects, no subject was considered an outlier and removed [@p913].
Methodology
-----------
A Single-Stimulus (SS) method is applied in this experiment, specifically the ACR-HR (Absolute Category Rating with Hidden Reference) [@itutp910]. In the conducted tests, there is no training session in terms of showing the expected maximum and minimum qualities to the subjects, because we want to observe the real absolute quality that they perceive. In this sense, it is possible that the best quality offered is not considered excellent by most subjects because of several factors: the specific features of the devices employed in the experiments, the network limitations, the quality of the original videos, and others. These effects can later be considered, or even partially cancelled, in the subsequent analysis, thanks to the use of the hidden references. Indeed, with this method, a reference version of each content is randomly presented to subjects, without being paired with any distorted versions, and it is rated like any other [@p913]. Later on, we can use the rates given to these hidden references to restrict as much as possible the exogenous factors listed above during the analysis of the results.
The ACR-HR method uses the same five-level rating scale as the ACR method. According to Recommendation ITU-T P.910 [@itutp910], the numbers may only optionally be displayed on the scale. Here, in our experiment, only the category (“Excellent”, “Good”, “Fair”, “Poor” and “Bad”) is displayed.
Test session
------------
Subjects use an application program, developed for this purpose, that allows them to watch the contents and rate them one after the other without having to remove the glasses or interact outside the 360VR environment [@applicationtest]. This app thus enables a more immersive and engaging experience for the subjects. Subjects are instructed at the beginning of the test session and guided if they have any problems with the app or the methodology during the test.
Each test session is composed of a total of 54 video clips (45 distorted and 9 reference videos) with a duration of 10 seconds each one. All videos are viewed by every subject. The duration of the whole test is around 15 minutes, assuming a period of approximately 5 seconds to vote each video clip. The voting period length is user-driven and so is not limited beforehand.
A different randomization of the PVS’s is used for each session to reduce contextual effects. An observer can watch the same quality in two consecutive videos. However, subjects cannot watch the same clip with different qualities consecutively.
--------------------- ---------------------- ------------------ ---------------------- ------------------
**PEARSON** **PEARSON** **RMSE** **RMSE**
(QB, QC, QD, QE, QF) (QB, QC, QD, QE) (QB, QC, QD, QE, QF) (QB, QC, QD, QE)
*AbandonedBuilding* 0.995 0.997 3.433 1.983
*Alaska* 0.992 0.994 5.661 2.488
*Beach* 0.992 0.991 4.213 2.470
*CaribbeanVacation* 0.961 0.997 6.982 6.787
*FemaleBasket* 0.984 1.000 7.097 1.764
*Happyland* 0.940 0.979 9.338 9.991
*Lions* 0.987 0.997 4.029 4.446
*Sunset* 0.996 0.998 5.016 5.490
*Waterfall* 0.996 0.990 5.511 4.295
**0.983** **0.994** **5.698** **4.413**
--------------------- ---------------------- ------------------ ---------------------- ------------------
Experimental results\[sec:subjresults\]
---------------------------------------
Given that the ACR-HR method was implemented, both the Mean Opinion Score (MOS) and the Differential Quality Score (DMOS) are computed from the evaluations provided by the subjects. The final scores per content and quality are depicted in Figure \[fig:mos\] and Figure \[fig:dmos\], respectively. Moreover, 95% Confidence Intervals (CI) are included to properly measure the agreement between subjects [@streijl2016mean], according to the Recommendation ITU-R BT.500-13 [@bt2012methodology].
In Figure \[fig:VMAF\_DMOS\], the VMAF scores and the normalized DMOS are presented for each content. So, we can properly and easily compare the curves obtained for each of the measurements. To compute the normalized values, we have considered that the normalized DMOS associated with the reference clip equals the specific VMAF score of this sequence, and the rest of the values are calculated from it. In this way, we completely remove all external influences in the quality perceived by users. It is worth mentioning that the absence of raw video sources in our test material influences our analysis in terms of the choice of the reference sequence for the subjective assessment and, consequently, the DMOS normalization. However, the alternative of acquiring a new specific database of raw video sources, with its associated problematic acquisition and stitching processes, is beyond the scope of this work.
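A sketch of these computations (one plausible reading of the procedure, not necessarily the authors' exact implementation; the lower anchor of the rescaling is our assumption) is:

```python
import numpy as np

def dmos(ratings_pvs, ratings_ref):
    """ACR-HR differential scores (ITU-T P.910): per-subject V(PVS) - V(REF) + 5, then averaged."""
    d = np.asarray(ratings_pvs, dtype=float) - np.asarray(ratings_ref, dtype=float) + 5.0
    return d.mean()

def normalize_dmos(dmos_pvs, dmos_ref, vmaf_ref, floor=1.0):
    """Linear rescaling anchored so that the hidden reference maps onto its own VMAF score."""
    return vmaf_ref * (dmos_pvs - floor) / (dmos_ref - floor)
```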
Through the comparison of the VMAF and DMOS curves for the different contents, we can study the performance of the VMAF metric for omnidirectional content. We can see that the shape of the curves is very similar and the gap between both is quite small. Therefore, we can conclude that the subjective rates obtained in our experiment fit the VMAF scores to a great extent for almost the whole range of qualities. Only for “Happyland” and, more moderately, “CaribbeanVacation”, we can really notice a greater gap between the VMAF and DMOS curves.
Nevertheless, we can see that there is a deviation of the DMOS curves with respect to the VMAF curves in the lowest range of qualities (high QP values). The most plausible reason for this is that the perceived video quality goes into a saturation region. That is, users statistically barely perceive any differences between sequences encoded with very high QP values. It is caused by artifacts that appear and are annoying to the user, making it much more difficult for him/her to discern between such distorted contents. This saturation effect is further boosted by the characteristics of the HMD. In addition, this effect is also justified considering the computation of VMAF. The CIs associated with the VMAF score are notably higher for low qualities, decreasing the reliability of the results.
To validate these findings, we have computed the Pearson’s Linear Correlation Coefficient (PLCC) and the Root Mean Square Error (RMSE) between the VMAF and DMOS values. These results are included in Table \[tab:pearson\]. Both measures, PLCC and RMSE, are obtained for qualities ranging from B to F and also from B to E, due to the deviation commented previously. We can confirm the extremely high correlation between the VMAF scores and the normalized DMOS, which is even higher when the last QP is not considered.
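These two figures of merit can be computed straightforwardly (a routine sketch using scipy.stats.pearsonr, which returns the coefficient and a p-value):

```python
import numpy as np
from scipy.stats import pearsonr

def plcc_rmse(vmaf_scores, dmos_scores):
    v = np.asarray(vmaf_scores, dtype=float)
    d = np.asarray(dmos_scores, dtype=float)
    plcc, _ = pearsonr(v, d)
    rmse = float(np.sqrt(np.mean((v - d) ** 2)))
    return plcc, rmse
```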
Therefore, we can assure that VMAF works properly with 360VR content with homogeneous encoding, providing remarkably good results with no specific training focused on this type of content.
Conclusions\[sec:conclusions\]
==============================
We have presented an exhaustive study on the feasibility of directly applying the original VMAF metric to assess the quality of omnidirectional contents watched using an HMD. Based on the assumption that VMAF scores decrease monotonically with the QP, due to the effect of this encoding parameter on the resulting sequence, we have carried out an experiment consisting of two main steps. First, we have used the original implementation to obtain the VMAF score of a number of 360VR sequences encoded with constant QP in the whole range of possible values so as to capture how it varies with the encoding parameter. Secondly, we have validated the obtained VMAF scores through a subjective assessment. We have done so by creating a second curve per content from a finite number of scores corresponding to several operating points, which have been selected to be sufficiently spaced apart. These values are the normalized DMOS obtained in the subjective tests for the subset of input sequences encoded for the specific QP anchor points. The minimum divergence of the two curves in most cases allows us to conclude that VMAF works sufficiently well with this homogeneous 360VR content, without performing any particular adjustments to prepare the metric accordingly. Hence, one can avoid the creation of a specific dataset with rich 360VR content of an acceptable quality and the retraining of the machine learning algorithm to obtain an omnidirectional-content-aware VMAF metric, which, additionally, would be very heavy in terms of computing and time resources.
[^1]: Manuscript received xxxx, 2018; revised xxxx, 2018.
[^2]: M. Orduna, C. Díaz, L. Muñoz and N. García are with the Grupo de Tratamiento de Imágenes, Information Processing and Telecommunications Center and ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain (e-mail: [email protected], [email protected], [email protected], [email protected])
[^3]: P. Pérez and I. Benito are with Nokia Bell Labs, María Tubau 9, 28050 Madrid, Spain (e-mail: [email protected], ignacio.benito\[email protected])
[^4]: This work has been partially supported by the Ministerio de Ciencia, Innovación y Universidades (AEI/FEDER) of the Spanish Government under project TEC2016-75981 (IVME) and the Spanish Administration agency CDTI under project IDI-20170739 (AAVP).
|
---
abstract: 'Coherent time-frequency visualization reveals symmetry-selective vibronic (mixed exciton/lattice) quantum beats at cryogenic temperature after a single-cycle terahertz (THz) pumping in MAPbI$_3$ perovskite. Above a critical threshold, a Raman phonon mode distinctly modulates the very [*narrow*]{}, middle region with [*persistent*]{} coherence for more than ten times longer than the two sides that predominately couple to infrared (IR) modes. Such spectral-temporal asymmetry and selectivity are inconsistent with a single exciton model, but in excellent agreement with a simulation of the Rashba-type, [*three-fold*]{} fine structure splitting of middle optically-forbidden, dark excitonic states and two bright ones, lying above and below. These hint “Rashba engineering”, i.e., periodic brightening and modulation of the spin-split excitons and Rashba parameters, by phonon symmetry and coherence.'
author:
- 'Z. Liu$^{1\ast}$, C. Vaswani$^{1\ast}$, X. Yang$^{1}$, X. Zhao$^{1}$, Y. Yao$^{1}$, Z. Song$^{2}$, D. Cheng$^{1}$, Y. Shi$^{3}$, L. Luo$^{1}$, D. -H. Mudiyanselage$^{1}$, C. Huang$^{1}$, J.-M. Park $^{1}$, J. Zhao$^{3}$, Y. Yan$^{2}$, K.-M. Ho$^{1}$, J. Wang$^{1\dag}$'
title: 'Single-Cycle Terahertz Driven Quantum Beats Reveal Symmetry-Selective Control of Excitonic Fine Structure in Perovskite'
---
Exciton fine structure splitting (FSS) in metal halide perovskites is controlled by Rashba- [@Stranks] and/or exchange-type [@Becker] effects by lifting spin and momentum degeneracies. Although their presence still represents an outstanding problem in bulk methylammonium lead iodide (MAPbI$_3$), such effects are hypothesized to strongly influence its revolutionary magnetic and photovoltaic properties, e.g., long photocarrier lifetimes [@Zheng; @Etienne] and spin/charge diffusion lengths [@J.; @Li; @Giovanni; @spin]. Specifically, the Rashba effects in MAPbI$_3$, Fig. 1(a), lead to the excitonic FSS from spin-split bands, Fig. 1(b), where the four-fold degeneracy of the 1[*s*]{} exciton is lifted by the Rashba terms $\alpha _{e(h)} {{\vec \sigma_{e(h)}}}\cdot \hat{n}\times i\vec{\bigtriangledown}_{r_{e(h)}}$ [@Isarov]. Here $\alpha _{e}$($\alpha _{h}$) and $\sigma_{e}$($\sigma_{h}$) denote the Rashba parameter and Pauli matrices for the $\mathbf J$=1/2 ($\mathbf S$=1/2) conduction (valence) band, respectively (inset, Fig. 1(c)). The resulting lowest-lying, [*three-fold*]{} splitting is characterized as two bright exciton states lying above and below two degenerate dark states in MAPbI$_3$ [@note]. However, the challenge for identifying them lies in the small exciton FSS that is hidden beneath an inhomogeneously broadened bandgap absorption and further masked by the underlying random lattice fluctuations [@Etienne; @Beecher]. For example, the bright/dark state splitting (three dashed lines, Fig. 1(c)) is expected to be only a few nm and 10 times smaller than the broad emission bandwidth ($>$30 nm). Particularly, the presence of middle optically-forbidden, dark states (black dotted line), the hallmark spectroscopy feature, and dynamics of such internal structure, remain elusive in perovskites despite intense recent study [@Innocenzo; @Milot; @Liu; @Tom; @Isarov; @Fu; @wilhelm].
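For orientation (a back-of-the-envelope estimate, not a number quoted from the references), a splitting of $\Delta\lambda \approx 3$ nm around $\lambda \approx 755$ nm corresponds to

$$\Delta E \simeq \frac{hc}{\lambda^{2}}\,\Delta\lambda \approx \frac{1240~\mathrm{eV\,nm}}{(755~\mathrm{nm})^{2}}\times 3~\mathrm{nm} \approx 6.5~\mathrm{meV}, \qquad \nu=\frac{\Delta E}{h}\approx 1.6~\mathrm{THz} \;,$$

i.e., an energy scale comparable to the $\sim$6 meV (1.5 THz) pump spectrum and a quantum-beat period of order 0.6 ps.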
![(a) Experimental schematics. (b) The bandstructure modulation during one phonon cycle of a Raman mode $\omega_\mathrm{R}$ (supplementary [@sup]). (c) Deep-subgap THz pump spectrum (gray shade) shown together with the photoluminescence (PL) spectrum (red line) at 4.1 K. Inset: An illustration of Rashba-type MAPbI$_3$ bands. Three dashed lines under the broad PL emission linewidth mark the predicted two bright (yellow and green) and two degenerate dark states (black) (main text). (d) 2D false-colour plot of $\Delta R/R$ at 760 nm probe induced by THz pump at 4.1 K. (e) Time (at $E_{THz}=938$ kV/cm) and (f) field dependence of $\Delta R/R$ amplitudes. ](Fig1.pdf){width="90mm"}
Coherent quantum beat spectroscopy driven by an intense single-cycle THz pulse, Fig. 1(a), is extremely relevant for discovery and control of the excitonic FSS. First, the deep sub-bandgap pump spectrum centered at $\sim$6 meV or 1.5 THz (gray, Fig. 1(c)) can lead to coherent excitation of phonons [@phonon] with minimum heating and carrier generation. Such non-thermal pumping allows the determination of the genuine ground state, few nm electronic splitting, in contrast to, e.g., optical pumping that generates high density hot carriers, large $>$10s of nm excitonic shift and short-lived, sub-ps oscillations due to the strong dephasing and heating [@Miyata; @optical]. Second, the driven “ultrafast-ultrasmall" lattice displacement senses the local dynamic dielectric environment and affects excitons that can be visualized by quantum beats in [*fs*]{} and [*THz*]{} scales. Third, the dark states can be periodically brightened by symmetry breaking via coherent phonons, unlike thermal lattice motions averaging to zero. This allows the determination of the excitonic FSS by their [*symmetry-selective*]{} coupling to IR or Raman phonon modes. This is predicted (Fig. 1(b)) by three simulated snapshots of electronic bands during one phonon cycle of a specifically-tailored Raman symmetry ($\omega_\mathrm{R}$ mode $\sim$0.8 THz). This shows subcycle brightening of dark excitons by periodically modulating the Rashba parameters and spin-split bands that are not observed (Supplementary, Fig. S7).
Here we show quantum beats induced by an intense single-cycle THz field in bulk MAPbI$_3$ crystals. These beats reveal and control the excitonic FSS via its [*symmetry-selective coupling*]{} to predominantly Raman or IR phonon coherence. Fig. 1(d) shows a 2D false-colour plot of the field-dependent, transient differential reflection signals $\Delta R/R(\Delta t_{pp}, E_{THz})$ at 760 nm probe wavelength and 4.1 K, as a function of time delay $\Delta t_{pp}$ up to 5.7 ps and THz pump field $E_{THz}$ up to $\sim$1000 kV/cm. The pump excites a pronounced oscillatory behavior with complex beating patterns superimposed on a slow amplitude decay after the sub-ps pump, as shown in the $\Delta R/R$ dynamics at $E_{THz}=$938 kV/cm (Fig. 1(e)). Furthermore, the time-cut plot (red circles, Fig. 1(f)) at $\Delta t_{pp}$=3.2 ps (after the pump) clearly shows a nonlinear THz field dependence. This manifests as a critical threshold $E_{th}\sim$200 kV/cm, roughly twice the field used in a prior study [@Kim], and a complete saturation of the excitonic signals at $E_{sat}\sim$1000 kV/cm. Such THz-driven, non-perturbative quantum beats have not been observed in perovskites until now, owing to the intense THz fields achieved here [@Yang].
The spectral-temporal behavior of the THz pump-induced $\Delta R/R$ offers insights distinct from optical pumping [@Thouin; @zhai]. A white light continuum ranging from 730 nm to 770 nm is used to probe the excitonic states in MAPbI$_3$ after THz excitation at $E_{THz}$= 938 kV/cm and 4.1 K (Fig. 2(a)). The spectra exhibit a broad bi-polar lineshape (Fig. 2(b)) of $\sim$30 nm width, i.e., a positive $\Delta R/R$ change switches to negative at 755 nm. Most intriguingly, the positive ($\sim$751 nm) and negative ($\sim$757 nm) peaks and the zero crossing ($\sim$755 nm) (dashed lines) show negligible spectral shift during the large $\Delta R/R$ decay, as shown at time delays $\Delta t_{pp}$= 0.22 ps (red diamonds) and 3.7 ps (blue circles). The much narrower $\sim$3 nm separation marked in Fig. 2(b) is 10 times smaller than the PL linewidth (Fig. 1(c)) and the spectral shifts ($>$100 nm) in prior excited-state studies with optical pumping [@Innocenzo; @optical]. This implies that the [*stable*]{} three-fold positions seen by THz non-thermal pumping are generic and determined only by the [*ground-state*]{} excitonic FSS. This conclusion is further corroborated below. We present next wavelength-dependent quantum coherence and transient state relaxations that support the hidden three-fold structure inside the “single” broad exciton peak. First, persistent coherent oscillations appear only within a narrow, $\sim$3.5 nm window, as shown in the zoomed $\Delta R/R$ 2D plot near the zero-crossing position (Fig. 2(c)) of the broad, $\sim$30 nm resonance. The oscillations are very different from those outside this narrow spectral region. This becomes much more pronounced by comparing three wavelength-cut dynamics at long times: the middle, 755 nm probe trace (black, Fig. 2(d)) exhibits remarkably long-lasting quantum oscillations, shown for the first 5 ps and for extended time scales up to many tens of ps (split axis), while the 760 nm (blue, Fig. 2(e)) and 750 nm (red, Fig. 2(f)) traces on the two sides, near the negative and positive peaks (Fig. 2(b)), show much shorter-lived quantum beats, mostly within $\sim$5 ps. Second, although the two exciton sidebands at 760 nm and 750 nm show a quasi-symmetric temporal beating pattern within $\sim$5 ps, their $\Delta R/R$ signals after dephasing show different relaxation times, with a clearly longer-lived signal for the lower-energy, 760 nm trace than for the 750 nm one (inset, Fig. 2(f)). Such asymmetry of the two sides and the “sharp" appearance of new oscillations (Fig. 3(c)) in the middle cannot be accounted for by a single exciton oscillator. These distinguishing features are consistent with periodically brightened dark states in the center, with persistent coherence and narrow linewidth since they are much less coupled to the dielectric environment, as distinguished from the bright states on both sides, which are expected to exhibit shorter coherence and broader linewidth.
Fig. 2(g) further reveals the distinct wavelength-dependent phonon modes, which disclose a [*symmetry-selective*]{} coupling to the excitonic FSS in MAPbI$_3$. The Fourier transform (FT) spectra of the quantum beats are shown for the three excitonic states at 760 nm (blue), 755 nm (black) and 750 nm (red), respectively. The low-frequency Raman spectrum (gray shade) obtained under the same conditions is shown together to identify the mode symmetry. The FT spectrum of the center band clearly displays a pronounced peak at $\omega_\mathrm{R}=$0.8 THz that matches very well the dominant transverse optical Raman mode from the octahedral twist of the PbI$_6$ cage [@Leguy]. Besides the main peak, there are multiple secondary peaks, mostly below 2 THz, that correlate with the THz Raman modes $\omega_\mathrm{Raman}$ (red lines). In strong contrast, the $\omega_\mathrm{R}$ mode is strongly suppressed in the FT spectra of the side excitonic bands at 750 nm and 760 nm, which instead exhibit multiple strong peaks $\geq$2 THz, quasi-symmetrically present (black dashed lines), i.e., $\omega_\mathrm{I}=$2 THz, 2.3 THz, 2.66 THz, 3.1 THz, and 3.4 THz. These are absent in the measured THz Raman spectrum but are mostly consistent with the calculated IR phonon modes $\omega_\mathrm{IR}$ [@Leguy; @Brivio; @Oscorio; @Mosconi]. These distinct comparisons indicate the preferred coupling of the center (side) excitonic bands to the $\omega_\mathrm{R}$ ($\omega_\mathrm{I}$) phonons of two different symmetries. In addition, the 750 nm and 760 nm FT spectra do not scale with each other, indicative of two different bright exciton states with slightly asymmetric coupling to $\omega_\mathrm{IR}$ [@sup]. These observations again support a complex excitonic structure in which the middle dark states are brightened periodically by coherent phonons of Raman symmetry (Fig. 2(g)) and the bright states lie on the two sides, with a narrow bright/dark state splitting of $\sim$3 nm in Fig. 2(b).
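As a concrete illustration of how such FT spectra can be obtained from the time-domain traces, the following is a minimal sketch of the analysis, assuming the slow incoherent decay is subtracted before transforming; the sampling step, detrending order, window choice, and the synthetic two-mode trace are illustrative assumptions rather than the measured data.

```python
# Minimal sketch: FT power spectrum of the coherent Delta R/R beats.
# The time step, detrending order, and synthetic trace below are assumptions
# for illustration only; real traces would be loaded from the measured data.
import numpy as np

def beat_power_spectrum(trace, dt_ps, n_pad=4096):
    """Return frequencies (THz) and FT power of an oscillatory pump-probe trace."""
    t = np.arange(len(trace)) * dt_ps
    # Subtract the slow amplitude decay so only coherent beats contribute.
    background = np.polyval(np.polyfit(t, trace, deg=3), t)
    osc = (trace - background) * np.hanning(len(trace))
    power = np.abs(np.fft.rfft(osc, n=n_pad)) ** 2
    freq = np.fft.rfftfreq(n_pad, d=dt_ps)      # 1/ps = THz
    return freq, power

# Synthetic example: a 0.8 THz (Raman-like) and a 2.0 THz (IR-like) beat
# on top of a slow decay, roughly mimicking the 755 nm and 760 nm traces.
t = np.arange(0, 10, 0.02)                      # ps
trace = (np.exp(-t / 3.0) * np.cos(2 * np.pi * 0.8 * t)
         + 0.5 * np.exp(-t / 1.0) * np.cos(2 * np.pi * 2.0 * t)
         + np.exp(-t / 5.0))
freq, power = beat_power_spectrum(trace, dt_ps=0.02)
mask = freq > 0.3                               # ignore residual low-frequency weight
print("dominant beat: %.2f THz" % freq[mask][np.argmax(power[mask])])
```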
![Continuous wavelet transform of the THz-pump-driven $\Delta$R/R dynamics at probe wavelengths 760 nm **(a)** and 755 nm **(b)** at 4.1 K. Also shown are the time-integrated power spectra from the time-frequency analysis of the 760 nm (c, green circles) and 755 nm (d, blue circles) traces. Note that the exponential amplitude decay components are removed from the $\Delta R/R$ signals before the CWT is performed [@sup].](Fig3.pdf){width="90mm"}
To further identify the evolution of the [*symmetry-selective*]{} persistent coherence of the bright and dark states, we performed Continuous Wavelet Transforms (CWT chronograms, supplementary [@sup]) of the data at two representative probe wavelengths for the bright and dark states, 760 nm (Fig. 3(a)) and 755 nm (Fig. 3(b)), at 4.1 K, for the initial few ps and for extended time scales of tens of ps (split axis). Such a time-frequency transformation allows the determination of the times at which various vibronic modes appear and diminish, by obtaining spectra within a finite time interval that moves continuously across the oscillatory signals [@sup]. Two distinct phonon bands, labeled Raman-like ($<$2 THz) and IR-like (2-4 THz), are seen in the time-integrated CWT power spectra shown in Figs. 3(c) and 3(d) (green and blue circles); they correspond to the predominantly Raman and IR modes based on their frequency ranges in Fig. 2(g). The power spectra evolve very differently for the two bands. Specifically, the IR-like band, enhanced by coupling to the side excitonic states at 760 nm (Fig. 3(a)) while suppressed at 755 nm (Fig. 3(b)), forms quasi-instantaneously with THz pumping but quickly dies out within 1 ps, e.g., at the $\omega_\mathrm{IR}$ frequency (red dashed line, Fig. 3(a)). This indicates a very strong vibronic dephasing associated with the bright excitons coupled to IR phonons. In contrast, the Raman-like band, especially at the $\omega_\mathrm{R}$ frequency (marked by the red dashed line), is enhanced by coupling to the middle excitonic state (Fig. 3(b)) and is characterized by a slow buildup and a surprisingly long-lived, tens-of-ps vibronic coherence, more than an order of magnitude longer than that of the bright states. This is consistent with the optically-forbidden dark states at the center (red dashed line, Fig. 3(b)). Note that there are other fine details, e.g., near zero delay both chronograms in Figs. 3(a) and 3(b) exhibit short-lived features at both excitonic states, which we attribute to the direct coupling between the pump and the IR phonon mode, both centered around $\sim$1 THz [@Luo; @Sender; @vorakiat]. Therefore, the coherent time-frequency responses clearly disentangle the inhomogeneously smeared dark-bright exciton splitting via its [*symmetry-selective*]{} coupling to different phonons. To put these observations on a firm footing, we have performed simulations based on an exciton FSS model including Rashba physics, as proposed in Refs. [@Isarov; @note4]. Although the source of the symmetry breaking is still debated in perovskites, it can arise at surfaces and/or internal interfaces, especially at cryogenic temperatures due to the twinning of structural domains. The Hamiltonian has the following form in the relative coordinate [@sup] $$\begin{aligned}
H=-\frac{\nabla_{r}^{2}}{2\mu}+V(r)+(\alpha_{e}\sigma_{e}-\alpha_{h}\sigma_{h})\cdot\hat{n}\times i\nabla_{r}\end{aligned}$$ where $r=r_{e}-r_{h}$, with $r_{e}$ ($r_{h}$) the electron (hole) spatial coordinate, and $\nabla_{r}^{2}$ is the Laplacian with respect to $r$. The reduced mass $\mu$ is defined by $\mu^{-1}=m_e^{-1}+m_h^{-1}$. The Rashba term arises from the combined effect of spin-orbit interaction and inversion symmetry breaking (ISB), with the ISB field along the $\hat{n}$ direction; it is much stronger than the $e$-$h$ exchange interaction, $\sim$0.01 meV in MAPbI$_3$ [@Becker]. The optical spectrum of an exciton $\Phi$ can be understood through its oscillator strength $f_{\Phi}\propto\frac{\left|P_\Phi\right|^2}{\omega_\Phi}$, with $\omega_{\Phi}$ the exciton oscillator frequency and $P_{\Phi}$ the optical matrix element. The coupling between the exciton and phonon modes is investigated through phonon-modulated model parameters, including $\mu$, $\alpha_{e}$, $\alpha_{h}$ and $P_{\Phi}$. These parameters exhibit sinusoidal modulations in the time domain following the frozen-phonon atomic displacements, as shown by DFT calculations (see the supplementary section [@sup]). The model simulations reported next therefore use sinusoidally time-modulated parameters with amplitudes taken from these DFT results.
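As a minimal sketch of this procedure, the toy calculation below modulates the model parameters sinusoidally in time and tracks the resulting oscillator strength $f_{\Phi}\propto |P_\Phi|^2/\omega_\Phi$; the baseline values and modulation depths are placeholders standing in for the DFT-derived amplitudes, not the actual simulation inputs.

```python
# Toy sketch: oscillator strength f ~ |P|^2 / omega of one exciton state under
# a frozen-phonon, sinusoidal modulation of its parameters.  Baseline values
# and modulation depths below are placeholders, not the DFT-derived numbers.
import numpy as np

def oscillator_strength(t_ps, f_phonon_THz, depth_P, depth_w, P0=1.0, w0=1.0):
    phase = 2.0 * np.pi * f_phonon_THz * t_ps
    P = P0 * (1.0 + depth_P * np.sin(phase))    # optical matrix element
    w = w0 * (1.0 + depth_w * np.sin(phase))    # exciton frequency (arb. units)
    return np.abs(P) ** 2 / w

t = np.linspace(0.0, 5.0, 2000)                 # ps
# Dark state strongly modulated by the 0.8 THz Raman-like mode ...
f_raman = oscillator_strength(t, 0.8, depth_P=0.5, depth_w=0.02)
# ... but barely modulated by a 2 THz IR-like mode of the same amplitude.
f_ir = oscillator_strength(t, 2.0, depth_P=0.02, depth_w=0.02)

print("peak-to-peak modulation, Raman-like mode: %.2f" % np.ptp(f_raman))
print("peak-to-peak modulation, IR-like mode   : %.2f" % np.ptp(f_ir))
```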
Without the Rashba interaction, the lowest energy solution of the above exciton Hamiltonian is the 1[*s*]{} exciton, which has a fourfold degeneracy due to the electron and hole doublets [@Isarov]. With the inclusion of a non-zero Rashba term in MAPbI$_3$, the quartet ground state splits into a dark doublet, $\left | \Phi \right \rangle_{2,3}$, sandwiched between two bright states, $\left | \Phi \right \rangle_{1}$ and $\left | \Phi \right \rangle_{4}$, as marked in Fig. 1(c). Note, however, that the dark doublet is not entirely optically inactive, because some atomic orbital character mixing in the $e$ and $h$ states produces relatively small yet finite dipole matrix elements, according to our DFT band structure calculations [@sup].
When different phonon modes in the system are excited by the THz pump, the bright and dark exciton states respond in different manners. In Fig. 4, we plot the coupling between the four lowest energy exciton states and the two most pronounced phonon modes identified, the Raman $\omega_\mathrm{R}$ and IR $\omega_\mathrm{I}$ modes. By comparing the periodic control of the Rashba parameters under these two modes in Figs. 4(a) and 4(b), we clearly see that the $\omega_\mathrm{R}$ mode produces a much larger modulation of $\alpha _{e}$ and $\alpha _{h}$. We attribute this distinct enhancement to the facts that the $\omega_\mathrm{R}$ mode introduces a much larger twist of the PbI$_6$ cage than the $\omega_\mathrm{I}$ mode and that the Pb-I frame directly determines the electronic band structure of MAPbI$_3$ near the Fermi level. As a result, the $\omega_\mathrm{R}$ mode significantly changes the bandgap and oscillator strength of the dark states $\left | \Phi \right \rangle_{2,3}$, which gives rise to a pronounced $\omega_\mathrm{R}$ phonon modulation (blue circles) in the model calculation of Fig. 4(c), i.e., subcycle coherent brightening of the dark excitons. In contrast, there is nearly no response to an $\omega_\mathrm{I}$ mode of the same amplitude (black circles). This is in excellent agreement with the experimental observations in Figs. 3(b) and 3(d). Moreover, we summarize the simulated phonon-amplitude dependence of the oscillator strength of the excitonic FSS states when coupled to the $\omega_\mathrm{I}$ (Fig. 4(d)) and $\omega_\mathrm{R}$ (Fig. 4(e)) modes, respectively. For the $\omega_\mathrm{I}$ coupling in Fig. 4(d), the bright states $\left | \Phi \right \rangle_{1,4}$ (rectangles and triangles) exhibit larger responses than the dark states $\left | \Phi \right \rangle_{2,3}$ (circles). The trend is clearly reversed for the $\omega_\mathrm{R}$ coupling in Fig. 4(e). These simulations explain the key mode-selective coupling data in Fig. 2(g) [@note3].
Finally, Figs. 4(f)-4(i) show that a minimal working model for the experimentally observed $\Delta R/R$ requires two bright oscillators instead of a single one. First, the presence of the two bright excitonic states produces simulation results in excellent agreement with experiment, as shown in Figs. 4(f) and 4(g). The two bright oscillators are centered at 750.9 and 756.7 nm, which correspond to the positive and negative peaks (Fig. 2(a)), as marked by $\#$2 and $\#$3 in Fig. 4(f). The two oscillators exhibit only very small band gap renormalization (BGR) shifts, $\Delta w$ = 0.1 nm and 0.3 nm, respectively, for the highest THz pumping, which are within the spectral resolution and give no visible shift, consistent with the experiment (Fig. 2(b)). Note also that the very narrow dark states (marked as $\#$1) in the center do not affect the transient spectra, which are determined by the bright states; instead they can only be resolved through the quantum beat spectra and coherence times shown in Figs. 2 and 3. Second, although one may also reasonably reproduce the $\Delta R/R$ spectra using a single-oscillator model (supplementary [@sup]) with a BGR shift of $\sim$0.1 nm, such a model gives rise to the opposite mode-coupling behavior, i.e., the $\omega_\mathrm{R}$ mode couples more weakly at the center of the exciton resonance than on at least one of the sides, as shown in Figs. 4(h) and 4(i). These curves are generated from two hypothetical situations, i.e., amplitude (bleaching) and spectral (BGR) modulations with either $\omega_\mathrm{R}$ or $\omega_\mathrm{I}$ (supplementary) [@note2]. The results clearly show that the $\omega_\mathrm{R}$ mode cannot dominate the quantum beat spectra at the center of the exciton resonance, i.e., the single-exciton model cannot even come qualitatively close to our data.
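A schematic version of such a two-bright-oscillator lineshape model is sketched below, assuming simple Lorentz oscillators and normal-incidence reflectivity; the oscillator centers, strengths, widths, background dielectric constant, and the tiny pump-induced shift/bleach values are illustrative placeholders rather than the fitted parameters.

```python
# Schematic two-Lorentz-oscillator model for the bipolar Delta R/R lineshape.
# All numbers (centers, strengths, widths, eps_inf, pump-induced changes) are
# placeholders chosen only to mimic the ~751 / ~757 nm bright states.
import numpy as np

def reflectivity(wl_nm, centers_nm, strengths, widths_nm, eps_inf=5.0):
    """Normal-incidence reflectivity from a sum of Lorentz oscillators."""
    E = 1240.0 / wl_nm                              # photon energy (eV)
    eps = np.full(wl_nm.shape, eps_inf, dtype=complex)
    for c, f, w in zip(centers_nm, strengths, widths_nm):
        E0 = 1240.0 / c
        gamma = 1240.0 * w / c ** 2                 # width converted to eV
        eps = eps + f / (E0 ** 2 - E ** 2 - 1j * gamma * E)
    n = np.sqrt(eps)
    return np.abs((n - 1.0) / (n + 1.0)) ** 2

wl = np.linspace(730.0, 770.0, 2000)                # probe window (nm)
centers, strengths, widths = [750.9, 756.7], [0.05, 0.05], [3.0, 3.0]

R_off = reflectivity(wl, centers, strengths, widths)
# Pump on: ~0.1 and 0.3 nm red shifts plus a small bleach of each oscillator.
R_on = reflectivity(wl, [750.9 + 0.1, 756.7 + 0.3],
                    [0.95 * s for s in strengths], widths)
dRR = (R_on - R_off) / R_off

window = (wl > 748.0) & (wl < 762.0)
crossings = np.where(np.diff(np.sign(dRR[window])) != 0)[0]
if crossings.size:
    print("Delta R/R changes sign near %.1f nm" % wl[window][crossings[0]])
else:
    print("no zero crossing in this window")
```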
In conclusion, we show THz engineering of the exciton fine structure in perovskites via phonon symmetry and coherence. The results are consistent with a model of periodic phonon modulation of the Rashba parameters and spin-split bands. Our results provide implications for merging quantum control, spintronics and coherent effects to explore multi-functional perovskite devices.
This work was supported by the Ames Laboratory, the US Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division under contract \#DE-AC02-07CH11358 (THz quantum beat spectroscopy and theory). The THz instrument was supported in part by the National Science Foundation Award No. 1611454. Sample development at University of Toledo was supported by National Science Foundation DMR 1400432. Theoretical work at USTC (J.Z. and Y.S.) was supported by National Natural Science Foundation of China under contract number 11620101003, and National Key R$\&$D Program of China 2016YFA0200604 and 2017YFA0204904.
[99]{} S. D. Stranks and P. Plochocka, Nat. Mater. **17**, 381 (2018). M. A. Becker, R. Vaxenburg, G. Nedulcu, P. C. Sercel, A. Shabaev, M. J. Mehl, J. G. Michopoulos, S. G. Lambrakos. N. Bernstein, J. L. Lyons, T. St$\ddot{o}$ferle, R. F. Mahrt, M. V. Kovalenko, D. J. Norris, G. Rain$\grave{o}$, and A. L. Efros, Nature **553**, 189 (2018). F. Zheng, L. Z. Tan, S. Liu, and A. M. Rappe, Nano Lett. **15**, 7794 (2015). T. Etienne, E. Mosconi, and F. De Angelis, J. Phys. Chem. Lett. **7**, 1638 (2016). J. Li and P. M. Haney, Phys. Rev. B **93**, 155432 (2016). D. Giovanni, H. Ma, J. Chua, M. Gr$\ddot{a}$tzel, R. Ramesh, S. Mhaisalkar, N. Mathews, and T. C. Sum, Nano.Lett. **15**, 1553 (2015). P. Odenthal, W. Talmadge, N. Gundlach, R. Wang, C. Zhang, D. Sun, Z. -G. Yu, Z. V. Vardeny and Y. S. Li, Nat. Phys. **13**, 894 (2017). M. Isarov, L. Z. Tan, M. I. Bodnarchuk, M. V. Kovalenko, A. M. Rappe, and E. Lifshitz, Nano Lett. **17**, 5020 (2017). Note that the $e$-$h$ exchange interaction has been ignored here due to its very small value, on the order of 0.01 meV, for MAPbI$_3$ [@Becker] (see supplementary for details [@sup]). A. N. Beecher, O. E. Semonin, J. M. Skelton, J. M. Frost, M. W. Terban, H. Zhai, A. Alatas, J. S. Owen, A. Walsh, and S. J. L. Billinge, ACS Energy Lett. **1**, 880 (2016). V. D$^,$Innocenzo, G. Grancini, M. J. P. Alcocer, A. R. S. Kandada, S. D. Stranks, M. M. Lee, G. Lanzani, H. J. Snaith, and A. Petrozza, Nat. Commun. **5**, 3586 (2014). R. L. Milot, G. E. Eperon, H. J. Snaith, M. B. Johnston, and L. M. Hertz, Adv. Funct. Mater. **25**, 6218 (2015). Z. Liu, K. C. Bhamu, L. Liang, S. Shah, J. -M. Park, D. Cheng, M. Long, R. Biswas, F. Fungara, R. Shinar, J. Shinar, J. Vela, and J. Wang, MRS Commun. **8**, 961 (2018). T. J. Savenije, C. S. Ponseca, L. Kunneman, M. Abdellah, K. Zheng, Y. Tian, Q. Zhu, S. E. Canton, I. G. Scheblykin, T. Pullerits, A. Yartsev, and V. Sundstr$\ddot{o}$m, J. Phys. Chem. Lett. **5**, 2189 (2014). M. Fu, P. Tamarat, H. Huang, J. Even, A. L. Rogach, and B. Lounis, Nano Lett. **17**, 2895 (2017). D. Niesner, M. Wilhelm, I. Levgen, A. Osvet, S. Shrestha, M. Batentschuk, C. Brabec, and T. Fauster, Phys. Rev. Lett. **117**, 126401 (2016). M. Kozina, M. Fechner, P. Marsik, T. van Driel, J. M. Glownia, C. Bernhard, M. Radovic, D. Zhu, S. Bonetti, U. Staub, and M. C. Hoffmann, Nat. Phys. **15**, 387 (2019). K. Miyata, D. Meggiolaro, M. T. Trinh, P. P. Joshi, E. Mosconi, S. C. Jones, F. De Angelis, and X. -Y. Zhu, Sci. Adv. **3**, e1701217 (2017). T. Ghosh, S. Aharon, L. Etgar, and S. Ruhman, J. Am. Chem. Soc. **139**, 18262 (2017). H. Kim, J. Hunger, E. C$\acute{a}$novas, M. Karakus, Z. Mics, M. Grechko, D. Turchinovich, S. H. Parekh, and M. Bonn, Nat. Commun. **8**, 687 (2017). X. Yang, C. Vaswani, C. Sundahl, M. Mootz, P. Gagel, L. Luo, J. H. Kang, P. P. Orth, I. E. Perakis, C. B. Eom, and J. Wang, Nat. Mater. **17**, 586 (2018). F. Thouin, D. A. Valverde-Ch$\acute{a}$vez, C. Quarti, D. Cortecchia, I. Bargigia, D. Beljonne, A. Petrozza, C. Silva, and A. R. Srimath Kandada, Nat. Mater. **18**, 349 (2019). Y. Zhai, S. Baniya, C. Zhang, J. Li, P. Haney, C.X. Sheng, E. Ehrenfreund, and Z. V. Vardeny, Sci. Adv. **3**, e1700704 (2017). A. M. A. Leguy, A. R. Go$\tilde{n}$i, J. M. Frost, J. Skelton, F. Brivio, X. Rodr[í]{}guez-Mart[í]{}nez, O. J. Weber, A. Pallipurath, M. I. Alonso, M. Campoy-Quiles, M. T. Weller, J. Nelson, A. Walsh, and P. R. F. Barnes, Phys. Chem. Chem. Phys. **18**, 27051 (2016). F. Brivio, J. M. Frost, J. M. Skelton, A. J. Jackson, O. J. 
Weber, M. T. Weller, A. R. Goñi, A. M. A. Leguy, P. R. F. Barnes, and A. Walsh, Phys. Rev. B **92**, 144308 (2015). M. A. P$\acute{e}$rez-Osorio, R. L. Milot, M. R. Filip, J. B. Patel, L. M. Hertz, M. B. Johnston, and F. Guistino, J. Phys. Chem. C **119**, 25703 (2015). E. Mosconi, C. Quarti, T. Ivanovska, G. Ruani, and F. De Angelis, Phys. Chem. Chem. Phys. **16**, 16137 (2014). L. Luo, L. Men, Z. Liu, Y. Mudryk, X. Zhao, Y. Yao, J. M. Park, R. Shinar, J. Shinar, K. M. Ho, I. E. Perakis, J. Vela, and J. Wang, Nat. Commun. **8**, 15565 (2017). M. Sendner, P. K. Nayak, D. A. Egger, S. Beck, C. M$\ddot{u}$ller, B. Epding, W. Kowalsky, L. Kronik, H. J. Snaith, A. Pucci, and R. Lovrin$\check{c}$i$\acute{c}$, Mater. Horiz. **3**, 613 (2016). C. La-o-vorakiat, H. Xia, J. Kadro, T. Salim, D. Zhao, T. Ahmed, Y. M. Lam, J. X. Zhu, R. A. Marcus, M. E. Michel-Beyerle, and E. E. M. Chia, J. Phys. Chem. Lett. **7**, 1 (2016). See Supplemental Material for details. The exciton FSS splitting due to the exchange interaction is on the order of 0.1 meV for a bulk crystal and is neglected in the simulation. Similar selective coupling behaviors are also seen for the other IR and Raman modes marked in Fig. 2(g) (not shown). THz pump-induced broadenings are negligible in the simulation due to the large exciton linewidth.
[**Are ultrahigh energy cosmic rays\
a signal for supersymmetry?**]{}
Daniel J. H. Chung[^1]\
[*Department of Physics and Enrico Fermi Institute\
The University of Chicago, Chicago, Illinois 60637, and\
NASA/Fermilab Astrophysics Center\
Fermi National Accelerator Laboratory, Batavia, Illinois 60510*]{}\
Glennys R. Farrar[^2]\
[*Department of Physics and Astronomy\
Rutgers University, Piscataway, New Jersey 08855*]{}\
Edward W. Kolb[^3]\
[*NASA/Fermilab Astrophysics Center\
Fermi National Accelerator Laboratory, Batavia, Illinois 60510, and\
Department of Astronomy and Astrophysics and Enrico Fermi Institute\
The University of Chicago, Chicago, Illinois 60637*]{}\
> We investigate the possibility that cosmic rays of energy larger than the Greisen–Zatsepin–Kuzmin cutoff are not nucleons, but a new stable, massive hadron that appears in many extensions of the standard model. We focus primarily on the $S^0$, a $uds$-gluino bound state. The range of the $S^0$ through the cosmic background radiation is significantly longer than the range of nucleons, so that $S^0$ primaries can originate from sources at cosmological distances.
>
> PACS number(s): 96.40, 98.70, 11.30.P
**I. INTRODUCTION**
Detection of cosmic rays of energies above $10^{20}\eV$ \[\[measure\],\[flyseye\]\] has raised as-yet-unsettled questions regarding their origin and composition. The first problem is that it is difficult to imagine any astrophysical site for the cosmic accelerator (for a review, see Ref. \[\[bierman\]\]). The Larmor relation for a particle of charge $Z$, $(E/10^{18}\eV) = Z (R/\kpc) (|\vec{B}|/\mu{\mbox{Gauss}})$, sets the scales for the required size, $R$, and magnetic field strength, $|\vec{B}|$, of the accelerator. One would expect any sources with sufficient $R|\vec{B}|$ to accelerate particles to ultrahigh energies to appear quite unusual in other regards.
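For orientation, this scaling is easy to evaluate numerically; the example source parameters in the short sketch below are arbitrary and serve only to show the size of the numbers involved.

```python
# Quick numerical check of the Larmor relation quoted above:
# E / 10^18 eV ~ Z * (R / kpc) * (B / microgauss).
def confinement_energy_eV(Z, R_kpc, B_uG):
    return 1.0e18 * Z * R_kpc * B_uG

# Illustrative numbers only: confining a 3x10^20 eV proton (Z = 1) requires,
# e.g., a ~10 kpc region threaded by ~30 microgauss fields.
print("%.1e eV" % confinement_energy_eV(Z=1, R_kpc=10.0, B_uG=30.0))
```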
A second issue is the composition of the observed cosmic rays. The shower profile of the highest energy event\[\[flyseye\]\] is consistent with its identification as a hadron but not as a photon\[\[halzen\]\]. Ultrahigh-energy[^4] (UHE) events observed in air shower arrays have muonic composition indicative of hadrons\[\[measure\]\]. The problem is that the propagation of hadrons – neutrons, protons, or nuclei – over astrophysical distances is strongly affected by the existence of the cosmic background radiation (CBR). Above threshold, cosmic-ray nucleons lose energy by photoproduction of pions, $N\gamma\rightarrow N\pi$, resulting in the Greisen–Zatsepin–Kuzmin (GZK) cutoff in the maximum energy of cosmic-ray nucleons. If the primary is a heavy nucleus, then it will be photo-disintegrated by scattering with CBR photons. Indeed, even photons of such high energies have a mean-free-path of less than 10 Mpc due to scattering from CBR and radio photons\[\[elbert\]\]. Thus unless the primary is a neutrino, the sources must be nearby (less than about 50 Mpc). This would present a severe problem, because unusual sources such as quasars and Seyfert galaxies typically are beyond this range.
However, the primary cannot be a neutrino because the neutrino interaction probability in the atmosphere is very small. This would imply an implausibly large primary flux, and worse yet, would imply that the depths of first scattering would be uniformly distributed in column density, contrary to observation. The suggestion that the neutrino cross section grows to a hadronic size at UHE\[\[halzen\]\] has recently been shown to be inconsistent with unitarity and constraints from lower energy particle physics\[\[burdman\]\].
Since UHE cosmic rays should be largely unaffected by intergalactic or galactic magnetic fields, by measuring the incident direction of the cosmic ray it should be possible to trace back and identify the source. Possible candidate sources within $10^\circ$ of the UHE cosmic ray observed by the Fly’s Eye were studied in Ref. .[^5] The quasar 3C 147 and the Seyfert galaxy MCG 8-11-11 are attractive candidates. Lying within the $1\sigma$ error box of the primary’s incoming direction, the quasar 3C 147 has a large radio luminosity ($7.9 \times 10^{44} \erg \sec^{-1}$) and an X-ray luminosity of about the same order of magnitude, indicative of a large number of strongly accelerated electrons in the region. It also produces a large Faraday rotation, with rotation measure $\mbox{RM} =
-1510 \pm 50 \rad
\meter^{-2}$, indicative of a large magnetic field over large distances. It is noteworthy that this source is within the error box of a UHE event seen by the Yakutsk detector. However, 3C 147 lies at a red shift of about $z=0.545$, well beyond $z < 0.0125$ adopted in Ref. as the distance upper limit for the source of UHE proton primaries. Just outside the $2 \sigma$ error box of the primary’s incoming direction is the Seyfert galaxy MCG 8-11-11. It is also unusual, with large X-ray and low-energy gamma-ray luminosities ($4.6
\times10^{44}\erg\sec^{-1}$ in the $20-100 \keV$ region and $7
\times10^{46} \erg\sec^{-1}$ in the $0.09-3 \MeV$ region). At a redshift of $z=0.0205$, it is much closer than 3C 147, but it is still too distant for the flux to be consistent with the observed proton flux at lower energies.
Briefly stated, the problem is that there are no known candidate astronomical sources within the range of protons, neutrons, nuclei, or even photons. Yet there are good candidate sources at 100-1000 Mpc. In this paper we propose that the answer to this cosmic-ray conundrum may be that UHE cosmic rays are not known particles but a new species of particle we denote as the uhecron, $U$. The meager information we have about the cosmic ray events allows us to assemble a profile for the properties of the uhecron:
1\) The uhecron interacts strongly: Although there are only a handful of UHE events, the observed shower development and muonic content suggest a strongly interacting primary.
2\) The uhecron is stable or very long lived: Clearly if the particle originates from cosmological distance, it must be stable, or at least remarkably long lived, with $\tau \ga 10^6{\rm s}(m_U/3\,\mbox{GeV})
(L/1\, \mbox{Gpc})$ where $L$ is the distance to the source.
3\) The uhecron is massive, with mass greater than about 2 GeV: If the cosmic ray is massive, the threshold energy for pion production increases, and the energy lost per scattering on a CBR photon will decrease. We will go into the details of energy loss later in the paper, but this general feature can be understood from simple kinematics. In $U \gamma
\rightarrow U\pi$, the threshold for pion production is $s_{\rm min} =
m_U^2 + m_\pi^2 + 2m_U m_\pi$. In the cosmic-ray frame where the $U$ has energy $E_U \gg m_U$ and the photon has energy $E_\gamma \sim 3T $ (where $T = 2.4\times10^{-4}\eV$ is the temperature of the CBR), $s
\simeq m_U^2 + 4 E_\gamma E_U$. Thus, the threshold for pion production, $s\geq s_{\rm min}$, results in the limit $E_U\ga m_\pi
m_U/(2 E_\gamma)$. More generally, the threshold for producing a resonance of mass $M_R = M_U + \Delta$ is $E_U = \Delta m_U/(2
E_\gamma)$. For $E_\gamma=3T$, and if the uhecron is the proton, the threshold for pion photoproduction is $E_U \approx 10^{20}\eV$. Of course the actual threshold is more involved because there is a distribution in photon energy and scattering angle, but the obvious lesson is that if the mass of the primary is increased, the threshold for pion production increases, and the corresponding GZK cutoff will increase with the mass of the cosmic ray. Furthermore, since the fractional energy loss will be of order $ m_\pi/m_U$, a massive uhecron will lose energy via pion-photoproduction at a slower rate than a lighter particle. Another potential bonus if the cosmic ray is not a neutron or a proton is that the cross section for $U \gamma
\rightarrow U \pi$ near threshold may not be strongly enhanced by a resonance such as $\Delta(1232)$, as when the $U$ is a nucleon. Although there may well be a resonance in the $U \pi$ channel, it might not have the strength or be as near the pion-photoproduction threshold as the $\Delta(1232)$ is in the pion-nucleon channel.
4\) We will assume the uhecron is electrically neutral: Although not as crucial a requirement as the first three, there are three advantages if the uhecron is neutral. The first is that it will not lose energy through $e^+e^-$ pair production off the CBR photons. Another advantage of a neutral particle is that because it will be unaffected by intergalactic and galactic magnetic fields, its arrival direction on the sky will point back to its source. Thirdly, there will be no energy losses due to synchrotron or bremsstrahlung radiation. Of course, because a neutral particle will not be accelerated by normal electromagnetic mechanisms, it is necessary to provide at least a plausibility argument that such particles can be produced near the source. For instance, they may be produced as secondaries in collisions induced by high-energy protons.
In this paper we analyze the possibility that a supersymmetric baryon $S^0$ ($uds$-gluino bound state whose mass is 1.9-2.3 GeV – see below) is the uhecron instead of the proton, as first proposed in Ref. \[\[gf96\]\]. The $S^0$ has strong interactions, it can be stable, it is more massive than the nucleon, and it is neutral with vanishing magnetic moment \[\[gf96\]\]. Remarkably, this particle is not experimentally excluded. The light gluino required in this scenario would have escaped detection. Experimental limits and signatures are discussed in \[\[gf96\]\] and the reviews of Farrar \[\[gf97\], \[farrarprd95\]\].
If UHE cosmic rays are $S^0$s, we will show that their range is at least an order of magnitude greater than that of a proton, putting MCG 8-11-11 (and possibly even 3C 147) within range of the Fly’s Eye event.
While the main thrust of this paper is an investigation into the scenario where the $S^0$ is the uhecron, most of our analysis can also be applied to the case where the uhecron is much more massive than assumed for the $S^0$. Extensions of the standard model often predict new heavy, e.g., multi-TeV, colored particles which in some instances have a conserved or almost-conserved quantum number. Bound to light quarks these form heavy hadrons, the lightest of which would be stable or quasistable. Such a particle would propagate through the CBR without significant energy loss because the threshold energy for inelastic collisions is proportional to its mass. Some mechanisms for uhecron production discussed below would be applicable for a new very massive hadron. However such a particle probably would not be an acceptable candidate for the uhecron because its interaction in the atmosphere is quite different from that of nucleons, nuclei, or an $S^0$. Although it is strongly interacting, its fractional energy loss per collision in the Earth’s atmosphere is only of order $(1\,{\rm GeV}/M)$, where $M$ is the mass of the heavy hadron.[^6] Thus if the uhecron energy deposition spectrum is indeed typical of a nucleon or nucleus, as present evidence suggests, we cannot identify the uhecron with a very massive stable hadron. The maximum uhecron mass consistent with observed shower properties is presently under investigation\[\[afk\]\].
**II. PRODUCTION OF UHE $S^0$s**
We first address the question of whether there is a plausible scenario to produce UHE $S^0$s. This is a tricky question, since there is no clear consensus on the acceleration mechanism even if the primary particle is a proton. Here we simply assume that somehow UHE protons are produced, and ask if there is some way to turn UHE protons into UHE $S^0$s. Our intent is not to establish the viability of any particular mechanism but to see that finding a satisfactory mechanism is not dramatically more difficult than it is for protons.
Assuming that there exists an astrophysical accelerator that can accelerate protons to energies above $10^{21}\eV$, one can envisage a plausible scenario of $S^0$ production through proton collision with hadronic matter surrounding the accelerator. A $p$-nucleon collision will result in the production of $R_p$’s, the $uud$-gluino state whose mass is about 200 MeV above the $S^0$. The $R_p$ decays to an $S^0$ and a $\pi^+$,[^7] with the $S^0$ receiving a momentum fraction of about $(m_{S^0}/m_{R_p})^2$. From a triple Regge model of the collision, one estimates that the distribution of the produced $R_p$’s as a function of the outgoing momentum fraction $x$ is $d \sigma/d x \sim
(1-x)^{1 - 2 \alpha} (s')^{\alpha_P - 1}$ as $x$ approaches unity. Here, $s' = (1-x)s$ and $\alpha$ is the Regge intercept of the SUSY-partner of the Pomeron. Thus, $\alpha = \alpha_P - 1/2 =
\epsilon + 1/2$, where $\epsilon \approx 0.1$ is the amount the pomeron trajectory is above 1 at high energies. Hence, we parameterize the $S^0$ production cross section in a $p$-nucleon collision as $d\sigma/dx = A E_p^\epsilon$; $x$ is the ratio of the $S^0$ energy to the incident energy. Parameterizing the high energy proton flux from the cosmic accelerator as $dN_p/dE_p = B
E_p^{-\gamma}$, we have a final $S^0$ flux of $dN_{S^0}/dE = \kappa n
L A B E^{-\gamma+\epsilon}$, where $n L$ is the matter column density with which the proton interacts to produce an $R_p$ and $\kappa$ is of order 1 (for $\gamma = 2$, $\kappa = 0.4$). Note that the produced $S^0$’s are distributed according to a spectrum that is a bit flatter than the high energy proton spectrum.
A disadvantage of this “beam-dump” $S^0$ production mechanism is the suppression factor of about $A E^\epsilon/\sigma_{p p}$, where $\sigma_{p p}$ is the proton-proton total diffractive cross section. This suppression could be of order $10^{-1}$ to $10^{-2}$ for typical energies. However, the produced $S^0$’s enjoy a compensating advantage. The large column densities characteristic of most candidate acceleration regions make it hard to avoid energy degradation of protons before they escape. That is, $L ( n_p \sigma_{p N} + n_e \sigma_{p e} + n_\gamma \sigma_{p \gamma})$ may be much greater than unity. By contrast, $S^0$’s may escape with little or no energy loss. Their electromagnetic interactions are negligible, and analogy with glueball wavefunctions suggests that $\sigma_{S^0 N}$ could be as small as $10^{-1}\sigma_{pN}$. Thus the emerging $S^0$ and nucleon fluxes could be of the same order of magnitude. This would be necessary for a very distant source such as 3C 147 to be acceptable, since the particle flux required to produce the detected flux on Earth already pushes its luminosity limit. Assuming that the $3.2 \times 10^{20}\eV$ event of the Fly’s Eye came during its exposure to 3C 147, the resulting time-averaged flux is $11 \eV\cm^{-2}\sec^{-1}$, which is greater than the X-ray luminosity of 3C 147.
In connection with the “beam dump" mechanism, we note that it is possible to have at the source a nucleon flux significantly greater than the $S^0$ flux, and yet at Earth still have a large enough $S^0$ flux to account for the high energy end of the spectrum without being inconsistent with the rest of the observed cosmic ray spectrum. To see that this is possible, suppose as an illustrative example that the $S^0$ spectrum for energies above $10^{20}{\rm eV}$ is a smooth extrapolation of the proton spectrum at energies below the GZK cutoff, i.e., if $J_{p}(E) = A E^{-3}$ for $E < 10^{19.6}$ eV then $J_{S^0}(E) = A E^{-3}$ for $E > 10^{20}$ eV. Denoting the $S^0$–to–proton suppression factor by $\eta$, the proton flux for $E > 10^{20}$ eV is then $J_{p}(E) = \eta^{-1} A E^{-3}$. Protons of energy greater than the GZK cutoff (here taken to be $10^{19.6}$ eV) will bunch up in the decade in energy below the GZK cutoff \[\[hs\],\[gqw\]\]. The total number in the pile-up region will receive a contribution from protons from the source above the GZK cutoff as well as those originally in the pile-up region. With $\eta=10^{-2}$, there will be equal contributions from the pile-up protons and the protons originally emitted below the cutoff. The statistics of the number of events with energy above $10^{18.5}$ eV are too poor to exclude this scenario; indeed, there is some indication of a bump in the spectrum in this region \[\[measure\]\].
Note that even for a point source as far away as 1200 Mpc (e.g., 3C 147), the required flux of high energy protons at the accelerator is not unacceptable. For instance, extrapolating the spectrum as $7.36\times10^{18} E^{-2.7} ~{\rm eV/m^2/sr/sec}$ and using our pessimistic efficiency for $S^0$ production (a factor of 1/100) requires the high energy proton luminosity of the source to be $\sim 10^{47}$ ergs/sec. This is indeed a high value, but not impossible.
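A back-of-the-envelope check of these numbers is sketched below, treating the distance as a simple Euclidean 1200 Mpc and ignoring cosmological corrections; the conversion constants are standard, and everything else is taken from the figures quoted above.

```python
# Order-of-magnitude check: isotropic luminosity implied by the time-averaged
# UHE flux attributed above to 3C 147, treating the distance as a Euclidean
# 1200 Mpc and ignoring cosmological corrections (illustration only).
import math

EV_TO_ERG = 1.602e-12          # erg per eV
CM_PER_MPC = 3.086e24          # cm per Mpc

flux = 11.0                    # eV cm^-2 s^-1, time-averaged flux quoted above
d = 1200.0 * CM_PER_MPC        # cm

L = 4.0 * math.pi * d ** 2 * flux * EV_TO_ERG
print("isotropic UHE luminosity ~ %.1e erg/s" % L)
# ~3e45 erg/s in escaping UHE particles; folding in the ~1/100 S0 production
# efficiency and the spectral extrapolation used in the text brings the
# required proton luminosity up toward the ~1e47 erg/s figure quoted above.
```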
Another possible mechanism of high energy $S^0$ production is the direct acceleration of charged light SUSY hadrons (mass around $2-3\GeV$), such as $R_p$ and $R_\Omega$, whose lifetime is about $2 \cdot 10^{-10} - 2 \cdot 10^{-11}$ sec. Due to the large time-dilation factor ($E/m\approx 10^{11}$), whatever electromagnetic mechanism accelerates the protons may also be able to accelerate the high energy SUSY hadrons. Then, one can imagine that the high energy tail of the hadronic plasma which gets accelerated by some electromagnetic mechanism will consist of a statistical mixture of all light strong-interaction-stable charged hadrons. In that case the flux of the resulting $S^0$ will have the same spectrum as the protons, differing in magnitude by a factor of order unity, which depends on the fraction of SUSY hadrons making up the statistical mixture. Conventional shock wave acceleration mechanisms probably require too long a time scale for this mechanism to be feasible (e.g., Ref. ). However, some electromagnetic “one push” mechanisms similar to the one involving electric fields around pulsars \[\[onepush\]\] may allow this kind of acceleration if the electric field can be large enough. It is certainly tantalizing that the time scale of the short time structure of pulsars and gamma ray bursts is consistent with the scale implied by the time-dilated lifetime of charged $R$-baryons.
A somewhat remote possibility is that there may be gravitational acceleration mechanisms which would not work for a charged particle (because of radiation energy losses and magnetic confinement) but would work for a neutral, zero magnetic moment particle such as an $S^0$. For example, if $S^0$’s exist in the high energy tail of the distribution of accreting mass near a black hole (either by being gravitationally pulled in themselves or by being produced by a proton collision), they may be able to escape with a large energy. A charged particle, on the other hand, will not be able to escape due to radiation losses. Unfortunately, this scenario may run into low flux problems due to its reliance on the tail of an energy distribution.
A final possibility is the decay of long-lived superheavy relics of the big bang, which would produce all light particles present in the low-energy world, including the $S^0$. For instance, if such relics decay via quarks which then fragment, as in models such as Ref., the $S^0$/nucleon ratio is probably in the range $10^{-1}$ to $10^{-2}$, based on a factor of about $10$ suppression in producing a 4-constituent rather than 3-constituent object, and possibly some additional suppression due to the $S^0$’s higher mass.[^8]
Of the scenarios considered above, only the last two are conceivably relevant for a super-heavy (0.1 - 1000 TeV) uhecron. Although the energy in $p$–nucleon collisions ($\sqrt{s} = \sqrt{2E_p m_p} \sim
10^3$TeV for primary proton energy of $10^{21}$eV) is sufficient for superheavy particle production, the production cross section is too small for the “beam dump" mechanism to be efficient.[^9] Also, the direct acceleration mechanism is not useful for a superheavy uhecron unless it is itself charged or is produced in the decay of a sufficiently long-lived charged progenitor. Even if a sufficient density of superheavy hadrons could be generated in spite of the small production cross section, the time scale required for the early stages of acceleration could be too long since it is proportional to $\beta^2$. This leaves the decay of a superheavy relic (either a particle or cosmic defect) as the most promising source of uhecrons if their masses are greater than tens of GeV.
**III. PROPAGATION OF UHE COSMIC RAYS**
To calculate the energy loss due to the primary’s interaction with the CBR, we follow the continuous, mean energy loss approximation used in Refs. and . In this approximation we smooth over the discrete nature of the scattering processes, neglecting the stochastic nature of the energy loss, to write a continuous differential equation for the time evolution of the primary energy of a single particle. The proper interpretation of our result is the mean energy of an ensemble of primaries traveling through the CBR. We shall now delineate the construction of the differential equation.
For an ultrahigh energy proton (near $10^{20}\eV$ in CBR frame[^10]), three main mechanisms contribute to the depletion of the particle’s energy: pion-photoproduction, $e^+ e^-$ pair production, and the cosmological redshift of the momentum. Pion-photoproduction consists of the reactions $p \gamma \rightarrow
\pi^0 p$ and $p \gamma \rightarrow \pi^+ n$. Pion-photoproduction, which proceeds by excitation of a resonance, is the strongest source of energy loss for energies above about $10^{20}\eV$, while below about $10^{19.5}\eV$, $e^+ e^-$ pair production dominates. For the scattering processes (pion-photoproduction and $e^+ e^-$ pair production), the mean change in the proton energy ($E_p$) per unit time (in the CBR frame) is $$\frac{dE_p(\mbox{scatter})}{dt} = -
\sum_{\mbox{events}} (\mbox{mean event rate}) \times \Delta E
\label{eq:lossrate}$$ where the sum is over distinct scattering events with an energy loss of $\Delta E$ per event. The mean event rate is given by $$\mbox{mean event rate} = \frac{1}{\gamma} \int dE_\gamma \, d\xi \; f(E_\gamma) \, \frac{d\sigma}{d\xi} \;,$$ where $\gamma=E_p/m_p$ is necessary to convert from the event rate in the proton frame (proton’s rest frame), where we perform the calculation, to the CBR frame, $d\sigma/d\xi$ is the differential cross section in the proton frame,[^11] and $f$ is the number of photons per energy per volume in the proton frame. To obtain $f$ we start with the isotropic Planck distribution and then boost it with the velocity parameter $\beta$ to the proton frame $$n(E_\gamma, \theta)=\frac{1}{(2
\pi)^3} \left[\frac{2 E_\gamma^2}{\exp[\gamma E_\gamma (1+ \beta \cos
\theta)/T]-1} \right]
\label{eq:protonplanck}$$ where $\theta$ is the angle that the photon direction makes with respect to the boost direction. Integrating over the solid angle[^12] and taking the ultrarelativistic limit, we find $$f(E_\gamma) = -\frac{E_\gamma T}{2 \pi^2 \gamma} \ln \left[ 1 - \exp\left( -\frac{E_\gamma}{2 \gamma T} \right) \right] \;. \label{eq:phonum}$$ For $\Delta E$, the energy loss per event in the CBR frame, we can write $$\Delta E(\cos\theta, p_r) = \gamma \left( m_p + \beta\, p_r \cos\theta - \sqrt{p_r^2 + m_p^2} \right) \;,$$ where $p_r$, which may depend on $E_\gamma$ and $\cos \theta$, is the recoil momentum of the proton and $\theta$ is the angle between the incoming photon direction and the outgoing proton direction. Putting all these together, the energy loss rate due to scattering given by Eq. (\[eq:lossrate\]) becomes $$\begin{aligned}
\frac{dE_p(\mbox{scatter})}{dt} & = &
-\gamma^{-1} \int dE_\gamma f(E_\gamma) \cr
& & \times\sum_i \int
d\xi_i \frac{d \sigma_i}{d \xi_i} (E_\gamma, \xi_i)\, \Delta E
(\cos \theta(E_\gamma, \xi_i), p_r(E_\gamma, \xi_i))
\label{eq:scatterdepdt}\end{aligned}$$ where only functions yet to be specified are the recoil momentum and the differential cross section (for each type of reaction $i$).
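The effect of the boosted photon spectrum can be made concrete with the short numerical sketch below, which evaluates $f(E_\gamma)$ of Eq. (\[eq:phonum\]) and compares, for a proton and for a heavier 2.3 GeV neutral primary at the same total energy, the fraction of rest-frame photons above the single-pion threshold; the energy grid and the simple Riemann sum are incidental implementation choices, and the fraction is only indicative of the suppression that drives the cutoff.

```python
# Numerical sketch of the boosted blackbody spectrum f(E_gamma) quoted above
# and of the fraction of rest-frame photons above the single-pion threshold,
# for a proton and for a 2.3 GeV neutral primary at the same total energy.
# (Energies in eV; the grid and simple Riemann sum are implementation choices.)
import numpy as np

T = 2.4e-4                                   # CBR temperature, eV
m_pi = 0.1396e9                              # eV

def f_photon(E_gamma, gamma):
    """Photons per unit energy per unit volume seen in the primary rest frame."""
    return (-(E_gamma * T / (2.0 * np.pi ** 2 * gamma))
            * np.log1p(-np.exp(-E_gamma / (2.0 * gamma * T))))

def fraction_above_pion_threshold(E_primary, m_primary):
    gamma = E_primary / m_primary
    E_th = m_pi + m_pi ** 2 / (2.0 * m_primary)
    E = np.linspace(1.0, 60.0 * 2.0 * gamma * T, 200000)
    f = f_photon(E, gamma)
    return f[E > E_th].sum() / f.sum()       # uniform grid: bin width cancels

for m, label in [(0.938e9, "proton"), (2.3e9, "2.3 GeV primary")]:
    print("%-16s fraction above threshold at 10^20 eV: %.1e"
          % (label, fraction_above_pion_threshold(1.0e20, m)))
```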
For the reaction involving the production of a single pion, the recoil momentum of the protons in the proton frame can be expressed as $$\begin{aligned}
p_r(E_\gamma, \cos \theta) & = & \cr
& & \hspace*{-3em} \frac{2 q^2 E_\gamma \cos \theta \pm
(E_\gamma + m_p)
\sqrt{4 E_\gamma^2 m_p^2
\cos^2 \theta - 4 m_\pi^2 m_p (E_\gamma+m_p)+m_\pi^4}}{2\,
[ (E_\gamma + m_p)^2 -
E_\gamma^2 \cos^2 \theta]}
\label{eq:pr}\end{aligned}$$ where $q^2=m_p (m_p+E_\gamma) - m_\pi^2/2$. When the photon energy $E_\gamma$ is approximately at the threshold energy of $m_\pi + m_\pi^2/(2 m_p)$ and the proton recoils in the direction $\theta=0$, the recoil momentum is about $m_\pi$. The recoil momentum is a double-valued function, where the negative branch corresponds to the situation where most of the photon’s incoming momentum is absorbed by the pion going out in the direction of the incoming photon. Thus, since the positive branch will be more effective in retarding the proton (in the CBR frame), we will neglect the negative branch to obtain a conservative estimate of the “cutoff” distance. It is possible to work out the kinematics for multipion production, but for our purpose of making a reasonably conservative estimate, it is adequate to use Eq. (\[eq:pr\]) as the recoil momentum even for multipion production.[^13]
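As a quick sanity check of Eq. (\[eq:pr\]), the short script below evaluates the positive branch at the single-pion threshold for $\theta = 0$ and recovers a recoil momentum close to $m_\pi$, as stated above; the masses are the standard values, and the tiny negative round-off in the discriminant at threshold is clipped to zero.

```python
# Numerical check of the recoil-momentum formula (positive branch):
# at the single-pion threshold and theta = 0 it should come out near m_pi.
import numpy as np

m_p, m_pi = 0.938, 0.1396        # GeV

def p_recoil(E_gamma, cos_theta, branch=+1.0):
    q2 = m_p * (m_p + E_gamma) - m_pi ** 2 / 2.0
    disc = (4.0 * E_gamma ** 2 * m_p ** 2 * cos_theta ** 2
            - 4.0 * m_pi ** 2 * m_p * (E_gamma + m_p) + m_pi ** 4)
    disc = max(disc, 0.0)                         # clip round-off at threshold
    num = 2.0 * q2 * E_gamma * cos_theta + branch * (E_gamma + m_p) * np.sqrt(disc)
    den = 2.0 * ((E_gamma + m_p) ** 2 - E_gamma ** 2 * cos_theta ** 2)
    return num / den

E_th = m_pi + m_pi ** 2 / (2.0 * m_p)             # threshold photon energy
print("p_r(threshold, theta = 0) = %.3f GeV  (m_pi = %.3f GeV)"
      % (p_recoil(E_th, 1.0), m_pi))
```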
The pion-photoproduction cross section has been estimated by assuming that the $s$-wave contribution dominates, which would certainly be true near the threshold of the production. The cross section is taken to be a sum of a Breit-Wigner piece and two non-resonant pieces: $$\begin{aligned}
\sigma(\mbox{pion}) & = & 2 \sigma_{1\pi} \ \Theta\! \left (E_\gamma -
m_\pi - \frac{m_\pi^2}{2 m_p}\right) +
2 \sigma_{\mbox{multipion}} \nonumber \\
\sigma_{1\pi} & = &
\frac{4 \pi}{p_{cm}^2} \left[ \frac{m_\Delta^2 \Gamma(\Delta
\rightarrow \gamma p) \Gamma(\Delta \rightarrow \pi
p)}{(m_\Delta^2-s)^2 + m_\Delta^2 \Gamma_{\mbox{tot}}^2} \right] +
\sigma_{\mbox{nonres}} \nonumber \\
\Gamma(\Delta \rightarrow X p) & = & \frac{p_{cm}^X \omega_X}{8
m_\Delta \sqrt{s}} \nonumber \\
\Gamma_{\mbox{tot}} & = & \frac{p_{\mbox{cm}}^\pi}{\sqrt{s}}
\frac{ 2 m_\Delta^2 \overline{\Gamma}_{\mbox{tot}}}{\sqrt{[m_\Delta^2 -
(m_\pi+m_p)^2]\, [m_\Delta^2-(m_p-m_\pi)^2]}} \nonumber \\
\sigma_{\mbox{nonres}} & = & \frac{1}{16 \pi s}
\frac{\sqrt{[s-(m_p+m_\pi)^2] \, [s-(m_p-m_\pi)^2]}}{(s-m_p^2)} |{\cal
M}(p \gamma \rightarrow \pi p)|^2 \nonumber \\
\sigma_{\mbox{multipion}} & = & a \tanh\left( \frac{E_\gamma -
E_{\mbox{multi}}}{m_\pi}\right) \Theta(E_\gamma - E_{\mbox{multi}}) \end{aligned}$$ where $\omega_X$ is defined through $4 \pi \omega_X \equiv \int d\Omega |
{\cal M}(\Delta \rightarrow X p) |^2$, ${\cal M}$ denotes an invariant amplitude, the center-of-momentum momentum is given as usual by $$p_{cm}^X=\sqrt{\frac{[s-(m_p+m_X)^2]\, [s-(m_p-m_X)^2] }{4s}},$$ and $\sigma_{\mbox{multipion}}$ is a crude approximation[^14] for the contribution from multipion production, whose threshold is at $E_{\mbox{multi}} = 2 (m_\pi +m_\pi^2/m_p)$. The $\sigma_{1\pi}$ component of the cross section is fit[^15] to the $p \gamma \rightarrow p \pi^0$ data of Ref. , while the amplitude $a$ for $\sigma_{\mbox{multipion}}$ is estimated from the $p \gamma \rightarrow X p$ data for energies $E_\gamma \ga 0.6\GeV$. The numerical values of the parameters resulting from the fit are $(\omega_\gamma \omega_\pi) = 0.086\GeV^4$, $|{\cal M}(p\gamma \rightarrow \pi p)|=0.018$, $\overline{\Gamma}_{\mbox{tot}}=0.111\GeV$, $m_\Delta=1.23\GeV$, and $a=0.2$ mb. The factor of 2 multiplying $\sigma_{1\pi}$ accounts for the two reactions $p \gamma \rightarrow \pi^0 p$ and $p \gamma \rightarrow \pi^+ n$, since a neutron behaves, to first approximation, just like the proton. For example, the dominant pion-photoproduction reactions involving neutrons are $n \gamma \rightarrow \pi^0 n$ and $n \gamma \rightarrow \pi^- p$, which have cross sections similar to those of the analogous reactions for protons. Thus, we are really estimating the energy loss of a nucleon, and not just a proton.
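For completeness, this parameterization transcribes directly into code; the only choices added in the sketch below are the GeV$^{-2}$-to-mb conversion, the use of the photon center-of-momentum momentum in the $4\pi/p_{cm}^2$ prefactor (the text leaves the channel implicit), and the sample photon energies, so it should be read as an illustration of the fit rather than a definitive implementation.

```python
# Sketch: single-pion plus multipion photoproduction cross section as
# parameterized above, with the fitted parameters quoted in the text.
# E_gamma is the photon energy in the nucleon rest frame (GeV); output in mb.
# Assumptions: the p_cm in the 4*pi/p_cm^2 prefactor is taken to be the photon
# CM momentum (the text leaves the channel implicit), and 1 GeV^-2 = 0.3894 mb.
import numpy as np

m_p, m_pi, m_Delta = 0.938, 0.1396, 1.23     # GeV
omega_prod = 0.086                           # omega_gamma * omega_pi, GeV^4
M_nonres = 0.018                             # |M(p gamma -> pi p)|
Gbar_tot = 0.111                             # GeV
a_multi = 0.2                                # mb
GEV2_TO_MB = 0.3894

def p_cm(s, m_x):
    """CM momentum of a particle of mass m_x against the proton at invariant s."""
    arg = (s - (m_p + m_x) ** 2) * (s - (m_p - m_x) ** 2)
    return np.sqrt(np.maximum(arg, 0.0) / (4.0 * s))

def sigma_pion_mb(E_gamma):
    E_gamma = np.asarray(E_gamma, dtype=float)
    s = m_p ** 2 + 2.0 * m_p * E_gamma
    pcm_pi, pcm_g = p_cm(s, m_pi), p_cm(s, 0.0)
    # Energy-dependent total width; equals Gbar_tot at s = m_Delta**2.
    norm = np.sqrt((m_Delta ** 2 - (m_pi + m_p) ** 2)
                   * (m_Delta ** 2 - (m_p - m_pi) ** 2))
    G_tot = (pcm_pi / np.sqrt(s)) * 2.0 * m_Delta ** 2 * Gbar_tot / norm
    # Breit-Wigner piece; Gamma(->gamma p)*Gamma(->pi p) built from omega_prod.
    gam_prod = pcm_g * pcm_pi * omega_prod / (64.0 * m_Delta ** 2 * s)
    bw = ((4.0 * np.pi / np.maximum(pcm_g ** 2, 1e-12)) * m_Delta ** 2 * gam_prod
          / ((m_Delta ** 2 - s) ** 2 + m_Delta ** 2 * G_tot ** 2))
    nonres = (M_nonres ** 2 * 2.0 * np.sqrt(s) * pcm_pi
              / (16.0 * np.pi * s * (s - m_p ** 2)))
    sigma_1pi = np.where(E_gamma > m_pi + m_pi ** 2 / (2.0 * m_p),
                         (bw + nonres) * GEV2_TO_MB, 0.0)
    E_multi = 2.0 * (m_pi + m_pi ** 2 / m_p)
    sigma_multi = np.where(E_gamma > E_multi,
                           a_multi * np.tanh((E_gamma - E_multi) / m_pi), 0.0)
    return 2.0 * (sigma_1pi + sigma_multi)   # factor 2: pi0 p and pi+ n channels

print(sigma_pion_mb([0.2, 0.34, 0.5, 1.0]))  # GeV; 0.34 GeV is near the Delta peak
```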
Taking the $p \gamma \rightarrow e^+ e^- p$ differential cross section from Ref. (as done in Ref. ), we use[^16] $$\begin{aligned}
\frac{d \sigma( \mbox{pair})}{dQ d\eta} & = & \Theta(E_\gamma-2 m_e) \times
\frac{4 \alpha^3 }{E_\gamma^2}\frac{1}{Q^2} \left\{ \ln \left(
\frac{1-w}{1+w} \right) \left[ \left(1- \frac{E_\gamma^2}{4 m_e^2
\eta^2}\right) \right. \right. \cr
& &\hspace{-5em} \left. \times \left( 1- \frac{1}{4 \eta^2} +
\frac{1}{2\eta Q} -
\frac{1}{8 Q^2 \eta^2} - \frac{Q}{\eta} + \frac{Q^2}{2 \eta^2} \right)
+ \frac{E_\gamma^2}{8 m_e^2 \eta^4} \right]
\nonumber
\\ & & \hspace{-5em} \left. +
w \left[ \left(1 - \frac{E_\gamma^2}{4 m_e^2 \eta^2} \right) \left(
1- \frac{1}{4 \eta^2} + \frac{1}{2\eta Q} \right) + \frac{1}{\eta^2}
\left( 1 - \frac{ E_\gamma^2}{2 m_e^2 \eta^2} \right) (-2 Q \eta +
Q^2) \right] \right\} ,\end{aligned}$$ where $w =\left[1-1/(2Q \eta - Q^2)\right]^{1/2}$. The recoil momentum is contained in $Q=p_r/2 m_e$, and the photon energy is contained in $\eta=E_\gamma \cos \theta /2 m_e$.
The final ingredient in our energy loss formula is the redshift due to Hubble expansion. We assume a matter-dominated, flat FRW universe with no cosmological constant. Thus, the cosmological scale factor is proportional to $t^{2/3}$. The energy loss for relativistic particles (such as our high energy proton) due to redshift is then given by $$\frac{dE_p(\mbox{redshift})}{dt} = -\frac{2}{3} \frac{E_p}{t} \;. \label{eq:redshift}$$ Furthermore, note that the expansion of the universe causes the temperature to vary with time as $t^{-2/3}$.
Adding Eqs. (\[eq:scatterdepdt\]) and (\[eq:redshift\]), we have the proton energy loss equation $$\frac{dE_p}{dt} = \frac{dE_p(\mbox{scatter})}{dt} + \frac{dE_p(\mbox{redshift})}{dt} \;,$$ whose integration from some initial cosmological time $t_i$ to the present time $t_0$ gives the present energy of the proton that was injected with energy $E_i$ at time $t_i$. Note that we are interested in plotting $E_p(t_0)$ as a function of $t_0-t_i$ with $t_0$ fixed, which is not equivalent to fixing $t_i$ and varying $t_0$ because there is no time translational invariance in a FRW universe. Note also that we need to set the Hubble parameter $h$ (where the Hubble constant is $100 h \km\sec^{-1}\Mpc^{-1}$) in our calculation because the conversion between time and the redshift depends on $h$. To show the degree of sensitivity of our results to $h$ we will calculate the energy loss for $h=0.5$ and $h=0.8$.
Now, suppose the primary cosmic ray is an $S^0$ instead of a proton. The $e^+ e^-$ pair production will be absent (to the level of our approximation) because of the neutrality of the $S^0$. Furthermore, the mass splitting between the $S^0$ and any one of the nearby resonances that can be excited in a $\gamma S^0$ interaction is larger than the proton-$\Delta$ mass splitting, leading to a further increase in the attenuation length of the primary. Perhaps most importantly, the mass of the $S^0$ being about two times that of the proton increases the attenuation length significantly because of two effects. One obvious effect is seen in the expression for $\Delta E$ above, where the fractional energy loss per collision to leading approximation is proportional to $p_r/m_p$ while $p_r$ has a maximum value of about $m_\pi$. Replacement of $m_p\rightarrow m_{S^0}$ obviously leads to a smaller energy loss per collision. The second effect is seen in Eqs. (\[eq:phonum\]) and (\[eq:scatterdepdt\]), where for the bulk of the photon energy integration region, a decrease in $\gamma$ (in the exponent) resulting from an increase in the primary’s mass suppresses the photon number. In fact, it is easy to show that if we treat the cross section as a constant, the pion-photoproduction contribution to the right hand side of the energy loss equation can be roughly approximated as $-\exp(-y/2)\left(1 + y/2 + y^2/8 \right)$, up to an overall prefactor, where $y=m_\pi m_p/(E_p T)$, clearly showing a significant increase in the attenuation length as $m_p$ is replaced by $m_{S^0}$.
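To make the mass scaling explicit, the toy estimate below folds the two effects just described into a single e-folding length against pion photoproduction, taking the per-collision fractional loss to scale as $m_\pi/m$ and the collision rate to carry the $\exp(-y/2)$ suppression with an assumed slowly varying prefactor; the overall normalization is arbitrary, so only the ratio between the proton and the $S^0$ is meaningful, and it is the full calculation (including pair production, redshift, and the different resonance structure) that yields the factors quoted in the conclusions.

```python
# Toy comparison of the pion-photoproduction e-folding length for a proton
# and for an S0, using fractional loss ~ m_pi/m per collision and a collision
# rate suppressed as exp(-y/2) times an assumed slowly varying prefactor.
# The overall normalization is arbitrary: only the ratio is meaningful.
import numpy as np

T, m_pi = 2.4e-4, 0.1396e9          # eV
m_p, m_S0 = 0.938e9, 2.3e9          # eV

def inv_length(E, m):
    """Arbitrary-normalization inverse attenuation length for energy E, mass m."""
    y = m_pi * m / (E * T)
    return (m_pi / m) * np.exp(-y / 2.0) * (1.0 + y / 2.0)

for E in (1.0e20, 3.0e20):
    ratio = inv_length(E, m_p) / inv_length(E, m_S0)
    print("E = %.0e eV:  lambda(S0) / lambda(p) ~ %.0f" % (E, ratio))
```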
The relevant resonances for the $S^0 \gamma$ collisions are spin-1 $R_\Lambda$ and $R_\Sigma$ \[\[gf96\]\] (whose constituents are those of the usual $\Lambda$ and $\Sigma$ baryons, but in a color octet state, coupled to a gluino ). There are two R-baryon flavor octets with $J=1$. Neglecting the mixing between the states, the states with quarks contributing spin $3/2$ have masses of about $385-460 \MeV$ above that of the $S^0$ and the states with quarks contributing spin $1/2$ have masses of about $815-890 \MeV$ above that of the $S^0$. If we require that the photino is a significant dark matter component so $1.3 \leq M_{\r0}/m_{\pho} \leq 1.6$ according to Ref. , and take the mass of $\r0$ to be about $1.6-1.8 \GeV$ as expected, then $m_\pho$ lies in the range $0.9 \sim 1.3\GeV$. If we assume that $S^0$ is minimally stable, we have $m_{S^0} \approx m_p
+ m_\pho$ resulting in $m_{S^0}$ in the range $1.9$ to $2.3\GeV$. The other resonance parameters are fixed at the same values as those for the protons.
In Fig. 1, we show the proton energy and the $S^0$ energy today (with $h=0.5$) if it had been injected at a redshift $z$ (or equivalently from the corresponding distance[^17]) with an energy of $10^{22}\eV$, $10^{21}\eV$, and $10^{20}\eV$. To explore the interesting mass range, we have set the $S^0$ mass to $1.9
\GeV$ in the upper plot while we have set it to $2.3 \GeV$ in the lower plot. For the cosmic rays arriving with $10^{20}\eV$, the distance is increased by more than thirty times, while for those arriving with $10^{19.5}\eV$, the distance is increased by about fifteen times. In Fig. 2, we recalculate the energies with $h=0.8$.
Using the mean energy approximation, we can also calculate the evolved spectrum of primary $S^0$’s observed on Earth given the initial spectrum at the source (where all the particles are injected at one time). With the source at $z=0.54$ (the source distance for 3C 147) and the initial spectrum having a power law behavior of $E^{-2}$, the evolved spectrum is shown in Fig. 3. We see that even though there is significant attenuation of the $S^0$ number at $3 \times 10^{20}\eV$ for most of the cases shown, when the overall cross section (which was originally estimated quite conservatively) is reduced by half, the bump lies very close to the Fly’s Eye event. Moreover, taking the Fly’s Eye event energy to be $2.3 \times 10^{20}\eV$, which is within the $1 \sigma$ error range, we see that the $S^0$ can easily account for the event. For sources such as MCG 8-11-11, the $S^0$ can clearly account for the observed event without upsetting the proton flux at lower energies.
**IV. CONCLUSIONS**
We have considered the suggestion that the very long-lived or stable new hadron called $S^0$, a $uds$-gluino bound state predicted in some supersymmetric models, can account for the primary cosmic ray particles at energies above the GZK cutoff. We noted ways that conventional acceleration mechanisms might result in acceptable fluxes of high energy $S^0$’s. We also found that the $S^0$ can propagate at least fifteen to thirty times farther through the CBR than nucleons do, for the same amount of total energy loss. Thus, if $S^0$’s exist and there exists an acceleration mechanism which can generate an adequate high-energy spectrum, $S^0$’s can serve as messengers of the phenomena which produce them, allowing the Seyfert galaxy MCG 8-11-11 or the quasar 3C 147 to be viable sources for these ultrahigh energy cosmic rays.
Although much of the relevant hadronic physics in the atmospheric shower development will be similar to that for the proton primaries, some subtle signatures of an $S^0$ primary are still expected. Because an $S^0$ is expected to have a cross section on nucleons or nuclei somewhere between $1/10$ and $4/3$ of the $p$-$p$ cross section, the depth of the shower maximum may be a bit larger than that due to the proton. Furthermore, because it is about twice as massive as the proton, it deposits its energy a bit more slowly than a proton, broadening the distribution of the shower. There may be further signatures in the shower development associated with the different branching fractions to mesons, but we leave that numerical study for the future.
A prediction of this scenario which can be investigated after a large number of UHE events have been accumulated is that UHE cosmic-ray primaries point back to their sources. If there are a limited number of sources, multiple UHE events should come from the same direction. Also, the UHE cosmic-ray spectrum from each source should exhibit a distinct energy dependence with a cutoff (larger than the GZK cutoff) at an energy which depends on the distance to the source. The systematics of the spectrum in principle could reveal information about both the masses of supersymmetric particles and the primary spectrum of the source accelerator.
We noted that the mass range for a new hadron which can account for the observed properties of UHE cosmic ray events is limited: it must be at least 2 GeV in order to evade the GZK bound, yet small enough that the atmospheric shower it produces will mimic an ordinary hadronic shower.
**ACKNOWLEDGMENTS**
DJHC thanks Angela Olinto and Ted Quinn for useful discussions about aspects of UHE cosmic rays. We thank D. Goulianos and P. Schlein for discussions about Regge lore and high energy $p-p$ scattering, and A. Riotto for discussions about extra colored particles and alternate candidates for the uhecron. DJHC and EWK were supported by the DOE and NASA under Grant NAG5-2788. GRF was supported by the NSF (NSF-PHY-94-2302).
1. \[measure\] J. Linsley, ; N. Hayashida et al., ; D. J. Bird et al., .
2. \[flyseye\] D. J. Bird et al., .
3. \[bierman\] P. L. Biermann, J. Phys. G: Nucl. Part. Phys. [**23**]{}, 1 (1997).
4. \[halzen\] F. Halzen et al., Astropart. Phys. [**3**]{}, 151 (1995).
5. \[elbert\] J. W. Elbert and P. Sommers, .
6. \[burdman\] G. Burdman et al., hep-ph/9709399.
7. \[gf96\] G. R. Farrar, .
8. \[gf97\] G. R. Farrar, hep-ph/971077.
9. \[farrarprd95\] G. Farrar, .
10. \[afk\] I. F. Albuquerque, G. R. Farrar, and E. W. Kolb, in preparation.
11. \[hs\] C. T. Hill and D. N. Schramm, .
12. \[gqw\] J. Geddes, T. Quinn, and R. Wald, astro-ph/9506009.
13. \[albuquerque\] I. Albuquerque et al., .
14. \[rachen\] J. P. Rachen and P. L. Biermann, Astron. Astrophys. [**272**]{}, 161 (1993).
15. \[onepush\] K. S. Cheng, C. Ho, and M. Ruderman, ; Ref. \[3\].
16. \[kr\] V. A. Kuzmin and V. A. Rubakov, astro-ph/9709178.
17. \[berezinsky\] V. Berezinsky and M. Kachelreiss, hep-ph/9709485.
18. \[plaga\] R. Plaga, .
19. \[jwc\] J. W. Cronin, Nucl. Phys. Proc. Suppl. [**28B**]{}, 234 (1992).
20. \[crossdata\] [*Total Cross-sections for Reactions of High Energy Particles*]{}, ed. H. Schopper (Springer Verlag, New York, 1988).
21. \[jost\] R. Jost, J. M. Luttinger, and M. Slotnick, .
22. \[bucella\] F. Bucella, G. R. Farrar, A. Pugliese, .
23. \[cfk\] D. J. H. Chung, G. R. Farrar, and E. W. Kolb, .
[^1]: Electronic mail: [[email protected]]{}
[^2]: Electronic mail: [[email protected]]{}
[^3]: Electronic mail: [[email protected]]{}
[^4]: We use the term ultra-high energy to mean energies beyond the GZK cutoff (discussed below) which can be taken to be $10^{19.6}$ eV.
[^5]: Ten degrees is taken as the extreme possible deflection angle due to magnetic fields for a proton of this energy.
[^6]: In the infinite momentum frame for the heavy hadron, this is the fractional momentum carried by light partons since they have the same velocity as the heavy parton, but their mass is of order $\Lambda_{QCD}$. It is the momentum of these light partons which is redistributed in a hadronic collision. Of course a hard collision with the heavy quark would produce a large fractional energy loss, but the cross section for such a collision is small: $\approx
\alpha_s^2/E^2$.
[^7]: The decay $R_p \rightarrow S^0 \pi$ was the subject of an experimental search . However the sensitivity was insufficient in the mass and lifetime range of interest ($m(R_p) = 2.1 - 2.5$ GeV, $\tau(R_p) = 2 \cdot 10^{-10} - 2
\cdot 10^{-11} $ sec ) for a signal to have been expected.
[^8]: After our work was completed, Ref. appeared with an estimate of the production of gluino-hadrons from the decay of cosmic necklaces. Note that their pessimism regarding the light gluino scenario is mostly based on arguments which have been rebutted in the literature (see for example Refs. and ).
[^9]: The cross section is proportional to the initial parton density at $x \sim
M_U/\sqrt{s}$ times the parton-level cross section, which scales as $M_U^{-2}$.
[^10]: Let this be the frame in which the CBR has an isotropic distribution.
[^11]: The differential $d \xi$ is $dQ d\eta$ ($Q$ and $\eta$ are defined below) for the $e^+ e^-$ pair production while it is $d \cos \theta$ for the pion-photoproduction.
[^12]: The exact angular integration range is unimportant as long as the range encompasses $\cos \theta = -1$ (where the photon distribution is strongly peaked in the ultrarelativistic limit) since we will be taking the ultrarelativistic limit.
[^13]: For example, one can easily verify that the maximum proton recoil during one pion production is greater than the maximum proton recoil during two pion production.
[^14]: The functional form was chosen to account for the shape of the cross section given in Ref. .
[^15]: The fit is qualitatively good, but only tolerable quantitatively. The fit to the data in the range between $0.212 \GeV$ and $0.4 \GeV$ resulted in a reduced $\chi_{16}^2 \sim 50$ (due to relatively small error bars). This is sufficient for our purposes since our results should depend mainly upon the gross features of the cross section.
[^16]: We ignore that $n$ does not pair produce $e^+ e^-$. However, this has consequences only for energies below about $10^{19.5}$ eV.
[^17]: Marked are the luminosity distances $d_L=H_0^{-1} q_0^{-2}
\left[ z q_0 + (q_0-1) ( \sqrt{2q_0 z +1} -1) \right]$ where the deceleration parameter $q_0$ is 1/2 in our $\Omega_0=1$ universe.
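For orientation, here is a minimal numerical sketch of the luminosity-distance formula quoted in the footnote above; with $q_0=1/2$ it reduces to $d_L=2H_0^{-1}\left[\,1+z-\sqrt{1+z}\,\right]$. The value $h=0.8$ is the one used in the text, while $h=0.5$ is shown only for comparison, and the function name is an illustrative choice of ours.

```python
from math import sqrt

C_KM_S = 2.998e5   # speed of light in km/s

def lum_dist_mpc(z, h, q0=0.5):
    """d_L = H0^{-1} q0^{-2} [ z q0 + (q0 - 1)(sqrt(2 q0 z + 1) - 1) ]  in Mpc,
    with H0 = 100 h km/s/Mpc; q0 = 1/2 corresponds to the Omega_0 = 1 universe."""
    hubble_radius = C_KM_S / (100.0 * h)          # H0^{-1} in Mpc
    return hubble_radius * (z * q0 + (q0 - 1.0) * (sqrt(2.0 * q0 * z + 1.0) - 1.0)) / q0 ** 2

# z = 0.54 is the redshift quoted for 3C 147 in the text.
for h in (0.8, 0.5):
    print(f"h = {h}:  d_L(z = 0.54) = {lum_dist_mpc(0.54, h):.0f} Mpc")
# h = 0.8:  d_L(z = 0.54) = 2241 Mpc
# h = 0.5:  d_L(z = 0.54) = 3586 Mpc
```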
|
[Microlocalization and stationary phase.]{}\
[by Ricardo García López [^1]]{}
[**0. Introduction**]{}
In this paper we make an attempt to define analogues of the local Fourier transformations defined by G. Laumon for $\ell$-adic sheaves in the case of formal differential systems defined over a field $K$ of characteristic zero (cf. [@Lau]). Our main goal is to prove a stationary phase formula expressing the formal germ at infinity of the Fourier transform of a holonomic $K[t]\langle
\partial_t\rangle$-module ${{\mathbb M}}$ in terms of the formal germs defined by ${{\mathbb M}}$ at its singular points. When $K$ is the field $\mathbb C$ of complex numbers and the module ${{\mathbb M}}$ is of exponential type (in Malgrange’s sense, see [@Mal2]), this was done by B. Malgrange (see loc. cit. and [@Mal1]). In this case the only transformation needed is the one given by the microlocalization functor (which corresponds to the transformation labelled $(0,\,\infty')$ by Laumon; this is probably known). The main point of section 1 below is to treat the case of a $K[t]\langle
\partial_t\rangle$-module with arbitrary slopes at infinity. In order to do this, one is forced to introduce another type of microlocalization (corresponding to Laumon’s $(\infty, \infty')$ transformation) to keep track of the contribution coming from the germ at infinity defined by ${{\mathbb M}}$. A transcendental construction of this $(\infty, \infty')$ transformation was explained by B. Malgrange to the author; the construction we give here is algebraic, although we cannot avoid the use of some transcendental arguments in the proof. When the base field $K$ is the field of complex numbers, we establish a $1$-Gevrey variant of the formal stationary phase formula and we use it to give a decomposition theorem for meromorphic connections (the decompositions obtained are much rougher than the decomposition according to formal slopes, but they hold at the $s$-Gevrey level, $s>0$). In section 2 we define an analogue of Laumon’s $(\infty,0')$ local Fourier transform and we use it for the study of the singularity at zero of the Fourier transform of ${{\mathbb M}}$ and to establish a long exact sequence of vanishing cycles. In section 3 we make a modest attempt to transpose part of the above constructions into the $p$-adic setting. We define a ring of $p$-adic microdifferential operators of finite order, we prove a division theorem for them and we show that, in some cases, the corresponding microlocalization functor has the relation one would expect with the $p$-adic Fourier transform (in a sense which is made precise in the introduction to section 3).
We will use the formalism of formal slopes and a few other results from $\mathcal D$-module theory, for which we refer e.g. to [@Sa0], [@Sa]. Our proof of the formal stationary phase formula follows the outline of the one given by C. Sabbah in [@Sa] for modules with regular singularities. I thank C. Sabbah for his corrections and remarks on a previous version of these notes. Part of this work was done while the author was visiting the University of Minnesota; I thank G. Lyubeznik and S. Sperber for their hospitality. I thank W. Messing and G. Christol for their useful remarks.
[**1. Formal stationary phase**]{}
We will use the following notations:
1. Unless otherwise stated, all modules over a non-commutative ring will be left modules. We denote by ${{\mathbb M}}$ a holonomic module over the Weyl algebra ${\mathbb W_t}= K[t]\langle\partial_{t}\rangle$ (that is, we assume that for all elements $m\in {{\mathbb M}}$ there is an operator $P\in{\mathbb W_t}-\{ 0\}$ such that $P\cdot m=0$). The rank of ${{\mathbb M}}$ is defined as $\mbox{rank}({{\mathbb M}}):=\dim_{K (t)}K
(t)\otimes_{K[t]}{{\mathbb M}}$. Let $\overline{K}$ be an algebraic closure of $K$. There is a maximal Zariski open subset $U\subset \mathbb
A^1_{\overline{K}}$ such that the restriction of ${{\mathbb M}}$ to $U$ is of finite type over the ring of regular functions on $U$, by definition the set of singular points of ${{\mathbb M}}$ is $Sing({{\mathbb M}})=\mathbb A^1_{\overline{K}}-U$. We will assume the points of $Sing({{\mathbb M}})$ are in $K$.
2. The Fourier antiinvolution is the morphism of $K$-algebras ${\mathbb W_t}\to\mathbb W_{\eta}$ given by $t\mapsto-\partial_{\eta}\,,\
\partial_t\mapsto\eta$. The Fourier transform of ${{\mathbb M}}$ is defined as the $\mathbb W_{\eta}$-module ${\widehat{\mathbb M}}:=\mathbb
W_{\eta}\otimes_{{\mathbb W_t}}{{\mathbb M}}$, where $\mathbb W_{\eta}$ is regarded as a right ${\mathbb W_t}$-module via the Fourier morphism. If $m\in {{\mathbb M}}$, we put $\widehat{m}=1\otimes m \in {\widehat{\mathbb M}}$ (a small computational sketch of this substitution is given right after the present list).
3. We will set $\mathcal{K}_{\eta^{-1}} :=K[[\eta^{-1}]][\eta]$ and we consider on this field the derivation $\partial_{\eta^{-1}}=-\eta^2\partial_{\eta}$. If $\mathcal V$ is a $\mathcal{K}_{\eta^{-1}}$-vector space, a connection on $\mathcal V$ is a $K$-linear map $\nabla:\mathcal V \to \mathcal V
$ satisfying the Leibniz rule $$\nabla(\alpha\cdot v)=\partial_{\eta^{-1}}(\alpha)\cdot v+ \alpha
\cdot \nabla (v)\ \ \mbox{for all}\ \
\alpha\in\mathcal{K}_{\eta^{-1}}\,,\ v\in\mathcal V.$$
4. We set ${{\mathbb M}}_{\infty}= K[[t^{-1}]]\langle
\partial_t \rangle\otimes _{K[t^{-1}]\langle
\partial_t \rangle}{{\mathbb M}}\,[t^{-1}]$, and for $c\in K$ we put $t_c = t-c$, ${{\mathbb M}}_{\,c} = K[[t_c]]\langle
\partial_{t_c} \rangle \otimes_{{\mathbb W_t}}{{\mathbb M}}$.
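As announced in item 2 above, the Fourier antiinvolution can be made completely explicit on normal-ordered elements of the Weyl algebra. The following sketch is for illustration only; the dictionary encoding of operators and the function names are our own choices, not notation used in this paper.

```python
from math import comb

def ff(n, k):
    """Falling factorial n (n - 1) ... (n - k + 1)."""
    out = 1
    for i in range(k):
        out *= n - i
    return out

def mul(P, Q):
    """Product in a Weyl algebra K[x]<d>; an element is stored normal-ordered as
    {(i, j): c} meaning sum of c * x^i d^j, and reordering uses
    d^j x^k = sum_m C(j, m) (k)_m x^(k-m) d^(j-m)."""
    R = {}
    for (i, j), a in P.items():
        for (k, l), b in Q.items():
            for m in range(min(j, k) + 1):
                key = (i + k - m, j + l - m)
                R[key] = R.get(key, 0) + a * b * comb(j, m) * ff(k, m)
    return {key: c for key, c in R.items() if c != 0}

def sub(P, Q):
    """P - Q, dropping zero coefficients."""
    R = dict(P)
    for key, c in Q.items():
        R[key] = R.get(key, 0) - c
    return {key: c for key, c in R.items() if c != 0}

def fourier(P):
    """Image of P(t, d_t) under t -> -d_eta, d_t -> eta, normal-ordered in (eta, d_eta),
    using (-d_eta)^i eta^j = (-1)^i sum_m C(i, m) (j)_m eta^(j-m) d_eta^(i-m)."""
    R = {}
    for (i, j), c in P.items():
        for m in range(min(i, j) + 1):
            key = (j - m, i - m)
            R[key] = R.get(key, 0) + c * (-1) ** i * comb(i, m) * ff(j, m)
    return {key: c for key, c in R.items() if c != 0}

# The defining relation [d_t, t] = 1 and its image [eta, -d_eta] = 1 agree,
# consistent with the substitution being a morphism of K-algebras.
t, dt = {(1, 0): 1}, {(0, 1): 1}
print(sub(mul(dt, t), mul(t, dt)))                                      # {(0, 0): 1}
print(sub(mul(fourier(dt), fourier(t)), mul(fourier(t), fourier(dt))))  # {(0, 0): 1}

# Example: t*d_t - 3 (a regular singular operator) is sent to -(eta*d_eta + 4).
print(fourier({(1, 1): 1, (0, 0): -3}))   # {(1, 1): -1, (0, 0): -4}
```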
We will consider the following rings:
1. [*The ring ${\mathcal F ^{\,(\,c,\,\infty)}}$ of formal microdifferential operators:*]{}
Let $c\in K$. For $r\in \mathbb Z$, we denote by ${\mathcal F ^{\,(\,c,\,\infty)}}[r]$ the set of formal sums $$\sum_{i\le \,r} a_i(t_c)\,\eta^i \ \ \mbox{where}\ \ a_i(t_c) \in
K[[t_c]]\,,r\in \mathbb Z.$$ We put ${\mathcal F ^{\,(\,c,\,\infty)}}=\cup_r\,{\mathcal F ^{\,(\,c,\,\infty)}}[r]$. For $P,Q \in{\mathcal F ^{\,(\,c,\,\infty)}}$, their product is defined by the formula $$P \cdot Q = \sum_{\alpha \geqslant\, 0}\ \frac{1}{\alpha\, !}\ \
\partial_{\eta}^{\,\alpha}P\cdot
\partial^{\,\alpha}_{t_c} Q \,
\, \in {\mathcal F ^{\,(\,c,\,\infty)}}.$$ (where the product on the right hand side is the usual, commutative product). With this multiplication, ${\mathcal F ^{\,(\,c,\,\infty)}}$ becomes a filtered ring and ${\mathcal F ^{\,(\,c,\,\infty)}}[0]$ is a subring. One has a morphism of $K$-algebras given by $$\begin{aligned}
\varphi^{(\,c,\,\infty\,)}: {\mathbb W_t}&\longrightarrow& {\mathcal F ^{\,(\,c,\,\infty)}}\,,\\
t\ \ &\mapsto&\ t_c+c\\
\partial_t\ \ &\mapsto&\ \eta\end{aligned}$$ which endows ${\mathcal F ^{\,(\,c,\,\infty)}}$ with a structure of $({\mathbb W_t},{\mathbb W_t})$-bimodule.
2. [*The ring ${\mathcal F ^{\,(\,\infty,\,\infty)}}$:*]{}
For $r\in \mathbb Z$, we denote by ${\mathcal F ^{\,(\,\infty,\,\infty)}}[r]$ the set of formal sums $$\sum_{i\le \, r} \ a_i(t^{-1})\, \eta^i \ \mbox{ where } \ \
a_i(t^{-1})\in K[[t^{-1}]]\,,r\in \mathbb Z.$$ We put ${\mathcal F ^{\,(\,\infty,\,\infty)}}=\cup_r\,{\mathcal F ^{\,(\,\infty,\,\infty)}}[r]$. If $P,Q\in {\mathcal F ^{\,(\,\infty,\,\infty)}}$, their product is given by $$P \ast Q = \sum_{\alpha \ge\, 0}\ \frac{1}{\alpha\, !}\ \
\partial^{\,\alpha}_{\eta}P \cdot
\partial^{\,\alpha}_{t}Q$$ Again, ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ is a filtered ring and ${\mathcal F ^{\,(\,\infty,\,\infty)}}[0]$ is a subring. One has a morphism of $K$-algebras $$\begin{aligned}
\varphi^{(\,\infty,\,\infty\,)}: K[t^{-1}]
\langle\partial_{t}\rangle &\longrightarrow& {\mathcal F ^{\,(\,\infty,\,\infty)}}\\
t^{-1}\ \ &\mapsto&\ t^{-1}\\
\partial_{t}\ \ &\mapsto&\ \eta\end{aligned}$$ (notice that on the ring $K[t^{-1}] \langle\partial_{t}\rangle$, one has the relation $[\partial_{t}, t^{-1}]=-t^{-2}$). The morphism $\varphi^{(\,\infty,\,\infty\,)}$ endows ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ with a structure of $(\,K[t^{-1}]\langle\partial_{t}\rangle\,,\,K[t^{-1}]\langle
\partial_{t}\rangle\,)$ - bimodule.
If $P=\sum_{i\in \mathbb Z} a_i(t_c)\,\eta^i\in {\mathcal F ^{\,(\,c,\,\infty)}}$, the order of $P$ is the largest integer $r$ such that $a_r(t_c)\neq 0$, and we define the principal symbol of $P$ as $\sigma(P)=a_{r}(t_c)\,\eta^{r}$. We define similarly the order and the principal symbol of an operator $P\in{\mathcal F ^{\,(\,\infty,\,\infty)}}$. Principal symbols are multiplicative, in the sense that $\sigma(P\cdot
Q)=\sigma(P)\cdot\sigma(Q)$.
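The product formula on total symbols is easy to experiment with. The following sympy sketch is purely illustrative (polynomial symbols, so the sum over $\alpha$ terminates); it checks that $\eta$ and $t_c$ satisfy the Weyl relation, so that $\eta$ indeed plays the role of $\partial_{t_c}$, and that principal symbols multiply as asserted above.

```python
import sympy as sp

t, eta = sp.symbols('t eta')   # we take c = 0, so t_c = t

def star(P, Q, nmax=20):
    """P . Q = sum_a (1/a!) d_eta^a(P) * d_t^a(Q); the sum terminates for
    polynomial total symbols, so nmax is only a safety bound."""
    out = sp.Integer(0)
    for a in range(nmax + 1):
        term = sp.diff(P, eta, a) * sp.diff(Q, t, a) / sp.factorial(a)
        out += term
        if term == 0 and a > 0:
            break
    return sp.expand(out)

# Weyl relation: eta . t - t . eta = 1, i.e. eta acts like d_t.
print(star(eta, t) - star(t, eta))          # 1

# Multiplicativity of principal symbols: the top eta-degree part of P . Q is sigma(P) sigma(Q).
P = t**2 * eta**3 + eta                     # sigma(P) = t^2 eta^3
Q = (1 + t) * eta**2 + t * eta              # sigma(Q) = (1 + t) eta^2
poly = sp.Poly(star(P, Q), eta)
print(poly.degree(), sp.factor(poly.LC()))  # 5  t**2*(t + 1)
```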
We recall next some results which are well-known for the rings ${\mathcal F ^{\,(\,c,\,\infty)}}$. The proofs for ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ follow a similar pattern (using the fact that the graded ring associated to the filtration on ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ is isomorphic to $K[[t^{-1},x]][x^{-1}]$), and therefore they are omitted.
(1.1)[**Division theorem**]{} (cf. e.g. [@Bjo Ch.4, 2.6.]): [*Let $F\in{\mathcal F ^{\,(\,c,\,\infty)}}$ and assume that $\sigma(F)=t_c^m
\,b(t_c)$ where $b(0)\neq 0$. Then, for all $G\in{\mathcal F ^{\,(\,c,\,\infty)}}$ there exist unique $Q\in {\mathcal F ^{\,(\,c,\,\infty)}}$ and $R_0,\dots ,R_{m-1}\in \mathcal
K_{\eta^{-1}}$ such that $$G= Q\cdot F + R_{m-1}\, t_c^{m-1} + \dots + R_0.$$ The same statement holds for the ring ${\mathcal F ^{\,(\,\infty,\,\infty)}}$, replacing $t_c$ by $t^{-1}$.*]{}
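As a toy illustration of the division theorem (an example of our own making, with routine verifications left to the reader), work in ${\mathcal F ^{\,(\,0,\,\infty)}}$ and let $F=t\,\eta-\lambda$ be the image of $t\partial_t-\lambda$, so that the theorem applies with $m=1$. Dividing $G=t^{2}$ one finds $$t^{2}=\bigl(\eta^{-1}t+(\lambda+1)\,\eta^{-2}\bigr)\cdot F\;+\;(\lambda+1)(\lambda+2)\,\eta^{-2}\,,$$ so $R_{0}=(\lambda+1)(\lambda+2)\,\eta^{-2}\in\mathcal K_{\eta^{-1}}$; this is the microlocal counterpart of the elementary relation $t^{\lambda+2}=(\lambda+1)(\lambda+2)\,\partial_t^{-2}\,t^{\lambda}$.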
[**Proposition**]{} (cf. e.g. [@Bjo Ch.4, 2.1 and 2.9]): [*The rings ${\mathcal F ^{\,(\,c,\,\infty)}}$ and ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ are left and right Noetherian.*]{}
[**Proposition:**]{} (cf. e.g. [@Bjo Ch.5, §5]): [*${\mathcal F ^{\,(\,c,\,\infty)}}$ is a flat left and right ${\mathbb W_t}$-module. ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ is a flat left and right $K[t^{-1}]\langle
\partial_t\rangle$-module.*]{}
We will consider the following modules:
i)[*The (ordinary) microlocalization of ${{\mathbb M}}$ at $c\in K$*]{}: It is defined as $${\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}}):= {\mathcal F ^{\,(\,c,\,\infty)}}\otimes_{{\mathbb W_t}} {{\mathbb M}}\,,$$ where ${\mathcal F ^{\,(\,c,\,\infty)}}$ is viewed as a right ${\mathbb W_t}$-module via $\varphi^{(c,\,\infty)}$. It has a structure of $\mathcal
K_{\eta^{-1}}$-vector space with a connection given by left multiplication by
$\eta^2 \cdot (t_c+c)=\eta^2\cdot t$. Notice that $${\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})\cong {\mathcal F ^{\,(\,c,\,\infty)}}\otimes_{K[[t_c]]\langle
\partial_{t_c}\rangle}(K[[t_c]]\langle
\partial_{t_c}\rangle\otimes_{{\mathbb W_t}} {{\mathbb M}})={\mathcal F ^{\,(\,c,\,\infty)}}\otimes_{K[[t_c]]\langle
\partial_{t_c}\rangle} {{\mathbb M}}_c\,,$$ thus ${\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$ depends only on the formal germ ${{\mathbb M}}_c$.
ii)[*The $(\,\infty\,, \infty\,)$-microlocalization of ${{\mathbb M}}$*]{}: It is defined as $${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}}):= {\mathcal F ^{\,(\,\infty,\,\infty)}}\otimes_{K[t^{-1}]\langle \partial_t\rangle}
{{\mathbb M}}\,[t^{-1}]\,,$$ where ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ is viewed as a $K[t^{-1}]\langle
\partial_t\rangle$-module via the morphism $\varphi^{(\,\infty,\,\infty\,)}$. Again, it has a structure of $\mathcal K_{\eta^{-1}}$-vector space with a connection, defined by $$\nabla(\alpha\otimes m):=\partial_{\eta^{-1}}(\alpha) \otimes m
\,+\, \eta^{2}\alpha\otimes t\cdot m\,,$$ and it depends only on ${{\mathbb M}}_{\infty}$, since one has $${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}}):={\mathcal F ^{\,(\,\infty,\,\infty)}}\otimes_{K[[t^{-1}]]\langle\partial_t\rangle}{{\mathbb M}}_{\infty}\,.$$
By flatness of $\varphi^{(\,0,\,\infty\,)}$ and $\varphi^{(\,\infty,\,\infty\,)}$, both microlocalizations define exact functors.
For the proof of the formal stationary phase formula we will need the following result:
[**Proposition 1**]{}: [*Let $Q(t^{-1},\partial_t)=\sum_{v=1}^n b_v(t^{-1})\,\partial^v_t \in
K[t^{-1}]\langle
\partial_t\rangle$ be such that there is at least an index $v\in
\{1,\dots,n\}$ with $b_v(0)\neq 0$. Then there is an isomorphism of $K[t^{-1}]\langle
\partial_t\rangle$-modules $${\mathcal F ^{\,(\,\infty,\,\infty)}}\otimes_{K[t^{-1}]\langle
\partial_t\rangle}\frac{K[t^{-1}]\langle
\partial_t\rangle}{K[t^{-1}]\langle
\partial_t\rangle\cdot Q}\cong {\mathcal F ^{\,(\,\infty,\,\infty)}}\otimes_{K[t^{-1}]\langle
\partial_t\rangle}\frac{K[t, t^{-1}]\langle
\partial_t\rangle}{K[t, t^{-1}]\langle
\partial_t\rangle\cdot Q}\,.$$*]{}
[*Proof:*]{} We show first that the natural map $$\gimel: \frac{K[t^{-1}]\langle
\partial_t\rangle}{K[t^{-1}]\langle
\partial_t\rangle\cdot Q}\longrightarrow\frac{K[t, t^{-1}]\langle
\partial_t\rangle}{K[t, t^{-1}]\langle
\partial_t\rangle\cdot Q}$$ is injective. Assume we have $A(t,t^{-1},\partial_t)\in
K[t,t^{-1}]\langle
\partial_t\rangle$ such that $A(t,t^{-1},\partial_t)\cdot
Q(t^{-1},\partial_t)\in K[t^{-1}]\langle \partial_t\rangle$, we have to show that in fact $A\in K[t^{-1}]\langle
\partial_t\rangle$. Write $A=\sum_u a_u(t,t^{-1})\,\partial_t^u$ with $a_u(t,t^{-1})\in K[t,t^{-1}]$. Let $v_0$ be the largest index with $b_{v_0}(0)\neq 0$ and let $k_0\in\mathbb N$ be the largest exponent of $t$ appearing in the Laurent polynomials $\{a_u\}_u$ (if $a_u\in K[t^{-1}]$ for all $u$, we are done). Let $u_0$ be the largest index such that $a_{u_0}$ contains a monomial $\beta t^{k_0}\neq 0$, $\beta\in K$. Set $j_0=u_0+v_0$. The coefficient of $\partial_t^{j_0}$ in $A\cdot Q$ is $$\sum_{{u,v,\alpha}\atop{j_0=u+v-\alpha}}\frac{1}{\alpha\,!}\,u(u-1)\dots(u-\alpha+1)\,a_u\,
\frac{d^{\alpha}b_v}{d\,t^{\alpha}}$$ The monomial $\beta b_{v_0}(0)t^{k_0}$ appearing in the summand corresponding to $u=u_0, v=v_0, \alpha=0$ cannot be cancelled, because in the other summands either $u>u_0$, and then in $a_u\,
\frac{d^{\alpha}b_v}{d\,t^{\alpha}}$ all powers of $t$ appear with exponent strictly smaller than $k_0$, or else $u\leqslant u_0$, and then $\frac{d^{\alpha}b_v}{d\,t^{\alpha}}\in t^{-1}K[t^{-1}]$, thus the exponents of $t$ in these summands are also strictly smaller than $k_0$. Since $A\cdot Q\in K[t^{-1}]\langle
\partial_t\rangle$, we conclude that $k_0=0$, which proves the injectivity of $\gimel$.
By flatness of ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ over $K[t^{-1}]\langle
\partial_t\rangle$, the map $\mbox{Id}_{{\mathcal F ^{\,(\,\infty,\,\infty)}}}\otimes \gimel$ is injective as well. In order to show that it is a surjection, we prove first the following statement: $$\begin{aligned}
&\mbox{{\it Claim}: For all }\ P\in K[t,t^{-1}]\langle
\partial_t\rangle \
,\mbox{ there exists a polynomial } &\\ &p(x)\in K[x]-\{ 0\}
\mbox{ such that} \ \ \ p(\partial_t)\cdot P \in
K[t,t^{-1}]\langle
\partial_t\rangle\cdot Q + K[t^{-1}]\langle \partial_t\rangle &\end{aligned}$$ [*proof of the claim:*]{} Let us denote by $\Omega\subseteq K[t,t^{-1}]\langle
\partial_t\rangle$ the set of differential operators $P\in K[t,t^{-1}]\langle
\partial_t\rangle$ satisfying the condition of the claim, notice that $\Omega$ is closed under addition. It suffices to show that $t^i\in\Omega$ for all $i\geqslant 1$, because if there is a $p(x)$ with $p(\partial_t)\cdot
t^i=\alpha(t,t^{-1},\partial_t)\cdot Q + \beta(t^{-1},\partial_t)$ then, multiplying on the right by $\partial_t^j$, we obtain that $\partial_t^j\cdot t^i\in\Omega$ for all $i, j\geqslant 0$, and then we are done. We prove $t^i\in \Omega$ by induction on $i\geqslant 1$: By our hypothesis on $Q$ there exists a non-zero polynomial $p(x)\in K[x]$ such that $Q=
p(\partial_t)+\beta(t^{-1},\partial_t)$ with $\beta(t^{-1},\partial_t)\in t^{-1}K[t^{-1}]\langle
\partial_t\rangle$ (we will denote $p\,'=\frac{dp}{dx}$). Now, multiplying this equality on the left by $t$ and using $t\,p(\partial_t)=p(\partial_t)\,t-p\,'(\partial_t)$, case $i=1$ follows. For the induction step, assume we have $$p(\partial_t)\cdot t^i =\alpha(t,t^{-1},\partial_t)\cdot Q +
\beta(t^{-1},\partial_t)$$ We have also $p(\partial_t)\,t^{i+1}=t\,p(\partial_t)t^i+p\,'(\partial_t)\,t^{i}$, and substituting, $$p(\partial_t)\,t^{i+1}=t(\alpha\,Q+\beta)+p\,'(\partial_t)\,t^{i}$$ Now $t\,\beta(t^{-1},\partial_t)\in\Omega$ by the case $i=1$, and $p\,'(\partial_t)\,t^{i}\in\Omega$ by the induction hypothesis (since $p(\partial_t)$ and $p\,'(\partial_t)$ commute), so the claim is proved. Given $$F(t^{-1},\eta)\otimes
P(t,t^{-1},\partial_t)\in{\mathcal F ^{\,(\,\infty,\,\infty)}}\otimes\frac{K[t, t^{-1}]\langle
\partial_t\rangle}{K[t, t^{-1}]\langle
\partial_t\rangle\cdot Q}$$ choose $p(x)\neq 0$ such that $p(\partial_t)\cdot P\in
K[t,t^{-1}]\langle
\partial_t\rangle\cdot Q + K[t^{-1}]\langle \partial_t\rangle$. By the division theorem $p(\eta)$ is invertible in ${\mathcal F ^{\,(\,\infty,\,\infty)}}$, thus we have $F\otimes P = F\cdot p(\eta)^{-1}\otimes
p(\partial_t)\,P\in \mbox{Im}\, [\,\mbox{Id}_{{\mathcal F ^{\,(\,\infty,\,\infty)}}}\otimes
\gimel]$, and then the proposition is proved. $\Box$
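Let us also note that the invertibility of $p(\eta)$ used at the end of the proof can be checked directly (a small verification for the reader's convenience): writing $p(\eta)=\sum_{i=0}^{n}c_i\,\eta^{i}$ with $c_n\neq 0$, all symbols involved are free of $t$, so their products in ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ reduce to ordinary commutative products; setting $u=\sum_{i<n}(c_i/c_n)\,\eta^{i-n}\in{\mathcal F ^{\,(\,\infty,\,\infty)}}[-1]$ one gets the two-sided inverse $$p(\eta)^{-1}=c_n^{-1}\,\eta^{-n}\sum_{k\geqslant 0}(-u)^{k}\ \in\ {\mathcal F ^{\,(\,\infty,\,\infty)}}[-n]\,.$$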
[*Definition*]{}: If $\mathbb N$ is a $\mathbb W_{\eta}
$-module, its formal germ at infinity is the $\mathcal
K_{\eta^{-1}}$-vector space $\mathcal N_{\infty}=\mathcal K
_{\eta^{-1}}\otimes_{K[\eta]} \mathbb N$, which is endowed with the connection defined by $$\nabla (\alpha \otimes n)=\partial_{\eta^{-1}}(\alpha)\otimes n -
\alpha \otimes \eta^2\partial_{\eta}n.$$ The main result of this section is:
[**Theorem**]{} (formal stationary phase): [*Let $K$ be a field of characteristic zero, let ${{\mathbb M}}$ be a holonomic $K[t]\langle
\partial_t \rangle$-module. Then, after a finite extension of the base field $K$, the map $$\Upsilon : {\widehat{\mathcal M}_{\infty}}\longrightarrow\bigoplus_{c\in\,
Sing{{\mathbb M}}\cup\{\infty\}} {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$$ given by $\Upsilon (\alpha\otimes \widehat{m}) = \oplus_c \
\alpha\otimes m$ is an isomorphism of $\mathcal
K_{\eta^{-1}}$-vector spaces with connection.* ]{}
[*Proof:*]{} The connections on the right hand side have been chosen so that the map is a morphism of $\mathcal
K_{\eta^{-1}}$-vector spaces with connection, we have to show it is an isomorphism. We will assume all ${\mathbb W_t}$-modules appearing in what follows have $K$-rational singularities (which can be achieved after a finite extension of $K$). Consider first the Dirac modules $\delta_c= {\mathbb W_t}/{\mathbb W_t}(t-c)$. It is easy to check that we have $$\begin{aligned}
&&{\mathcal F ^{\,(\,\infty,\,\infty)}}(\delta_c)=0\,,\ \mathcal F ^{\,(\,d,\,\infty)}(\delta_c)=0
\mbox{ if }d\neq c\,, \\
&&\mathcal K _{\eta^{-1}}\otimes \widehat{\delta_c}=\mathcal K
_{\eta^{-1}} = {\mathcal F ^{\,(\,c,\,\infty)}}(\delta_c)\,,\end{aligned}$$ so the theorem follows in this case.
For an arbitrary holonomic module $\mathbb M$, there is a differential operator $P\in{\mathbb W_t}$ and a ${\mathbb W_t}$-module with punctual support $\mathbb K$ so that one has an exact sequence $$0 \longrightarrow \mathbb K \longrightarrow {\mathbb W_t}/ {\mathbb W_t}\cdot P
\longrightarrow {{\mathbb M}}\longrightarrow 0\,.$$ A holonomic module with punctual support is a finite direct sum of Dirac’s $\delta$-modules, since both the global and the local Fourier transforms are exact functors, it will suffice to consider the of a quotient of ${\mathbb W_t}$ by an operator. Moreover, given $\mathbb M={\mathbb W_t}/{\mathbb W_t}\cdot P_1$, there is an operator $P_2\in{\mathbb W_t}$ such that one has $$0 \longrightarrow \mathbb K_1 \longrightarrow {\mathbb W_t}/ {\mathbb W_t}\cdot
P_1=\mathbb M \longrightarrow \mathbb M\,[t^{-1}]={\mathbb W_t}/{\mathbb W_t}\cdot
P_2\longrightarrow \mathbb K_2 \longrightarrow 0\,.$$ where $\mathbb K_i$ are supported at zero for $i=1,2$ ([@Sa0 4.2]). Thus, we will assume in what follows that ${{\mathbb M}}= {{\mathbb M}}\,[t^{-1}] = {\mathbb W_t}/{\mathbb W_t}P $, we write $P(t,\,\partial_t)=\sum_{i=0}^d\,
a_i(t)\partial_t^i$ with $a_i(t)\in K[t]$.
[**Step 1:**]{} [*The map $\Upsilon_c: {\widehat{\mathcal M}_{\infty}}\to
{\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$, given by the composition of $\Upsilon$ with the projection onto ${\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$, is surjective for all $c\in\mbox{Sing}({{\mathbb M}})\cup\{\infty\}$:*]{}
Assume first that $c=0$. Then $P(t,\eta)\in{\mathcal F ^{\,(\,0,\,\infty)}}$ has a principal symbol of the form $\sigma(P)=t^{m_0}\,b(t)$ with $b(0)\neq 0$, $m_0\geqslant 0$, and then we have $${\mathcal F ^{\,(\,0,\,\infty)}}({{\mathbb M}})={\mathcal F ^{\,(\,0,\,\infty)}}/{\mathcal F ^{\,(\,0,\,\infty)}}\cdot P.$$ Given $G\in{\mathcal F ^{\,(\,0,\,\infty)}}$, by the division theorem $$G \equiv \sum_{i=0}^{m_0-1} R_i\cdot t^i\ \ \ \ (\mbox{mod}\ {\mathcal F ^{\,(\,0,\,\infty)}}\,P)$$ where $R_i\in \mathcal K_{\eta^{-1}}\ (0\le i\le m_0-1)$. Then, a preimage of the class of $G$ in ${\mathcal F ^{\,(\,0,\,\infty)}}({{\mathbb M}})$ under $\Upsilon_c$ is given by $$\sum_{i=0}^{m_0-1} R_i\otimes \widehat{t^i}\,\in\, \mathcal
K_{\eta^{-1}}\otimes {\widehat{\mathbb M}}\, ,$$ where $\widehat{t^i}$ denotes the element $1\otimes t^i$ of ${\widehat{\mathbb M}}= \mathbb W_{\eta}\otimes {{\mathbb M}}$. The proof for arbitrary $c\in K$ is done in the same way, using the division theorem in ${\mathcal F ^{\,(\,c,\,\infty)}}$ (notice that the hypothesis $\mathbb M = \mathbb M[t^{-1}]$ has not been used up to now).
We consider now the case $c=\infty$. Write $\widehat{d}=\max_{j=1}^d\{\deg a_j(t)\}$, set $Q(t^{-1},\partial_t)= t^{-\widehat{d}}P(t, \partial_t)$. Since $Q$ satisfies the hypothesis of proposition 1 above, we have $$\frac{K[t^{-1}]\langle \partial_t\rangle}{K[t^{-1}]\langle
\partial_t\rangle \cdot Q} \cong \frac{K[t, t^{-1}]\langle \partial_t\rangle}
{K[t,t^{-1}]\langle
\partial_t\rangle \cdot Q}=\mathbb M$$ as $K[t^{-1}]\langle \partial_t\rangle$-modules. Then we have $${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}}) \cong \frac{{\mathcal F ^{\,(\,\infty,\,\infty)}}}{{\mathcal F ^{\,(\,\infty,\,\infty)}}\cdot Q}\,,$$ and notice that the principal symbol of $Q$ is of the form $\sigma
(Q) = t^{\,\deg (a_d(t))-\widehat{d}}\cdot b(t^{-1})$ with $b(0)\neq 0$. Given $G\in{\mathcal F ^{\,(\,\infty,\,\infty)}}$, by the division theorem $$G\equiv \sum_{i} R_i\cdot t^{-i}\ \ \ \ (\mbox{mod}\ {\mathcal F ^{\,(\,\infty,\,\infty)}}\,Q)$$ where $i\in\{0,\dots,\widehat{d}-\deg (a_d(t))-1\}$. Since ${{\mathbb M}}={{\mathbb M}}\,[t^{-1}]$, we have that $\sum_i R_i\,\otimes\,
\widehat{t^{-i}}\in{\widehat{\mathcal M}_{\infty}}$ is a preimage of $G$ under $\Upsilon_{\infty}$.
[**Step 2:**]{} $\dim_{\mathcal K_{\eta^{-1}}}\,{\widehat{\mathcal M}_{\infty}}=
\dim_{\mathcal K_{\eta^{-1}}}\,\left[\left(\bigoplus_{c\in Sing\,
{{\mathbb M}}}{\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})\right)\oplus\ {\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}}_{\,\infty})\right]$.\
Put $$a_d(t)=\prod_{c\in Sing\, {{\mathbb M}}}(t-c)^{m_c}, \ \ \deg(a_d(t))=\sum\,
m_c.$$ It follows from the existence and uniqueness of division for ${\mathcal F ^{\,(\,c,\,\infty)}}$ and ${\mathcal F ^{\,(\,\infty,\,\infty)}}$ that we have $\dim_{\mathcal K
_{\eta^{-1}}}{\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})=m_c$, $ \dim_{\mathcal K
_{\eta^{-1}}}{\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})=\widehat{d}-\deg (a_d(t)). $ Since $\widehat{d}=\mbox{rank}({\widehat{\mathbb M}})=\dim_{\mathcal
K_{\eta^{-1}}}\,{\widehat{\mathcal M}_{\infty}}$, the claimed equality follows.
[**Step 3:**]{}
- a\) All slopes of ${\mathcal F ^{\,(\,0,\,\infty)}}({{\mathbb M}})$ are strictly smaller than $+1$.
- b\) For $c\in K -\{ 0\}$, all slopes of ${\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$ are equal to $+1$.
- c\) All slopes of ${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$ are strictly greater than $+1$.
We assume first that the base field $K$ is the field $\mathbb C$ of complex numbers. Then, there is a holonomic $\mathbb
C[t]\langle \partial_t\rangle$-module $\mathbb N$ which is only singular at $0$ and $\infty$, the singularity at infinity is regular and for the singularity at zero we have $\mathbb C
[[t]]\langle
\partial_t\rangle \otimes_{{\mathbb W_t}}\mathbb N \cong \mathbb C[[t]]\langle
\partial_t\rangle \otimes_{{\mathbb W_t}}{{\mathbb M}}$ (this is the transcendental step in the proof, see [@Mal1], it follows that ${\mathcal F ^{\,(\,0,\,\infty)}}({{\mathbb M}})\cong{\mathcal F ^{\,(\,0,\,\infty)}}(\mathbb N)$). Let $ \mathbb
L\twoheadrightarrow\mathbb N $ be a surjection where $\mathbb L$ is the quotient of $\mathbb C[t]\langle
\partial_t\rangle$ by the left ideal generated by a single differential operator. Then we have
$$\begin{array}{ccc}
\mathcal K_{\eta^{-1}}\otimes_{\mathbb C[\eta]}\widehat{\mathbb L} & \longrightarrow & \mathcal K_{\eta^{-1}}\otimes_{\mathbb C[\eta]}\widehat{\mathbb N}\\
\downarrow{\scriptstyle\,\Upsilon_0^{\mathbb L}} & & \downarrow{\scriptstyle\,\Upsilon_0^{\mathbb N}}\\
{\mathcal F ^{\,(\,0,\,\infty)}}(\mathbb L) & \longrightarrow & {\mathcal F ^{\,(\,0,\,\infty)}}(\mathbb N)
\end{array}$$
where we denote $\Upsilon_0^{\mathbb L}$ (respectively, $\Upsilon_0^{\mathbb N}$) the map defined in step 1 for the module $\mathbb L$ (resp., for $\mathbb N$) and the horizontal arrows are surjective. By step 1, the map $\Upsilon_0^{\mathbb L}$ is onto, and then so is $\Upsilon_0^{\mathbb N}$. But the behavior of formal slopes under Fourier transform is well known, in particular the slopes of $\mathcal K _{\eta^{-1}}\otimes_{\mathbb
C[\eta]}\widehat{\mathbb N}$ are strictly smaller than $+1$, and a) follows.
For b), take now a $\mathbb C[t]\langle\partial_t\rangle$-module $\mathbb N^c$ which is only singular at $c\in\mathbb C-\{0\}$ and at infinity, such that one has an isomorphism of formal germs $\mathbb C [[t_c]]\langle
\partial_{t_c}\rangle\otimes\mathbb N^c\cong \mathbb C
[[t_c]]\langle \partial_{t_c}\rangle\otimes\mathbb M$, and the singularity at infinity of $\mathbb N^c$ is regular. Then (as in a) above) the corresponding map $\Upsilon^{\mathbb N^c}_c$ is onto, and since $\mathcal K _{\eta^{-1}}\otimes_{\mathbb
C[\eta]}\widehat{\mathbb N^c}$ has only slope $+1$ at infinity, b) follows.
For the proof of c) consider first a $\mathbb
C[t]\langle\partial_t\rangle$-module $\mathbb N^0$ with a regular singularity at zero, no other singularity and $\mathbb
N^{0}_{\infty}\cong {{\mathbb M}}_{\infty}$ (then ${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})\cong
{\mathcal F ^{\,(\,\infty,\,\infty)}}(\mathbb N^0)$). Let $\mathbb L^0\twoheadrightarrow\mathbb
N^0$ be a surjection where $\mathbb L^0$ is the quotient of $\mathbb C[t]\langle \partial_t\rangle$ by a single differential operator. Then $\mathbb L^0[t^{-1}]$ is given by a single differential operator as well and, as in case a) above, the map $$\Upsilon^{\mathbb N^0[t^{-1}]}_{\infty}:\,\mathcal K
_{\eta^{-1}}\otimes_{\mathbb C[\eta]}\widehat{\mathbb
N^0[t^{-1}]}\longrightarrow{\mathcal F ^{\,(\,\infty,\,\infty)}}(\mathbb
N^0[t^{-1}])\cong{\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$$ is onto (the last isomorphism holds because ${{\mathbb M}}_{\infty}\cong\mathbb N^0_{\infty}=\mathbb N^0[t^{-1}]_{\infty}$).
Since the slopes of $\mathcal K _{\eta^{-1}}\otimes_{\mathbb
C[\eta]}\widehat{\mathbb N^0[t^{-1}]}$ are either zero or strictly greater than $+1$, the same holds for ${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$.
Let now $\mathbb N^1$ denote the pull back of $\mathbb N^0$ by the translation $t\mapsto t+1$. In the same way we get a surjection $$\Upsilon^{\mathbb N^1[t^{-1}]}_{\infty}:\,\mathcal K
_{\eta^{-1}}\otimes_{\mathbb C[\eta]}\widehat{\mathbb
N^1[t^{-1}]}\longrightarrow{\mathcal F ^{\,(\,\infty,\,\infty)}}(\mathbb
N^1[t^{-1}])\cong{\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$$ and the slopes of $K _{\eta^{-1}}\otimes_{\mathbb
C[\eta]}\widehat{\mathbb N^1}$ are greater or equal than $+1$. Thus all slopes of ${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$ must be strictly greater than $+1$, and then we are done when the base field is $\mathbb C$.
In the general case, there is a subfield $K_1\subset K$ of finite transcendence degree over $\mathbb Q$ such that the module ${{\mathbb M}}$ is defined over $K_1$ and all its singular points are $K_1$-rational. Choosing an embedding of fields $K_1\hookrightarrow\mathbb C$, the statement follows from the complex case treated above.
If $\mathcal N$ is a $\mathcal K_{\eta^{-1}}$-vector space with connection and $q\in\mathbb Q$, we denote by $\mathcal N_q$ its subspace of formal slope $q$ (see e.g. [@Sa0 5.3.1]), and we denote by $\mathcal N_{<q}$ its subspace of slopes strictly smaller than $q$. If $\varphi:\mathcal L \to \mathcal N$ is a morphism, we denote by $\varphi_{<q}: \mathcal L _{<q}\to \mathcal N_{<q}$ the induced morphism (and similarly, we let $\varphi_{>q}$ denote the restriction of $\varphi$ to the subspaces of slopes strictly bigger than $q$). For $c\in\mathbb C$, let $\mathcal E^c$ denote the one-dimensional $\mathcal K_{\eta^{-1}}$-vector space with connection given by $ \nabla (1)=c\cdot\eta^2$. It is easy to see that, if $c\neq0$, then $\mathcal E^c$ has slope $+1$. Let $\tau_c:K[t]\to K[t]$ be the translation given by $t\mapsto t+c$. One has an isomorphism of $\mathcal K_{\eta^{-1}}$-vector spaces with connection $\mathcal E^c \otimes {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}}) \cong {\mathcal F ^{\,(\,0,\,\infty)}}(\tau_c^{\ast}{{\mathbb M}})$ (from this isomorphism one can get an alternative proof of b) above). Notice also that $\mathcal E^c
\otimes\mathcal E^{-c}\cong (\mathcal
K_{\eta^{-1}},\partial_{\eta^{-1}})$.
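To make $\mathcal E^c$ slightly more concrete (a small side computation): its horizontal sections are formally spanned by $e^{c\eta}$, since $$\partial_{\eta^{-1}}f=-c\,\eta^{2}f\ \Longleftrightarrow\ \partial_{\eta}f=c\,f\ \Longleftrightarrow\ f=e^{c\eta},$$ so for $c\neq 0$ the space $\mathcal E^c$ is the formal germ at $\eta=\infty$ of an exponential of pure slope $+1$; heuristically, the twist by $\mathcal E^c$ in the isomorphism above is the exponential factor picked up by the Fourier transform under the translation $\tau_c$ (up to sign conventions).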
[**Step 4: **]{}[*$\Upsilon$ is an isomorphism.*]{} We remark first that if $\mathcal M$ is a $\mathcal K_{\eta^{-1}}$-vector space with connection such that all its slopes are strictly smaller than $+1$, then for $c\neq 0$ the twisted vector space with connection $\mathcal E^c\otimes_{\mathcal K_{\eta^{-1}}}\mathcal
M$ has only slope $+1$ (this can be easily seen using the structure theorem of formal meromorphic connections, [@Sa0 Theorem 5.4.7]).
Set ${\widehat{\mathcal M}_{\infty}}^c:= \mathcal E^{-c} \otimes (\mathcal E^c \otimes
{\widehat{\mathcal M}_{\infty}})_{<1}\subset {\widehat{\mathcal M}_{\infty}}$. If $c,d\in Sing ({{\mathbb M}})$ are distinct, then the map $$\mbox{ id } \otimes {\Upsilon_c}_{\,\mid {\widehat{\mathcal M}_{\infty}}^d}: \mathcal E^c
\otimes {\widehat{\mathcal M}_{\infty}}^d \longrightarrow \mathcal E^c \otimes {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})
\cong {\mathcal F ^{\,(\,0,\,\infty)}}(\tau_c^{\ast}{{\mathbb M}})$$ is the zero map, because its source is purely of slope $+1$ and the target has slopes strictly smaller than $+1$. Tensoring with $\mathcal E^{-c}$, it follows that ${\Upsilon_c}_{\,\mid {\widehat{\mathcal M}_{\infty}}^d}:
{\widehat{\mathcal M}_{\infty}}^d\to{\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}}_{\,c})$ is the zero map as well.
Since by Step 1 $\Upsilon_c$ is onto, so is the map $\mbox{ id
}\otimes {\Upsilon_c}:\mathcal E^c \otimes {\widehat{\mathcal M}_{\infty}}\to\mathcal E^c
\otimes {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$. Since $\mathcal E^c \otimes {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})\cong
{\mathcal F ^{\,(\,0,\,\infty)}}(\tau_c^{\ast}{{\mathbb M}})$ has slopes strictly smaller than $+1$, the restriction $$(\mbox{ id }\otimes {\Upsilon_c})_{<1}: (\mathcal E^c \otimes
{\widehat{\mathcal M}_{\infty}})_{<1}\longrightarrow \mathcal E^c \otimes {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}})$$ is onto as well; thus, tensoring with $\mathcal E^{-c}$, it follows that ${\Upsilon_c}_{\,\mid {\widehat{\mathcal M}_{\infty}}^c}$ is onto.
Let $\Upsilon_{\infty}$ denote the composition of $\Upsilon$ with the projection onto ${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$. The restriction $\Upsilon_{\infty,\,>1}: \widehat{\mathcal
M}_{\infty,\,>1}\to{\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})$ is onto while for $c\in K$, the maps $\Upsilon_{c,\,>1}$ are zero. Notice also that if $c\in
Sing({{\mathbb M}})$, then $${\widehat{\mathcal M}_{\infty}}^c \cap (\oplus_{d\neq c}{\widehat{\mathcal M}_{\infty}}^d\oplus \widehat{\mathcal
M}_{\infty,\,>1}) =\{ 0\},$$ because tensoring both ${\widehat{\mathcal M}_{\infty}}^c$ and $\oplus_{d\neq c}{\widehat{\mathcal M}_{\infty}}^d$ with $\mathcal E^c$, one obtains two subspaces of $\mathcal E^c
\otimes {\widehat{\mathcal M}_{\infty}}$ with different slopes. Also, $\widehat{\mathcal
M}_{\infty,\,>1}\cap (\oplus_c {\widehat{\mathcal M}_{\infty}}^c)=\{0\}$. Thus the map $$\!\!\!(\oplus_c \Upsilon_c)\, \oplus\,
\Upsilon_{\infty,\,>1}:(\oplus_{c\,}
{\widehat{\mathcal M}_{\infty}}^c)\,\oplus\,\widehat{\mathcal M}_{\infty,\,>1}\rightarrow \
(\bigoplus_{c\in\, Sing{{\mathbb M}}} {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}}_{\,c}))\oplus\
{\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}}_{\,\infty})$$ is an epimorphism, and then $$\dim_{\mathcal K_{\eta^{-1}}}\bigl[(\oplus_{c\,}{\widehat{\mathcal M}_{\infty}}^c)\,\oplus\,\widehat{\mathcal M}_{\infty,\,>1}\bigr]\ \geqslant\ \dim_{\mathcal K_{\eta^{-1}}}\bigl[(\bigoplus_{c\in\, Sing{{\mathbb M}}} {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}}_{\,c}))\oplus\ {\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}}_{\,\infty})\bigr]=\dim_{\mathcal K_{\eta^{-1}}}\,{\widehat{\mathcal M}_{\infty}},$$ the last equality by Step 2. It follows that ${\widehat{\mathcal M}_{\infty}}=(\oplus_c
{\widehat{\mathcal M}_{\infty}}^c)\oplus \widehat{\mathcal M}_{\infty,>1}$, $\Upsilon =
(\oplus_c \Upsilon_c) \oplus \Upsilon_{>1}$ and $\Upsilon$ is an isomorphism, as was to be proved. $\Box$
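Before passing to the convergent setting, let us record a rank one illustration of the theorem (a toy example; the verifications are immediate). Take ${{\mathbb M}}={\mathbb W_t}/{\mathbb W_t}(t\partial_t-\lambda)$ with $\lambda\in K-\mathbb Z$, so that $Sing({{\mathbb M}})=\{0\}$ and, in the notation of Step 2, $a_1(t)=t$, $m_0=1$ and $\widehat{d}=1$. The Fourier antiinvolution sends $t\partial_t-\lambda$ to $-(\eta\partial_{\eta}+1+\lambda)$, hence ${\widehat{\mathbb M}}\cong\mathbb W_{\eta}/\mathbb W_{\eta}(\eta\partial_{\eta}+\lambda+1)$ and ${\widehat{\mathcal M}_{\infty}}$ is one dimensional with slope $0$. On the other side, $\dim_{\mathcal K_{\eta^{-1}}}{\mathcal F ^{\,(\,0,\,\infty)}}({{\mathbb M}})=m_0=1$ while ${\mathcal F ^{\,(\,\infty,\,\infty)}}({{\mathbb M}})=0$ because $\widehat{d}-\deg a_1=0$, so $\Upsilon$ reduces to an isomorphism ${\widehat{\mathcal M}_{\infty}}\cong{\mathcal F ^{\,(\,0,\,\infty)}}({{\mathbb M}})$, in agreement with Step 3 a).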
For the rest of this section we will assume that the base field $K$ is the field $\mathbb C$ of complex numbers. In this case, one can define subrings of the rings of formal microdifferential operators ${\mathcal F ^{\,(\,c,\,\infty)}}$ ($c\in\mathbb C \cup\{\infty\}$) by imposing suitable convergence conditions; this is well known for $c\in\mathbb C$ (see for example [@Pham] or [@Bjo]), and we will show that an analogous definition can be given for $c=\infty$.
[*Definition:*]{} We denote by $\mathcal E$ the set of formal series $$\sum_{i\le \,r} a_i(z)\,\eta^i\,, \qquad r\in \mathbb Z\,,\ \ a_i(z)\in\mathbb C[[z]],$$ such that
- There exists a $\rho_0 >0$ such that all series $a_i(z)$ are convergent in the disk $\mid z \mid <\rho_0$.
- There exists a $0<\rho<\rho_0$ and a $\theta>0$ such that the series $$\sum_{k\geqslant 0}\| a_{\,-k}(z)\|_{\rho} \frac{\theta^k}{k\,!}$$ is convergent, where $\| a_{-k}(z)\|_{\rho}=\sup_{\mid
z\mid\leqslant\rho}\mid a_{-k}(z)\mid$.
Given $c\in \mathbb C$, we denote ${\mathcal E^{\,(c, \,\infty)}}$ the image of $\mathcal E$ by the map $$\begin{aligned}
\mathcal E &\longrightarrow & {\mathcal F ^{\,(\,c,\,\infty)}}\\
\sum_{i\le \,r} a_i(z)\,\eta^i &\longrightarrow &\sum_{i\le \,r}
a_i(t_c)\,\eta^i\end{aligned}$$ and similarly, we denote ${\mathcal E^{\,(\infty, \,\infty)}}$ the image of $\mathcal E$ by the map $$\begin{aligned}
\mathcal E &\longrightarrow & {\mathcal F ^{\,(\,\infty,\,\infty)}}\\
\sum_{i\le \,r} a_i(z)\,\eta^i &\longrightarrow &\sum_{i\le \,r}
a_i(t^{-1})\,\eta^i\end{aligned}$$
One can prove that ${\mathcal E^{\,(c, \,\infty)}}$ is in fact a subring of ${\mathcal F ^{\,(\,c,\,\infty)}}$ and that the division theorem (1.1) holds also for the rings ${\mathcal E^{\,(c, \,\infty)}}$ (see loc. cit.). For $c=\infty$ these facts can be proved in a similar way, although some modifications are needed. To illustrate them, we prove in detail that ${\mathcal E^{\,(\infty, \,\infty)}}$ is a ring using a slight variation of the seminorms of Boutet de Monvel-Kree ([@Bjo Chap. 4, §3]):
If $(t, \eta)$ are coordinates in $\mathbb C^2$ and $\delta>0$, we denote $\Delta_{\,\delta}\subset \mathbb C^2$ the open subset defined by the inequalities $\mid \eta\mid < \delta,\ \mid t \mid
>\delta^{-1}$. Clearly, if $a(z)\in \mathbb C[[z]]$ is convergent in some disk centered at $z=0$, there exists a $\delta>0$ such that $
\sup_{\mid t\mid > \delta^{-1}}\mid a(t^{-1})\mid < \infty $, which allows to consider the following seminorms: If $F=\sum_{k\geqslant 0}a_{m-k}(t^{-1})\eta^{m-k}\in {\mathcal E^{\,(\infty, \,\infty)}}$, put $$N_m(F;\delta;x):=\sum_{k,\alpha,\beta\geqslant 0}\frac{2^{-k+1}\
k\,!}{(k+\alpha)!\ (k+\beta)!}\ \|
\partial^{\alpha}_{t}\,\partial^{\beta}_{\eta}\,a_{m-k}(t^{-1})\eta^{m-k}\|_{\Delta_{\delta}}
\ x^{2k+\alpha+\beta}$$ where $\|
\partial^{\alpha}_{t}\,\partial^{\beta}_{\eta}\,a_{m-k}(t^{-1})\eta^{m-k}\|_{\Delta_{\delta}}:=
\sup_{(t,\eta)\in\Delta_{\delta}}\{\mid
\partial^{\alpha}_{t}\,\partial^{\beta}_{\eta}\,a_{m-k}(t^{-1})\eta^{m-k}\mid\}$
[**Proposition:**]{}

i\) If $F\in{\mathcal E^{\,(\infty, \,\infty)}}$, then there exist $\delta,\, x>0$ such that $N_m(F;\delta;x)$ is convergent.
ii\) Let $a_{m-k}(z)\in \mathbb C[[t^{-1}]]$ ($k\geqslant 0$) such that all $a_{m-k}(z)$ are convergent for $\mid z\mid<\rho$, put $F=\sum_{k\geqslant 0}a_{m-k}(t^{-1})\eta^{m-k}$. If there exist a $\delta,\, x>0$ such that $N_m(F;\delta;x)$ is convergent, then $F\in{\mathcal E^{\,(\infty, \,\infty)}}$.
iii\) If $F,G\in{\mathcal E^{\,(\infty, \,\infty)}}$, then there is a $\delta>0$ such that $N_{ord(FG)}(F\cdot G; \delta; x)\leqslant N_{ord(F)}(F; \delta;
x)\cdot N_{ord(G)}(G; \delta; x)$, and thus $F\cdot G\in{\mathcal E^{\,(\infty, \,\infty)}}$.
[*Proof* ]{} (cf. [@Bjo Ch.4,§3]): i) If $F\in{\mathcal E^{\,(\infty, \,\infty)}}$, then there exist constants $A,C_1,\delta>0$ such that $$\|a_{m-k}(t^{-1})\eta^{m-k}\|_{\Delta_{2\delta}}\leqslant A\cdot
k\,!\cdot C_1^k \ \ \mbox{for all } k\geqslant 0$$ From Cauchy’s inequalities we get $$\|\partial_t^{\alpha}\,\partial_{\eta}^{\beta}\,a_{m-k}(t^{-1})
\eta^{m-k}\|_{\Delta_{\delta}}\leqslant
\alpha\,!\cdot\beta\,!\cdot \delta^{-\alpha-\beta}\cdot 2^{\alpha}
\cdot\|a_{m-k}(t^{-1})\eta^{m-k}\|_{\Delta_{2\delta}}$$ (notice that the factor $2^{\alpha}$ would not appear if $\Delta_{\delta}$ were a polydisk, as is the case for usual microdifferential operators; however, this factor will be harmless). Since $\alpha ! \, k!\leqslant (\alpha + k)!$ and $\beta ! \, k!\leqslant (\beta + k)!$, putting $C_2=\max\{\sqrt{C_1}, \delta/2\}$, we get from the first estimate above that $N_m(F;\delta;x)\leqslant A\,
\sum 2^{-k+1}(C_2 \,x)^{2k+\alpha+\beta}$, and this series is convergent if $x<C^{-1}_2$.
Items ii) and iii) are proved exactly as for the usual microdifferential operators, see loc. cit. $\Box$
The proof of the division theorem for the usual ring of microdifferential operators (i.e., the ring ${\mathcal E^{\,(\,0,\,\infty)}}$), as given in [@Bjo], relies on a series of combinatorial identities and on Cauchy’s inequalities for analytic functions defined in polydisks. As shown above, these arguments can be modified by replacing these polydisks by open sets of type $\Delta_{\delta}$, and one obtains that the division theorem (1.1) holds also for the ring ${\mathcal E^{\,(\infty, \,\infty)}}$.
[*Definition ([@Ra]):*]{} For $s\in\mathbb R^+$, we denote by ${\mathcal K^s_x}$ the field $\mathbb C\{x\}_s[x^{-1}]$ of $s$-Gevrey series in the variable $x$; this is the ring of series $\sum_{i\geqslant 0} a_i\, x^{i}$, $a_i\in \mathbb C$, such that $\sum_{i\geqslant 0} (a_i/ (i!)^{s})\, x^{i}$ has non-zero convergence radius (a crude numerical illustration of this growth condition is sketched after the list below). In particular, $\mathcal K^0_x$ will denote the field $\mathbb C\{ x\}[x^{-1}]$ of germs of meromorphic functions. For later use, we briefly recall the behavior of Gevrey rings and vector spaces with connection under ramification (cf. [@Mal1 (1.3)]): Let $q$ be a positive integer, $s\in\mathbb R^+$ and set $\sigma=s/q$. The assignment $y\mapsto z^q$ defines a morphism of fields $\pi:\mathcal K_y^s \longrightarrow\mathcal K_z^{\sigma}$. Then:
- If $\mathcal V$ is a $\mathcal K_y^s$-vector space with connection $\nabla_y$, we put $\pi^{\ast}(\mathcal V):=\mathcal
K_z^{\sigma}\otimes_{\mathcal K_y^s}\mathcal V$, endowed with the connection $\nabla$ defined by $$z\nabla (\varphi \otimes v)=q\, (\varphi \otimes
y\nabla_y(v))+(z\frac{d\varphi}{dz}\otimes v).$$ If the slopes of $\mathcal V$ are $\lambda_1,\dots,\lambda_r$, those of $\pi^{\ast}(\mathcal V)$ are $q\,\lambda_1,\dots,q\,\lambda_r$.
- If $\mathcal V$ is a $\mathcal K_z^{\sigma}$-vector space with connection $\nabla_z$, we denote $\pi_{\ast}(\mathcal V)$ the set $\mathcal V$ regarded as a vector space over $\mathcal K_y^s$ by restriction of scalars and endowed with the connection $\nabla:=
\frac{1}{q\,z^{q-1}}\nabla_z$. If the slopes of $\mathcal V$ are $\lambda_1,\dots,\lambda_r$, those of $\pi_{\ast}(\mathcal V)$ are $\lambda_1/q,\dots,\lambda_r/q$ (each one repeated $q$ times).
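For instance, if $\mathcal V=\mathcal K_y^s\,e^{1/y}$ is the rank one space determined by $y\nabla_y=-y^{-1}$, which has slope $1$, then the formula above gives $z\nabla=-q\,z^{-q}$ on $\pi^{\ast}(\mathcal V)$, i.e. the connection of $e^{1/z^{q}}$, of slope $q$, as predicted.

The growth condition defining $\mathcal K^s_x$ can also be probed numerically. The following sketch is purely illustrative (the function names are our own choices); it applies a crude root test to the classical divergent series $\sum_{i\geqslant 0} i!\,x^{i}$, which is $s$-Gevrey exactly for $s\geqslant 1$.

```python
from math import lgamma, exp

def gevrey_radius_estimate(log_abs_a, s, n=400):
    """Single-point root-test estimate of the radius of convergence of
    sum_i (a_i / (i!)**s) x**i, i.e. |a_n / (n!)**s|**(-1/n), computed with
    log|a_i| so that large factorials do not overflow."""
    return exp(-(log_abs_a(n) - s * lgamma(n + 1)) / n)

log_euler = lambda i: lgamma(i + 1)    # log|a_i| = log(i!)  for the series sum_i i! x^i

for s in (1.0, 0.5, 2.0):
    print(s, gevrey_radius_estimate(log_euler, s))
# s = 1.0 : 1.0    -> radius 1 after dividing by i!, so the series is 1-Gevrey
# s = 0.5 : ~0.08  -> the estimate tends to 0 as n grows: not (1/2)-Gevrey
# s = 2.0 : ~147   -> the estimate grows with n: the rescaled series is entire
```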
[*Definition:*]{} If $\mathbb N$ is a $\mathbb
W_{\eta}$-module, the $s$-Gevrey germ at infinity defined by $\mathbb N$ is the ${\mathcal K^s_x}$-vector space ${\mathcal K^s_x}\otimes_{\mathbb
C[\eta]}\mathbb N$, endowed with the same connection as in the formal case (that is, $ \nabla (\alpha \otimes
n)=\partial_{\eta^{-1}}(\alpha)\otimes n - \alpha \otimes
\eta^2\partial_{\eta}n$).
As in the formal case, given a holonomic ${\mathbb W_t}$-module ${{\mathbb M}}$ and $c\in\mathbb C$, its microlocalization ${\mathcal E^{\,(c, \,\infty)}}({{\mathbb M}}):={\mathcal E^{\,(c, \,\infty)}}\otimes_{{\mathbb W_t}}{{\mathbb M}}$ is a ${\mathcal K^1_{\eta^{-1}}}$-vector space endowed with the connection given by left multiplication by $\eta^2\,t$ and one defines similarly ${\mathcal E^{\,(\infty, \,\infty)}}({{\mathbb M}})$. We have
[**Theorem**]{} ($1$-Gevrey stationary phase): [*Let ${{\mathbb M}}$ be a holonomic ${\mathbb W_t}$-module. Then the map $$\Upsilon^{Gev} : {\mathcal K^1_{\eta^{-1}}}\otimes_{\mathbb C[\eta]}{\widehat{\mathbb M}}\longrightarrow\bigoplus_{c\in\, Sing{{\mathbb M}}\,\cup\, \{\infty\}} {\mathcal E^{\,(c, \,\infty)}}({{\mathbb M}})$$ given by $\Upsilon^{Gev} (\alpha\otimes \widehat{m}) = \oplus_c \
\alpha\otimes m$ is an isomorphism of ${\mathcal K^1_{\eta^{-1}}}$-vector spaces with connection.* ]{}
[*Proof:*]{} Again as in the formal case, the map $\Upsilon^{Gev}$ is a morphism of ${\mathcal K^1_{\eta^{-1}}}$-vector spaces with connection, and we have to prove that it is an isomorphism. The case of a module with punctual support is easy and left to the reader, so we assume that ${{\mathbb M}}={\mathbb W_t}/{\mathbb W_t}\cdot P$. Then, it follows from the division theorem for the rings ${\mathcal E^{\,(c, \,\infty)}}$ ($c\in
Sing({{\mathbb M}})\cup\{\infty\}$), that the dimension of the source and the target of $\Upsilon^{Gev}$ are equal. So, it will be enough to prove that the map $$id_{\mathbb C [[\eta^{-1}]][\eta]}\otimes
\Upsilon^{Gev}:{\widehat{\mathcal M}_{\infty}}\longrightarrow \mathbb C
[[\eta^{-1}]][\eta]\otimes_{{\mathcal K^1_{\eta^{-1}}}}(\oplus_c\,{\mathcal E^{\,(c, \,\infty)}}({{\mathbb M}}))$$ is injective. But we have a commutative diagram of $\mathbb
C[[\eta^{-1}]][\eta]$-vector spaces
C[[\eta^{-1}]][\eta]$-vector spaces $$\begin{array}{ccc}
{\widehat{\mathcal M}_{\infty}} & \stackrel{\ id\otimes\Upsilon^{Gev}\ }{\longrightarrow} & \mathbb C[[\eta^{-1}]][\eta]\otimes_{{\mathcal K^1_{\eta^{-1}}}}(\oplus_c{\mathcal E^{\,(c, \,\infty)}}({{\mathbb M}}))\\
{\scriptstyle \Upsilon}\,\searrow & & \swarrow\,{\scriptstyle \mathfrak m}\\
& \oplus_c {\mathcal F ^{\,(\,c,\,\infty)}}({{\mathbb M}}) &
\end{array}$$
where $\mathfrak m$ is given by $\mathfrak m (\varphi
\otimes (\oplus_c \,\xi_c))=\oplus_c \,\varphi \cdot \xi_c$. Since $\Upsilon$ is an isomorphism (by formal stationary phase), we are done. $\Box$
[*Remarks: *]{}i) The theorem is proved in [@Mal2] when $Sing({{\mathbb M}})=\{0\}$ and the singularity at infinity of ${{\mathbb M}}$ has slopes smaller than $+1$, by quite a different method.
ii\) It is a consequence of the above proof that the map $\mathfrak
m$ is also an isomorphism. Notice that this fails if ${{\mathbb M}}$ is not holonomic, for example the multiplication map $\mathbb
C[[\eta^{-1}]][\eta]\otimes_{{\mathcal K^1_{\eta^{-1}}}}{\mathcal E^{\,(c, \,\infty)}}\longrightarrow {\mathcal F ^{\,(\,c,\,\infty)}}$ is clearly not onto.
iii\) Let ${{\mathbb M}}={\mathbb W_t}/{\mathbb W_t}\cdot P(t,\partial_t)$ be a holonomic ${\mathbb W_t}$-module such that the formal slopes at infinity of ${{\mathbb M}}$ are smaller than $+1$. Denote $\mathcal D^i=\mathcal K_{\eta^{-1}}^i
\langle
\partial_{\eta^{-1}}\rangle\ (i=0,1)$. For some $k\geqslant 0$ we will have $Q= \eta^{-k} P(-\partial_{\eta},\eta)\in \mathbb
C[\eta^{-1}]\langle
\partial_{\eta^{-1}}\rangle$ (using $\partial_{\eta}=-\eta^{-2}
\partial_{\eta^{-1}}$). Consider the two complexes of $\mathbb C$-vector spaces $$C_i:\ \ \ \ \ \ \ \ 0\longrightarrow \mathcal K_{\eta^{-1}}^i
\stackrel{Q}{\longrightarrow}\mathcal
K_{\eta^{-1}}^i\longrightarrow 0 \hspace{1cm} (i=0,1).$$ There is an obvious morphism of complexes $C_0\to C_1$. Since the formal slopes at infinity of ${{\mathbb M}}$ are smaller than $+1$, it follows from the results of J. P. Ramis in [@Ra 1.5.11, 1.5.14] that this morphism is a quasi-isomorphism. Since the complex $C_i$ is quasi-isomorphic to $\mathbb R \mbox{Hom}_{\mathcal
D^i}(\mathcal K^i_{\eta^{-1}}\otimes {\widehat{\mathbb M}}\,,\, \mathcal
K^i_{\eta^{-1}})$, by the theorem above we have a quasi-isomorphism of solution complexes $$\mathbb R \mbox{Hom}_{\mathcal D^0}(\mathcal
K^0_{\eta^{-1}}\otimes {\widehat{\mathbb M}}\,,\, \mathcal K^0_{\eta^{-1}})\cong
\oplus_{c\in Sing {{\mathbb M}}}\mathbb R \mbox{Hom}_{\mathcal
D^1}({\mathcal E^{\,(c, \,\infty)}}({{\mathbb M}})\,,\, \mathcal K^1_{\eta^{-1}})$$ That is, we can compute microlocally the “germ at infinity” of the solution complex of ${\widehat{\mathbb M}}$.
iv\) The theorem above can be applied to study to what extent the formal decomposition of a $\mathbb C\{x\}[x^{-1}]$-vector space with connection given by its slopes holds at the $s$-Gevrey level; namely, one has:
[**Theorem:**]{} [*Let $s\in \mathbb R^{+}$ and let $\mathcal V$ be a finite-dimensional $\mathcal K^s_x$-vector space with connection. Then there exist ${\mathcal K^s_x}$-vector spaces with connection\
$\mathcal V_{<1/s}, \mathcal V_{=1/s}, \mathcal
V_{>1/s}$ of formal slopes strictly smaller than $\frac{1}{s}$ (respectively, equal to $\frac{1}{s}$, strictly greater than $\frac{1}{s}$) and an isomorphism of $\mathcal K^s_{x}$-vector spaces with connection $$\mathcal V \cong \mathcal V_{<1/s} \oplus \mathcal V_{=1/s} \oplus
\mathcal V_{>1/s}.$$*]{}
[*Proof: *]{} We consider first the case $s=1$. From a theorem of Malgrange–Ramis ([@Ra 3.2.13], [@Ra2 7.1], cf. also [@Mal1]) on algebraization of $s$-Gevrey spaces with connection, it follows that there is a holonomic ${\mathbb W_t}$-module $\mathbb N$ and an isomorphism of $\mathcal K^s_x$-vector spaces with connection $\mathcal V \cong \mathcal K^s_{\eta^{-1}}\otimes
\mathbb N$ (the right hand side denotes the $s$-Gevrey germ at infinity defined by $\mathbb N$, although our coordinate will be labelled $x$ instead of $\eta^{-1}$ as done before). Let ${{\mathbb M}}$ denote the inverse Fourier transform of $\mathbb N$. Then, putting $\mathcal V_{<1/s}={\mathcal E^{\,(\,0,\,\infty)}}({{\mathbb M}})$, $\mathcal V_{=1/s}=\oplus_{c\in\,
Sing{{\mathbb M}}-\{0\}} {\mathcal E^{\,(c, \,\infty)}}({{\mathbb M}})$ and $\mathcal V_{>1/s}={\mathcal E^{\,(\infty, \,\infty)}}({{\mathbb M}})$ the claimed decomposition is the one given by the $1$-Gevrey stationary phase formula.
We consider next the case $s=\frac{1}{q}$, $q$ a positive integer (compare [@Mal1 (2.2)]). Consider the map $\pi: \mathcal
K^1_y\longrightarrow\mathcal K^s_x$ given by $y\mapsto x^q$. By the previous case, we will have $$\pi_{\ast}(\mathcal V)\cong\pi_{\ast}(\mathcal
V)_{<1}\oplus\pi_{\ast}(\mathcal V)_{=1}\oplus\pi_{\ast}(\mathcal
V)_{>1}.$$ We have a surjective morphism of $\mathcal K^s_x$-vector spaces with connection $\alpha: \pi^{\ast}(\pi_{\ast}(\mathcal V))= \mathcal K^s_x \otimes \pi_{\ast}(\mathcal V)\longrightarrow\mathcal V$ given by $\alpha (\varphi \otimes v)=\varphi\cdot v$. Notice that all formal slopes of $\pi^{\ast}(\pi_{\ast}(\mathcal V)_{<1})$ are strictly smaller than $q$, those of $\pi^{\ast}(\pi_{\ast}(\mathcal V)_{=1})$ are equal to $q$, and those of $\pi^{\ast}(\pi_{\ast}(\mathcal V)_{>1})$ are strictly bigger than $q$. Denote by $\alpha_{<q}$ the restriction of $\alpha$ to $\pi^{\ast}(\pi_{\ast}(\mathcal V)_{<1})$ (similarly in the cases $=q$, $>q$). Since the filtration by slopes is strict, we have a decomposition $$\mathcal V \cong \alpha_{<q}(\pi^{\ast}(\pi_{\ast}(\mathcal V)_{<1}))\oplus\alpha_{=q}(\pi^{\ast}(\pi_{\ast}(\mathcal V)_{=1}))\oplus\alpha_{>q}(\pi^{\ast}(\pi_{\ast}(\mathcal V)_{>1}))$$ as desired.
Assume now $s=\frac{p}{q}$ where $p,q\geqslant 1$ are integers. We consider the morphism $\pi: \mathcal K^s_x\longrightarrow\mathcal
K^{1/q}_z$ given by $x\mapsto z^p$. By the previous case we have a decomposition of $\mathcal K^{1/q}_z$- vector spaces with connection $$\pi^{\ast}(\mathcal V)\cong\pi^{\ast}(\mathcal
V)_{<q}\oplus\pi^{\ast}(\mathcal V)_{=q}\oplus\pi^{\ast}(\mathcal
V)_{>q}.$$ We also have an injective morphism of $\mathcal K^s_x$-vector spaces with connection $\beta: \mathcal V
\longrightarrow\pi_{\ast}(\pi^{\ast}(\mathcal V))$ given by $v\mapsto 1\otimes v$. Denote $\beta_{<q/p}$ the composition of $\beta$ with the projection $\pi_{\ast}(\pi^{\ast}(\mathcal
V))\twoheadrightarrow\pi_{\ast}(\pi^{\ast}(\mathcal V)_{<q})$ and similarly for $\beta_{=q/p}$, $\beta_{>q/p}$. Again by strictness of the filtration by formal slopes we have a decomposition $$\mathcal V \cong
(\mbox{Ker}(\beta_{=q/p})\cap\mbox{Ker}(\beta_{>q/p}))\oplus
(\mbox{Ker}(\beta_{<q/p})\cap\mbox{Ker}(\beta_{>q/p}))\oplus
(\mbox{Ker}(\beta_{<q/p})\cap\mbox{Ker}(\beta_{=q/p}))$$ which, because of the behavior of formal slopes under $\pi_{\ast}$, is the desired one.
Finally, if $s\in\mathbb R -\mathbb Q$, choose a rational number $0<p/q<s$ such that no formal slope of $\mathcal V$ is in the interval $[1/s, q/p]$. By the Malgrange-Ramis algebraization theorem we can assume $\mathcal V \cong \mathcal K^s_x
\otimes\mathcal W$, where $\mathcal W$ is a $\mathcal
K_x^{p/q}$-vector space with connection. Then, the previous case applies to $\mathcal W$ and the proof is complete. $\Box$
[**2. The singularity at zero of ${\widehat{\mathbb M}}$ and an exact sequence of vanishing cycles**]{}
In this section we introduce one more variant of the microlocalization functors (which should correspond to Laumon’s $(\infty,0')$ local Fourier transform). Our aim is to establish the existence of a sequence of vanishing cycles analogous to [@Kat Theorem 10], [@Lau proof of 3.4.2]. In this section we will work over the complex numbers and we will consider only the convergent (or, more precisely $1$-Gevrey) version of the $(\infty,0)$-microlocalization, the corresponding formal version can be obtained just by dropping all convergence conditions.
[*Definition: *]{} We denote ${\mathcal E^{\,(\,\infty,\,0)}}$ the set of formal sums $$P=\sum_{i\leqslant r} \ a_i(\eta)\, t^i \ \ , \ \ r\in\mathbb Z$$ such that there exists a $\rho_0 >0$ so that all series $a_i(\eta)$ are convergent in the disk of radius $\rho_0$ centered at $0$, and there exists a $0<\rho<\rho_0$ and a $\theta>0$ such that the series $$\sum_{k\geqslant 0}\| a_{\,-k}(\eta)\|_{\rho}
\frac{\theta^k}{k\,!}$$ is convergent, where $\| a_{-k}(\eta)\|_{\rho}=\sup_{\mid
z\mid\leqslant\rho}\mid a_{-k}(z)\mid$. We consider in ${\mathcal E^{\,(\,\infty,\,0)}}$ the multiplication rule given by $$P \cdot Q = \sum_{\alpha \geqslant\, 0}\ \frac{1}{\alpha\, !}\ \
\partial_{t}^{\,\alpha}P\cdot
\partial^{\,\alpha}_{\eta} Q$$ and the morphism of $\mathbb C$-algebras $$\begin{aligned}
\varphi^{(\,\infty,\,0\,)}: {\mathbb W_t}&\longrightarrow& {\mathcal E^{\,(\,\infty,\,0)}}\,,\\
t\ \ &\mapsto&\ -t\\
\partial_t\ \ &\mapsto&\ \eta\end{aligned}$$ which endows ${\mathcal E^{\,(\,\infty,\,0)}}$ with a structure of $({\mathbb W_t},{\mathbb W_t})$-bimodule. It is not difficult to see that $\varphi^{(\,\infty,\,0\,)}$ is flat.
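For instance, one can check directly from the product rule that $\varphi^{(\,\infty,\,0\,)}$ is compatible with the relation $[\partial_t,t]=1$ in ${\mathbb W_t}$: taking $P=\eta$ and $Q=-t$ one finds $$\eta\cdot(-t)=-t\,\eta\,,\qquad (-t)\cdot\eta=-t\,\eta+\partial_t(-t)\cdot\partial_{\eta}(\eta)=-t\,\eta-1\,,$$ so that $\varphi^{(\,\infty,\,0\,)}(\partial_t)\cdot\varphi^{(\,\infty,\,0\,)}(t)-\varphi^{(\,\infty,\,0\,)}(t)\cdot\varphi^{(\,\infty,\,0\,)}(\partial_t)=1$, as it must be.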
[*Remark: *]{} While the ring ${\mathcal E^{\,(\,\infty,\,0)}}$ is nothing but ${\mathcal E^{\,(\,0,\,\infty)}}$, with the rôles of the variables $t$ and $\eta$ interchanged, the morphism $\varphi^{(\,\infty,\,0\,)}$ is [*not*]{} obtained in the same way from $\varphi^{(\,0,\,\infty\,)}$ (the morphism we considered in section 1). The morphism we get from $\varphi^{(\,0,\,\infty\,)}$ interchanging $t$ and $\eta$ will be denoted $$\begin{aligned}
\mu: \mathbb W_{\eta} &\longrightarrow& {\mathcal E^{\,(\,\infty,\,0)}}\,\\
\eta \ \ &\mapsto&\ \eta \\
\partial_{\eta}\ \ &\mapsto&\ t\end{aligned}$$
[*Definitions: *]{}i) Given a ${\mathbb W_t}$-module ${{\mathbb M}}$, we put $${\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}}):={\mathcal E^{\,(\,\infty,\,0)}}\otimes_{{\mathbb W_t}}{{\mathbb M}}$$ where ${\mathcal E^{\,(\,\infty,\,0)}}$ is viewed as a left ${\mathbb W_t}$-module [*via*]{} $\varphi^{(\,\infty,\,0\,)}$.
ii\) Given a $\mathbb W_{\eta}$-module $\mathbb N$, we put $${\mu}(\mathbb N):={\mathcal E^{\,(\,\infty,\,0)}}\otimes_{\mathbb W_{\eta}}\mathbb N$$ where now ${\mathcal E^{\,(\,\infty,\,0)}}$ is viewed as a left $\mathbb W_{\eta}$-module [*via*]{} $\mu$.
Both ${\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}})$ and ${\mu}(\mathbb N)$ have a structure of $\mathbb C\{t^{-1}\}[t]$-vector spaces and of $\mathbb
C\{\eta\}\langle \partial_{\eta}\rangle$-modules, where the action of $\partial_{\eta}$ is, by definition, given by left multiplication by $t$. Notice that in fact $\mu(\mathbb N)$ is nothing but ${\mathcal E^{\,(\,0,\,\infty)}}(\mathbb N)$ with the variables $t$ and $\eta$ interchanged. In section 1 we considered in the $(0,\,\infty)$-microlocalization only the structure of $\mathbb
C\{\eta^{-1}\}[\eta]$-vector space (i.e., of $\mathbb
C\{t^{-1}\}[t]$-vector space after our interchange of variables), while now the structure of $\mathbb C\{\eta\}\langle
\partial_{\eta}\rangle$-module will be considered as well (and in fact it will play the major rôle). Notice also that ${\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}})$ depends only on the $1$-Gevrey germ at infinity defined by ${{\mathbb M}}$ and $\mu(\mathbb N)$ depends only on $\mathbb
N_0=\mathbb C\{\eta\}\langle
\partial_{\eta}\rangle\otimes_{\mathbb W_{\eta}}\mathbb N$.
(2.1)[**Proposition**]{} : [*Let ${{\mathbb M}}$ be a holonomic ${\mathbb W_t}$-module. Then the map $$\begin{aligned}
\Upsilon^0:{\mu}({\widehat{\mathbb M}}) &\longrightarrow&{\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}}) \\
\alpha\otimes \widehat{m}&\longrightarrow& \alpha\otimes m\end{aligned}$$ is an isomorphism of $\mathbb C\{\eta\}\langle
\partial_{\eta}\rangle$-modules and of $\mathbb C\{t^{-1}\}[t]$-vector spaces.*]{}
[*Proof: *]{} It is easy to check that the map is a morphism both of $\mathbb C\{\eta\}\langle
\partial_{\eta}\rangle$-modules and of $\mathbb C\{t^{-1}\}[t]$-vector spaces; we have to prove that it is an isomorphism. As for the stationary phase formulas, the proposition reduces to the case of a Dirac $\delta$-module (then one has $\mu({\widehat{\mathbb M}})={\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}})=0$), and the case $ {{\mathbb M}}= {\mathbb W_t}/{\mathbb W_t}\cdot P(t,\partial_t)$.
In this last case, we have ${\widehat{\mathbb M}}= \mathbb W_{\eta}/\mathbb
W_{\eta}\cdot P(-\partial_{\eta}, \eta)$, and both the source and the target of the map $\Upsilon^0$ are isomorphic to ${\mathcal E^{\,(\,\infty,\,0)}}/{\mathcal E^{\,(\,\infty,\,0)}}\cdot P(-t,\eta)$. The map $\Upsilon^0$ composed with these isomorphisms is the identity map, and the proposition is thus proved. $\Box$
Let $\tau$ be a coordinate in the affine line and let $\mathbb N $ be a holonomic $\mathbb
C\{\tau\}\langle\partial_{\tau}\rangle$-module. We recall next the formalism of solutions and microsolutions of $\mathbb N$, following [@Mal2]: For $r>0$, denote by $D_r$ the disk in the complex plane centered at $\tau=0$ and of radius $r$, by $\widetilde{D_r^{\ast}}$ the universal covering space of $D_r-\{0\}$, and by $\mathcal O(D_r)$ (respectively, $\mathcal O
(\widetilde{D_r^{\ast}})$) the ring of holomorphic functions on $D_r$ (respectively, on $\widetilde{D_r^{\ast}}$). Put $\widetilde{\mathcal C}(D_r)=\mathcal
O(\widetilde{D_r^{\ast}})/\mathcal O (D_r)$. Set $\mathcal O:=
\mbox{indlim}_{r\to 0}\,\mathcal O(D_r)$, $ \widetilde {\mathcal
O}:= \mbox{indlim}_{r\to 0}\, \mathcal O(\widetilde{D_r^{\ast}})$, $\widetilde{\mathcal C}:=\mbox{indlim}_{r\to
0}\,\widetilde{\mathcal C}(D_r)$, ${\mathcal D }:=\mathcal O \langle
\partial_{\tau}\rangle$. We denote by $\mathbb N\mapsto D\mathbb N$ the duality functor in the category of holonomic left ${\mathcal D }$-modules, recall that if $\mathbb N = {\mathcal D }/{\mathcal D }P$, then $D\mathbb N={\mathcal D }/{\mathcal D }\,^{t}P$ where $^{t}P$ denotes the transposed differential operator (see e.g. [@Sa V.1]).
Following Malgrange, we put
1. $\Psi(\mathbb N):=\mbox{Hom}_{\mathcal D}(D\mathbb N,
\widetilde{\mathcal O})$ (the $\mathbb C$-vector space of “nearby cycles” of $\mathbb N$).
2. $\Phi(\mathbb N):=\mbox{Hom}_{\mathcal D}(D\mathbb N,
\widetilde{\mathcal C})$ (the $\mathbb C$-vector space of microsolutions of $\mathbb N$, or “vanishing cycles”).
Between these vector spaces there are morphisms $can: \Psi(\mathbb
N)\mapsto \Phi(\mathbb N)$ (induced by the quotient map $can:
\widetilde{\mathcal O}\rightarrow \widetilde{\mathcal C}$) and $var: \Phi(\mathbb N)\mapsto \Psi(\mathbb N)$ (induced by the only map $var: \widetilde{\mathcal C}\rightarrow \widetilde{\mathcal
O}$ such that $var\circ can= T-Id$, where $T$ is the monodromy on $\widetilde{\mathcal O}$). The map $can$ is an isomorphism if $\mathbb N \cong \mu(\mathbb N)$; the map $var$ is an isomorphism if $\mathbb N \cong \mathbb N[\tau^{-1}]$. The assignment $\mathbb
N\mapsto (\Psi(\mathbb N), \Phi(\mathbb N), can, var)$ is functorial. The behavior of these spaces under localization and microlocalization is the following:
1. For the localization we have $\Phi(\mathbb N[\tau^{-1}])
\cong\Psi(\mathbb N[\tau^{-1}])\cong\Psi(\mathbb N)$.
2. For the microlocalization we have $\Psi(\mu(\mathbb N))
\cong\Phi(\mu (\mathbb N))\cong\Phi(\mathbb N)$.
For a), recall that both the kernel and cokernel of $\mathbb
N\mapsto \mathbb N[\tau^{-1}]$ are a direct sum of Dirac $\delta_0$’s and $\Psi(\delta_0)=0$ ($\delta_0=\mathbb
C\{\tau\}\langle\partial_{\tau}\rangle/(\tau)$). Similarly, the kernel and cokernel of $\mathbb N\mapsto\mu(\mathbb
N)$ are a direct sum of copies of the $\mathcal D$-module $\mathbb
C\{\tau\}$ (see e.g. [@Mal1 4.11.b]) and $\Phi(\mathbb
C\{\tau\})=0$, from which b) follows.
Given a holonomic $\mathbb W_{\tau}$-module ${{\mathbb M}}$ we denote by $DR({{\mathbb M}})$ its De Rham complex (see e.g. [@Mal2 I.2]) and by $\mbox{Sol}({{\mathbb M}})_0=\mathbb R\mbox{Hom}_{{\mathcal D }}(\mathcal O
\otimes_{\mathbb W_{\tau}}{{\mathbb M}},\,\mathcal O)[1]$ the stalk at zero of its solution complex. We denote by $\mathbb
H^{\ast}_c(\mathbb A^1_{\mathbb C}, DR({{\mathbb M}}))$ the hypercohomology with compact supports of the De Rham complex of ${{\mathbb M}}$.
The following proposition follows essentially from results of B. Malgrange. It shows that the De Rham cohomology with compact supports of a holonomic ${\mathbb W_t}$-module which has slopes at infinity strictly smaller than $+1$ can be computed locally in terms of the germs defined by ${{\mathbb M}}$ at its singular points and at infinity.
[**Proposition**]{} (exact sequence of vanishing cycles): [*Let ${{\mathbb M}}$ be a holonomic ${\mathbb W_t}$-module such that all its formal slopes at infinity are strictly smaller than $+1$. Then there is an exact sequence of $\mathbb C$-vector spaces $$\hspace{-1cm} 0\rightarrow \mathbb H_c^1(\mathbb A_{\mathbb C}^1,
\mbox{DR}({{\mathbb M}}))\rightarrow \oplus_{c\in
Sing({{\mathbb M}})}\Phi({{\mathbb M}}_{\,c})\rightarrow \Phi({\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}}))\rightarrow
\mathbb H_c^2(\mathbb A_{\mathbb C}^1, \mbox{DR}({{\mathbb M}}))\rightarrow
0$$ where ${{\mathbb M}}_c:= \mathbb
C\{t_c\}\langle\partial_t\rangle\otimes_{{\mathbb W_t}}{{\mathbb M}}$.*]{}
[*Proof: *]{} From the exact sequence $
0\longrightarrow\mathcal O \longrightarrow \widetilde{\mathcal O
}\longrightarrow \widetilde{\mathcal C }\longrightarrow 0 $, we get $$0 \rightarrow \mbox{Hom}_{{\mathcal D }}(D{\widehat{\mathbb M}}_0,\mathcal O)\rightarrow
\Psi({\widehat{\mathbb M}}_{\,0})\rightarrow \Phi({\widehat{\mathbb M}}_{\,0})\rightarrow
\mbox{Ext}^1_{{\mathcal D }}(D{\widehat{\mathbb M}}_0,\mathcal O)\rightarrow 0$$ (since $\mbox{Ext}^1_{{\mathcal D }}(D{\widehat{\mathbb M}}_0,\widetilde{\mathcal O})=0$, see e.g. [@Mal2 II.3]). We also have quasi-isomorphisms $$\mathbb R\Gamma_c(\mathbb A^1_{\mathbb C},
DR({{\mathbb M}}))\cong\mbox{Sol}\,(\widehat{D{{\mathbb M}}})_0\,[-1]\cong\mbox{Sol}(\,D{\widehat{\mathbb M}})_0\,[-1],$$ the first one follows from [@Mal2 VI, 2.9 and VII, 1.1] (since we assume that the slopes at infinity of ${{\mathbb M}}$ are strictly smaller than $+1$), and the second one holds because Fourier transform and duality commute up to the transformation given by $t\mapsto -t, \partial_t\mapsto -\partial_t$ (see e.g. [@Sa V.2.b]).
From an element of $\oplus_{c\in Sing {{\mathbb M}}}\Phi({{\mathbb M}}_{\,c})$ we get, by the Laplace transform considered in [@Mal2 chap. XII], a multivalued solution of ${\widehat{\mathbb M}}$ defined on a half-plane in $\mathbb C$ \[loc.cit., XII, 1.2\]. Under our hypothesis, the module ${\widehat{\mathbb M}}$ is singular only at zero and at infinity, so this solution can be analytically continued and uniquely determines an element of $\Psi({\widehat{\mathbb M}}_{\,0})$. This assignment establishes an isomorphism of complex vector spaces $ \oplus_{c\in Sing
{{\mathbb M}}}\Phi({{\mathbb M}}_{\,c})\simeq\Psi({\widehat{\mathbb M}}_{\,0})$. On the other hand, by proposition (2.1) we have also an isomorphism $$\Phi({\widehat{\mathbb M}}_{\,0})\cong\Phi(\mu({\widehat{\mathbb M}}))\cong\Phi({\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}})),$$ and the proposition follows. $\Box$
[*Remark:*]{} Using b) above, the long exact sequence in the proposition can be rewritten in terms of spaces of nearby cycles instead of spaces of microsolutions, namely one has an exact sequence $$\hspace{-1cm} 0\rightarrow \mathbb H_c^1(\mathbb A_{\mathbb C}^1,
\mbox{DR}({{\mathbb M}}))\rightarrow \oplus_{c\in
Sing({{\mathbb M}})}\Psi(\mu({{\mathbb M}}_{\,c}))\rightarrow
\Psi({\mathcal E^{\,(\,\infty,\,0)}}({{\mathbb M}}))\rightarrow \mathbb H_c^2(\mathbb A_{\mathbb C}^1,
\mbox{DR}({{\mathbb M}}))\rightarrow 0.$$
[**3. $p$-adic microdifferential operators of finite order**]{}
It is proved in [@Mal1] (and it follows also from the $1$-Gevrey stationary phase theorem proved in section 1) that if ${{\mathbb M}}$ is a holonomic ${\mathbb W_t}$-module, singular only at zero and at infinity, and such that the formal slopes of the singularity at infinity are strictly smaller than $+1$, then one has an isomorphism of $\mathcal K^1_{\eta^{-1}}$-vector spaces with connection $$\mathcal K^1_{\eta^{-1}}\otimes_{\mathbb C[\eta]}{\widehat{\mathbb M}}\cong
\mathcal E^{(0,\infty)}({{\mathbb M}}).$$ Notice that, in case the singularity at infinity of ${{\mathbb M}}$ is regular, these ${\mathbb W_t}$-modules are analogous to the $\ell$-adic canonical prolongations of Gabber and Katz, which play a major rôle in G. Laumon’s work ([@Lau], see also [@Kat]). So, it might be of interest to find an analogue of Malgrange’s result when $K$ is a $p$-adic field (we denote by $\mid \cdot \mid$ its absolute value, normalized by the condition $\mid p\mid=p^{-1}$). In that case, it seems that a reasonable $p$-adic version of the ring $\mathbb C\{\eta^{-1}\}_1$ would be the ring of power series $\sum_{i\geqslant 0}a_i\eta^{-i}$, $a_i\in K$, such that $\sum_{i\geqslant 0}\,i!\,a_i\,\eta^{-i}$ is convergent for $\mid
\eta^{-1} \mid <1$. If we set $\omega=\sqrt[p-1]{1/p}\in\mathbb
R$, it follows easily from the classical bounds $\omega^{k-1}<\mid
k!\mid< (k+1)\omega^{k}$ that this ring is nothing but the ring $\mathcal A_{\eta^{-1}}(\omega)$ of power series in $\eta^{-1}$ convergent in the disk $\mid \eta^{-1} \mid <\omega$. Pursuing this analogy, to the field $\mathcal K^1_{\eta^{-1}}$ would correspond the ring $\mathcal A_{\eta^{-1}}(\omega)[\eta]$, which for simplicity will be denoted ${\mathcal A(\omega)[\eta]}$ in the sequel. Its elements are the Laurent series $\sum_{j\leqslant r}a_j\,\eta^{j}$ with $a_j\in K$, $r\in\mathbb Z$, such that for all $0<\rho<\omega$, $\limsup_{j\to -\infty}\mid a_j\mid \rho^{-j}=0$.
In this section we define a ring ${\Phi ^{\,(\,0,\,\infty)}}$ of $p$-adic microdifferential operators and a corresponding microlocalization functor ${{\mathbb M}}\mapsto{\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}})$. We prove that if ${{\mathbb M}}=K[t]\langle\partial_t\rangle/K[t]\langle\partial_t\rangle\cdot
P$ is a holonomic $K[t]\langle\partial_t\rangle$-module which is singular only at zero and infinity, the singularity at infinity has formal slopes strictly smaller than $+1$ and the singularity at zero is solvable at radius $1$ ([@CM1 8.7]), then one has an isomorphism of ${\mathcal A(\omega)[\eta]}$-modules with connection $${\mathcal A(\omega)[\eta]}\otimes_{K[\eta]} {\widehat{\mathbb M}}\cong {\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}}),$$ where ${\widehat{\mathbb M}}$ now denotes the $p$-adic Fourier transform (defined below). This isomorphism might be regarded as a $p$-adic analogue of the theorem of Malgrange quoted above.
We assume that $K$ is a spherically complete $p$-adic field (e.g., a finite extension of $\mathbb Q_p$). Let $t$ be a coordinate in the affine $K$-line. We denote by $\mathcal A_t(1)$ the ring of power series in the variable $t$ with coefficients in $K$ which are convergent for $\mid t\mid <1$. For all $0<\lambda<1$, the ring $\mathcal A_t(1)$ is endowed with the norm $$\mid \sum_{i\geqslant 0}\,a_i\,t^i \mid_{\lambda}\, = \sup_i \{\,
\mid a_i \mid \lambda^i\,\}\in \mathbb R ^{+}.$$
Let $r\in \mathbb Z$ be an integer, set $\omega=\sqrt[p-1]{1/p}\in\mathbb R$. We denote by ${\Phi ^{\,(\,0,\,\infty)}}[r]$ the set of all Laurent series $\sum_{j\leqslant r}a_{j}(t)\,\eta^j$ with $a_j(t)\in \mathcal A_t(1)$, such that for all $\lambda, \rho
\in\mathbb R$ with $0<\rho<\omega\cdot\lambda<\omega$, one has $$\limsup_{j\to - \infty} \mid a_j(t)\mid_{\,\lambda} \,
\rho^{-j}=0.$$ Equivalently, for all $\lambda,\rho$ in the range above there is a $C>0$ such that for all $j\leqslant r$ one has $\mid
a_j(t)\mid_{\,\lambda}\leqslant C\cdot\rho^j$ (in fact, it is clear that given any $0<\lambda_0<1$ it is enough to check that this condition holds for $\lambda>\lambda_0$). We put ${\Phi ^{\,(\,0,\,\infty)}}= \bigcup
_{r\in\, \mathbb Z} {\Phi ^{\,(\,0,\,\infty)}}[r]$.
[**Proposition:**]{} [*If $\ F=\sum f_u \,\eta^u\in
{\Phi ^{\,(\,0,\,\infty)}}$ and $\ G=\sum g_v \,\eta^v\in {\Phi ^{\,(\,0,\,\infty)}}$, then $$F \cdot G = \sum_{\alpha \geqslant\, 0}\ \frac{1}{\alpha\, !}\ \
\partial_{\eta}^{\,\alpha}F\cdot
\partial^{\,\alpha}_{t} G \,
\, \in {\Phi ^{\,(\,0,\,\infty)}}.$$*]{}
[*Proof:*]{} Write $F\cdot G= \sum r_j(t)\,\eta^j$. We have $$\begin{aligned}
\mid r_j(t) \mid_{\lambda} \rho^{-j} &\leqslant
&\max_{j=u+v-\alpha} \left\{\mid \frac{1}{\alpha\, !}\
u(u-1)\dots(u-\alpha+1)\ f_u\ \frac{d^{\alpha}
g_v}{dt^{\alpha}}\mid_{\,\lambda} \rho^{-j}\right\}
\\
&\leqslant &\max_{j=u+v-\alpha} \left\{\mid
u(u-1)\dots(u-\alpha+1)\mid \ \mid f_u \mid_{\,\lambda} \ \mid g_v
\mid_{\,\lambda} \ \frac{1}{\lambda^{\alpha}}\ \rho^{-j}\right\}
\\
&\leqslant &\sup_{u} \{\mid f_u\mid_{\,\lambda}\lambda^{-u}\}\
\cdot\ \sup_{v} \{\mid g_v\mid_{\,\lambda}\lambda^{-v}\} \cdot
(\frac{\rho}{\lambda})^{\alpha}\end{aligned}$$ where the second inequality follows from the Cauchy inequalities. If $j\to -\infty$ then either $u\to -\infty$ or $v\to
-\infty$ or $\alpha\to \infty$, and then we are done. $\Box$
[*Definition:*]{} The filtered ring ${\Phi ^{\,(\,0,\,\infty)}}$ will be called the ring of [*$p$-adic microdifferential operators of finite order*]{}. The order and the principal symbol of a microdifferential operator are defined as in the formal case. Notice that ${\Phi ^{\,(\,0,\,\infty)}}[0]\subset{\Phi ^{\,(\,0,\,\infty)}}$ is a filtered subring and that one has ${\mathcal A(\omega)[\eta]}=K[[\eta^{-1}]][\eta]\cap {\Phi ^{\,(\,0,\,\infty)}}$.
[*Definitions:*]{} If $F=\sum_{u\leqslant
m}f_{u}(t)\,\eta^u\in \mathcal A_t(1)[[\eta^{-1}]][\eta]$ and $0<\rho<\omega\cdot\lambda<\omega$, we put $$\| F \|_{\lambda,\rho} = \sup_u\{\mid
f_u(t)\mid_{\lambda}\rho^{-u}\}$$ We have $F\in{\Phi ^{\,(\,0,\,\infty)}}$ if and only if $\| F \|_{\lambda,\rho} <
\infty$ for all $\lambda, \rho$ in the range above. From the proof of the preceding proposition it follows that if $F,G\in{\Phi ^{\,(\,0,\,\infty)}}$, then we have $\|F\cdot
G\|_{\lambda,\rho}\leqslant \|F\|_{\lambda,\rho} \cdot\|
G\|_{\lambda,\rho}$. The subscript $\lambda,\rho$ will be omitted if no confusion may arise. We will say that $F=\sum_{u}\,f_u\,
\eta^u \in{\Phi ^{\,(\,0,\,\infty)}}$ is [*dominant*]{} if there is a $\lambda_0<1$ such that for all $\lambda_0<\lambda<1$ and $0<\rho<\omega\cdot\lambda$, one has $\mid
f_{ord(F)}\mid_{\lambda}\rho^{-ord(F)}\,=\, \|
F\|_{\lambda,\rho}$.
We want to prove a division theorem for $p$-adic microdifferential operators of finite order. We will make implicit use of the following lemma, whose proof is elementary and left to the reader:
[**Lemma:**]{}[* Let $f(t)=t^m \,b(t)\in\mathcal
A_t(1)$, where $b(t)$ is invertible in $\mathcal A_t(1)$ and $m\geqslant 0$. Then, for each $\varphi\in\mathcal A_t(1)$, there are unique $q\in\mathcal A_t(1)$ and $r\in K[t]$ of degree at most $m-1$ such that $\varphi = f \cdot q +r$, and for all $0<\lambda<1$, $\mid r\mid_{\lambda}\,\leqslant\,\mid
\varphi\mid_{\lambda}$, and $\mid f \mid_{\lambda}\cdot\mid
q\mid_{\lambda}\,\leqslant\,\mid \varphi\mid_{\lambda}$.*]{}
[**Theorem:**]{} [*Let $F\in{\Phi ^{\,(\,0,\,\infty)}}$ be dominant and assume that $\sigma(F)={t}^m \,b(t)$ where $\ b(t)\in \mathcal
A_t(1)$ is invertible. Then, for all $G\in{\Phi ^{\,(\,0,\,\infty)}}$ there exist unique $Q\in {\Phi ^{\,(\,0,\,\infty)}}$ and $R_0,\dots ,R_{m-1}\in {\mathcal A(\omega)[\eta]}$ such that $$G= Q\cdot F + {t}^{m-1}\, R_{m-1} + \dots + R_0.$$ The remainder ${t}^{m-1}\, R_{m-1} + \dots + R_0$ can also be written in a unique way in the form $S_{m-1}\,{t}^{m-1}+\dots+S_0$ with $S_i\in{\mathcal A(\omega)[\eta]}$.*]{}
[*Proof:*]{} It is easy to see that if $F$ is dominant then so is the product $\eta^{-ord(F)}F$, so we can assume that $F$ is of order zero. Again, multiplying $G$ by a suitable power of $\eta$ we may assume that $\mbox{ord}(G)=0$ as well. The existence of a unique formal solution $Q=\sum_{j\leqslant\, 0}
q_j(t)\,\eta^j$ , $R_i=\sum_{j\leqslant\, 0} r_{i,j}\,\eta^j\
(0\leqslant i\leqslant m-1)$ to the division problem formulated above is well known; the solution can be obtained as follows (cf. [@Bjo Ch.4, Theorem 2.6]): one constructs inductively power series $q_0,q_{-1},\dots\in{\mathcal A_t(1)}$ such that $$\begin{aligned}
G-(q_0+\dots+q_{j}\,\eta^{-j})F&=&H_{j-1}+K_{j-1}\ \ \mbox{with}\
\ H_{j-1}\in{\Phi ^{\,(\,0,\,\infty)}}[j-1] \\ \mbox{and}\ \ K_{j-1}&\in& t^{m-1}\,
{\mathcal A(\omega)[\eta]}+ \dots + {\mathcal A(\omega)[\eta]}.\end{aligned}$$ Assume $q_0,\dots,q_{j+1}$ have already been found and put $$\varphi_j = g_j - \sum_{(j)}\ \frac{1}{\alpha !}\
v\,(v-1)\dots(v-\alpha+1)\ q_v \ \frac{d^{\alpha}f_u}{dt^{\alpha}}$$ where the sum runs over those $v,u,\alpha$ with $j=v+u-\alpha$, $\alpha\geqslant0$ and $j+1\leqslant v\leqslant 0$ (it is understood that the product $v\,(v-1)\dots(v-\alpha+1)$ is replaced by $1$ if $\alpha=0$). Then, the next series $q_j$ is the quotient of the division of $\varphi_j$ by $\sigma(F)$, that is, it is defined by the equality $\varphi_j = t^m \,b(t)\, q_j\,+r_j
$, where $q_j\in{\mathcal A_t(1)}$ and $r_j(t)=\sum_{i=0}^{m-1}r_{i,j}t^i$ is a polynomial of degree $m-1$ at most. The formal solution to the division problem is given by the quotient $Q=\sum_{j\leqslant\, 0}
q_j(t)\,\eta^j$ and the series $R_i=\sum_{j\leqslant\, 0}
r_{i,j}\,\eta^j\in{\mathcal A(\omega)[\eta]}$ $(0\leqslant i\leqslant m-1)$.
We have to prove that this formal solution is convergent in our sense. Fix $0<\rho <\omega\cdot \lambda<\omega$ and put $C_{\lambda,\,\rho}=\|G\|_{\lambda,\rho}/\|F\|_{\lambda,\rho}$. Notice that because of our hypothesis on $F$ we have $\|F\|_{\lambda,\rho}=\mid f_0 \mid_{\lambda}$. We show next, by descending induction on $j\leqslant 0$, that $\mid q_j
\mid_{\,\lambda}\rho^{-j} \leqslant C_{\lambda,\,\rho}$. We have: $$\begin{aligned}
\mid \ q_j\mid_{\,\lambda}\rho^{-j} &\leqslant&\max
\left\{\,\frac{1}{\mid f_0\mid_{\lambda} }\mid\
g_j\mid_{\,\lambda}\rho^{-j},\ \right. \\ &\,& \left.
\max_{u,v,\alpha} \left\{ \frac{\mid
v(v-1)\dots(v-\alpha+1)\mid}{\mid f_0 \mid_{\lambda}
\cdot\mid\alpha\, !\mid}\ \mid q_v\mid_{\,\lambda} \
\mid\frac{d^{\alpha}f_u}{dt^{\alpha}}\mid_{\,\lambda}
\rho^{-j}\right\}\right\}\\
&\leqslant&\max_{u,v,\alpha}
\left\{\,\frac{\|G\|}{\|F\|},\frac{1}{\lambda^{\alpha}\,\|F\|}\
\mid
q_v\mid_{\,\lambda}\ \mid f_u \mid_{\,\lambda}\rho^{-j}\right\}\\
&=&\max_{u,v,\alpha} \left\{\,
C_{\lambda,\,\rho},\frac{1}{\lambda^{\alpha}\ \| F\|}\ (\mid
q_v\mid_{\,\lambda}\rho^{-v})\ (\mid f_u
\mid_{\,\lambda}\rho^{-u})\
\rho^{\alpha}\right\}\\
&\leqslant&\max\left\{ C_{\lambda,\,\rho}, \ C_{\lambda,\,\rho}\,
\cdot (\frac{\rho}{\lambda})^{\alpha}\right\}\,=\,
C_{\lambda,\,\rho}\, ,\end{aligned}$$ where the second inequality follows from the Cauchy inequalities. This proves the convergence of the quotient as well as the first inequality of norms. The convergence of the series $R_0,\dots,R_{m-1}$ is proved similarly and it is left to the reader. For the last statement, notice that there are unique $S_i=\sum_{j\leqslant 0}s_{i,j}\,\eta^{j}\in K[[\eta^{-1}]][\eta]$ ($0\leqslant i \leqslant m-1$), such that the remainder ${t}^{m-1}\, R_{m-1} + \dots + R_0$ can be written in the form $S_{m-1}\,{t}^{m-1}+\dots+S_0$, in fact one has $$s_{i,j}=\sum_{k=0}^{m-j-1}(-1)^k \ r_{i+k, j+k}\frac{(i+k)! \cdot
(j+k)!}{k!\cdot i! \cdot j!}.$$ From this formula it follows easily that $S_i\in{\mathcal A(\omega)[\eta]}$ for $i=0,\dots,m-1$. $\Box$
[*Remark: *]{} It is unreasonable to expect a division theorem without some restriction on the divisor; for example, if $\alpha\in K$ and we take $F=1-\alpha\,\eta^{-1}\in{\Phi ^{\,(\,0,\,\infty)}}[0]$, its formal inverse is $\sum_{i\geqslant 0} \alpha^i\eta^{-i}$, which is not convergent in our sense for $\mid\alpha\mid>\omega^{-1}$.
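To illustrate the recursion used in the proof of the division theorem, here is a small computer-algebra sketch (our own illustration, not part of the original argument; the function names are ad hoc and all $t$-power series are truncated at a fixed degree, so it only computes the *formal* quotient and remainder — the content of the theorem is precisely that this formal solution converges under the dominance hypothesis):

```python
# Illustrative sketch (not from the paper): the formal division recursion
#   phi_j = g_j - sum_{v+u-alpha=j, v>j} (1/alpha!) v(v-1)...(v-alpha+1) q_v f_u^{(alpha)},
#   phi_j = f_0 * q_j + r_j   with deg_t r_j < m,
# for operators written as dicts {j <= 0 : coefficient polynomial in t}.
import sympy as sp

t = sp.symbols('t')

def falling(v, alpha):
    """v*(v-1)*...*(v-alpha+1); the empty product (alpha = 0) is 1."""
    out = sp.Integer(1)
    for k in range(alpha):
        out *= (v - k)
    return out

def truncate(expr, deg):
    """Keep only the terms of t-degree <= deg."""
    expr = sp.expand(expr)
    return sum((expr.coeff(t, i) * t**i for i in range(deg + 1)), sp.Integer(0))

def microdivide(G, F, m, depth, trunc):
    """Formal division of G by F, both of order 0, with sigma(F) = F[0] = t^m*b(t),
    b(0) != 0.  Returns (Q, R): Q[j] = q_j(t), R[j] = r_j(t) with deg r_j < m,
    computed for 0 >= j >= -depth, all t-series truncated at degree `trunc`."""
    b = sp.expand(F[0] / t**m)
    binv = sp.series(1 / b, t, 0, trunc + 1).removeO()   # inverse of b as a power series
    Q, R = {}, {}
    for j in range(0, -depth - 1, -1):
        phi = sp.expand(G.get(j, sp.Integer(0)))
        for v, qv in Q.items():                          # previously computed q_v, v > j
            for alpha in range(0, v - j + 1):
                u = j - v + alpha                        # v + u - alpha = j
                if u in F:
                    phi -= falling(v, alpha) / sp.factorial(alpha) * qv \
                           * sp.diff(F[u], t, alpha)
        phi = sp.expand(phi)
        r = sum((phi.coeff(t, i) * t**i for i in range(m)), sp.Integer(0))
        q = truncate(sp.expand((phi - r) / t**m) * binv, trunc)
        Q[j], R[j] = q, r
    return Q, R

# Example: divide G = t^3 by F = t^2 + t^3 + eta^{-1}  (so m = 2, b = 1 + t).
Q, R = microdivide({0: t**3}, {0: t**2 + t**3, -1: sp.Integer(1)},
                   m=2, depth=3, trunc=6)
```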
We will assume that there is a $\pi \in K$ such that $\pi^{p-1}+p=0$, which we fix from now on. Then, the morphism of $K$-algebras defined by $$\begin{aligned}
\varphi^{(\,c,\,\infty\,)}: {\mathbb W_t}&\longrightarrow& {\Phi ^{\,(\,0,\,\infty)}}\,,\\
t\ \ &\mapsto&\ t/\pi\\
\partial_t\ \ &\mapsto&\ \pi\cdot\eta\end{aligned}$$ endows ${\Phi ^{\,(\,0,\,\infty)}}$ with a structure of $({\mathbb W_t},{\mathbb W_t})$-bimodule.
[*Definition:*]{} Let ${{\mathbb M}}$ be a ${\mathbb W_t}$-module. We define its $p$-adic $(0,\infty)$-microlocalization as the ${\mathcal A(\omega)[\eta]}$-module $${\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}}):= {\Phi ^{\,(\,0,\,\infty)}}\otimes_{{\mathbb W_t}} {{\mathbb M}}\, ,$$ endowed with the connection given by left multiplication by $\eta^2\cdot t$.
[*Definition*]{} ([@Huy] and [@Meb1]): If ${{\mathbb M}}$ is a ${\mathbb W_t}$-module, its $p$-adic Fourier transform is defined as $\widehat{{{\mathbb M}}} = \mathbb W_{\eta}\otimes_{{\mathbb W_t}} {{\mathbb M}}$, where $\mathbb W_{\eta}$ is regarded as a right ${\mathbb W_t}$-module via the $K$-algebra isomorphism given by $t\mapsto
-\,\partial_{\eta}/\pi$, $\partial_t \mapsto \pi\cdot \eta$. If $m\in{{\mathbb M}}$, we put $\widehat{m}=1\otimes m\in\widehat{{{\mathbb M}}}$.
If $\tau$ is a coordinate, we denote by $\mathcal
R_{\tau}(\theta)$ the Robba ring of power series $\sum_{i\in\,\mathbb Z}a_i\,\tau^i$, $a_i\in K$, convergent in some annulus $\theta-\epsilon<\,\mid\tau\mid<\theta$, $\epsilon>0$, endowed with the derivation $\partial_{\tau}$. Let $P(t,\partial_t)=\sum_{k=0}^d \, a_k(t)\partial_t^k\in K[t]\langle
\partial_t \rangle$, set $\mathbb M_P :=\mathbb W_t/\mathbb W_t \, P$. We make the following assumptions on the differential operator $P$:
1. $P$ is singular only at zero and at infinity, and $\deg(a_d(t))\geqslant
\deg(a_i(t))$ for all $1\leqslant i \leqslant d$ (that is, the formal slopes of the singularity at infinity of $P$ are smaller than or equal to $+1$).
2. The $\mathcal R_t(1)-$module with connection $\mathcal R_t(1)\otimes_{K[t,t^{-1}]}{{\mathbb M}}_P$ is soluble at $1$ (see [@CM1 8.7]).
As in the formal case, if $\mathbb N$ is a $\mathbb
W_{\eta}$-module, on the ${\mathcal A(\omega)[\eta]}$-module ${\mathcal A(\omega)[\eta]}\otimes_{K[\eta]}\mathbb N$ we will consider the connection given by $$\nabla (\alpha \otimes n):=
\partial_{\eta^{-1}}(\alpha) \otimes n\, -\, \alpha \otimes
\eta^2
\partial_{\eta} n.$$
[**Theorem: **]{} [*Let $P\in K[t]\langle
\partial_t \rangle$ satisfy the conditions i) and ii) above. Then the map $$\Upsilon: {\mathcal A(\omega)[\eta]}\otimes_{K[\eta]} \widehat{{{\mathbb M}}_P} \longrightarrow
{\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}}_P)$$ given by $\Upsilon(\alpha\otimes \widehat{m})= \alpha\otimes m$ is an isomorphism of ${\mathcal A(\omega)[\eta]}$-modules with connection.*]{}
[*Proof:*]{} It is easy to check that $\Upsilon$ is a morphism of ${\mathcal A(\omega)[\eta]}$-modules with connection; we have to prove that it is an isomorphism. We have $${\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}}_P)\cong \frac{{\Phi ^{\,(\,0,\,\infty)}}}{{\Phi ^{\,(\,0,\,\infty)}}\cdot P(t/\pi,\pi\,\eta)}.$$ By ii) we have (with the notations of [@CM1]) $Ray(\mathcal
R_t(1)\otimes_{K[t,t^{-1}]}{{\mathbb M}}_P, 1^{-})=1$, so for $\lambda$ close to $+1$ we have $ \| a_{d-i}(t)\|_{\lambda} \leqslant
\lambda^{-i}\ \| a_d(t)\|_{\lambda}$ ([@CM1 Corollaire 6.4]). Since $P(t,\partial_t)$ is singular only at zero and infinity, we have $a_d(t)=\alpha_d\,t^{\delta}$, with $\alpha_d\in K$, so $\|a_d(t/\pi)\|_{\lambda}=\omega^{-\delta}\,\|a_d(t)\|_{\lambda}$ and since $\delta\geqslant \deg \, a_{d-i}(t)$ for all $i=0,\dots,d$, we get $\|a_{d-i}(t/\pi)\|_{\lambda}\leqslant\omega^{-\delta}\,\|a_{d-i}(t)\|_{\lambda}$, so it follows that $\| a_{d-i}(t/\pi)\|_{\lambda} \leqslant
\lambda^{-i}\ \| a_d(t/\pi)\|_{\lambda}$. If $0<\rho<\omega\lambda$, then $$\|a_d(t/\pi)\|_{\lambda} \,
\omega^{d}\rho^{-d}\geqslant\|a_{d-i}(t/\pi)\|_{\lambda} \,
\omega^{d-i}\cdot
(\omega\,\lambda)^{i}\rho^{-d}\geqslant\|a_{d-i}(t/\pi)\|_{\lambda}
\, \omega^{d-i}\cdot \rho^{-d+i}$$ for $\lambda$ close to one, which is just the condition of dominance for the microdifferential operator $P(t/\pi,\pi\,\eta)$. As in the formal case, it follows now from the division theorem that ${\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}}_P)$ is a free ${\mathcal A(\omega)[\eta]}$-module with basis $1\otimes
t^i$, $i=0,\dots,\delta-1$, and since $\Upsilon (1\otimes (-1)^i
\partial_{\eta}^i/\pi^i)=1\otimes t^i$, the morphism $\Upsilon$ is surjective.
We compute next the rank over ${\mathcal A(\omega)[\eta]}$ of ${\mathcal A(\omega)[\eta]}\otimes_{K[\eta]} \widehat{{{\mathbb M}}_P}$. Denote by $\alpha_i\in K$ the (possibly zero) coefficient of $t^{\delta}$ in $a_i(t)\in K[t]$. The coefficient of $\partial_{\eta}^{\delta}$ in the differential operator $\widehat{P}=P(-\partial_{\eta}/\pi, \pi\,\eta)$ will be the polynomial $q(\eta)=(-1)^{\delta}\sum_i
\alpha_{d-i}\,\pi^{d-\delta-i}\,\eta^{d-i}\in K[\eta]$. By condition ii), $\mid\alpha_d\mid\geqslant \mid \alpha_i\mid$, which implies that the roots of $q(\eta)$ are either zero or of absolute value smaller than $\omega^{-1}$. It follows that $q(\eta)$ is a unit of ${\mathcal A(\omega)[\eta]}$, and then the rank of ${\mathcal A(\omega)[\eta]}\otimes_{K[\eta]} \widehat{{{\mathbb M}}_P}$ over ${\mathcal A(\omega)[\eta]}$ equals $\delta$.
Thus $\Upsilon$ is an epimorphism between two ${\mathcal A(\omega)[\eta]}$-modules with connection of the same rank. Since the ring ${\mathcal A(\omega)[\eta]}$ is a localization of $\mathcal A_{\eta^{-1}}(\omega)$, it follows from [@CM1 8.1], that its kernel is zero and then the theorem is proved. $\Box$
Since ${\mathcal A(\omega)[\eta]}$ is a subring of ${\mathcal R_{\eta^{-1}}(\omega)}$, we have
[**Corollary: **]{} [*Under the same hypotheses as in the theorem above, there is an isomorphism of ${\mathcal R_{\eta^{-1}}(\omega)}$-modules with connection $${\mathcal R_{\eta^{-1}}(\omega)}\otimes_{K[\eta]} \widehat{{{\mathbb M}}_P} \longrightarrow {\mathcal R_{\eta^{-1}}(\omega)}\otimes_{{\mathcal A(\omega)[\eta]}} {\Phi ^{\,(\,0,\,\infty)}}({{\mathbb M}}_P).$$*]{}
[*Remark:*]{} Given $P\in K[t]\langle
\partial_t\rangle$ verifying the conditions of the previous theorem, one can consider as well the formal Fourier transform $\widehat{{{\mathbb M}}_P}^{for}$ of ${{\mathbb M}}_P$ (that is, the one given by $t\mapsto -\partial_{\eta},\,
\partial_t\mapsto\eta$). One can also define a variant ${\Phi ^{\,(\,0,\,\infty)}}_1$ of the ring ${\Phi ^{\,(\,0,\,\infty)}}$ (taking $\omega=1$); it is easy to check that one obtains again a ring for which the division theorem holds. Then, similarly as in the theorem above, one can show that there is an isomorphism of $\mathcal R_{\eta^{-1}}(1)$-modules with connection $$\mathcal R_{\eta^{-1}}(1) \otimes_{K[\eta]}
\widehat{{{\mathbb M}}_P}^{for} \longrightarrow \mathcal R_{\eta^{-1}}(1)
\otimes {\Phi ^{\,(\,0,\,\infty)}}_1({{\mathbb M}}_P).$$ However, unlike the $p$-adic Fourier transform, this formal transformation does not extend to the weak completion of the Weyl algebra considered in [@NM] and [@Huy], and I do not know whether it can be related to the sheaf-theoretic Fourier transform considered in [@Huy].
[8]{}
J.- E. Björk, , North-Holland Mathematical Library vol. 21. Kluwer, 1979.
G. Christol; Z. Mebkhout, In [*“Cohomologies $p$-adiques et applications arithmétiques (II)".*]{} Asterisque vol. 279 (2002) 125-183.
C. Huyghe, 321 (1995), no. 6, 759–762.
N. Katz, , in Seminaire Bourbaki, exposé 691. (1988), 105-132.
G. Laumon, (1987), 131-210.
B. Malgrange, (1981), 513-530.
B. Malgrange, Progress in Mathematics, vol. 96. Birkhäuser, 1991.
Z. Mebkhout, 119 (1997), 1027–1081.
L. Narváez-Macarro; Z. Mebkhout, Lecture Notes in Mathematics vol. 1454, 267–308.
F. Pham, Progress in Mathematics, vol.2. Birkhäuser, 1979.
J.-P. Ramis, Memoirs of the A.M.S., nr. 296. Providence, 1984.
J.-P. Ramis, Springer Lecture Notes in Physics, vol. 126, 178–199, 1980.
C. Sabbah, In [“$\mathcal D$-modules cohérents et holonomes"]{} Travaux en cours, vol.45. Hermann, 1993.
C. Sabbah, . Savoirs Actuels, EDP Sciences. Paris, 2002.
[*Departament d’Algebra i Geometria, Universitat de Barcelona.\
Gran Via, 585.\
E-08007 Barcelona, Spain.\
e-mail address: [email protected]*]{}
[^1]: Partially supported by the DGCYT, BFM2002-01240.
|
---
abstract: |
This paper introduces the fifth oriental language recognition (OLR) challenge AP20-OLR, which intends to improve the performance of language recognition systems, along with APSIPA Annual Summit and Conference (APSIPA ASC). The data profile, three tasks, the corresponding baselines, and the evaluation principles are introduced in this paper.
The AP20-OLR challenge includes more languages, dialects and real-life data provided by Speechocean and the NSFC M2ASR project, and all the data is free for participants. The challenge this year still focuses on practical and challenging problems, with three tasks: (1) cross-channel LID, (2) dialect identification and (3) noisy LID. Based on Kaldi and Pytorch, recipes for i-vector and x-vector systems are also provided as baselines for the three tasks. These recipes will be published online and are available for participants to configure their own LID systems. The baseline results on the three tasks demonstrate that the tasks in this challenge are difficult and worth further effort to achieve better performance.
author:
-
bibliography:
- 'olr.bib'
title: 'AP20-OLR Challenge: Three Tasks and Their Baselines'
---
language recognition, language identification, oriental language, AP20-OLR challenge
Introduction
============
Language identification (LID) refers to identifying the language category of an utterance, and it is usually placed at the front end of speech processing systems, such as automatic speech recognition (ASR), so LID technology plays an important role in multilingual interaction applications. However, there are still difficult issues that degrade the performance of LID systems, such as cross-channel conditions, the lack of training resources and noisy environments.
The oriental language families, as a part of the many language families around the world, often include Austroasiatic languages (e.g., Vietnamese, Cambodian) [@sidwell201114], Tai-Kadai languages (e.g., Thai, Lao), Hmong-Mien languages (e.g., some dialects in south China), Sino-Tibetan languages (e.g., Mandarin Chinese), Altaic languages (e.g., Korean, Japanese) and Indo-European languages (e.g., Russian) [@ramsey1987languages; @shibatani1990languages; @comrie1996russian].
Dialect, often referring to a variety of a specific language, is also a typical linguistic phenomenon. Different dialects may be treated as different languages for speech processing. As an oriental country, China has 56 ethnic groups, and each ethnic group has its own unique dialect(s). Some of these Chinese dialects may share part of their written system with Mandarin Chinese, but their completely different pronunciations result in more complicated multilingual phenomena.
The oriental language recognition (OLR) challenge is organized annually, aiming at improving the research on multilingual phenomena and advancing the development of language recognition technologies. The challenge has been conducted four times since 2016, namely AP16-OLR [@wang2016ap16], AP17-OLR [@tang2017ap17], AP18-OLR [@tang2018ap18] and AP19-OLR [@tang2019ap19], each attracting dozens of teams around the world.
AP19-OLR involved more than $10$ languages and focused on three challenging tasks: (1) short-utterance ($1$ second) LID, which was inherited from AP18-OLR; (2) cross-channel LID; (3) zero-resource LID. In the first task, the system submitted by the Innovem-Tech team achieved the best performance ($C_{avg}$ with $0.0212$, and EER with $2.47\%$). In the second task, the system submitted by the Samsung SSLab team achieved the best $C_{avg}$ performance with $0.2008$, and EER with $20.24\%$. And in the third task, the system submitted by the XMUSPEECH team achieved the best $C_{avg}$ performance with $0.0113$, and EER with $1.13\%$. From these results, one can see that for the cross-channel condition, the task remains challenging. More details about the past four challenges can be found on the challenge website.[^1]
Based on the experience of the last four challenges and the demand from industrial applications, we propose the fifth OLR challenge. This new challenge, denoted by AP20-OLR, will be hosted by APSIPA ASC 2020. It involves more languages/dialects and focuses on more practical and challenging tasks: (1) cross-channel LID, as in the last challenge, (2) dialect identification, where three dialect resources are provided for training, but three other languages are also included in the test set to form an open-set dialect identification task, and (3) noisy LID, which reflects another real-life demand on speech technology: dealing with low-SNR conditions.
In the rest of the paper, we will present the data profile and the evaluation plan of the AP20-OLR challenge. To assist participants to build their own submissions, two types of baseline systems are provided, based on Kaldi and Pytorch respectively.
------- ------------------------------------------ --------- ----------------- ----------- ------------ ----------------- ----------- ------------
Code Description Channel No. of Speakers Utt./Spk. Total Utt. No. of Speakers Utt./Spk. Total Utt.
ct-cn Cantonese in China Mainland and Hongkong Mobile 24 320 7559 6 300 1800
zh-cn Mandarin in China Mobile 24 300 7198 6 300 1800
id-id Indonesian in Indonesia Mobile 24 320 7671 6 300 1800
ja-jp Japanese in Japan Mobile 24 320 7662 6 300 1800
ru-ru Russian in Russia Mobile 24 300 7190 6 300 1800
ko-kr Korean in Korea Mobile 24 300 7196 6 300 1800
vi-vn Vietnamese in Vietnam Mobile 24 300 7200 6 300 1800
Code Description Channel No. of Speakers Utt./Spk. Total Utt. No. of Speakers Utt./Spk. Total Utt.
ka-cn Kazakh in China Mobile 86 50 4200 86 20 1800
ti-cn Tibetan in China Mobile 34 330 11100 34 50 1800
uy-id Uyghur in China Mobile 353 20 5800 353 5 1800
------- ------------------------------------------ --------- ----------------- ----------- ------------ ----------------- ----------- ------------
Male and Female speakers are balanced.
The number of total utterances might be slightly smaller than expected, due to the quality check.
Database profile
================
Participants of AP20-OLR can request the following datasets for system construction; all of these data can be used to train their submission systems.
- AP16-OL7: The standard database for AP16-OLR, including AP16-OL7-train, AP16-OL7-dev and AP16-OL7-test.
- AP17-OL3: A dataset provided by the M2ASR project, involving three new languages. It contains AP17-OL3-train and AP17-OL3-dev.
- AP17-OLR-test: The standard test set for AP17-OLR. It contains AP17-OL7-test and AP17-OL3-test.
- AP18-OLR-test: The standard test set for AP18-OLR. It contains AP18-OL7-test and AP18-OL3-test.
- AP19-OLR-dev: The development set for AP19-OLR. It contains AP19-OLR-dev-task2 and AP19-OLR-dev-task3.
- AP19-OLR-test: The standard test set for AP19-OLR. It contains AP19-OL7-test and AP19-OL3-test.
- AP20-OLR-dialect: The newly provided training set, including three kinds of Chinese dialects.
- THCHS30: The THCHS30 database (plus the accompanied resources) published by CSLT, Tsinghua University [@wang2015thchs].
Besides the speech signals, the AP16-OL7 and AP17-OL3 databases also provide lexicons of all the 10 languages, as well as the transcriptions of all the training utterances. These resources allow training acoustic-based or phonetic-based language recognition systems. Training phone-based speech recognition systems is also possible, though large vocabulary recognition systems are not well supported, due to the lack of large-scale language models.
A test dataset AP20-OLR-test will be provided at the date of result submission, which includes three parts corresponding to the three LID tasks.
AP16-OL7
--------
The AP16-OL7 database was originally created by Speechocean, targeting various speech processing tasks. It was provided as the standard training and test data in AP16-OLR. The entire database involves 7 datasets, each in a particular language. The seven languages are: Mandarin, Cantonese, Indonesian, Japanese, Russian, Korean and Vietnamese. The data volume for each language is about $10$ hours of speech signals recorded in reading style. The signals were recorded by mobile phones, with a sampling rate of $16$ kHz and a sample size of $16$ bits.
For Mandarin, Cantonese, Vietnamese and Indonesian, the recording was conducted in a quiet environment. As for Russian, Korean and Japanese, there are $2$ recording sessions for each speaker: the first session was recorded in a quiet environment and the second was recorded in a noisy environment. The basic information of the AP16-OL7 database is presented in Table \[tab:ol10\], and the details of the database can be found on the challenge website or in the description paper [@wang2016ap16].
AP17-OL7-test
-------------
The AP17-OL7 database is a dataset provided by SpeechOcean. This dataset contains 7 languages as in AP16-OL7, each containing $1800$ utterances. The recording conditions are the same as AP16-OL7. This database is used as part of the test set for the AP17-OLR challenge.
AP17-OL3
--------
The AP17-OL3 database contains 3 languages: Kazakh, Tibetan and Uyghur, all are minority languages in China. This database is part of the Multilingual Minorlingual Automatic Speech Recognition (M2ASR) project, which is supported by the National Natural Science Foundation of China (NSFC). The project is a three-party collaboration, including Tsinghua University, the Northwest National University, and Xinjiang University [@wangm2asr]. The aim of this project is to construct speech recognition systems for five minor languages in China (Kazakh, Kirgiz, Mongolia, Tibetan and Uyghur). However, our ambition is beyond that scope: we hope to construct a full set of linguistic and speech resources and tools for the five languages, and make them open and free for research purposes. We call this the M2ASR Free Data Program. All the data resources, including the tools published in this paper, are released on the web site of the project.[^2]
The sentences of each language in AP17-OL3 are randomly selected from the original M2ASR corpus. The data volume for each language in AP17-OL3 is about $10$ hours of speech signals recorded in reading style. The signals were recorded by mobile phones, with a sampling rate of $16$ kHz and a sample size of $16$ bits. We selected $1800$ utterances for each language as the development set (AP17-OL3-dev), and the rest is used as the training set (AP17-OL3-train). The test set of each language involves $1800$ utterances, and is provided separately and denoted by AP17-OL3-test. Compared to AP16-OL7, AP17-OL3 contains much more variations in terms of recording conditions and the number of speakers, which may inevitably increase the difficulty of the challenge task. The information of the AP17-OL3 database is summarized in Table \[tab:ol10\].
AP18-OLR-test
-------------
The AP18-OLR-test database is the standard test set for AP18-OLR, which contains AP18-OL7-test and AP18-OL3-test. Like the AP17-OL7-test database, AP18-OL7-test contains the same $7$ target languages, each with $1800$ utterances, while AP18-OL7-test also contains utterances from several interfering languages. The recording conditions are the same as for AP17-OL7-test. Like the AP17-OL3-test database, AP18-OL3-test contains the same $3$ languages, each with $1800$ utterances. The recording conditions are also the same as for AP17-OL3-test.
AP19-OLR-test
-------------
The AP19-OLR-test database is the standard test set for AP19-OLR, which includes 3 parts corresponding to the 3 LID tasks respectively, namely AP19-OLR-short, AP19-OLR-channel and AP19-OLR-zero.
AP20-OLR-dialect
----------------
AP20-OLR-dialect is the new training set provided by SpeechOcean. It includes three Chinese dialects, namely Hokkien, Sichuanese and Shanghainese. Each dialect has about 8000 utterances. The signals were recorded by mobile phones, with a sampling rate of $16$ kHz and a sample size of $16$ bits.
AP20-OLR-test
-------------
The AP20-OLR-test database is the standard test set for AP20-OLR, which includes 3 parts corresponding to the 3 LID tasks respectively, namely AP20-OLR-channel-test, AP20-OLR-dialect-test and AP20-OLR-noisy-test.
- AP20-OLR-channel-test: This subset is designed for the cross-channel LID task, which contains six of the ten target languages, but was recorded with different recording equipments and environment. The six languages are Cantonese, Indonesian, Japanese, Russian, Korean and Vietnamese. Each language has about 1800 utterances.
- AP20-OLR-dialect-test: This subset is designed for the dialect identification task, including three dialects: Hokkien, Sichuanese and Shanghainese. Considering the real-life situation, three other nontarget languages, namely Mandarin, Malay and Thai, are included in this subset to form the open-set dialect identification condition, and there may be some cross-channel utterances as well. Each dialect/language has about 1800 utterances.
- AP20-OLR-noisy-test: This subset is designed for the noisy LID task, which contains five of the ten target languages, but was recorded under noisy environment (low SNR). The five languages are Cantonese, Japanese, Russian, Korean and Mandarin. Each language has about 1800 utterances.
AP20-OLR challenge
==================
Following the definition of NIST LRE15 [@lre15], the task of the LID challenge is defined as follows: Given a segment of speech and a language hypothesis (i.e., a target language of interest to be detected), the task is to decide whether that target language was in fact spoken in the given segment (yes or no), based on an automated analysis of the data contained in the segment.
The AP20-OLR challenge includes three tasks as follows:
- Task 1: cross-channel LID is a closed-set identification task, which means the language of each utterance is among the known traditional $6$ target languages, but the utterances were recorded over different channels.
- Task 2: dialect identification is an open-set identification task, in which three nontarget languages are added to the test set along with the three target dialects.
- Task 3: noisy LID, where noisy test data of the $5$ target languages will be provided.
System input/output
-------------------
The input to the LID system is a set of speech segments in unknown languages. For task 1 and task 3, those speech segments are within the $6$ or $5$ known target languages. For task 2, the three target dialects of the speech segments are the same as the three dialects in AP20-OLR-dialect. The task of the LID system is to determine the confidence that a language is contained in a speech segment. More specifically, for each speech segment, the LID system outputs a score vector $\langle\ell_1, \ell_2, \ldots, \ell_{10}\rangle$, where $\ell_i$ represents the confidence that language $i$ is spoken in the speech segment. The scores should be comparable across languages and segments. This is consistent with the principles of LRE15, but differs from that of LRE09 [@lre09], where an explicit decision is required for each trial.
In summary, the output of an OLR submission will be a text file, where each line contains a speech segment plus a score vector for this segment, e.g.,
--------- ---------- ---------- ----- ---------- ------------- -- -- --
lang$_1$ lang$_2$ ... lang$_9$ lang$_{10}$
seg$_1$ 0.5 -0.2 ... -0.3 0.1
seg$_2$ -0.1 -0.3 ... 0.5 0.3
... ...
--------- ---------- ---------- ----- ---------- ------------- -- -- --
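For concreteness, such a score file can be written with a few lines of Python (a minimal illustration; the segment names and score values below are made up):

```python
# Write one line per segment: the segment id followed by its 10 language scores.
scores = {
    "seg_1": [0.5, -0.2, 0.1, -0.3, 0.0, 0.2, -0.1, 0.4, -0.3, 0.1],
    "seg_2": [-0.1, -0.3, 0.2, 0.5, 0.3, -0.2, 0.1, 0.0, 0.5, 0.3],
}
with open("submission.txt", "w") as f:
    for seg, vec in scores.items():
        f.write(seg + " " + " ".join(f"{s:.4f}" for s in vec) + "\n")
```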
Training condition
------------------
- The use of additional training materials is forbidden, including the use of non-speech data for data augmentation purposes. The only resources that are allowed to use are: AP16-OL7, AP17-OL3, AP17-OLR-test, AP18-OLR-test, AP19-OLR-test, AP19-OLR-dev, AP20-OLR-dialect and THCHS30.
Test condition
--------------
- All the trials should be processed. Scores of lost trials will be interpreted as $-\infty$.
- The speech segments in each task should be processed independently, and each test segment in a group should be processed independently too. Knowledge from other test segments must not be used (e.g., the score distribution over all the test segments).
- Speaker information must not be used.
- Listening to any speech segments is not allowed.
Evaluation metrics
------------------
As in LRE15, the AP20-OLR challenge chooses $C_{avg}$ as the principal evaluation metric. First define the pair-wise loss that combines the miss and false alarm probabilities for a particular target/non-target language pair:
$$C(L_t, L_n)=P_{Target} P_{Miss}(L_t) + (1-P_{Target}) P_{FA}(L_t, L_n)$$
where $L_t$ and $L_n$ are the target and non-target languages, respectively; $P_{Miss}$ and $P_{FA}$ are the miss and false alarm probabilities, respectively. $P_{Target}$ is the prior probability for the target language, which is set to $0.5$ in the evaluation. Then the principal metric $C_{avg}$ is defined as the average of the above pair-wise performance:
$$C_{avg} = \frac{1}{N} \sum_{L_t} \left\{
\begin{aligned}
& \ P_{Target} \cdot P_{Miss}(L_t) \\
& + \sum_{L_n}\ P_{Non-Target} \cdot P_{FA}(L_t, L_n)\
\end{aligned}
\right\}$$
where $N$ is the number of languages, and $P_{Non-Target}$ = $(1-P_{Target}) / (N -1 )$. For the open-set testing condition, all interfering languages will be treated as one unknown language in the computation of $C_{avg}$. We have provided the evaluation scripts for system development.
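A minimal Python sketch of this metric is given below. It is not the official evaluation script; in particular, it assumes that a hard detection decision is obtained by thresholding each language score at zero, which is our simplifying assumption.

```python
import numpy as np

def c_avg(scores, labels, p_target=0.5, threshold=0.0):
    """scores: (num_segments, N) array of per-language confidences;
    labels: true language index of each segment (target languages only)."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    n_lang = scores.shape[1]
    p_nontarget = (1.0 - p_target) / (n_lang - 1)
    total = 0.0
    for lt in range(n_lang):
        is_lt = labels == lt
        # P_Miss(L_t): target segments rejected for their own language
        p_miss = np.mean(scores[is_lt, lt] <= threshold) if is_lt.any() else 0.0
        total += p_target * p_miss
        for ln in range(n_lang):
            if ln == lt:
                continue
            is_ln = labels == ln
            # P_FA(L_t, L_n): segments of L_n accepted as L_t
            p_fa = np.mean(scores[is_ln, lt] > threshold) if is_ln.any() else 0.0
            total += p_nontarget * p_fa
    return total / n_lang
```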
Baseline systems
================
Two kinds of baseline LID systems were constructed in this challenge based on Kaldi [@povey2011kaldi] and Pytorch [@Pytorch]: the i-vector model baseline and the extended TDNN x-vector model baselines, respectively. Feature extraction and the back-ends were all implemented with Kaldi. To provide more options, we built the i-vector and x-vector models with Kaldi, and implemented an x-vector model with Pytorch as well. The Kaldi and Pytorch recipes of these baselines can be downloaded from the challenge web site.[^3]
We trained the baseline systems with a combined dataset including AP16-OL7, AP17-OL3 and AP17-OLR-test, and the number of target classes equals the number of languages, i.e., $10$. Before training, we adopted data augmentation, including speed and volume perturbation, to increase the amount and diversity of the training data. For speed perturbation, we applied a speed factor of 0.9 or 1.1 to slow down or speed up the original recording. For volume perturbation, a random volume factor was applied. Finally, two augmented copies of the original recording were added to the original data set to obtain a 3-fold combined training set.
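A rough numpy sketch of this augmentation is shown below (an illustration only; the baselines use the standard Kaldi perturbation tools, and the resampling-based speed change and the gain range here are our simplifications):

```python
import numpy as np
from scipy.signal import resample

def speed_perturb(wave, factor):
    """Approximate speed perturbation: factor 1.1 speeds up (shortens) the
    recording, factor 0.9 slows it down (a simplification of sox 'speed')."""
    return resample(wave, int(round(len(wave) / factor)))

def volume_perturb(wave, low=0.5, high=2.0, rng=None):
    """Scale the waveform by a random gain; the gain range is an assumption."""
    if rng is None:
        rng = np.random.default_rng()
    return wave * rng.uniform(low, high)

def threefold(wave):
    """Original recording plus two perturbed copies, as in the 3-fold set."""
    return [wave,
            volume_perturb(speed_perturb(wave, 0.9)),
            volume_perturb(speed_perturb(wave, 1.1))]
```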
The acoustic features were 20-dimensional Mel frequency cepstral coefficients (MFCCs) together with 3-dimensional pitch features, and energy-based VAD was used to filter out non-speech frames.
The back-end was the same for all three tasks once the embeddings were extracted from the model. Linear discriminant analysis (LDA) trained on the enrollment set was employed to emphasize language-related information. The dimensionality of the LDA projection space was set to 100. After the LDA projection and centering, logistic regression (LR) trained on the enrollment set was used to compute the score of a trial on a particular language.
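The same back-end can be sketched with scikit-learn as follows (only an illustration with our own variable names; note that scikit-learn limits LDA to at most $N-1$ components for $N$ classes, so the 100-dimensional projection of the Kaldi back-end is not reproduced exactly):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

def train_backend(enroll_emb, enroll_lang, lda_dim=100):
    """enroll_emb: (n, d) embeddings; enroll_lang: language index per embedding."""
    n_classes = len(np.unique(enroll_lang))
    lda = LinearDiscriminantAnalysis(n_components=min(lda_dim, n_classes - 1))
    proj = lda.fit_transform(enroll_emb, enroll_lang)
    mean = proj.mean(axis=0)                       # centering after projection
    clf = LogisticRegression(max_iter=1000).fit(proj - mean, enroll_lang)
    return lda, mean, clf

def score_trials(lda, mean, clf, test_emb):
    """Per-language log-probabilities used as trial scores."""
    return clf.predict_log_proba(lda.transform(test_emb) - mean)
```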
i-vector system
---------------
The i-vector baseline system was constructed based on the i-vector model \[15\], \[16\]. The acoustic features were augmented by their first and second order derivatives, resulting in 69-dimensional feature vectors. The UBM involved 2,048 Gaussian components and the dimensionality of the i-vectors was 600.
x-vector system
---------------
We used the x-vector system with an extended TDNN as the x-vector baseline system [@snyder2018x; @snyder2018spoken; @villalba2016etdnn]. Compared to the traditional x-vector, the extended TDNN x-vector structure uses a slightly wider temporal context in the TDNN layers and interleaves dense layers between the TDNN layers, which leads to a deeper x-vector model. This deep structure was trained to classify the $N$ languages in the training data with the cross-entropy (CE) loss. After training, embeddings called ‘x-vectors’ were extracted from the affine component of the penultimate layer. Two implementations of this model were conducted with Kaldi and Pytorch, respectively.
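A simplified PyTorch sketch of such a network is shown below; it only illustrates the overall structure (frame-level TDNN layers realized as dilated 1-D convolutions, statistics pooling, and an embedding layer before the classifier). The layer sizes and context widths are illustrative and do not reproduce the exact extended TDNN configuration used in the baseline.

```python
import torch
import torch.nn as nn

class SimpleXVector(nn.Module):
    """Illustrative x-vector network (not the exact baseline architecture)."""
    def __init__(self, feat_dim=23, emb_dim=512, num_langs=10):
        super().__init__()
        self.frame_layers = nn.Sequential(
            nn.Conv1d(feat_dim, 512, kernel_size=5, dilation=1), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=3, dilation=2), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 512, kernel_size=3, dilation=3), nn.ReLU(), nn.BatchNorm1d(512),
            nn.Conv1d(512, 1500, kernel_size=1), nn.ReLU(), nn.BatchNorm1d(1500),
        )
        self.embedding = nn.Linear(2 * 1500, emb_dim)     # after mean+std pooling
        self.classifier = nn.Sequential(nn.ReLU(), nn.BatchNorm1d(emb_dim),
                                        nn.Linear(emb_dim, num_langs))

    def forward(self, x):                  # x: (batch, feat_dim, num_frames)
        h = self.frame_layers(x)
        stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
        emb = self.embedding(stats)        # the embedding ('x-vector') is taken here
        return self.classifier(emb), emb

# Training minimizes the cross-entropy of the language labels:
#   logits, _ = model(chunk); loss = nn.CrossEntropyLoss()(logits, lang_ids)
```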
### Implementation details on Kaldi
A chunk size between 60 and 80 frames was used with sequential sampling when preparing the training examples. The model was optimized with the SGD optimizer, with a mini-batch size of 128. Kaldi’s parallel training and sub-model fusion strategy was used.
: C$_{avg}$ and EER results on the referenced development sets[]{data-label="tab:1"}

: C$_{avg}$ and EER results on AP20-OLR-test[]{data-label="tab:2"}
### Implementation details on Pytorch
A chunk size of 100 frames was used with language-balanced sampling when preparing the training examples. The language-balanced sampling ensured that the numbers of examples per language were roughly the same, by repeatedly sampling languages with fewer training frames. The model was optimized with the Adam optimizer, with a mini-batch size of 512. Warm restarts were used to control the learning rate, and feature dropout was used to enhance robustness.
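The language-balanced sampling and the warm-restart learning-rate schedule can be approximated in PyTorch as follows (an illustration; the weighting is per example rather than per frame, and the restart period is a placeholder value, not the one used in the baseline):

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(lang_of_example):
    """Sample rarer languages more often so that languages are roughly balanced."""
    lang_of_example = np.asarray(lang_of_example)
    counts = np.bincount(lang_of_example)
    weights = 1.0 / counts[lang_of_example]
    return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                 num_samples=len(weights), replacement=True)

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)
# loader = DataLoader(train_set, batch_size=512, sampler=balanced_sampler(langs))
```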
Performance results
-------------------
The primary evaluation metric in AP20-OLR is $C_{avg}$. Besides that, we also present the performance in terms of equal error rate (EER). These metrics evaluate system performance from different perspectives, offering a whole picture of the capability of the tested system. The performance of the baselines is evaluated on the AP20-OLR-test database, but we also provide results on the referenced development sets. For task 1, we choose the cross-channel subset of AP19-OLR-test as the referenced development set. For task 2, the dialect test subset of AP19-OLR-dev-task3, which contains the three target dialects (Hokkien, Sichuanese and Shanghainese), and the test subset of AP19-OLR-eval-task3, which contains three nontarget (interfering) languages (Catalan, Greek and Telugu), are combined as the referenced development set. Since no noisy data set was given in the past OLR challenges, we do not present a referenced development set for task 3 in this challenge. The referenced development sets are used to help estimate the system performance when participants reproduce the baseline systems or prepare their own systems, and participants are encouraged to design their own development sets.
Table \[tab:1\] and Table \[tab:2\] show the utterance-level C$_{avg}$ and EER results on the referenced development sets and AP20-OLR-test, respectively.
### Cross-channel LID
The first task identifies cross-channel utterances. The enrollment sets are subsets of the 3-fold combined training set mentioned above, in which the utterances of the same languages as in the test sets are retained, namely AP20-ref-dev-task1 and AP20-ref-enroll-task1. On the referenced development set, an inconsistency between the two metrics $C_{avg}$ and EER is observed. The x-vector model on Pytorch achieves the best $C_{avg}$ performance with $0.2696$, but its EER of $26.94\%$ is much worse than that of the i-vector model (EER of $19.40\%$). On the evaluation set, the x-vector model on Pytorch achieves the best $C_{avg}$ performance with $0.1321$, and EER with $14.58\%$. Meanwhile, the trend of the baselines’ performance differs between the referenced development set and the evaluation set, since the channel conditions might be different between these two test sets.
### Dialect Identification
The second task identifies three target dialects from six languages/dialects. For the referenced development set, it should be noted that AP19-OLR-dev-task3-test only contains 500 utterances per target dialect, and the two interfering languages in AP19-OLR-eval-task3-test are European languages, while the languages in the training set are all oriental languages, so the performance of Pytorch’s x-vector baseline is relatively unsatisfactory, since it fits the training data more strongly compared with the Kaldi model. In the evaluation set, the three nontarget languages are Asian languages, and the best $C_{avg}$ performance is achieved by the Pytorch x-vector model, with $0.1752$, and EER with $19.74\%$. The Kaldi x-vector and i-vector models have similar performance on both the referenced development set and the evaluation set in terms of $C_{avg}$.
### Noisy LID
The evaluation process for task 3 can be seen as identifying languages from the 5 target languages under a noisy testing condition. No referenced development set is given for this task. The referenced enrollment set is a subset of the 3-fold combined training set mentioned above, in which the utterances of the same languages as in the test set were retained, namely AP20-ref-enroll-task3. In the evaluation set, the x-vector model on Pytorch achieves the best $C_{avg}$ and EER, with $0.0873$ and $9.67\%$, respectively. The performance of the Kaldi x-vector and i-vector models is close in the evaluation set.
Conclusions
===========
In this paper, we present the data profile, the three task definitions and their baselines for the AP20-OLR challenge. In this challenge, besides the data sets presented in past challenges, new dialect training/development data sets are provided for participants, and more languages are included in the test sets. The AP20-OLR challenge consists of three tasks: (1) cross-channel LID, (2) dialect identification and (3) noisy LID. The i-vector and x-vector frameworks, implemented with Kaldi and Pytorch, are provided as baseline systems to help participants construct a reasonable starting system. Given the results of the baseline systems, the three tasks defined by AP20-OLR are rather challenging and are worthy of careful study. All the data resources are free for the participants, and the recipes of the baseline systems can be freely downloaded.
Acknowledgment {#acknowledgment .unnumbered}
==============
This work was supported by the National Natural Science Foundation of China under grants No. 61876160 and No. 61633013. We would like to thank Ming Li at Duke Kunshan University and Xiaolei Zhang at Northwestern Polytechnical University for their help in organizing this AP20-OLR challenge.
[^1]: http://olr.cslt.org
[^2]: http://m2asr.cslt.org
[^3]: http://cslt.riit.tsinghua.edu.cn/mediawiki/index.php/OLR\_Challenge\_2020
|
---
abstract: 'Accretion of gas and interaction of matter and radiation are at the heart of many questions pertaining to black hole (BH) growth and coevolution of massive BHs and their host galaxies. To answer them it is critical to quantify how the ionizing radiation that emanates from the innermost regions of the BH accretion flow couples to the surrounding medium and how it regulates the BH fueling. In this work we use high resolution 3-dimensional (3D) radiation-hydrodynamic simulations with the code [*Enzo*]{}, equipped with adaptive ray tracing module [*Moray*]{}, to investigate radiation-regulated BH accretion of cold gas. Our simulations reproduce findings from an earlier generation of 1D/2D simulations: the accretion powered UV and X-ray radiation forms a highly ionized bubble, which leads to suppression of BH accretion rate characterized by quasi-periodic outbursts. A new feature revealed by the 3D simulations is the highly turbulent nature of the gas flow in vicinity of the ionization front. [During quiescent periods between accretion outbursts, the ionized bubble shrinks in size]{} [and the gas density that precedes the ionization front increases. Consequently,]{} the 3D simulations show oscillations in the accretion rate of only $\sim$2-3 orders of magnitude, significantly smaller than 1D/2D models. We calculate the energy budget of the gas flow and find that turbulence is the main contributor to the kinetic energy of the gas but corresponds to less than 10% of its thermal energy [and thus]{} does not contribute significantly to the pressure support of the gas.'
author:
- 'KwangHo Park, John H. Wise, and Tamara Bogdanović'
bibliography:
- 'park\_bh.bib'
title: 'Radiation-driven turbulent accretion onto massive black holes'
---
Introduction
============
The existence of supermassive black holes (BHs) observed as quasars at high redshift challenges our understanding of the formation of seed BHs and their growth history in the early Universe [@Fan:2001; @Fan:2003; @Fan:2006; @Willott:2003; @Willott:2010; @Wu:2015]. Several scenarios have been suggested for the origin of seed BHs in the mass range $10^2$–$10^5\,{M_\sun}$ (intermediate-mass black holes; IMBHs), such as Population III remnants [@BrommCL:99; @AbelBN:00; @MadauR:01], collapse of primordial stellar clusters [@Devecchi:2009; @Davies:2011; @Katz:2015], and direct collapse of pristine gas [@Carr:84; @HaehneltNR:98; @Fryer:01; @BegelmanVR:06; @ChoiSB:13; @YueFSXC:14; @Regan:2017]. Alternatively, mildly metal-enriched gas can gravitationally collapse to form an IMBH in a pre-galactic disk [@LodatoN:2006; @OmukaiSH:08]. However, even such massive seed BHs still have to go through rapid growth to be observed as billion solar mass quasars at $z\sim7$. Thus, a realistic estimate of the accretion rate can provide a test of plausible theoretical scenarios [@MadauR:01; @VolonteriHM:03; @YooM:04; @Volonteri:05].
The most perplexing issue related to the existence of high redshift quasars is the ubiquity of radiative feedback associated with the central BHs, which should, in principle, preclude the rapid growth. Indeed, several works have shown that the radiative feedback from BHs regulates the gas supply from large spatial scales, slowing down the growth of BHs significantly, despite the availability of high density neutral cold gas in the early Universe [@AlvarezWA:09; @MiloCB:09; @ParkR:11; @ParkR:12; @ParkR:13; @ParkRDR:14a; @ParkRDR:14b; @Park:2016]. For example, the radiation has been shown to readily suppress accretion in the regime when ${M_{\rm BH}}{n_{\infty}}\la 10^9\,{M_\sun}{\rm cm}^{-3}$ [@ParkR:12], where ${M_{\rm BH}}$ and ${n_{\infty}}$ are the BH mass and gas number density unaffected by the BH feedback in units of ${M_\sun}$ and ${\rm cm}^{-3}$, respectively. Self-regulation occurs when the ionizing radiation from the BH accretion disk produces a hot and rarefied bubble around the BH. Simulations show that the average accretion rate is suppressed to a few percent of the classical Bondi accretion rate (i.e., measured in the absence of radiative feedback) and that the accretion rate shows an oscillatory behavior due to the accretion/feedback loop. As ${M_{\rm BH}}$ or ${n_{\infty}}$ increases, the ${M_{\rm BH}}{n_{\infty}}\ga 10^9\,{M_\sun}{\rm cm}^{-3}$ threshold is crossed. In this regime, accretion onto the BH makes a critical transition to a high accretion rate regime, the so-called [*hyper-accretion*]{} regime, where the radiation pressure can no longer resist the gravity of the inflowing gas [@Begelman:79; @PacucciVF:2015; @ParkRDR:14a; @Inayoshi:2016; @Park:2016].
The local 1D and 2D simulations of accretion mediated by radiative feedback have been extensively used to explore these accretion regimes [@ParkR:11; @ParkR:12; @ParkRDR:14b]. They commonly adopt an assumption of spherical symmetry of the accretion flow and isotropy of ionizing radiation emerging from a radiatively efficient accretion disk [@ShakuraS:73]. Specifically, 1D simulations have been used to efficiently explore the parameter space of radiative efficiency, BH mass, gas density/temperature, and spectral index of the radiation. In 2D simulations, an additional degree of freedom revealed the growth of the Rayleigh–Taylor instability across the ionization front which is suppressed and kept in check by ionizing radiation [@ParkRDR:14a]. 2D simulations are also useful to study the effect of anisotropic radiation from an accretion disc that preferentially produces radiation perpendicular to the disk plane [e.g., @Sugimura:2016].
Although the assumption of spherical symmetry is reasonable in the setup where an isolated BH is accreting from a large scale reservoir of uniform and neutral gas, it has been long known that certain physical processes can only be reliably captured in full 3D simulations. A well known example is a spherical accretion shock instability (SASI), found in supernovae simulations, where an extra degree of freedom in 3D simulations is found to significantly affect the dynamics of gas and explosion of a supernova [e.g., @BlondinS:07]. Furthermore, 3D simulations of radiation-regulated accretion are necessary in order to capture fueling and feedback of BHs in a more complex cosmological context.
The main aim of this paper is to extend numerical studies of accretion mediated by radiative feedback to full 3D local simulations and identify any physical processes that have not been captured by the local 1D/2D simulations. In order to achieve this we carry out a suite of high resolution 3D hydrodynamic simulations with the adaptive mesh refinement (AMR) code [*Enzo*]{}, equipped with the adaptive ray tracing module [*Moray*]{} [@Wise:2011; @Bryan:2014]. The results to be presented in the next sections corroborate the role of ionizing radiation in regulating accretion flow which causes an oscillatory behavior of gas accretion. More interestingly, our simulations capture the development of the radiation-driven turbulence, which plays a role in the BH fueling during periods of quiescence. In Section \[sec:method\], we explain the basic accretion physics and the numerical procedure. We present the results in Section \[sec:results\] and discuss and summarize them in Section \[sec:discussion\].
Methodology {#sec:method}
===========
Calculation of Accretion Rate
-----------------------------
One of the first steps in quantifying the radiation-regulated accretion onto BHs starts with the estimate of accretion rate. We briefly review two main approaches that have been used as a part of different numerical schemes in earlier works.
A straightforward approach to accretion rate measurement in simulations is to [*read*]{} the mass flux directly through a spherical surface centered around the BH. This approach is usually used in simulations that employ spherical polar coordinate systems and it requires that the sonic point in the accretion flow is resolved in order to return reliable results [e.g., @NovakOC:11; @ParkR:11]. For example, a minimum radius of the computation domain $r_{\rm min} \sim
10^{-3}\,({M_{\rm BH}}/10^4\,{M_\sun})$pc has been used in some of our earlier works, where logarithmically spaced radial grids make it possible to achieve a high spatial resolution close to the BH [@ParkR:11; @ParkR:12; @ParkR:13; @Park:2016; @Park:2017]. Another way to estimate the accretion rate is to [*infer*]{} the accretion rate from gas properties near the BH assuming a spherically symmetric accretion onto a point source [@Bondi:52]. In this case, it is important to resolve the Bondi radius within which the gravitational potential of the BH dominates over the thermal energy of the gas $$r_{\rm B}= \frac{G{M_{\rm BH}}}{c_{s,\infty}^2}.$$ Here, $c_{s,\infty}$ is the sound speed of the gas far from the BH. For gas with temperature $T_\infty=10^4$K, $r_{\rm B} \sim
0.5\,({M_{\rm BH}}/10^4\,{M_\sun})$pc, which is about 2 orders of magnitude larger than $r_{\rm min}$ used in the [aforementioned]{} mass flux measurement method. This approach consequently imposes less stringent requirements on numerical resolution and is often used in simulations that employ Cartesian coordinate grids.
The classical Bondi accretion rate can then be estimated in terms of the properties of the gas and BH mass $$\begin{split} \dot{M}_{\rm B} =& \; 4 \pi \lambda_{\rm B}
\rho_\infty \frac{G^2 {M_{\rm BH}}^2}{c_{\rm
s,\infty}^{3}} \\
=& \; 8.7\!\times\!10^{-4}\!
\left(\frac{n_\infty}{10^3\,{\rm cm^{-3}}} \right)
\left(\frac{T_\infty}{10^4\,{\rm K}} \right)^{-\frac{3}{2}}\\
& \times \left(\frac{{M_{\rm BH}}}{10^4 M_\odot} \right)^2\, {M_\sun}\,{\rm yr}^{-1}.
\label{eq:M_B}
\end{split}$$ where $\lambda_{\rm B}$ is a dimensionless parameter ranging from $1/4$ for adiabatic gas ($\gamma=5/3$) to $e^{3/2}/4$ for isothermal gas ($\gamma=1$), which we adopt as the value for the classical Bondi rate for the purpose of normalization.
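As a quick check of Equation (\[eq:M\_B\]), the sketch below evaluates the classical Bondi rate in CGS units; it assumes pure hydrogen ($\mu=1$), an isothermal sound speed, and the isothermal value $\lambda_{\rm B}=e^{3/2}/4$, and the function name and default arguments are ours.

```python
import numpy as np

# CGS constants
G, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, yr = 1.989e33, 3.156e7

def bondi_rate(M_bh=1e4, n_inf=1e3, T_inf=1e4, lam=np.e**1.5 / 4.0):
    """Classical Bondi rate in M_sun/yr for pure hydrogen gas."""
    rho = n_inf * m_p                      # g cm^-3
    c_s = np.sqrt(k_B * T_inf / m_p)       # isothermal sound speed, mu = 1
    mdot = 4.0 * np.pi * lam * rho * (G * M_bh * M_sun)**2 / c_s**3  # g/s
    return mdot * yr / M_sun

print(bondi_rate())   # ~8.7e-4, the normalization quoted in the equation above
```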
The latter approach is commonly preferred in large cosmological simulations where a large dynamic range of spatial scales is involved and BHs are often treated as sink particles. The Eddington-limited Bondi rate is the most common recipe for the growth of BHs used in the literature $$\dot{M}_{\rm BH}=
{\rm min} \left[\dot{M_{\rm B}}, \frac{L_{\rm Edd}}{\eta c^2}\right]
\label{eq_Mdot}$$ where $\eta$ is the radiative efficiency and $c$ is the speed of light. In this approach the Eddington rate is the maximum accretion rate onto the BH set by the radiative feedback from the BH and defined as $$L_{\rm Edd} = \frac{4\pi G{M_{\rm BH}}m_{\rm p}c}{\sigma_{\rm T}} \simeq
1.26\!\times\!10^{38}\left(\frac{{M_{\rm BH}}}{{M_\sun}}\right)\,{\rm erg\ s}^{-1}$$ for pure hydrogen gas, where $m_{\rm p}$ is the proton mass and $\sigma_{\rm
T}$ is the Thomson cross section. From the accretion rate in Equation (\[eq\_Mdot\]), the accretion luminosity is calculated as $$L_{\rm acc}= \eta \dot{M}_{\rm BH} c^2$$ where we adopt a constant radiative efficiency $\eta = 0.1$ assuming a thin disk model [@ShakuraS:73].
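A minimal sketch of the Eddington-limited prescription in Equation (\[eq\_Mdot\]) is given below, with the accretion luminosity computed for $\eta=0.1$; the constants, variable names, and unit choices are our own.

```python
import numpy as np

# CGS constants
G, m_p, c, sigma_T = 6.674e-8, 1.673e-24, 2.998e10, 6.652e-25
M_sun, yr = 1.989e33, 3.156e7

def eddington_limited_rate(M_bh_msun, mdot_bondi, eta=0.1):
    """Eddington-limited accretion rate; inputs and output in M_sun/yr."""
    L_edd = 4.0 * np.pi * G * (M_bh_msun * M_sun) * m_p * c / sigma_T  # erg/s
    mdot_edd = L_edd / (eta * c**2) * yr / M_sun                       # M_sun/yr
    return min(mdot_bondi, mdot_edd)

mdot = eddington_limited_rate(1e4, 8.7e-4)      # ~2.2e-4: the Eddington cap binds here
L_acc = 0.1 * (mdot * M_sun / yr) * c**2        # accretion luminosity, erg/s
```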
------------------ ---------------- ------------------- --------------- -------- ---------------------- ---------------------- ---------
${M_{\rm BH}}$ ${n_{\infty}}$ $L_{\rm box}$ Top $\Delta L_{\rm max}$ $\Delta L_{\rm min}$ $N_\nu$
Run ID $({M_\sun})$ $({\rm cm}^{-3})$ (kpc) Grids (pc) (pc)
[M4N3]{} $10^4$ $10^3$ 0.04 $32^3$ 1.25 0.156 4
[M4N3sec]{} $10^4$ $10^3$ 0.04 $32^3$ 1.25 0.156 4
[M4N3E8mod]{} $10^4$ $10^3$ 0.04 $32^3$ 1.25 0.156 8
[M4N3E8R64mod]{} $10^4$ $10^3$ 0.04 $64^3$ 0.625 0.078 8
[M4N4]{} $10^4$ $10^4$ 0.02 $32^3$ 0.625 0.078 4
[M6N1]{} $10^6$ $10^1$ 4.0 $32^3$ 125 15.6 4
------------------ ---------------- ------------------- --------------- -------- ---------------------- ---------------------- ---------
\[table:para\]
We adopt the latter approach (i.e., the Bondi prescription) for calculating the BH accretion rate, which is well suited to the numerical scheme used in this work based on Cartesian coordinate grid. Note however that instead of the properties of neutral gas we use $\rho_{\rm
HII}$ and $c_{s,{\rm HII}}$, which denote the density and sound speed of the gas under the influence of BH radiation. The BH accretion rate is [*estimated*]{} [@Kim:2011] $$\dot{M}_{\rm BH} = 4 \pi \rho_{\rm HII}
\frac{G^2 {M_{\rm BH}}^2}{c_{\rm s,HII}^{3}}, \label{eq:mdot_hii}$$ where we assume $\lambda_{\rm B} \sim 1$. Note, however, that in reality the equation of state of the gas changes depending on the efficiency of cooling and heating. Therefore, we assume that gas accretion onto the BH inside the [Strömgren]{} radius still occurs in a manner similar to Bondi accretion. The implication is that the BH does not accrete cold gas ($T \sim 10^4$K) directly from larger spatial scales, but is instead fueled by the ionized gas heated by its own radiation. The temperature of this photo-heated gas is $T
\sim 4\times 10^4$K for the spectral index of ionizing radiation $\alpha=1.5$, assuming pure hydrogen gas [see @ParkR:11; @ParkR:12]. Note that the mean accretion rate can be analytically derived from the pressure equilibrium in time-averaged gas density and temperature profiles between the neutral and ionized region, such that $\rho_\infty
T_\infty \simeq {2} \rho_{\rm HII} T_{\rm HII}$ [@ParkR:11]. Equation (\[eq:mdot\_hii\]) can be rewritten to estimate the mean accretion rate as a function of ($T_\infty/T_{\rm HII}$) as $$\langle\dot{M}_{\rm BH}\rangle \simeq
\left[\frac{T_\infty}{T_{\rm HII}}\right]^{5/2} \dot{M}_{\rm B}
\label{eq:mdot_mean}$$ Because $\rho_{\rm HII}$ and $c_{\rm s,HII}$ evolve constantly with time, so does the new Bondi radius for the photo-heated gas ${r_{\rm B}}' =
G{M_{\rm BH}}/c_{\rm s,HII}^2 \propto {M_{\rm BH}}/T_{\rm HII}$. Note that we aim to resolve the new Bondi radius ${r_{\rm B}}'= {r_{\rm B}}(T_\infty/T_{\rm
HII})$. For the photo-heated hydrogen gas $T_{\rm HII} \sim 4\times 10^4$K and ${r_{\rm B}}' = 0.13\,({M_{\rm BH}}/10^4\,{M_\sun})$pc which is comparable to or larger than the numerical resolution achieved in vicinity of the BH in this work (represented by $\Delta L_{\rm min}$ in Table \[table:para\]).
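Plugging the fiducial temperatures quoted in the text into Equation (\[eq:mdot\_mean\]) gives a quick sense of the expected suppression; the snippet below is only a numerical illustration of that relation.

```python
T_inf, T_HII = 1.0e4, 4.0e4            # K: neutral gas and photo-heated gas
suppression = (T_inf / T_HII)**2.5     # <Mdot_BH> / Mdot_B from Eq. (mdot_mean)
print(suppression)                     # ~0.03, i.e. a few percent of the Bondi rate
```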
Radiation-Hydrodynamic Simulations with Enzo+Moray
--------------------------------------------------
We perform local 3D radiation-hydrodynamic simulations in Cartesian coordinates using the AMR code [*Enzo*]{} coupled with the adaptive ray tracing module [*Moray*]{} [@Wise:2011; @Bryan:2014].
The BH is modeled as a sink particle and is located at the center of the computation domain. We fix the position of the BH throughout the simulation and update the BH mass to account for growth through accretion. We assume a simple initial condition of uniform temperature and density, as well as zero velocity and metallicity for the background gas [see @Yajima:2017 for dusty accretion onto BHs]. We assume an adiabatic gas ($\gamma=5/3$) and neglect its self-gravity. We select ${M_{\rm BH}}=10^4\,{M_\sun}$, ${n_{\infty}}=10^{3}\,{\rm cm}^{-3}$, and $T_\infty =10^4$K as the baseline setup in the feedback-dominated regime [@ParkR:12]. Note that the combination of ${M_{\rm BH}}$ and ${n_{\infty}}$ can be extended to other sets of simulations which return qualitatively comparable results (i.e., Equation (\[eq:mdot\_mean\]) holds and the size of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region is $\propto {M_{\rm BH}}$) when ${M_{\rm BH}}\, {n_{\infty}}$ is kept constant. Table \[table:para\] lists the parameters for each simulation. We use the following naming convention for simulations: runs are denoted as ‘MmNn’ where the BH mass is ${M_{\rm BH}}=10^{m}\,{M_\sun}$ and gas number density is ${n_{\infty}}=10^{n}\,{\rm cm}^{-3}$.
We use numerical resolution of $32^3$ on the top grid in most of our simulations, except where noted otherwise. This resolution corresponds to a coarsest resolution element with the size, $\Delta L_{\rm max}$, listed in Table \[table:para\]. With 3 levels of refinement our simulations attain the finest resolution of $\Delta L_{\rm min}= 0.156
({M_{\rm BH}}/10^4\,{M_\sun})\,$pc. The strategy we adopt for AMR is to achieve the highest level of resolution both in the central region near the BH and around the ionization front. In order to accomplish this, we enforce the highest level of refinement within the box of a size $\Delta
L_{\rm max}$ centered around the BH (see Table \[table:para\]). In addition to this requirement, we also use the local gradients of all variables (i.e., density, energy, pressure, velocity, etc.) to flag cells for refinement around the ionization front. Outflowing boundary conditions are imposed on all boundaries to prevent reflection of density waves, which form due to the expansion of the ionized region.
We use the non-equilibrium chemistry model for ${\rm H\,{\textsc i}}$, ${\rm H\,{\textsc{ii}}}$, ${\rm
He}\,\textsc{i}$, ${\rm He}\,\textsc{ii}$, ${\rm He}\,{\textsc{iii}}$, and $e^{-}$ implemented in [*Enzo*]{} [@Abel:1997; @Anninos:1997]. For simplicity, we consider the photo-heating and cooling of pure hydrogen gas and neglect the effects of radiation pressure, which are relatively minor. The radiation pressure on both the electron gas and neutral hydrogen is weaker than the local gravity by the BH and negligible relative to the thermal pressure which plays a central role in creating outflows inside the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region [see @ParkR:12 for details]. The photo-heating of the hydrogen-helium gas mixture is known to return a higher temperature inside the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region (i.e., $T_{\rm HII} \sim 6\times 10^4$K), which leads to somewhat lower accretion rate and qualitatively similar results to the pure hydrogen gas. The Compton heating plays a minor role compared to the photo-heating when the radiation spectrum is soft [@ParkRDR:14b] and is therefore neglected.
Radiation emitted from the innermost parts of the BH accretion flow is propagated through the computational domain by the module [*Moray*]{}, which solves the radiative transfer equation coupled with the equations of hydrodynamics. Specifically, [*Moray*]{} accounts for the photo-ionization of gas and calculates the amount of photo-heating, which contributes to the total thermal energy of the gas. [*Moray*]{} uses adaptive ray tracing [@AbelW:02] that is based on the HEALPix framework [@Gorski:05]. In this approach the BH is modeled as a radiation point source that emits $12\times 4^3$ rays, which are then split into four child rays so that a single cell is sampled by at least 5.1 rays. The adaptive ray splitting occurs automatically when the solid angle associated with a single ray increases with radius or if the ray encounters a high resolution AMR grid. The radiation field is updated in every time step that corresponds to the grid with the finest numerical resolution as set by the Courant-Friedrichs-Lewy condition.
Modified power-law spectral energy distribution
-----------------------------------------------
We describe the accretion luminosity with a power law spectrum, $L_\nu
=C \nu^{-\alpha}$, where $\alpha$ is the spectral index and $C$ is the normalization constant. The frequency integrated (bolometric) luminosity between the energy $h\nu_1$ and $h\nu_2$ is $L_{\rm acc} =
C(\nu_2^{1-\alpha} - \nu_1^{1-\alpha})/(1-\alpha)$ for $\alpha \ne 1$. [*Moray*]{} models the SED with $N_\nu = 4-8$ discrete energy bins that are equidistant in log-space between $E_{\rm min}=h\nu_{\rm min} =
13.6$eV and $E_{\rm max}=h\nu_{\rm max}=100$keV. A fraction of energy is allotted to each bin assuming the spectral index $\alpha=1.5$. The mean photon energy of the $n$-th bin $\langle E_n \rangle$ between energies $E_{n-1}$ and $E_{n}$ is calculated as $$\langle{E}_n\rangle =\frac{\alpha}{\alpha
-1}\frac{(E_n^{1-\alpha}-E_{n-1}^{1-\alpha})}
{(E_n^{-\alpha}-E_{n-1}^{-\alpha})}\,\, {\rm for}\,\, \alpha > 1$$ which returns the mean energy of $\langle E \rangle \simeq 40.3$eV for the entire energy range. Table \[table:sed\] lists the mean energy ($h\nu_n$) and the dimensionless fraction of bolometric luminosity allotted to each bin ($L_n$). The number of ionizing photons in each bin is then calculated as $L_n L_{\rm acc}/\langle{E}_n\rangle$.
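The binning described above is straightforward to reproduce. The short script below recomputes the mean bin energies and luminosity fractions for the 4-bin case with $\alpha=1.5$, assuming bin edges log-spaced between 13.6 eV and 100 keV as stated above; it uses only the formula just given, and the printed values should agree with the first column of Table \[table:sed\].

```python
import numpy as np

alpha, E_min, E_max, N_nu = 1.5, 13.6, 1.0e5, 4                 # energies in eV
edges = np.logspace(np.log10(E_min), np.log10(E_max), N_nu + 1)
E_lo, E_hi = edges[:-1], edges[1:]

# mean photon energy per bin (formula above) and fraction of the bolometric luminosity
E_mean = (alpha / (alpha - 1.0)) * (E_hi**(1 - alpha) - E_lo**(1 - alpha)) \
                                 / (E_hi**(-alpha)   - E_lo**(-alpha))
L_frac = (E_hi**(1 - alpha) - E_lo**(1 - alpha)) / (E_max**(1 - alpha) - E_min**(1 - alpha))

for e, f in zip(E_mean, L_frac):
    print(f"{e:10.1f} eV   {f:.4f}")   # ~28.4, 263, 2435, 22551 eV and 0.679, 0.223, 0.073, 0.024
```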
The prescription for modeling the spectrum outlined above ensures that the total power of emitted radiation is divided among the chosen number of energy bins to model the SED. In addition to the energy spectrum of radiation, another crucial consideration for radiation transport is the number of ionizing photons contained in each energy bin, as given by the photon spectrum. In order to maintain consistency in terms of the photon spectrum among simulations with different values of $N_\nu$, we introduce a modification to the number of photons in the first energy bin. This ensures that the number of near-UV ionizing photons, which are responsible for the bulk of photo-ionizations and photo-heating, remains approximately the same regardless of which SED model is used. Specifically, the mean energy of the lowest energy bin determines the temperature of [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region, which is why we adopt the same energy of $E_1=28.4$eV for 4 and 8-bin SED models (the original first entry for $N_\nu=8$ is $E_1 = 21.5$eV). The fraction of energy in this bin is accordingly adjusted from 0.4318 to 0.5704 so to preserve the same number of ionizing photons. One consequence of this optimization performed simultaneously in terms of the luminosity and number of ionizing photons is that the sum of luminosity fractions allotted to all energy bins ($L_n$) is no longer exactly equal to 100%, as can be seen from Table \[table:sed\]. [We choose the 4-bin model without modification as a reference since it still returns a consistent result with @ParkR:11 using the minimum number of energy bins.]{}
In the Appendix we include a detailed discussion of the SED optimization used in this work, along with convergence tests, and a comparison with SED models published in @Mirocha:2012. We explore a full set of modeled SEDs in our simulations and mark those that are modified as outlined in this section in Table \[table:para\]. For example, ‘E8mod’ indicates the case of a modified SED with 8 energy bins and ‘sec’ marks the case with secondary ionizations included, to account for the effect of energetic particles produced by high energy X-ray photons.
$n_\nu$ $N_\nu=4$ $N_\nu=8$
--------- ------------------- -------------------
1 (28.4, 0.6793) (28.4, 0.5704)
2 (263.0, 0.2232) (65.3, 0.2475)
3 (2435.3, 0.0734) (198.7, 0.0813)
4 (22551.1, 0.0241) (604.5, 0.0813)
5 ... (1839.5, 0.0466)
6 ... (5597.8, 0.0267)
7 ... (17034.3, 0.0153)
8 ... (51836.1, 0.0088)
: SEDs for Power-law BH radiation with $\alpha=1.5$
\[table:sed\]
Results {#sec:results}
=======
Our 3D simulations are characterized by the oscillatory behavior of the accretion rate and size of ionized region, which is analogous to the previous 1D/2D simulations. Our simulations also show the presence of the turbulent gas motion driven by the fluctuating ionized region, which has not been captured in lower dimensionality simulations.
Formation of ionized region and oscillatory behavior
----------------------------------------------------
Figure \[fig:snapshot1\] shows the evolution of the gas density (top), [H[<span style="font-variant:small-caps;">ii</span>]{}]{} fraction ($n_{\rm HII}/n_{\rm H}$, middle), and temperature (bottom) for the [M4N3E8mod]{} run. From left to right the snapshots illustrate evolution of the ionized region starting with the burst of accretion and ending with a quiescent phase just before the subsequent burst. In general, the region under the influence of ionizing radiation is characterized by the low gas density, high ionization fraction, and high temperature ($T \sim 4\times 10^4$K).
The low density [Strömgren]{} sphere roughly maintains spherical symmetry throughout a sequence of oscillations despite a highly turbulent nature of the gas between the high and low density regions separated by the ionization front. In contrast to the turbulent features imprinted in the density map shown in the top panels of Figure \[fig:snapshot1\], the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} fraction and temperature maps are relatively uniform. All maps illustrate a correlation of the size of the [Strömgren]{} sphere with the accretion rate: as the accretion rate decreases (from left to right), the average size of [Strömgren]{} sphere also decreases. Unlike the 1D/2D simulations however, in 3D simulations the [Strömgren]{} sphere never completely collapses between the accretion outbursts.
The outline of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region traced by the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} fraction and temperature shows a more dramatic departure from spherical symmetry than the smoother density maps. The “jagged" edge of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region can be directly attributed to ionization of gas by the UV photons. It arises as a consequence of inhomogeneity of the gas driven by turbulence, which produces a range of column densities along different radial directions, as seen by the central source. More energetic photons with longer mean free paths travel beyond the ionization front of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region, partially ionizing the gas there. Their effect is noticeable in the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} fraction maps as a light green halo surrounding the ionization front, the size of which remains approximately constant regardless of the accretion phase.
Accretion rate and period of oscillation
----------------------------------------
The top panel of Figure \[fig:acc\_rate\] shows evolution of the accretion rate, which is calculated from the gas density (middle), and gas temperature (bottom), for the runs M4N3 (blue solid), M4N3sec (green dashed), and M4N3E8mod (red dashed). The temperature of the ionized region, $T_{\rm HII}$, increases modestly during the accretion bursts and consequently, does not play a large role in determining the accretion rate onto the BH. It follows that the density of the ionized region, which varies by a few orders of magnitude, is the primary driver of evolution of the accretion rate.
Indeed, the amplitude and variability of the accretion rate in Figure \[fig:acc\_rate\] closely follow the density of the ionized gas, as expected given the adopted prescription of the Bondi accretion rate, $\dot{M}_{\rm BH} \propto \rho_{\rm HII}$. A notable departure of the 3D simulations is that the accretion rate spans a range of 2-3 orders of magnitude, compared to the 1D/2D simulations, where this range is measured to be 5-6 orders of magnitude [@ParkR:11; @ParkR:12]. The difference seen in the 3D simulations can be directly attributed to a higher level of “quiescent" accretion that occurs between the outbursts, while the maximum and mean accretion rates remain approximately unchanged relative to the 1D/2D models. In the case of 1D/2D simulations, a sharp density drop inside the ionization front is well preserved until the front collapses to a very small radius. The relatively high gas density inside the ionization front in the 3D simulations explains the higher level of accretion during the quiescent phase. The difference between the current 3D and former 1D/2D simulations may arise from the different dimensionality of the simulations; however, we cannot completely rule out the possibility that differences in the codes and numerical schemes are also partially responsible.
We also investigate the effect of a modified spectrum of the ionizing source, as well as that of secondary ionizations by more energetic X-ray photons, which generate energetic particles causing further ionization. Specifically, in addition to the baseline run M4N3, we carry out the run M4N3sec, which accounts for secondary ionizations, and M4N3E8mod, which uses our modified prescription for the ionizing continuum modeled in 8 energy bins. We find consistent results for all three. Based on this we conclude (a) that secondary ionizations do not have a significant impact on the thermodynamic state of the gas and (b) that our prescription for the ionizing continuum in M4N3E8mod returns physical results indistinguishable from the baseline simulation (see Appendix for more discussion of the latter).
In addition to the peak and mean accretion rates, the mean period between the bursts of accretion (i.e., the accretion duty cycle) is consistent with the previous 1D/2D results. The mean accretion rate (dotted lines) for all M4N3 runs is approximately 2 orders of magnitude lower than the Bondi rate [for neutral gas]{} (dot-dashed line). The mean period between the bursts is approximately $0.25\,$Myr, consistent with earlier results of @ParkR:11 [@ParkR:12] $$\tau_{\rm cycle} \sim 0.22\,{\rm Myr}
\left(\frac{{M_{\rm BH}}}{10^4\,{M_\sun}}\right)^{\frac{2}{3}}
\left(\frac{{n_{\infty}}}{10^3\,{\rm cm^{-3}}}\right)^{-\frac{1}{3}}
\label{eq:cycle}$$ with $\alpha=1.5$ and $\eta=0.1$ for radiative efficiency.
Phase diagrams and velocity structure
-------------------------------------
Figure \[fig:phase\] shows the phase diagrams of the gas density and temperature during the burst (left) and quiescent (right) phases for the M4N3E8mod run. The plotted values of density and temperature are measured within an 8 pc radius surrounding the BH (see Figure \[fig:snapshot1\]). During the bursts, the gas in the vicinity of the BH is photo-ionized and photo-heated to $T
\approx 5\times10^4\,$K, as illustrated by a horizontal branch in the $T-\rho$ distribution. The rest of the gas in the outer part of [Strömgren]{} sphere is shown as a separate branch, characterized by temperature decreasing with density. This temperature structure is consistent with Figure \[fig:snapshot1\] where the central region shows a fairly uniform temperature, which then decreases with radius at larger separations from the BH.
During the quiescent phase, most of the gas cools through recombination to $T \approx 10^4\,$K. However, a smaller fraction of high temperature, low density gas still appears in the same location at the top left corner of the diagram, giving rise to a relatively wide multi-phase distribution of gas. Note that in 1D/2D simulations, the central region is replenished with high density gas during the quiescent phase as the ionization front collapses to its minimum size, which is significantly smaller than the size of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region on average [@MiloCB:09; @ParkR:11]. In 3D simulations, even though the ionization front does not collapse entirely during the quiescent phase (i.e., the radius of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} bubble remains approximately half the average size), the enhanced gas density within the ionized region is still sufficient to trigger the next burst of accretion. The reason why the hot bubble maintains a larger size during the quiescent phase in the 3D simulations is related to the higher accretion rate and enhanced gas density just inside the ionization front, as explained in the previous section.
Figure \[fig:snapshot2\] shows the radial velocity, tangential velocity magnitude, Mach number, and thermal pressure in run M4N3E8mod during the same phases shown in Figure \[fig:snapshot1\]. During the burst of accretion (first column), the gas in the central region falls into the BH, boosting the accretion rate, while a strong outflow (colored in red) is still observed in the outer part of the ionized region at $r\sim 5$pc. The outflow velocity reaches $\sim 20$km/s, the transonic value for the temperature of the ionized region, as illustrated by the Mach number slices.
As the transonic gas outflow encounters the neutral medium, the kinetic energy of the gas is efficiently dissipated by shocks. The supersonic gas is mostly found near the ionization front or just outside of the [Strömgren]{} sphere. The central core shows the highest thermal pressure during the burst of accretion due to the high density of the core, but the temperature distribution within the ionized region remains uniform, as shown in Figure \[fig:snapshot1\]. A spherical shell around the ionization front also exhibits high thermal pressure because the neutral clumps of gas in this region are efficiently photo-heated by the ionizing UV photons that escape absorption by the central core.
After the burst, the central region starts to develop an outflow (second and third columns) driven by the high thermal pressure. The high thermal pressure, located at the core and around the ionization front in the first column, expands and dissipates as the accretion powered luminosity decreases in time. The average thermal pressure is maximal during the burst (first column) and decreases as a function of time displaying the minimum just before the subsequent burst (last column). The inner region becomes gradually under-pressurized due to expansion and radiative cooling. This allows the gas to flow toward the center, causing a subsequent accretion burst.
The velocity structure in this work is distinct from the 1D/2D simulations where a strong laminar (i.e., non-turbulent) outflow in the outer part of the [Strömgren]{} sphere persists most of the time. In these simulations the ionization front collapses completely due to the loss of thermal pressure in the ionized region. The 3D simulations described here instead show highly turbulent motion in both radial (first row) and tangential directions (second row) that cascades to small scales over several oscillation cycles. The tangential velocity is only suppressed in the central region during the strong inflow and outflow episodes.
Evolution of vorticity
----------------------
We use vorticity, defined as a curl of the velocity field $$\vec{\omega} = \vec{\nabla} \times \vec{v}
\label{eq:vorticity}$$ to quantify the degree of turbulent motion of the gas. Figure \[fig:vorticity\] shows the evolution of vorticity at times [0.3, 0.6, 2.0, and 6.0Myr]{} in the run [M4N3E8mod]{}. The line integral convolution [[@Cabral:1993]]{}, a method to create a texture correlated in the direction of the vector field, is shown in black over vorticity magnitude. The vorticity is highest around the ionization front and propagates [inward]{} and outward. It builds up over time and saturates during the [early phase]{} of the simulation, at [$t=2.0\,$Myr]{}, after which point it does not display a significant increase until [$t=6.0\,$Myr]{}.
The propensity of the gas to develop turbulence in 3D simulations can be understood in the context of the vorticity equation, which written as Lagrangian derivative ($D/Dt$) reads $$\frac{D\vec{\omega}}{Dt} = \frac{\partial \vec{\omega}}{\partial t}
+(\vec{v} \cdot \nabla )\vec{\omega} = \frac{1}{\rho ^2}\nabla \rho
\times \nabla p .$$ The right hand side of the equation quantifies the [*baroclinicity*]{} of a stratified fluid, present when the gradient of pressure is misaligned from the density gradient of the gas. In our simulations, the local density gradients have no preferred direction because of the turbulence which produces a significant inhomogeneity of density, as evident in Figure \[fig:snapshot1\]. On the other hand, the pressure gradient is generally along the direction of gravity and largest across the ionization front. This misalignment dictates the evolution of $\vec{\omega}$ close to the ionization front.
From dimensional analysis the magnitude of the vorticity squared can be estimated as $|\vec{\omega}|^2 \sim G{M_{\rm BH}}/ r^3$, where the relevant radius corresponds to the size of the [Strömgren]{} sphere. The number of ionizing photons from the BH is proportional to the density squared, from the recombination rate, as well as to the recombination volume, $N_{\rm ion} \propto \langle R_s \rangle^3 {n_{\infty}}^2$, where $\langle R_s \rangle$ is the mean size of the [Strömgren]{} radius. On the other hand, $N_{\rm ion}$ is also proportional to the Bondi accretion rate, i.e., $N_{\rm ion} \propto {M_{\rm BH}}^2 {n_{\infty}}$ [@ParkR:11]. Thus, the mean size of the [Strömgren]{} sphere is related to the BH mass and gas density as $\langle R_s \rangle^3 {n_{\infty}}^2 \propto {M_{\rm BH}}^2 {n_{\infty}}$. Using this in the estimate of the mean magnitude squared, $|\vec{\omega}|^2$, which develops around the ionization front, we obtain $$\left|\vec{\omega}\right|^2 \sim \frac{G{M_{\rm BH}}}{\langle R_s \rangle ^3}
\propto \frac{{n_{\infty}}}{{M_{\rm BH}}}.
\label{eq:w2}$$
Figure \[fig:w\_time\] shows the time evolution of the mass-weighted mean vorticity ${|\vec{\omega}|}$ for the simulation M4N3, calculated within the radius $r=5.0$ (solid line), 10.0 (dashed), and 15.0pc (dotted). On average, the mean vorticity shows a rapid increase at the beginning at all radii and remains at a constant level after $t \approx 1.3$Myr. For the $r < 5.0$pc volume, the vorticity shows a significant change during the initial phase ($t \lesssim 0.5$Myr), when the ionization front crosses this scale, but reaches a steady value with only minor variation after $t \gtrsim 1$Myr. The small variation in ${|\vec{\omega}|}$ for the $r < 5.0$pc volume approximately matches the accretion cycles in Figure \[fig:acc\_rate\]. On the other hand, ${|\vec{\omega}|}$ for the volumes with $r < 10.0$pc and $< 15.0$pc shows a steady increase until $t \approx 1.3$Myr and remains at the same level after that. Note that for the run M4N3 the ionization front extends to a radius of $r\approx 10$pc, as shown in Figure \[fig:snapshot1\]; thus the $r\approx 10$pc volume is large enough to capture most of the vorticity. The values of ${|\vec{\omega}|}$ for the $r < 5.0$pc and $r < 10.0$pc volumes become comparable after $\gtrsim 1.3$Myr, which means that there is no noticeable difference in ${|\vec{\omega}|}$ between the inner and outer parts of the ionized region. However, as we increase the volume, i.e., $r<15.0$pc, the mean ${|\vec{\omega}|}$ decreases since the volume outside of the ionized region, where ${|\vec{\omega}|}$ is low, is included.
Figure \[fig:w2\_den\] shows the evolution of the mean vorticity squared for simulations M4N3 (solid red, 5pc), M4N4 (dashed blue, 2.5pc), and M6N1 (dashed green, 0.5kpc) within the radius written in parentheses for each run. The selected radius is approximately a half of the mean [Strömgren]{} radius $\langle R_s \rangle$ for each. To show all three simulations on the same scale, we normalize $\left|\vec{\omega}\right|^2$ by ${n_{\infty}}{M_{\rm BH}}^{-1}$, as in Equation (\[eq:w2\]), and the time on the $x$-axis by the average length of the oscillation cycle $\propto{M_{\rm BH}}^{2/3} {n_{\infty}}^{-1/3}$ shown [in Equation (\[eq:cycle\])]{}. Note that simulations M4N3 and M6N1 which share the same value of ${M_{\rm BH}}{n_{\infty}}=10^7\,{M_\sun}{\rm cm}^{-3}$, display a good match while [M4N4]{} shows a reasonably consistent result considering the large range of $\left|\vec{\omega}\right|^2$.
Turbulent Kinetic Energy
------------------------
In order to quantify how much turbulent energy is present on different spatial scales we calculate the energy spectrum $E(k)$ as a function of the wavenumber $k$. The total turbulent kinetic energy per unit mass is evaluated as $$\frac{1}{2} \left< |\vec{v}|^2 \right> = \int_{0}^{\infty} E(k)dk
\label{eq:tke}$$ where $\vec{v}$ shown here only accounts for the turbulent component of the velocity.
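A minimal sketch of how such a spectrum can be measured from gridded velocity data is given below; it assumes a periodic, fixed-resolution cube in code units and that the bulk (radial) flow has already been subtracted to isolate the turbulent component of Equation (\[eq:tke\]), and the shell-binning choice is ours.

```python
import numpy as np

def energy_spectrum(vx, vy, vz):
    """Shell-averaged specific kinetic energy spectrum E(k) on a periodic cube
    (vx, vy, vz are 3D arrays with equal dimensions)."""
    n = vx.shape[0]
    # kinetic energy density in Fourier space, summed over the three components
    ek = 0.5 * sum(np.abs(np.fft.fftn(v) / v.size)**2 for v in (vx, vy, vz))
    kfreq = np.fft.fftfreq(n) * n                      # integer wavenumbers
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).astype(int).ravel()
    spec = np.bincount(kmag, weights=ek.ravel())       # sum energy in spherical shells
    return np.arange(spec.size), spec                  # k (code units), E(k)
```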
Figure \[fig:powerspec\] shows the specific turbulent kinetic energy power spectrum from runs M4N3R32 (top) and M4N3E8R64mod (bottom), plotted for phases of an accretion cycle at $t\sim 1.3$Myr, after which the mean vorticity reaches a steady value as shown in Figure \[fig:w\_time\]. The wavenumber is defined as $k = 1/\Delta L$, where $\Delta L$ corresponds to the scale ranging from the finest resolution element to the size of the entire computational domain. In the calculations presented here $\Delta L$ (and hence, $k$) is expressed in code units while $E(k)$ is shown in ${\rm cm}^2/{\rm s}^2$. The shaded region defined by $R_{\rm s, max}$ and $R_{\rm s, min}$, which represent the maximum and minimum sizes of the [Strömgren]{} radius, marks the scale of the turbulence source. $\Delta L_{\rm max}$ and $\Delta L_{\rm min}$ indicate the resolutions of the top and finest grids, respectively. Note that the high resolution run M4N3E8R64mod is characterized by a wider range of $k$ which extends to larger values, resulting in better resolved turbulence at small spatial scales.
The turbulence is sourced on the scales that are just inside the ionization front ($k \la 6$) and it cascades to smaller scales (larger $k$). Figure \[fig:powerspec\] illustrates that in quiescence (blue line) the turbulent energy tends to be slightly lower across the spectrum relative to the other phases of the accretion cycle. The turbulent energy spectrum calculated from the simulations is broadly consistent with the Kolmogorov spectrum \[$E(k) \propto k^{-5/3}$; @Kolmogorov:91\], which provides an analytic description of saturated and isotropic turbulence. Departure from the Kolmogorov spectrum is noticeable in two spatial regions in Figure \[fig:powerspec\]. Namely, the turbulence is diminished in the neutral gas, outside of the ionization sphere, which is consistent with our hypothesis of radiatively driven turbulence. Secondly, the simulated spectrum departs from Kolmogorov at wave numbers $k \gtrsim 30$ for M4N3R32 and $k \gtrsim 60$ for M4N3E8R64mod, respectively, which correspond to scales smaller than $\Delta L_{\rm max}$. Note that the inertial regime where $E(k) \propto k^{-5/3}$ extends to larger $k$ for M4N3E8R64mod, whereas the dissipation regime arises at $k \gtrsim \Delta L_{\rm max}^{-1}$ consistently for both simulations with different resolutions. This region coincides with the portion of the simulation domain where we employ several different refinement levels in the form of layered grids. Specifically, while the highest level grid resolves eddies of the size $2\Delta L_{\rm min}$, the base grid will resolve eddies with the minimum size of $\Delta L_{\rm max} = 8\Delta L_{\rm min}$. Because of the non-uniform sampling of turbulence in these regions some fraction of the turbulent kinetic energy is numerically dissipated before it cascades to the smallest scales. This expectation is consistent with the spectrum that steepens towards the smallest scales (highest $k$ numbers), as shown in the figure. The turbulent kinetic energy contained on small scales is however a small fraction of the total turbulent kinetic energy and we do not expect that numerical dissipation of turbulence into the internal energy of the gas will significantly affect gas thermodynamics on these scales.
Figure \[fig:energy\_evol\] summarizes contribution to the total energy budget of the simulated gas flow from the accretion powered radiation, thermal and kinetic energy of the gas in run M4N3E8. We find that the average energy of radiation in one accretion cycle, $\langle
L_{\rm acc}\,\tau_{\rm cycle}\rangle \sim 2.4\times 10^{53}\,$erg, dominates over all other components of energy by more than $\sim$2 orders of magnitude. The energy of radiation is estimated as the bolometric luminosity emitted from the BH multiplied by a characteristic length of one cycle of oscillation and is plotted as a gray line. This implies that radiation can easily drive turbulence and account for the increased internal energy of the gas, as energetic requirements for these are comparatively modest.
The remaining components of energy are the thermal energy of the gas (TE), marked as red line in Figure \[fig:energy\_evol\], and the kinetic energy (KE), marked as blue line. Because the relative contributions to thermal and kinetic energy are different for the strongly irradiated gas in vicinity of the BH and neutral gas outside of the [Strömgren]{} sphere, we show the evolution of TE and KE in the entire computational domain as well as within the sphere with radius $r=5$pc. Recall that in the run M4N3E8 the ionization front resides at a radius of $r\approx 10$pc, and so the volume within $r<5$pc traces plasma enclosed within the [Strömgren]{} sphere at all times.
The kinetic energy is a combination of the kinetic energy of the bulk flow (i.e., the radial inflow and outflow of the gas) combined with random motions of the gas due to turbulence. The total kinetic energy is calculated from simulations as $${\rm KE} = \frac{1}{2} \Sigma m_i (v_{i,x}^2+v_{i,y}^2+v_{i,z}^2)
\label{eq:ke}$$ where $m_i$ is the cell mass and $v_{i,x}$, $v_{i,y}$, and $v_{i,z}$ are the velocity components in the x, y, and z directions, respectively. The average energies (horizontal dotted lines) show that the kinetic energy (thick blue) is equivalent to $\sim1\,\%$ of the thermal energy (thick red) in the entire computational domain. Similarly, the kinetic energy corresponds to $\sim7\,\%$ of the thermal energy for the gas within $r<10$pc and to $\sim9\,\%$ for the gas within $r<5$pc (the latter is shown with thin lines). Therefore, the kinetic energy of the gas corresponds to less than 10% of the thermal energy anywhere in the computational domain. It then follows that the kinetic energy due to turbulence cannot be the main contributor to the pressure support of the gas, which is mostly provided by thermal pressure. Along similar lines, even if all the kinetic energy of the gas were promptly thermalized, it would not significantly alter the internal energy of the gas. This is the basis for our earlier statement that numerical dissipation of turbulence due to finite spatial resolution should not affect the thermodynamics of the gas.
In order to estimate the contribution to the total kinetic energy from the bulk flow and turbulence we initially decompose the kinetic energy into the radial and tangential components, shown as magenta and green lines in Figure \[fig:energy\_evol\], respectively. Specifically, we calculate the radial component as KE$_{\rm R} = \Sigma m_i v_{i,r}^2/2$, where $v_{i,r}$ is the radial velocity, and the tangential component as KE$_{\rm T}$= KE - KE$_{\rm R}$. Figure \[fig:energy\_evol\] shows that shortly after the beginning of the simulation the two components achieve equipartition and KE$_{\rm R}$ $\approx$ KE$_{\rm T}$. Because the bulk flow of the gas is mostly along the radial direction, KE$_{\rm R}$ actually accounts for both the kinetic energy of the bulk flow and turbulence. The KE$_{\rm T}$ component of kinetic energy is on the other hand mostly contributed by turbulence. We estimate the magnitude of the turbulent component of kinetic energy in radial direction as $\sim$ 1/2KE$_{\rm
T}$, given that turbulence is isotropic, so that tangential and radial motions account for two and one degrees of freedom, respectively. It follows that the total kinetic energy is KE = KE$_{\rm bulk}$ + 3KE$_{\rm T}$/2 and because KE$_{\rm R}$ $\approx$ KE$_{\rm T}$, the turbulent kinetic energy contributes $\sim$3/4 of the total kinetic energy. Therefore, turbulent motions on all scales dominate the kinetic energy of the entire accretion flow.
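The decomposition described above can be scripted directly from cell data; the sketch below is a simple illustration (the array layout, names, and the assumption that the BH sits at the origin are ours), and it applies the same isotropy argument, KE = KE$_{\rm bulk}$ + 3KE$_{\rm T}$/2, used in the text.

```python
import numpy as np

def kinetic_energy_split(m, pos, vel):
    """Radial/tangential split of cell kinetic energy (BH assumed at the origin).
    m: (N,) cell masses; pos, vel: (N, 3) cell positions and velocities."""
    r_hat = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    v_r = np.sum(vel * r_hat, axis=1)                   # radial velocity component
    ke_tot = 0.5 * np.sum(m * np.sum(vel**2, axis=1))   # Eq. (eq:ke)
    ke_r = 0.5 * np.sum(m * v_r**2)                     # KE_R: bulk flow + radial turbulence
    ke_t = ke_tot - ke_r                                # KE_T: mostly turbulence
    ke_turb = 1.5 * ke_t                                # isotropy: turbulent KE ~ 3/2 KE_T
    return ke_r, ke_t, ke_turb
```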
Discussion and Conclusions {#sec:discussion}
==========================
The main aim of this paper is to extend numerical studies of accretion mediated by radiative feedback to full 3D local simulations and identify physical processes that have not been captured by the local 1D/2D models. We achieve this by performing 3D simulations of radiation-regulated accretion onto massive BHs using the AMR code [*Enzo*]{} equipped with the adaptive ray tracing module [*Moray*]{} [@Wise:2011]. Our main findings are listed below.
- Our 3D simulations corroborate the role of ionizing radiation in regulation of accretion onto the BHs and in driving oscillations in the accretion rate. We also confirm the mean accretion rate and the mean period between accretion cycles found in earlier studies [@ParkR:11; @ParkR:12]. Since this study adopts different code and numerical schemes relative to the earlier ones, this provides verification that 3D simulations described here accurately reproduce the key features of 1D/2D models.
- The 3D simulations show a significantly higher level of accretion during the quiescent phase, due to the enhanced gas density within the ionization front, and thus oscillations in the accretion rate of only $\sim 2-3$ orders of magnitude in amplitude, significantly lower than in 1D/2D models. We caution, however, that even though the 3D simulations seem to faithfully reproduce the 1D/2D simulations, we cannot fully rule out the possibility that the higher level of accretion during quiescence is caused by differences in the numerical scheme employed in this work.
- In terms of the energy budget of the gas flow, we find that the radiative energy dominates over the thermal and kinetic energy of the gas by more than [$\sim$2]{} orders of magnitude, implying that radiation can easily drive turbulence and account for the increased internal energy of the gas. The thermal energy of the gas dominates over the kinetic energy by a factor of $\sim 10-100$ depending on the distance from the BH, while the kinetic energy itself is mostly contributed by the turbulent motions of the gas. The turbulence therefore does not contribute significantly to the pressure support of the gas.
The local simulations of radiation-feedback-mediated accretion onto BHs presented here are the first step towards investigation of the full 3D phenomena that take place in the vicinity of a growing BH. Future, more sophisticated local 3D simulations will, in addition to turbulence, need to capture the physics of magnetic fields, the angular momentum of the gas, and the anisotropy of the emitted radiation. All these ingredients are likely to play an important role in determining how quickly BHs grow and how they interact with their ambient media.
This study also bridges the gap between the local and cosmological simulations. In large cosmological simulations radiative feedback from BHs is often treated as purely thermal feedback and calculated without directly solving the radiative transfer equations. This is a practical compromise because direct calculations of radiative transfer are still relatively computationally expensive, albeit not impossible. This study paves the way for the next generation of cosmological simulations in which the BH radiative feedback will be evaluated directly. It remains to be explored how the rich phenomena discovered in local simulations will affect the details of the growth and feedback from BHs at the centers of galaxies or BHs recoiling during galaxy mergers [e.g., @Sijacki:11; @SouzaLima:2017; @Park:2017].
This work is supported by the National Science Foundation (NSF) under the Theoretical and Computational Astrophysics Network (TCAN) grants AST-1332858, AST-1333360, and AST-1333514. The authors thank the anonymous referee for thoughtful and constructive comments and suggestions. The authors also thank Jarrett Johnson for carefully reading our manuscript. KP and TB are in part supported by the National Aeronautics and Space Administration through Chandra Award Number TM7-18008X issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060. JHW is also supported by NSF grant AST-1614333, NASA grant NNX17AG23G, and Hubble Theory grants HST-AR-13895 and HST-AR-14326. Support for programs \#13895 and \#14326 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work was performed using the open-source [Enzo]{} and [yt]{} [@Turk:2011] codes, which are the products of collaborative efforts of many independent scientists from institutions around the world. Their commitment to open science has helped make this work possible.
\[sec:appendix\]
Modeling of the Power-law Spectral Energy Distribution {#modeling-of-the-power-law-spectral-energy-distribution .unnumbered}
======================================================
We find that simulations, which rely on approximate description of the spectral energy distribution (SED) with a finite number of energy bins, are very sensitive to the precise configuration of those energy bins. Specifically, slightly different energy bin combinations result in a different temperature inside the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region, which in turn affects the accretion rate.
In this study we use an SED defined with $N_\nu = 4-8$ energy bins. Here, we elucidate how the energy groups are selected across the broad energy spectrum from 13.6eV to 100keV. Note that the mean energy of the entire photon distribution for power-law spectrum with $\alpha=1.5$ is $\langle E \rangle \sim 40.3$eV (blue symbol in Figure \[fig:sed\]). Therefore, in SEDs modeled with $N_\nu \le 8$ energy bins, the lowest energy bin alone covers the energy range $E \le \langle E \rangle$ (see Figure \[fig:sed\]). A majority of ionizing UV photons is assigned to the lowest energy bin, which in simulations are trapped within the ionized low density [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region. High energy photons, which are characterized by longer mean free paths, travel beyond the ionization front partially ionizing the neutral gas there, as shown in the ionization fraction map of Figure \[fig:snapshot1\]. The temperature profile [*within*]{} the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region is therefore sensitive to the number and energy of the UV photons, encoded in the energy of the first SED bin.
On the other hand, the mean size of the [Strömgren]{} sphere, and thus the mean period of oscillation, is determined by the total number of ionizing photons [see @ParkR:11 for details]. Therefore, (1) the choice of the energy of the first SED bin and (2) the total number of ionizing photons must be considered together to produce consistent results from SED models with different spectral energy bin configurations.
$n_\nu$ $N_\nu=1$ $N_\nu=2$ $N_\nu=4$ $N_\nu=8$
--------- --------------- ---------------- ----------------- ------------------
1 (16.88,0.995) (18.47, 0.505) (18.50, 0.255) (18.49, 0.105)
2 ... (26.58, 0.505) (86.90, 0.255) (45.94, 0.075)
3 ... ... (136.97, 0.055) (66.11, 0.055)
4 ... ... (236.47, 0.055) (114.15, 0.075)
5 ... ... ... (125.02, 0.055)
6 ... ... ... (587.44, 0.045)
7 ... ... ... (704.72, 0.055)
8 ... ... ... (6858.18, 0.045)
: Optimized SEDs for $\alpha=1.5$ Power-law BH radiation using @Mirocha:2012
\[table:sed\_mirocha\]
Figure \[fig:sed\] shows the SED bin energies in log space chosen for different numbers of energy bins: $N_\nu=$ 1 (blue symbol), 2 (green), 4 (red), and 8 (cyan). With increasing $N_\nu$, the distribution occupies the energy space more evenly. If the SEDs are chosen in such a way that the energy under the curve is preserved, then the lowest energy point shifts toward lower energy values with increasing $N_\nu$. Such SEDs have a higher fraction of the ionizing UV photons in proportion to the higher energy ones. The number of the readily absorbed, lower energy UV photons is on the other hand a primary factor determining the temperature profile and size of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region, as described above. Thus, considering the energy under the SED curve is a necessary but not sufficient condition if the goal is to obtain consistent results of photo-ionization calculations. The number of ionizing photons assigned to the first SED bin is also of importance. In order to achieve both conditions, we first construct SEDs according to the “energy under the curve" requirement and modify them, so that the “number of photons in the first SED bin” condition is satisfied.
An example is shown in Figure \[fig:sed\], where squares mark the modified SED energy distribution for $N_\nu=8$, which is intended to match the properties of the $N_\nu=4$ case for the first energy bin and the total number of ionizing photons. For the first bin of $N_\nu=8$ we increase the energy from 21.5eV to 28.4eV and correspondingly increase the energy fraction from $L_1=0.4318$ to $L_1^{\prime}=0.4318\times
(28.4\,{\rm eV}/21.5\,{\rm eV})=0.5704$, which is adopted in Table \[table:sed\]. The extension to even larger numbers of SED energy bins, $N_\nu \ga 16$, should be modeled carefully since in this case more than 2 energy bins are allotted to $E< \langle E \rangle$.
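The bookkeeping of this modification is simple enough to spell out; the following sketch (using the bin energies and fraction quoted above) rescales the first-bin energy fraction so that the number of photons in the bin, $N_{\rm ph}\propto L_1/E_1$, is preserved when the bin energy is shifted:

```python
# Shift the first SED bin while preserving the number of photons it carries.
# Since N_ph ~ L1/E1, matching photon counts requires L1' = L1 * (E1_new / E1_old).
E1_old, E1_new = 21.5, 28.4     # eV: original and target first-bin energies
L1_old = 0.4318                 # original energy fraction of the first bin

L1_new = L1_old * (E1_new / E1_old)
print(f"L1' = {L1_new:.4f}")    # 0.5704, the value adopted in Table [table:sed]
```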
Figure \[fig:sed\_test\] illustrates the results of simulations with different SED configurations. The SED configurations with $N_\nu=4$ (red solid) and 8 (blue solid), in which only the energy-under-the-curve criterion was considered, clearly exhibit different temperatures of the ionized region: $T_{\rm HII} \sim 4.5\times 10^4$K and $4\times 10^4$K, respectively. As a consequence, the accretion rate is higher in the $N_\nu=8$ case, since the Bondi accretion rate scales as $\dot{M}_{\rm BH} \propto T_{\rm HII}^{-3/2}$.
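To make the expected difference concrete, a one-line estimate using only the quoted temperatures and the Bondi scaling gives the relative accretion rates (a rough sketch, ignoring any other dependence of $\dot{M}_{\rm BH}$):

```python
# Bondi accretion scales as Mdot ~ T_HII^(-3/2): a cooler HII region accretes faster.
T_4, T_8 = 4.5e4, 4.0e4                      # K, for the N_nu=4 and N_nu=8 runs
print(f"Mdot(N_nu=8)/Mdot(N_nu=4) ~ {(T_8 / T_4)**(-1.5):.2f}")   # ~1.19
```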
In the next step, the SED with $N_\nu=8$ is modified in such a way that its first energy bin has the same energy and number of UV photons as the $N_\nu=4$ model (M4N3E8mod, red dashed line). The two runs exhibit indistinguishable temperature evolution in the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region and nearly the same (albeit not identical) evolution of the accretion rate. This demonstrates that our approach to modeling the SEDs gives consistent results regardless of the exact numerical configuration of the SED. As an additional test of our method, we also change the first bin of $N_\nu=4$ (blue dashed) to mirror the energy, but not the number of photons, in the first bin of the unmodified SED with $N_\nu=8$ (M4N3, blue solid). As a result, the temperature evolution in the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region is indistinguishable in the two models, but the evolution of the accretion rate differs noticeably between $N_\nu=8$ (blue solid) and the modified $N_\nu=4$ (blue dashed).
We also investigate the effects of numerical resolution, coupled with the physics of photo-ionization, in a run with a modified SED. M4N3E8R64mod (red dot-dashed) shows the highest-resolution test, with a $64^3$ top grid and 3 levels of refinement. Note that in this run the first burst of accretion is slightly delayed, by $\sim$0.1Myr, compared to M4N3; however, the following sequence of oscillations shows consistent results in terms of the accretion rate and the mean period of oscillation.
Finally, we also test the [SEDs for the source with a discretized energy spectrum, which are optimized to recover analytic solutions for the size and thermal structure of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region]{} using the method of @Mirocha:2012. The SEDs based on this method are listed in Table \[table:sed\_mirocha\] [and shown as triangles in Figure \[fig:sed\]]{}. Note that with increasing $N_\nu$, the energy of the first bin remains fixed at $E \sim 18.5\,$eV, which is similar to our strategy for selecting the first energy bin. Green solid ($N_\nu=4$) and dashed ($N_\nu=8$) lines show the results for this SED configuration. The temperature of the [H[<span style="font-variant:small-caps;">ii</span>]{}]{} region is $T_{\rm HII} \sim 30,000\,$K, which is lower than in the other simulations and can be understood as a consequence of assigning a lower energy to the first SED bin.
---
abstract: 'Using the methods of loop quantum gravity, we derive a framework for describing an inflationary, homogeneous universe in a purely quantum theory. The classical model is formulated in terms of the Ashtekar-Sen connection variables for a general subclass of Bianchi class A spacetimes. This formulation then provides a means to develop a corresponding quantum theory. An exact solution is derived for the classical Bianchi type I model and the corresponding semi-classical quantum state is found to also be an exact solution to the quantum theory. The development of a normalizable quantum state from this exact solution is presented and the implications for the normalizability of the Kodama state discussed. We briefly consider some consequences of such a quantum cosmological framework and show that the quantum scale factor of the universe has a continuous spectrum in this model. This result may suggest further modifications that are required to build a more accurate theory. We close this study by suggesting a variety of directions in which to take the results presented.'
author:
- |
Justin Malecki[^1]\
*Perimeter Institute for Theoretical Physics*\
*Waterloo, Canada*
title: 'Inflationary Quantum Cosmology: General Framework and Exact Bianchi I Solution'
---
Introduction {#sec:intro}
============
The theory of cosmological inflation [@liddle] has seen continued success since its introduction, most notably in its accurate description of the observed fluctuations of the cosmic microwave background (CMB). Inflation is based on a classical theory of cosmology which is generally thought to be invalid in regimes close to the big bang where quantum gravitational dynamics must be considered. In this paper, we develop an intrinsically quantum description of inflation to try and understand the role of quantum gravity in the early universe. Our model consists of a spatially homogeneous universe with scalar field matter as this provides the necessary exponential expansion of inflation (with an appropriate choice of scalar field potential).
A theory of quantum cosmology should be based on a more general theory of quantum gravity. One such theory that has proved very successful in recent years is loop quantum gravity.[^2] The starting point for the general theory of loop quantum gravity is the Ashtekar-Sen formulation of general relativity [@ashvar; @ashbook], which greatly simplifies the dynamical equations and allows for a well-defined quantization of the classical theory. While this quantization procedure is a very general and powerful construction, we follow a simpler route by imposing homogeneity of the underlying spacetime. This allows us to circumvent some of the regularization procedures that are necessary in the full theory.
The application of loop quantum gravity to cosmological scenarios has already seen many intriguing results in the work of Martin Bojowald and others (see [@bojrev] for a review of the formulation and the main results). They show that the discreteness of the spectrum of the volume operator leads to many novel effects including a well-defined evolution through the classical big bang singularity [@bojsing]. In this paper, we use a similar approach to that of Bojowald to build our inflationary model with the advantage that we are able to find an exact quantum state to describe an inflationary universe.
The framework for this inflationary quantum cosmological model was first introduced in [@lqginf]. In that study, a quantum state for a flat, isotropic spacetime was derived and discussed. It was found to be normalizable and related to a well-known exact quantum state of the general theory of loop quantum gravity, the Kodama state [@kodama]. The purpose of the following paper is to present a generalization of the previous isotropic model to a broader class of homogeneous spacetimes and to derive exact quantum states for a subclass of these models.
The starting point for building our model is the classical Hamiltonian framework of [@lqginf], which is summarized here in section \[sec:scalarham\]. In section \[sec:classa\], we specialize this framework to consider only homogeneous universes and discuss the procedure of quantization. We then focus on a single class of homogeneous universe, Bianchi type I, in section \[sec:bianchi1\], where a family of exact classical and quantum solutions is derived. It is shown in section \[sec:normal\] that these solutions can form a normalizable state, and we discuss the possible implications for the Kodama state. The specialization of these Bianchi I states to flat, isotropic spacetimes is presented in section \[sec:flatiso\] so as to compare with the analogous state of [@lqginf]. We summarize our results in section \[sec:disc\] and suggest some directions for future research. Unless otherwise stated, we work in units in which $c =
\hbar = 1$ so that the Planck length is $\ell_p^2 = G$.
Classical Hamiltonian Scalar Field Model {#sec:scalarham}
========================================
The formalism of loop quantum gravity traditionally begins by describing general relativity as a constrained Hamiltonian theory in terms of Ashtekar-Sen variables [@ashvar; @ashbook] before applying the quantization procedure of Dirac [@dirac]. Since we are interested in modelling inflation by including a scalar field into our model, it is possible to recast this classical framework into a more familiar Hamiltonian language by using the additional matter degrees of freedom. This derivation was first presented in [@lqginf] and so we briefly summarize the procedure here.
The phase space variables used to describe the geometry of a spatial slice of spacetime are a complex, self-dual $SU(2)$ connection $A_a^i$ and its conjugate momentum $E_i^a$. Here and throughout we use indices $a, b, c, \ldots \in \{ 1, 2, 3 \}$ as spatial indices and $i, j, k, \ldots \in \{ 1, 2, 3 \}$ as $su(2)$ indices, the latter being raised and lowered by the Kronecker delta $\delta_{ij}$. The momentum $E_i^a$ is the densitized spatial triad $$E_i^a = \det(e_b^j) (e^{-1})_i^a$$ where $e_a^i$ is the triad or frame field on the spatial slice and $(e^{-1})_i^a$ its inverse. The connection can be expressed in terms of the three dimensional spin connection $\Gamma_a^i$ of the frame field and the extrinsic curvature $K_a^b$ of the spatial slice by $$\label{geoa} A_a^i = \Gamma_a^i + i e_b^i K_a^b.$$ The canonical relationship between these phase space variables is given by the Poisson bracket $$\label{pois1} \{ A_a^i(x), \ E_j^b(y) \} = i \ell_p^2 \delta_a^b
\delta_j^i \delta^3(x, y)$$ where $x$ and $y$ are coordinates on the spatial slice. Since the triad and spin connection are both real quantities, it may be necessary to impose the reality conditions $$\begin{aligned}
\label{reality1} A_a^i + \bar{A_a^i} & = & 2 \Gamma_a^i \\
\label{reality2} E_i^a - \bar{E}_i^a & = & 0\end{aligned}$$ on the phase space variables where $\bar{\phantom{A_a^i}}$ denotes complex conjugation.
As in conventional inflationary cosmology, we consider gravity coupled to a scalar field $\phi$ in a potential $V(\phi)$ with conjugate scalar momentum $\pi$. The gauge, diffeomorphism, and Hamiltonian constraints can be written respectively as $$\begin{aligned}
\label{gaugecon} G_i & = & D_a E^a_i = \partial_a E^a_i +
\epsilon_{ij}^{\phantom{ij}k} A_a^j E^a_k = 0, \\
\label{diffcon} H_a & = & E^b_i F_{ab}^i + \pi \partial_a \phi = 0,\end{aligned}$$ and $$\label{hamcon} \mathcal{H} = \frac{1}{\ell_p^2} \epsilon^{ijk}
E^a_i E^b_j \left( F_{abk} + \frac{\ell_p^2 V(\phi)}{3}
\epsilon_{abc}E^c_k \right) + \frac{1}{2} \pi^2 + \frac{1}{2}
E^{ai} E^b_i (\partial_a \phi) (\partial_b \phi) = 0$$ where $\epsilon_{ijk}$ is the totally antisymmetric, Levi-Civita tensor, $F_{abk}$ is the curvature of the connection $A_a^i$, and the cosmological constant $\Lambda$ has been absorbed into the potential $V(\phi)$.
We choose a decomposition of the spacetime manifold $\mathcal{M}$ into $\mathbb{R} \times \mathcal{S}$ by choosing space $\mathcal{S}$ to be defined as a constant $\phi$ hypersurface. The limitations of such a gauge choice are discussed in [@lqginf] but are not important for the results of this paper. It is sufficient to know that such a gauge choice is valid during the inflationary phase but may break down after the exponential expansion.
To implement this gauge, we choose a basis of spatial vector fields $\{ \partial_a \}_{a = 1}^3$ such that $$\label{phistrict}
\partial_a \phi = 0$$ and demand that this constraint be preserved in time by requiring $$\label{tpres} \{ \partial_a \phi, \ \mathcal{H}(\tilde{N}) \} =
\partial_a(\tilde{N} \pi) = 0$$ where $\mathcal{H}(\tilde{N}) = \int_\mathcal{S} \tilde{N}
\mathcal{H}$ is the integrated Hamiltonian constraint that generates diffeomorphisms in the time direction. Equation (\[tpres\]) implies that the *densitized* lapse $\tilde{N}$ must take the form $$\tilde{N} = \frac{k}{\pi}$$ where $k$ is a constant, chosen to be $k = \ell_p^{-2}$ so that the lapse is dimensionless.
The diffeomorphism and gauge constraints (\[diffcon\]) and (\[gaugecon\]) will be solved automatically when we eventually reduce to the homogeneous models and so we focus only on satisfying the Hamiltonian constraint (\[hamcon\]). In light of the time gauge condition (\[phistrict\]) we can solve the Hamiltonian constraint for $\pi$ to get $$\label{pi} \pi = \pm \sqrt{- \frac{2}{\ell_p^2} \epsilon^{ijk} E^a_i
E^b_j \left( F_{abk} + \frac{\ell_p^2 V(\phi)}{3}
\epsilon_{abc}E^c_k \right)}.$$
Combining (\[hamcon\]), (\[phistrict\]), and (\[pi\]) one can show that the integrated Hamiltonian constraint is $$\label{hammer} \mathcal{H}(\tilde{N}) = \frac{1}{2 \ell_p^2}
\int_\mathcal{S} \pi \mp \frac{1}{\sqrt{2} \ell_p^2}
\int_\mathcal{S} \sqrt{- \frac{1}{\ell_p^2} \epsilon^{ijk} E^a_i
E^b_j \left( F_{abk} + \frac{\ell_p^2 V(\phi)}{3}
\epsilon_{abc}E^c_k \right)}.$$ We use $\phi$ to define a time coordinate[^3] $$\label{time} T := \ell_p^2 \phi$$ with conjugate momentum $$\label{tmom} P := \frac{1}{\ell_p^2} \int_\mathcal{S} \pi$$ $$\label{tpois} \Rightarrow \{T, \ P \} = 1$$ so that setting the integrated constraint $\mathcal{H}(\tilde{N})
= 0$ in (\[hammer\]) implies $$\label{hamrel} H = P$$ where we recognize $$\label{gaugeham} H := \pm \frac{\sqrt{2}}{\ell_p^2} \int_\mathcal{S}
\sqrt{-\frac{1}{\ell_p^2} \epsilon^{ijk} E^a_i E^b_j \left( F_{abk}
+ \frac{\ell_p^2 V(T)}{3} \epsilon_{abc}E^c_k \right)}$$ as the Hamiltonian that generates evolution in the parameter $T$. Herein we will use $T$ instead of $\phi$ and denote the scalar field potential as $V(T)$ to mean the original scalar field potential $V(\phi)$ evaluated at $\phi = T/\ell_p^2$.
To summarize, by choosing an appropriate coordinate gauge, we have been able to transform the constrained Hamiltonian theory of general relativity into a theory whose dynamics are generated by a traditional Hamiltonian functional (\[gaugeham\]). This will be the starting point for reducing to homogeneous spacetimes and, eventually, quantizing to obtain a quantum theory for a homogeneous universe.
Bianchi Class A Models {#sec:classa}
======================
We now proceed to specialize the Hamiltonian formalism of the previous section to consider only the so-called Bianchi class A models. The result will be a simpler theory that can immediately be quantized, providing a quantum framework to describe a homogeneous, inflationary universe.
A spatially homogeneous spacetime[^4] is characterized by a 3-dimensional symmetry group acting transitively on the spatial manifold $\mathcal{S}$. Such a group can be described by its algebra with structure constants $c^I_{\phantom{I}JK}$ which can, in general, be written as $$c^K_{\phantom{K}IJ} = M^{KL} \epsilon_{LIJ} + \delta^K_{[I} A_{J]}$$ where $M^{KL}$ and $A_J$ are any tensors satisfying $$M^{IJ} A_J = 0.$$ The Bianchi class A models are those in which $A_J = 0$. This class consists of 6 models (up to isomorphism) and we further restrict our consideration to those 5 whose structure constants can be written in the form $$\label{diagconst} c^I_{\phantom{I}JK} = n^{(I)}
\epsilon^{I}_{\phantom{I}JK}$$ where the brackets around an index indicate that it is not to be summed over. Herein, we will use the term Bianchi class A to mean this subset of 5 models.
It can be shown [@ryshep] that one can always construct a basis of 1-form fields $\{ \omega^I \}$ with dual vector fields $\{ X_I \}$ which are left-invariant under the action of the symmetry group (*i.e.* a basis that does not change when acted upon by a group element from the left). Such a basis is uniquely determined by demanding that it satisfy the “curl” relations $$\label{domegaabs} d \omega^I = - \frac{1}{2} c^I_{\phantom{I}JK}
\omega^J \wedge \omega^K$$ and that its Lie derivative along timelike vector fields orthogonal to the spatial slices vanish. It will prove advantageous to utilize such an invariant basis in order to simplify the calculations.
Following the procedure described in [@kodbianchi; @homobojo], we expand the $A$ and $E$ fields in terms of these left-invariant bases. The connection and momentum can then be written as $$\label{homovar} A_a^i(x, T) = c_{(I)}(T) \Lambda^i_I \omega^I_a,
\qquad E_i^a(x, T) = p^{(I)}(T) \Lambda^I_i X^a_I$$ where $\Lambda \in SO(3)$ and the functions $\{c_I\}$ and $\{p^I\}$ are constant on a given spatial slice and only depend on the time variable (\[time\]). Please note that (\[homovar\]) is not the most general form of the homogeneous variables but is, in fact, a diagonalized form that is well suited to the study of Bianchi class A models. The $SO(3)$ matrices are, essentially, a rotation of the invariant basis introduced in order to obtain this diagonal form of the connection and momentum. These matrices are a statement of the gauge degrees of freedom in the $A$ and $E$ variables. For the computations that are to follow, we will often use the following properties of $SO(3)$ matrices: $$\label{so3tricks} \Lambda^i_I \Lambda^J_i = \delta_I^J, \qquad
\epsilon_{ijk}\Lambda^i_I \Lambda^j_J \Lambda^k_K =
\epsilon_{IJK}.$$
In order to avoid infinite quantities arising from the (possible) infinitude of space, we consider a compact submanifold $\Sigma \subset
\mathcal{S}$ of the full spatial manifold $\mathcal{S}$. Since the spatial manifold is, by definition, homogeneous, no generality is lost by this reduction. We parametrize this submanifold choice by defining a dimensionless parameter, $R$, such that $$R^3 := \frac{1}{\ell_p^3} \int_\Sigma$$ where the volume element on $\Sigma$ is understood. The $\ell_p^{-3}$ normalization is simply an arbitrary length scale, chosen to be the Planck length. $R$ can then be considered a parameter of the model that measures the size of our sample of space.
The reduction (\[homovar\]) of the full theory to Bianchi class A models reduces general relativity from a Hamiltonian theory with an infinite number of degrees of freedom to one with only three degrees of freedom, $c_1$, $c_2$, and $c_3$ and their conjugate momenta $p^1$, $p^2$, and $p^3$. To determine their canonical relationship, we must first define $c_I$ and $p^I$ in terms of the variables $A_a^i$ and $E_i^a$. A convenient definition is $$\begin{aligned}
c_I & = & \frac{1}{\ell_p^3 R^3} \int_\Sigma A_a^i \Lambda_i^{(I)}
X_I^a \\
p^I & = & \frac{1}{\ell_p^3 R^3} \int_\Sigma E_i^a \Lambda_{(I)}^i
\omega^I_a.\end{aligned}$$ With these definitions we can use the relation (\[pois1\]) to compute the Poisson bracket $$\label{pois} \{ c_I(T), \ \frac{R^3 \ell_p}{i}p^J(T) \} =
\delta_I^J.$$ Hence, we recognize $(R^3 \ell_p / i) p^I(T)$ as being the true canonical momenta to the variables $c_I(T)$. Of course, the Poisson bracket between different $c_I$ variables and between different $p^J$ variables respectively vanishes.
In order to write the diffeomorphism and Hamiltonian constraints in terms of the new phase space variables $(c_I, p^J)$, we must first calculate the curvature of the homogeneous connection $$F = dA + \frac{1}{2} [A, A].$$ By expanding $A = A^i \tau_i$ in terms of the $su(2)$ basis $\tau_i := -\frac{i}{2} \sigma_i$ (where $\sigma_i$ are the Pauli matrices) and using the identities (\[domegaabs\]) and (\[so3tricks\]) we can work out the curvature components in the left-invariant basis to be $$\label{curvcoord} F^i_{JL} = -n^{(I)} c_{(I)}
\epsilon^I_{\phantom{I}JL} \Lambda_I^i + c_{(J)} c_{(L)}
\epsilon^i_{\phantom{i}jk} \Lambda_J^j \Lambda_L^k.$$ Here, the constants $\{ n^I \}_{I = 1}^3$ are the structure constants of the symmetry group defined in (\[diagconst\]).
We can now use these curvature components and the homogeneous momentum field (\[homovar\]) to calculate the diffeomorphism constraint (\[diffcon\]) in the left invariant basis. The result is the expression $$H_I = p^{(J)} \Lambda^J_i \left( -n^{(I)} c_{(I)}
\epsilon^L_{\phantom{L}IJ} \Lambda_I^L + c_{(I)} c_{(J)}
\epsilon^i_{\phantom{i}jk} \Lambda_I^j \Lambda_J^k \right)$$ which, by applying the first relation in (\[so3tricks\]), can be shown to vanish identically for all Bianchi class A models. So, by reducing our field variables to an intrinsically homogeneous and diagonalized form, we have also, at the same time, automatically satisfied the diffeomorphism constraint. Note that this result is not necessarily true for more general forms of the homogeneous variables.
An expression for the Hamiltonian (\[gaugeham\]) in terms of the new phase space variables can be obtained in a similar fashion. That is, by substituting the curvature (\[curvcoord\]) and the components of the homogeneous variables (\[homovar\]) into the Hamiltonian and applying the $SO(3)$ relations (\[so3tricks\]) one obtains the Hamiltonian $$\label{hamiltonian} H(c_I, p^J; T) = \pm 2 R^3 \left[ p^1 p^2 c_1
c_2 \left( n^3 \frac{c_3}{c_1 c_2} - 1 - \frac{\ell_p^2 V(T)}{3}
\frac{p^3}{c_1 c_2} \right) + \begin{array}{c}
\textrm{cyc.} \\
\textrm{perm.} \\
\end{array}
\right]^{\frac{1}{2}}.$$ Recall that $V(T)$ is the potential energy function for the scalar field $\phi = \ell_p^{-2} T$, kept as an arbitrary function throughout the paper.
In order to determine the reality condition (\[reality1\]) for the new phase space variables, one must, first, expand the three dimensional spin connection in terms of the ($SO(3)$ rotated) left invariant basis $$\Gamma_a^i = \Gamma_{(I)} \Lambda_I^i \omega^I_a$$ where $\Gamma_I$ can be computed [@bojbianc1] to be $$\label{spincon} \Gamma_I = \frac{1}{2} \left( n^J \frac{p^K}{p^J}
+ n^K \frac{p^J}{p^K} - n^I \frac{p^J p^K}{(p^I)^2} \right) \qquad
\textrm{(no summation on indices)}$$ where $(I, J, K)$ are even permutations of (1, 2, 3). The resulting reality conditions are then $$\begin{aligned}
\label{realhomo1} c_I + \overline{c_I} & = & 2 \Gamma_I \\
\label{realhomo2} p_I - \overline{p_I} & = & 0.\end{aligned}$$ Due to the presence of the $n^I$ parameters in (\[spincon\]), we see that the reality conditions are dependent on which Bianchi model is chosen.
Our classical Hamiltonian theory for a homogeneous universe with a scalar field is now completely defined. We have phase space variables $c_I$ and $p^I$ with the symplectic structure (\[pois\]), the reality conditions (\[realhomo1\]) and (\[realhomo2\]) and a time dependent Hamiltonian (\[hamiltonian\]) that governs the dynamics. The definitions (\[homovar\]) then relate the phase space variables back to the Ashtekar-Sen variables from which we can then infer all of the information concerning the spacetime geometry. Since there are only a finite number of degrees of freedom and no constraints needing to be satisfied, one can, in principle, quantize this model in a straightforward manner following the traditional algorithm of quantizing classical Hamiltonian systems [@diracprinc].
Zero-Energy Classical Solution
------------------------------
Before looking at the quantum theory, it would be very desirable to obtain a general solution to this classical Hamiltonian theory. As was seen in the calculation of the quantum solution to the flat, isotropic inflationary model in [@lqginf], such a classical solution can be crucial in finding a solution to the corresponding quantum theory. This will again be the case in the following section.
A first attempt at a general, classical solution may be made via Hamilton-Jacobi theory [@goldstein] in which we seek a Hamilton-Jacobi function $S(c_1, c_2, c_3; T)$ such that $$\label{hjrel} p^I = \frac{i}{R^3 \ell_p} \frac{\partial
S}{\partial c_I}, \qquad P = \frac{\partial S}{\partial T}$$ where the constant coefficient in the first relation comes from the non-standard Poisson bracket (\[pois\]). Such a function is determined by equation (\[hamrel\]) which now reads as a partial differential equation $$\begin{aligned}
\nonumber \frac{\partial S}{\partial T} & = & H \left( c_I,
\frac{i}{R^3
\ell_p} \frac{\partial S}{\partial c_I}; T \right) \\
\label{genhjeqn} \Rightarrow \frac{\partial S}{\partial T} & = &
\frac{2 i}{\ell_p} \left[ \frac{\partial S}{\partial c_1}
\frac{\partial S}{\partial c_2} c_1 c_2 \left( n^3 \frac{c_3}{c_1
c_2} - 1 - \frac{i \ell_p V(T)}{3 R^3 c_1 c_2} \frac{\partial
S}{\partial c_3} \right) + \begin{array}{c}
\textrm{cyc.} \\
\textrm{perm.} \\
\end{array}
\right]^{\frac{1}{2}}\end{aligned}$$ where we have chosen the positive root of the Hamiltonian. The determination of such a Hamilton-Jacobi function is equivalent to solving the equations of motion.
While we are not able to find a completely general solution to (\[genhjeqn\]) for arbitrary values of $n^I$, we are able to find a “zero-energy” function $S_0(c_1, c_2, c_3; T)$, related to the phase space variables as in (\[hjrel\]), such that $H =
0$. The determining differential equation is the vanishing of the right hand side of (\[genhjeqn\]) which does indeed vanish if $$\frac{\partial S_0}{\partial c_K} = \frac{3 i R^3}{\ell_p V}
\left( c_I c_J - n^{(K)} c_K \right)$$ where $(I, J, K)$ are cyclic permutations of $(1, 2, 3)$. Such a zero-energy function is given as $$\label{zeroe} S_0(c_1, c_2, c_3; T) = \frac{3 i R^3}{\ell_p V}
\left( c_1 c_2 c_3 - \frac{1}{2} n^I c_I^2 \right)$$ plus an irrelevant constant. It is clear that this function does *not* solve the full Hamilton-Jacobi equation (\[genhjeqn\]) since the time derivative of $S_0$ does not vanish for arbitrary potentials $V$. Of course this *would* be the general solution for the case where $V$ is just the cosmological constant with no time dependence.
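The vanishing of the right-hand side of (\[genhjeqn\]) for $S_0$ is easy to confirm symbolically; the following sketch (using sympy, with $n^I$, $V$, $R$ and $\ell_p$ kept as free symbols) checks that every cyclic term of the bracket is identically zero:

```python
import sympy as sp

c1, c2, c3, n1, n2, n3, R, lp, V = sp.symbols('c1 c2 c3 n1 n2 n3 R l_p V', nonzero=True)
I = sp.I

# Zero-energy Hamilton-Jacobi function, eq. [zeroe]
S0 = 3*I*R**3/(lp*V) * (c1*c2*c3 - sp.Rational(1, 2)*(n1*c1**2 + n2*c2**2 + n3*c3**2))

def term(cI, cJ, cK, nK):
    # one cyclic term of the bracket on the right-hand side of eq. [genhjeqn]
    return sp.diff(S0, cI)*sp.diff(S0, cJ)*cI*cJ * (
        nK*cK/(cI*cJ) - 1 - I*lp*V/(3*R**3*cI*cJ)*sp.diff(S0, cK))

bracket = term(c1, c2, c3, n3) + term(c2, c3, c1, n1) + term(c3, c1, c2, n2)
print(sp.simplify(bracket))   # 0, so S0 gives H = 0 for any Bianchi class A model
```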
As we will see in the next section (and as was seen in [@lqginf]), such zero-energy functions are often a useful basis for an ansatz that can be used to find a general solution to the Hamilton-Jacobi equation. While such a strategy proved useful for the Bianchi type I model considered in the next section, it is not clear how to successfully proceed with such a strategy for general Bianchi class A models.
Bianchi Type I Classical and Quantum Solutions {#sec:bianchi1}
==============================================
We now turn our attention to the simplest, specific class A model: Bianchi I. The symmetry group of Bianchi I is the three dimensional translation group which is Abelian so the structure constants vanish and we have $n^I \equiv 0$. Furthermore, because of this commutativity, we may use the coordinate 1-forms $dx^I$ as our left-invariant 1-forms, $$\omega^I = dx^I,$$ which clearly satisfy the curl relations (\[domegaabs\]).
Since the structure constants vanish for the model under consideration, it is clear that the 3 dimensional spin connection (\[spincon\]) also vanishes and so the reality conditions on the configuration space variables read $$c_I = - \overline{c_I}$$ which can trivially be satisfied by defining new variables $$\label{newc} c^{\textrm{new}}_I := - i c^{\textrm{old}}_I$$ where $c^{\textrm{new}}_I$ can now be taken to be purely real (*i.e.* $c^{\textrm{old}}_I$ are purely imaginary in Bianchi I). Herein, we will use, simply, $c_I$ to denote the new, purely real variables. Of course, given the trivial reality conditions (\[realhomo2\]) on $p^I$, the conjugate momenta can be taken to be purely real. With these new $c_I$ variables, the Poisson bracket relation (\[pois\]) between the phase space variables now reads $$\label{realpois} \{ c_I, \ p^J \} = \frac{1}{R^3 \ell_p}
\delta_I^J.$$
We can now adapt the derivation of the previous section to this specific model. Using the real variables and $n^I \equiv 0$ in the Hamiltonian (\[hamiltonian\]) we get the Bianchi I Hamiltonian $$\label{hamb} H(c_I, p_J; T) = 2 R^3 \left[ p^1 p^2 c_1 c_2 \left(
1 - \frac{\ell_p^2 V(T)}{3} \frac{p^3}{c_1 c_2} \right) +
\begin{array}{c}
\textrm{cyc.} \\
\textrm{perm.} \\
\end{array}
\right]^{\frac{1}{2}}$$ where, again, we have chosen the positive root. Given the slightly different Poisson bracket (\[realpois\]) for the real variables, a Hamilton-Jacobi function $S$ is now defined as $$\label{hjrelreal} p^I = \frac{1}{R^3 \ell_p} \frac{\partial
S}{\partial c_I}, \qquad P = \frac{\partial S}{\partial T}$$ and the corresponding Hamilton-Jacobi equation $P = H$ reads $$\label{realhjeqn} \frac{\partial S}{\partial T} = \frac{2}{\ell_p}
\left[ \frac{\partial S}{\partial c_1} \frac{\partial S}{\partial
c_2} c_1 c_2 \left( 1 - \frac{\ell_p V(T)}{3 R^3 c_1 c_2}
\frac{\partial S}{\partial c_3} \right) +
\begin{array}{c}
\textrm{cyc.} \\
\textrm{perm.} \\
\end{array}
\right]^{\frac{1}{2}}.$$
To obtain a classical solution for this model, we begin with the zero-energy solution (\[zeroe\]) $$S_0 = \frac{3 R^3}{\ell_p V} c_1 c_2 c_3$$ and take the ansatz $$\label{su} S_u := \frac{3 R^3}{\ell_p V} c_1 c_2 c_3 (1 + u(T))$$ where $u(T)$ is a complex function of T alone. Upon substituting this ansatz into the Hamilton-Jacobi equation (\[realhjeqn\]) we find that the $c_I$ variables cancel out on both sides and we are left with an ordinary differential equation in $u$ $$\label{ueq} \dot{u} = \frac{\dot{V}}{V}(1 + u) + \frac{2i}{\ell_p}
(1 + u) \sqrt{3u}$$ where the dot denotes differentiation with respect to $T$. Since this is an ordinary differential equation, there exists a one-dimensional (complex) solution space and, therefore, we have obtained a one-parameter family of solutions to the classical Bianchi I model via the Hamilton-Jacobi function (\[su\]).
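The cancellation of the $c_I$ variables can also be verified symbolically; the sketch below substitutes the ansatz (\[su\]) and the ODE (\[ueq\]) into the squared Hamilton-Jacobi equation (\[realhjeqn\]) and confirms that the two sides agree identically:

```python
import sympy as sp

c1, c2, c3, R, lp, T = sp.symbols('c1 c2 c3 R l_p T', positive=True)
u = sp.Function('u')(T)
V = sp.Function('V')(T)

# Ansatz for the Bianchi I Hamilton-Jacobi function, eq. [su]
S = 3*R**3/(lp*V) * c1*c2*c3 * (1 + u)

def term(cI, cJ, cK):
    # one cyclic term of the bracket in eq. [realhjeqn] (n^I = 0 for Bianchi I)
    return sp.diff(S, cI)*sp.diff(S, cJ)*cI*cJ * (1 - lp*V/(3*R**3*cI*cJ)*sp.diff(S, cK))

bracket = term(c1, c2, c3) + term(c2, c3, c1) + term(c3, c1, c2)

# Substitute the ODE (eq. [ueq]) for du/dT and compare the squares of both sides
ode_rhs = sp.diff(V, T)/V*(1 + u) + 2*sp.I/lp*(1 + u)*sp.sqrt(3*u)
lhs = sp.diff(S, T).subs(sp.Derivative(u, T), ode_rhs)
print(sp.simplify(lhs**2 - (2/lp)**2 * bracket))   # 0: all c_I dependence cancels
```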
Armed with a solution to the classical model we can now proceed to build a quantum Bianchi I framework following the usual quantization procedure. Since $c_I$ and $p^I$ are real valued, we take the corresponding operators $\hat{c}_I$ and $\hat{p}^J$ to be Hermitian $$(\hat{c}_I)^\dag = \hat{c}_I, \quad (\hat{p}^I)^\dag = \hat{p}^I.$$ The canonical commutation relations between these operators follow from the classical Poisson brackets and are given as $$\begin{aligned}
[ \hat{c}_I, \ \hat{p}^J ] & = & i \{ c_I, \ p^J \} = \frac{i}{R^3
\ell_p} \delta_I^J, \\
\left[ \hat{c}_I, \ \hat{c}_J \right] & = & i \{ c_I, \ c^J \} = 0, \\
\left[ \hat{p}^I, \ \hat{p}^J \right] & = & i \{ p_I, \ p^J \} =
0.\end{aligned}$$ We can represent the vectors in the Hilbert space as functions $\Psi : \mathbb{R}^3 \to \mathbb{C}$ of the three configuration space variables $c_1$, $c_2$, and $c_3$. In this representation, the phase space operators act as $$\label{oprep} \hat{c}_I \Psi = c_I \Psi, \qquad \hat{p}^I \Psi = -
\frac{i}{R^3 \ell_p} \frac{\partial}{\partial c_I} \Psi.$$ It is easy to check that these satisfy the above commutation relations.
The next step in this procedure is to construct a Hamiltonian operator corresponding to the classical expression (\[hamb\]). In terms of the classical variables, we may order the terms of the Hamiltonian (\[hamb\]) as $$\label{hamord} H = 2 R^3 c_1 c_2 c_3 \left[ \frac{1}{c_2 (c_3)^2}
p^1 \frac{1}{c_1} p^2 \left( 1 - \frac{\ell_p^2 V}{3 c_1 c_2} p^3
\right) +
\begin{array}{c}
\textrm{cyc.} \\
\textrm{perm.} \\
\end{array}
\right]^{\frac{1}{2}}.$$ Using the operator representation (\[oprep\]) we can construct the corresponding operator as $$\label{hamop} \hat{H} = - \frac{2 i}{\ell_p} c_1 c_2 c_3 \left[
\frac{1}{c_2 (c_3)^2} \frac{\partial}{\partial c_1} \frac{1}{c_1}
\frac{\partial}{\partial c_2} \left( 1 + \frac{i \ell_p V}{3 R^3
c_1 c_2} \frac{\partial}{\partial c_3} \right) +
\begin{array}{c}
\textrm{cyc.} \\
\textrm{perm.} \\
\end{array}
\right]^{\frac{1}{2}}.$$ Our goal now is to find a solution $\Psi(c_1, c_2, c_3; T)$ to the resulting Schrödinger equation $$\label{schro} i \frac{\partial \Psi}{\partial T} = \hat{H} \Psi.$$
We introduce the semi-classical state $$\label{psiu} \Psi_u := e^{i S_u} = e^{\frac{3iR^3}{\ell_p V}c_1
c_2 c_3 (1 + u)}$$ where $S_u$ is the classical Hamilton-Jacobi solution (\[su\]). If we denote the operator under the square root of (\[hamop\]) as $\hat{\Omega}$ so that $$\label{hamshort} \hat{H} = -\frac{2i}{\ell_p} c_1 c_2 c_3
\sqrt{\hat{\Omega}}$$ then it can be shown by direct computation that $\Psi_u$ is an eigenvector of $\hat{\Omega}$ with time dependent eigenvalue: $$\begin{aligned}
\hat{\Omega} \Psi_u & = & 3 u \left( \frac{3 R^3}{\ell_p V} (1 +
u)
\right)^2 \Psi_u \\
\Rightarrow \sqrt{\hat{\Omega}} \Psi_u & = & \sqrt{3 u \left(
\frac{3 R^3}{\ell_p V} (1 + u) \right)^2} \Psi_u.\end{aligned}$$ Hence, the corresponding action of the Hamiltonian operator (\[hamshort\]) can immediately be determined as $$\label{hamact} \hat{H} \Psi_u = - \frac{6 R^3 i}{\ell_p^2 V} c_1
c_2 c_3 (1+u) \sqrt{3 u} \Psi_u.$$
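The direct computation behind this eigenvalue relation can be reproduced symbolically; the following sketch (with $u$ and $V$ treated as fixed parameters, since only derivatives with respect to the $c_I$ are involved) applies the operator under the square root of (\[hamop\]) to $\Psi_u$:

```python
import sympy as sp

c1, c2, c3, R, lp, V, u = sp.symbols('c1 c2 c3 R l_p V u', nonzero=True)
kappa = 3*R**3*(1 + u)/(lp*V)
Psi = sp.exp(sp.I*kappa*c1*c2*c3)      # the state Psi_u of eq. [psiu] at fixed T

def omega_term(cI, cJ, cK, f):
    # one cyclic term of the operator under the square root in eq. [hamop]
    inner = f + sp.I*lp*V/(3*R**3*cI*cJ)*sp.diff(f, cK)
    return sp.diff(sp.diff(inner, cJ)/cI, cI)/(cJ*cK**2)

Omega_Psi = (omega_term(c1, c2, c3, Psi) + omega_term(c2, c3, c1, Psi)
             + omega_term(c3, c1, c2, Psi))
print(sp.simplify(Omega_Psi - 3*u*kappa**2*Psi))   # 0: Psi_u is an eigenvector of Omega
```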
The surprising fact is that the semi-classical state (\[psiu\]) is, in fact, a full solution to the Schrödinger equation (\[schro\]). To see this, we compute the time derivative of $\Psi_u$ to be $$\frac{\partial \Psi_u}{\partial T} = i \frac{3 R^3}{\ell_p V} c_1
c_2 c_3 \left( - \frac{\dot{V}}{V} (1 + u) + \dot{u} \right)
\Psi_u$$ and, upon substitution of the differential equation (\[ueq\]), this becomes $$\frac{\partial \Psi_u}{\partial T} = - \frac{6 R^3}{\ell_p^2 V}
c_1 c_2 c_3 (1 + u) \sqrt{3 u} \Psi_u.$$ Comparison of this with the action of the Hamiltonian operator (\[hamact\]) clearly shows that $\Psi_u$ solves Schrödinger’s equation (\[schro\]). Hence, equation (\[psiu\]) gives a one parameter family of quantum states for an inflationary universe.
Normalizability and the Relationship with the Kodama State {#sec:normal}
----------------------------------------------------------
A famous solution to the quantum constraint equations of quantum gravity with a cosmological constant is the so-called Kodama state [@kodama]. For a general (*i.e.* non-homogeneous) connection, it can be written as $$\label{kodama} \Psi_{K}[A] := \mathcal{N} e^{-i S_{CS}[A]} =
\mathcal{N} e^{\frac{3}{2 \Lambda \ell_p^2} \int_\mathcal{S}
Y_{CS}[A]}$$ where $Y_{CS}[A]$ is the Chern-Simons invariant $$\label{chernsimons} Y_{CS}[A] := \frac{1}{2} \textrm{Tr} \left(A
\wedge dA + \frac{2}{3} A \wedge A \wedge A \right)$$ and $\mathcal{N}$ is a normalization factor depending only on the topology of the manifold [@soo]. Similar to the homogeneous, inflationary solution found above, the Kodama state is also a semi-classical state corresponding to the Hamilton-Jacobi functional $$\label{hjchern} S_{CS}[A] = \frac{3i}{2 \Lambda G}
\int_\mathcal{S} Y_{CS}[A]$$ which solves the classical constraint equations of gravity.
One of the motivations for studying our inflationary model was to try to find states similar to the Kodama state (\[kodama\]) in a framework in which the state’s properties could more easily be calculated. In this section, we investigate the similarities between the Kodama state (\[kodama\]) and our inflationary state (\[psiu\]) and discuss the relevant properties of the latter.
In order to see what the Kodama state looks like in a Bianchi spacetime, we need to compute the Chern-Simons invariant (\[chernsimons\]) using the homogeneous connection (\[homovar\]). We do this by explicitly expanding the connection $A$ as $A^i \tau_i$ (recall that $\tau_i = -\frac{i}{2} \sigma_i$) to obtain $$Y_{CS} = \frac{1}{4}(n^i c_i^2 - 2 c_1 c_2 c_3) \omega^1 \wedge \omega^2 \wedge
\omega^3$$ where $\omega^I$ are the left-invariant basis 1-forms. Substituting this into the Hamilton-Jacobi functional (\[hjchern\]) we get $$\label{hjchernhomo} S_{CS} = \frac{3 i R^3 \ell_p}{4 \Lambda}
\left( \frac{1}{2} n^i c_i^2 - c_1 c_2 c_3 \right)$$ which is the same as the zero-energy Hamilton-Jacobi function $S_0$ (\[zeroe\]) with the scalar potential $V(T)$ chosen to be constant. Of course, this similarity is not so surprising. The $S_{CS}$ function was chosen so that the Hamiltonian constraint vanished while $S_0$ is defined as that function which causes the Hamiltonian (\[hamiltonian\]), which is the integral of the square root of the Hamiltonian constraint, to vanish.
Using this result we can specialize the Kodama state to Bianchi I to obtain $$\label{kodhomo} \Psi_K = \mathcal{N} e^{-i S_{CS}} = \mathcal{N}
e^{i \frac{3 R^3 \ell_p}{4 \Lambda} c_1 c_2 c_3}.$$ Comparing this with the inflationary state $\Psi_u$ from equation (\[psiu\]) we see they are the same state up to the time varying function $u(T)$ (with the appropriate choice of constant potential $V(T)$). We also emphasize, once again, that both the Kodama state $\Psi_K$ and our new state $\Psi_u$ are semi-classical states as well as full solutions to their respective theories (the former solves the constraint equations while the latter solves Schrödinger’s equation). These similarities suggest that $\Psi_u$ is a *modified* Kodama state; it is the analogue of the Kodama state when one couples a scalar field to the quantum gravitational field.
Due to the fact that the configuration space variables $c_I$ are real, we may take the inner product between two states $| \Psi
\rangle$ and $| \Phi \rangle$ to be $$\langle \Psi | \Phi \rangle = \int_{\mathbb{R}^3} dc_1 dc_2 dc_3 \
\overline{\Psi}(c_1, c_2, c_3) \Phi(c_1, c_2, c_3)$$ where, as usual, $\Psi(c_1, c_2, c_3) := \langle c_1, c_2, c_3 |
\Psi \rangle$ and $| c_1, c_2, c_3 \rangle$ are the simultaneous eigenvectors of the $\hat{c}_I$ operators. Using this inner product, it is clear that the Kodama state (\[kodhomo\]) and the modified Kodama state (\[psiu\]) are both continuously normalizable (also called delta-function normalizable) because they are simply phases.
We now recall that there is a one-parameter family of solutions $u(T)$ to the differential equation (\[ueq\]), each of which corresponds to a different state $\Psi_u$. If we label each solution $u_w(T)$ by its value $w := u(T_0)$ at some initial time $T = T_0$, then we may construct (truly) normalizable wavefunctions by making “wave packets” of these solutions $$\Psi_f(c_1, c_2, c_3; T) := \int dw \ f(w) \Psi_{u_w} = \int dw \
f(w) e^{i\frac{3R^3}{\ell_p V}c_1 c_2 c_3 (1 + u_w)}$$ where $f(w)$ is a normalizable function with support on a suitable interval similar to that discussed in [@lqginf][^5]. By normalizable, we mean that $$\int dw \ |f(w)|^2 < \infty.$$ This type of normalization is not possible for the Kodama state (\[kodhomo\]) as there is no such one-parameter family of solutions to take wave packets of.
Here we see that, by introducing a scalar field into our model, we are able to describe the gravitational field (encoded by the configuration space variables $c_I$) with respect to the scalar field $T \approx \phi$. In this framework, the modified Kodama state (\[psiu\]), which is only continuously normalizable in the standard inner product, is able to form a normalizable state by making wave packets.
The Spatially Flat, Isotropic Model: A Special Case of Bianchi I {#sec:flatiso}
================================================================
Before discussing the implications of the above results, we wish to write the solution of the previous section specifically for a spatially flat and isotropic spacetime. In this case, the real, homogeneous configuration space variables are all equal $$c_1 = c_2 = c_3 =: A$$ as are their conjugate momenta $$p^1 = p^2 = p^3 =: E.$$ The Hamiltonian (\[hamb\]) then reads $$H = 2 R^3 A^3 \left[ 3 \frac{1}{A^2} E \frac{1}{A^2} E \left(1 -
\frac{\ell_p^2 V}{3 A^2} E \right) \right]^{\frac{1}{2}}$$ where we have written the terms in the order in which they will be written in the corresponding Hamiltonian operator.
We will now write the classical and quantum solutions without much commentary as the derivation is completely analogous to that of the previous section. The classical Hamilton-Jacobi function is $$\label{isohj} S_v(A, T) = \frac{R^3}{\ell_p V(T)} A^3 (1 + v(T))$$ where $v$ is a function of $T$ alone and satisfies the ordinary differential equation $$\dot{v} = \frac{ \dot{V}}{V}(1 + v) + \frac{6i}{\ell_p} (1 + v)
\sqrt{3 v}.$$ The representation of the $\hat{A}$ and $\hat{E}$ operators can be taken as $$\label{isorep} \hat{A} \Psi = A \Psi, \qquad \hat{E} \Psi = -
\frac{i}{R^3 \ell_p} \frac{\partial}{\partial A} \Psi$$ and the corresponding solution to Schrödinger’s equation (\[schro\]) is $$\Psi_v := e^{i S_v} = e^{\frac{i R^3}{\ell_p V} A^3 (1+v)}$$ which, as in (\[psiu\]), is also the semi-classical state.
This wavefunction $\Psi_v$ is different from the wavefunction derived in [@lqginf] using the same framework. The difference is due to the use of different operator orderings in the Hamiltonian. The benefit of the solution in this paper is that it stems from the more general Bianchi I solution (\[psiu\]) whereas the solution described in [@lqginf] cannot be generalized. Furthermore, $\Psi_v$ is a semi-classical state that is also a full solution to Schrödinger’s equation and, in this respect, is more closely related to the Kodama state, as was discussed in the previous section. The wavefunction of [@lqginf] does not share this property but is in fact written as a *modified* semi-classical state based on the classical solution (\[isohj\]).
The Spectrum of the Scale Factor {#sec:interpret}
--------------------------------
As a simple example of how one may go about gaining information from such a quantum cosmological framework, we can ask what the spectrum of eigenvalues is for the operator corresponding to the classical scale factor in a flat, isotropic universe. Recalling that $E_i^a$ is just the densitized triad of the spatial manifold, one can easily show that the metric takes the form $$ds^2 = - N^2 dT^2 + E(T) ((dx^1)^2 + (dx^2)^2 + (dx^3)^2)$$ where $N$ is the *undensitized* lapse function described in detail in [@lqginf]. Hence, $E(T)$ is the square of the scale factor.
Since the operator $\hat{E}$ is simply a (scaled) derivative operator (\[isorep\]), its eigenvalues are analogous to the familiar momentum eigenvalues from ordinary quantum mechanics. The eigenvectors of $\hat{E}$ are given by $$\psi_E(A) = e^{i E R^3 \ell_p A}$$ and are labelled by the real eigenvalues $E$. As in ordinary quantum mechanics, if the domain of $A$ is $\mathbb{R}$ (as was assumed in [@lqginf] and in this paper), then the spectrum of $\hat{E}$ is continuous. This runs contrary to the prediction of loop quantum gravity [@carlo] and, more specifically, to the cosmological predictions of Martin Bojowald [@bojvolop], that the spectrum of volume eigenvalues be quantized. If the scale factor had discrete eigenvalues, it would suggest that the universe cannot evolve continuously in a classical manner but must proceed by some quantum evolution. The fact that we do not find this in our model may be a hint that our approach to quantum cosmology is too naive.
Another possibility that this result may suggest is that the domain of $A$ should not be taken as $\mathbb{R}$ but as a finite interval. If quantum states were required to have periodic boundary conditions on such a finite interval then the spectrum would be quantized just as angular momentum is quantized in ordinary quantum mechanics. Further study is required in order to resolve this issue.
Discussion and Future Projects {#sec:disc}
==============================
We have shown that the incorporation of a scalar field into the Ashtekar-Sen formalism of general relativity allows for a choice of time gauge which leads to a standard Hamiltonian theory. By restricting this formalism to a class of homogeneous spacetimes (a subclass of Bianchi class A), we obtain a Hamiltonian theory with a finite number of degrees of freedom that describes a homogeneous universe filled with a scalar field. The finite number of degrees of freedom allows us to straightforwardly quantize to obtain a corresponding quantum framework to describe inflationary cosmology.
By looking strictly at the Bianchi I model we are able to find a one-parameter family of classical solutions. That is, for every solution $u(T)$ of the differential equation (\[ueq\]) we obtain a homogeneous spacetime solution given by the Hamilton-Jacobi function $S_u$ (\[su\]). The solutions $u(T)$ also label a class of quantum states (\[psiu\]) that are full solutions to the corresponding quantum theory. These quantum states are the semi-classical states $e^{iS_u}$ corresponding to the classical solutions given by $S_u$. We considered flat and isotropic spacetimes as a special case of Bianchi I in order to provide an alternative solution to that given in [@lqginf].
We close our discussion with a brief review of what is left to be done and some possible directions this work may go in the future.
**Reality conditions and a physical inner product.** In the Bianchi class A framework of section \[sec:classa\], the classical configuration space variables $c_I$ were, in general, complex numbers. A quantization of such a complex Hamiltonian theory requires a specially constructed inner product so that expectation values of real quantities turn out to be real. This problem was avoided in the Bianchi I case because the 3-dimensional spin connection vanished and we were able to redefine the theory in terms of purely real variables. In such a framework, the standard inner product suffices. For other Bianchi models, the spin connection does not vanish identically, the $c_I$ cannot be taken to be real, and a new inner product will need to be defined. In order to have a well-defined quantum framework for the class A models, such an inner product will need to be found that is applicable to all models.
**Hermitian ordering of the Hamiltonian.** Throughout this paper, and in [@lqginf], we have used an operator ordering that leads to a non-Hermitian Hamiltonian operator. As is well known, a Hamiltonian must be Hermitian in order for probability to be conserved. Hence, it may be required to solve the quantum theory using a Hermitian ordering of the Hamiltonian in order to obtain a probabilistic interpretation for the wavefunction solution. If this is, indeed, required, it is hoped that the techniques used in this paper will lead to a similar solution to the Schrödinger equation for the new Hamiltonian. Such a solution has been searched for without any success to date.
However, given the difficulty in interpreting *any* wavefunction of the universe it is not clear that the ordinary interpretation of quantum mechanics can be applied to a theory of quantum cosmology. For this reason, we have gone ahead with the non-Hermitian Hamiltonian operator as it leads to an exact solution analogous to the Kodama state. Of course, such a solution will also be a solution to the Schrödinger equation using a Hermitian Hamiltonian *at the semi-classical level*.
**Numerical analysis.** All of the cosmological solutions based on the framework of this paper have depended on the solution to an ordinary differential equation (*e.g.* equation (\[ueq\])) which cannot be written in closed form. Some initial numerical analysis has been done on such equations [@lqginf] but, to fully understand the dynamics of the corresponding quantum solutions $\Psi_u$ (\[psiu\]), it will be necessary to better understand the dynamics of $u(T)$. It seems that the best way to do this is through computational integration of the differential equation.
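As a concrete starting point for such a study, the sketch below integrates equation (\[ueq\]) numerically by evolving the real and imaginary parts of $u$ separately (units with $\ell_p=1$; the potential $V(T)$, the initial value $w$ and the integration range are purely illustrative choices, and forward evolution corresponds to decreasing $T$ as noted in the footnote on the time gauge):

```python
import numpy as np
from scipy.integrate import solve_ivp

lp = 1.0                                   # work in units with l_p = 1
V = lambda T: np.exp(0.1*T)                # illustrative potential V(T)
dVdT = lambda T: 0.1*np.exp(0.1*T)

def rhs(T, y):
    # y = (Re u, Im u); eq. [ueq]: du/dT = (V'/V)(1+u) + (2i/l_p)(1+u)*sqrt(3u)
    u = y[0] + 1j*y[1]
    du = dVdT(T)/V(T)*(1 + u) + (2j/lp)*(1 + u)*np.sqrt(3*u + 0j)
    return [du.real, du.imag]

w = 0.1 + 0.0j                             # illustrative initial value u(T0) = w
T0, T1 = 0.0, -0.5                         # forward evolution corresponds to decreasing T
sol = solve_ivp(rhs, (T0, T1), [w.real, w.imag], rtol=1e-8, atol=1e-10)
u_end = complex(sol.y[0, -1] + 1j*sol.y[1, -1])
print(f"u(T={T1}) = {u_end:.4f}")
```

Any systematic exploration along these lines should, of course, restrict the initial values $w$ to the admissible range for which the metric remains real and Lorentzian, as recalled in section \[sec:normal\].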
Another motivation for the research discussed in this paper was to examine the dynamics of a quantum Bianchi IX universe in which the symmetry group of space is $SU(2)$. The study of classical spacetimes suggests that the dynamics of the spatial geometry of a general manifold close to a singularity (such as the big bang singularity) can accurately be modelled by a Bianchi IX spacetime [@bkl] and that the evolution of such a Bianchi IX model is chaotic [@ryshep; @bianchaos]. A recent result of the loop quantum cosmology program [@bojrev] is that loop quantum gravity effects become important close to such classical singularities and that such effects curb the chaotic evolution of the quantum Bianchi IX model, leading to a more regular evolution of the spatial geometry in such regimes [@bojbianc1; @bojbianc2].
It will be interesting to study the Bianchi IX model in our framework to see whether or not the evolution is chaotic close to classical singularities. Since an exact wavefunction solution was not found for the Bianchi IX model, it may be necessary to evolve an initial state numerically using the Bianchi IX Hamiltonian operator (the operator version of equation (\[hamiltonian\]) with $n^I \equiv 1$ for all $I$) in order to proceed with this analysis.
**Spatial perturbations and the cosmic microwave background.** The greatest success of standard, inflationary cosmology is the accurate prediction of the spectrum of perturbations observed in the cosmic microwave background (CMB) [@liddle]. Due to the exponential expansion of space during the inflationary epoch and the dynamics of quantum scalar fields in such spacetimes, scales well below the Planck length are expanded to macroscopic scales in most inflationary models [@transplanck]. Since many believe that low energy physics no longer applies at such small scales, the precise measurement of the CMB may provide the first observational evidence for quantum gravity. Indeed, a recent semi-classical analysis [@lqgcmb] suggests loop quantum gravitational effects may leave an observational signature in the CMB.
The general framework described in this paper may prove to be an advantageous starting point for a more accurate calculation of the CMB spectrum that incorporates the quantum nature of the gravitational field. While very little work has been done in this regard, one possible direction is to follow an analogous linearization procedure to that described in [@linkod] about one of our Bianchi models. In that paper, the authors are able to define an inner product which is necessary for any computation. While they find that the linearized Kodama state is non-normalizable (in the Lorentzian theory), it is hoped that the modifications introduced by the scalar field and the choice of time gauge described in this paper will lead to a normalizable state. This was indeed the case for the purely homogeneous states, as described in section \[sec:normal\]. The successful calculation of the CMB spectrum from a theory of quantum gravity would then give us an observable prediction which could either support or falsify our approach to quantum cosmology.
Acknowledgements {#acknowledgements .unnumbered}
================
The author would like to thank Lee Smolin and Stephon Alexander for their continued collaboration in this research program and for useful discussions during the course of this work.
[99]{}
A.R. Liddle and D.H. Lyth *Cosmological Inflation and Large-Scale Structure* Cambridge University Press: Cambridge (2000).
C. Rovelli *Quantum Gravity* To be published by Cambridge University Press in October, 2004. Available online at http://www.cpt.univ-mrs.fr/\~rovelli/book.pdf.
T. Thiemann *Introduction to Modern Canonical Quantum General Relativity* \[arXiv:gr-qc/0110034\].
C. Rovelli “Loop Quantum Gravity”, Living Rev. Relativity **1**, 1 (1998). \[Online article\]: http://www.livingreviews.org/lrr-1998-1.
L. Smolin *Quantum Gravity with a Positive Cosmological Constant* \[arXiv:hep-th/0209079\].
A. Ashtekar *New Variables for Classical and Quantum Gravity* Phys. Rev. Lett. **57** 2244 (1986); *New Hamiltonian Formulation of General Relativity* Phys. Rev. **D36** 1587 (1987).
A. Ashtekar *Lectures on Non-Perturbative Canonical Gravity* World Scientific: Singapore (1991).
M. Bojowald and H.A. Morales-Técotl *Cosmological Applications of Loop Quantum Gravity* Lect. Notes Phys.**646**, 421 (2004) \[arXiv:gr-qc/0306008\].
M. Bojowald *Absence of a Singularity in Loop Quantum Cosmology* Phys. Rev. Lett. **86** 5227 (2001) \[arXiv:gr-qc/0102069\].
S. Alexander, J. Malecki and L. Smolin *Loop Quantum Gravity and Inflation*, Phys. Rev.**D70** 044025 (2004) \[arXiv:hep-th/0309045\].
H. Kodama, Prog. of Theo. Phys. **80**, 1024 (1988); Phys. Rev. **D42**, 2548 (1990).
P.A.M. Dirac *Lectures on Quantum Mechanics* Belfer Graduate School of Science Monographs **2**, Yeshiva University Press: New York (1964).
M.P. Ryan and L.C. Shepley *Homogeneous Relativistic Cosmologies* Princeton University Press: Princeton (1975).
H. Kodama *Specialization of Ashtekar’s Formalism to Bianchi Cosmology*, Progr. of Theo. Phys. **80**, 6, 1024 (1988).
M. Bojowald *Homogeneous Loop Quantum Cosmology* Class. Quant. Grav. **20** 2595 (2003) \[arXiv:gr-qc/0303073\].
M. Bojowald, G. Date and K. Vandersloot *Homogeneous Loop Quantum Cosmology: The Role of the Spin Connection* Class. Quant. Grav. **21** 1253 (2004) \[arXiv:gr-qc/0311004\].
P.A.M. Dirac *The Principles of Quantum Mechanics, 4th Ed.* Oxford University Press: Oxford (1958).
H. Goldstein, C. Poole and J. Safko *Classical Mechanics* Addison Wesley: San Francisco (2002).
C. Soo *Wave Function of the Universe and Chern-Simons Perturbation Theory* Class. Quant. Grav. **19** 1051 (2002) \[arXiv:gr-qc/0109046\].
M. Bojowald *Loop Quantum Cosmology: II. Volume Operator* Class. Quant. Grav. **17** 1509 (2000) \[arXiv:gr-qc/9910104\].
V.A. Belinskii, I.M. Khalatnikov and E.M. Lifshitz *A General Solution of the Einstein Equations with a Time Singularity* Adv. in Phys. **31**, 6, 639 (1982).
T. Damour, M. Henneaux and H. Nicolai *Cosmological Billiards*, Class. Quant. Grav. **20**, R145 (2003) \[arXiv:hep-th/0212256\].
M. Bojowald and G. Date *A Non-Chaotic Quantum Bianchi IX Universe and the Quantum Structure of Classical Singularities* Phys. Rev. Lett. **92** 071302 (2004) \[arXiv:gr-qc/0311003\].
J. Martin and R. Brandenberger *Trans-Planckian Problem of Inflationary Cosmology* Phys. Rev. D **63** 123501 (2001) \[arXiv:hep-th/0005432\]; *Dependence of the Spectra of Fluctuations in Inflationary Cosmology on Trans-Planckian Physics* Phys. Rev. D **68** 063513 (2003) \[arXiv:hep-th/0305161\].
S. Tsujikawa, P. Singh and R. Maartens *Loop Quantum Gravity Effects on Inflation and the CMB* \[arXiv:astro-ph/0311015\].
L. Freidel and L. Smolin *The Linearization of the Kodama State* \[arXiv:hep-th/0310224\].
[^1]: email: [email protected]
[^2]: Thorough, technical introductions to loop quantum gravity are available [@carlo; @thiemann] as well as shorter, less technical reviews [@carlorev; @leerev], the latter being closely related to the work discussed in this paper.
[^3]: Note that slow-roll inflation [@liddle] is achieved by a *decreasing* $\phi$ (*i.e.* $\phi$ is “rolling down the hill” of the potential). This means that the *forward* progression of time corresponds to a *decreasing* $T$.
[^4]: A complete description of homogeneous spacetimes is given in [@ryshep], to which the reader is referred for a more detailed derivation of the facts stated here.
[^5]: In that paper [@lqginf], it was shown that the value of the function $u(T)$ is constrained so that the metric remains real and maintains a Lorentzian signature. Hence, $w$ is similarly constrained to such values.
---
abstract: 'Gradient schemes form a framework which enables the unified convergence analysis of many different methods – such as finite element (conforming, non-conforming and mixed) and finite volume methods – for $2^{\rm nd}$ order diffusion equations. We show in this work that the gradient schemes framework can be extended to variational inequalities involving mixed Dirichlet, Neumann and Signorini boundary conditions. This extension allows us to provide error estimates for numerical approximations of such models, recovering known convergence rates for some methods, and establishing new convergence rates for schemes not previously studied for variational inequalities. The general framework we develop also enables us to design a new numerical method for the obstacle and Signorini problems, based on hybrid mimetic mixed schemes. We provide numerical results that demonstrate the accuracy of these schemes, and confirm our theoretical rates of convergence.'
address:
- |
School of Mathematical Sciences, Monash University, Victoria 3800, Australia\
and Umm-Alqura University
- 'School of Mathematical Sciences, Monash University, Victoria 3800, Australia.'
author:
- Yahya Alnashri
- Jérôme Droniou
bibliography:
- 'ref2.bib'
title: 'Gradient schemes for the Signorini and the obstacle problems, and application to hybrid mimetic mixed methods'
---
Introduction
============
Numerous problems arising in fluid dynamics, elasticity, biomathematics, mathematical economics and control theory are modelled by second order partial differential equations with one-sided constraints on the solution or its normal derivative on the boundary. Such models also appear in free boundary problems, in which partial differential equations (PDEs) are written on a domain whose boundary is not given but has to be located as a part of the solution. From the mathematical point of view, these models can be recast into variational inequalities [@G1; @G12; @G13]. Linear variational inequalities are used in the study of triple points in electrochemical reactions [@KDJPHM], of contact in linear elasticity [@hild2007], and of lubrication phenomena [@capriz1977]. Other models involve non-linear variational inequalities; this is for example the case of unconfined seepage models, which involve non-linear quasi-variational inequalities obtained through the Baiocchi transform [@baiocchi], or quasi-linear classical variational inequalities through the extension of the Darcy velocity inside the dry domain [@A1; @Desai.Li:83; @A25; @A26]. It is worth mentioning that several of these linear and non-linear models involve possibly heterogeneous and anisotropic tensors. For linear equations, this happens in models of contact elasticity problems involving composite materials (for which the stiffness tensor depends on the position), and in lubrication problems (the tensor is a function of the first fundamental form of the film). For non-linear models, heterogeneity and anisotropy are encountered in seepage problems in porous media.
In this work, we consider two types of linear elliptic variational inequalities: the Signorini problem and the obstacle problem. The Signorini problem is formulated as $$\begin{aligned}
-\mathrm{div}(\Lambda\nabla\bar{u})= f &\mbox{\quad in $\Omega$,} \label{seepage1}\\
\bar{u} =0 &\mbox{\quad on $\Gamma_{1}$,} \label{seepage2}\\
\Lambda\nabla\bar{u}\cdot \mathbf{n}= 0 &\mbox{\quad on $\Gamma_{2}$,} \label{seepage3}\\
\left.
\begin{array}{r}
\dsp \bar{u} \leq a\\
\dsp \Lambda\nabla\bar{u}\cdot \mathbf{n} \leq 0\\
\dsp \Lambda\nabla\bar{u}\cdot \mathbf{n} (a-\bar{u}) = 0
\end{array} \right\}
& \mbox{\quad on}\; \Gamma_{3}.
\label{sigcondition}\end{aligned}$$ The obstacle problem is $$\begin{aligned}
(\mathrm{div}(\Lambda\nabla\bar{u})+f)(g-\bar{u}) &= 0 \mbox{\quad in $\Omega$,} \label{obs1}\\
-\mathrm{div}(\Lambda\nabla\bar{u}) &\leq f \mbox{\quad in $\Omega$,} \label{obs2}\\
\bar{u}&\leq g \mbox{\quad in $\Omega$,} \label{obs3}\\
\bar{u} &= 0 \mbox{\quad on $\partial\Omega$.} \label{obs4}\end{aligned}$$ Here $\Omega$ is a bounded open set of ${{\mathbb R}}^d$ ($d\ge 1$), $\mathbf{n}$ is the unit outer normal to $\partial\Omega$ and $(\Gamma_1,\Gamma_2,\Gamma_3)$ is a partition of $\partial\Omega$ (precise assumptions are stated in the next section).
Mathematical theories associated with the existence, uniqueness and stability of the solutions to variational inequalities have been extensively developed [@G10; @G12; @G1]. From the numerical point of view, different methods have been considered to approximate variational inequalities. Bardet and Tobita [@A9] applied a finite difference scheme to the unconfined seepage problem. Extensions of the discontinuous Galerkin method to solve the obstacle problem can be found in [@A13; @A14]. Although this method is still applicable when the functions are discontinuous along the element boundaries, the exact solution has to be in the space $H^{2}$ to ensure consistency and convergence. Numerical verification methods, which aim at finding a set in which a solution exists, have been developed for a few variational inequality problems. In particular, we cite the obstacle problems, the Signorini problem and elasto-plastic problems (see [@A19] and references therein).
Falk [@F1] was the first to provide a general error estimate for the approximation by conforming methods of the obstacle problem. This estimate showed that the $\mathbb{P}_1$ finite element error is $\mathcal{O}(h)$. Yang and Chao [@A28] showed that the convergence rate of the non-conforming finite element method for the Signorini problem has order one, under an $H^{5/2}$-regularity assumption on the solution.
Herbin and Marchand [@A2] showed that if $\Lambda \equiv I_{d}$ and if the grid satisfies the orthogonality condition required by the two-point flux approximation (TPFA) finite volume method, then for both problems the solutions provided by this scheme converge in $L^{2}(\Omega)$ to the unique solution as the mesh size tends to zero.
The gradient schemes framework is a discretisation of weak variational formulations of PDEs using a small number of discrete elements and properties. Previous studies [@B10] showed that this framework includes many well-known numerical schemes: conforming, non-conforming and mixed finite element methods (including the non-conforming “Crouzeix–Raviart” method and the Raviart–Thomas method), hybrid mimetic mixed methods (which contain hybrid mimetic finite differences, hybrid finite volumes/SUSHI scheme and mixed finite volumes), nodal mimetic finite differences, and finite volume methods (such as some multi-point flux approximation and discrete duality finite volume methods). Boundary conditions can be considered in various ways within the gradient schemes framework, which is a useful flexibility for some problems and engineering constraints (e.g. available meshes, etc.). This framework has been analysed for several linear and non-linear elliptic and parabolic problems, including the Leray–Lions, Stokes, Richards, Stefan and elasticity equations. It is noted that, for models based on linear operators as considered here, only three properties are sufficient to obtain the convergence of schemes for the corresponding PDEs. We refer the reader to [@B1; @B2; @B9; @S1; @DE-fvca7; @zamm2013; @eym-12-stef] for more details.
The aim of this paper is to develop a gradient schemes framework for elliptic linear variational inequalities with different types of boundary conditions. Although we focus here on scalar models, as opposed to variational inequalities involving displacements, we notice that using [@B9] our results can be easily adapted to elasticity models. The gradient scheme framework enables us to design a unified convergence analysis of many numerical methods for variational inequalities; using the theorems stemming from this analysis, we recover known estimates for some schemes, and we establish new estimates for methods which were not, to the best of our knowledge, previously studied for variational inequalities. In particular, we apply our results to the hybrid mimetic mixed (HMM) method of [@B5], which contains the mixed/hybrid mimetic finite difference methods and, on certain grids (such as triangular grids), the TPFA scheme. We would like to point out that, to the best of our knowledge, although several papers studied the obstacle problem for *nodal* mimetic finite differences (see e.g. [@A-32; @MFD2]), no study has been conducted of the mixed/hybrid mimetic method for the obstacle problem, or of any mimetic scheme for the Signorini problem. Our analysis through the gradient schemes framework, and our application to the HMM method, therefore seem to provide entirely new results for these schemes on meaningful variational inequality models. We also mention that we are mostly interested in low-order methods, since the models we consider could involve discontinuous data – as in heterogeneous media – which prevent the solution from being smooth. Finally, it seems to us that our unified analysis gives simpler proofs than the previous analyses in the literature. The study of linear models is a first step towards a complete analysis of non-linear models, a work already initiated in [@Y].
This paper is organised as follows. In Section \[main result\], we present the gradient schemes framework for variational inequalities, and we state our main error estimates. We then show in Section \[sec:examples\] that several methods, of practical relevance for anisotropic heterogeneous elliptic operators, fit into this framework and therefore satisfy the error estimates established in Section \[main result\]. This also enables us to design and analyse for the first time an HMM scheme for variational inequalities. We provide in Section \[sec:numer\] numerical results that demonstrate the excellent behaviour of this new scheme on test cases from the literature. Available articles on numerical methods for variational inequalities do not seem to present any test case with analytic solutions, on which theoretical rates of convergence can be rigorously checked; we therefore develop in Section \[sec:numer\] such a test case, and we use it to assess the practical convergence rates of HMM for the Signorini problem. Section \[proof\] is devoted to the proof of the main results. Since we deal here with possibly nonconforming schemes, the technique used in [@F1] cannot be used to obtain error estimates. Instead, we develop a technique similar to the one used in [@B2] for gradient schemes for PDEs. Dealing with variational inequalities however requires us to establish new estimates with additional terms. These additional terms apparently lead to a degraded rate of convergence, but we show that the optimal rates can be easily recovered for many classical methods. We present in Section \[sec:approxbarriers\] an extension of our results to the case of approximate barriers, which is natural in many schemes in which the exact barrier is replaced with an interpolant depending on the approximation space. The paper is completed by two short sections: a conclusion, and an appendix recalling the definition of the normal trace of $H_{\div}$ functions.
Assumptions and main results {#main result}
============================
Weak formulations
-----------------
Let us start by stating our assumptions and weak formulations for the Signorini and the obstacle problems.
We make the following assumptions on the data in –:
1. $\Omega$ is an open bounded connected subset of $\mathbb{R}^{d}$, $d\ge 1$ and $\O$ has a Lipschitz boundary,
2. ${\Lambda}$ is a measurable function from $\Omega$ to $\mathrm{M}_{d}(\mathbb{R})$ (where $\mathrm{M}_{d}(\mathbb{R})$ is the set of $d\times d$ matrices) and there exists $\underline{\lambda}$, $\overline{\lambda} >0$ such that, for a.e. $x \in \Omega$, $\Lambda(x)$ is symmetric with eigenvalues in $[\underline{\lambda},\overline{\lambda}]$,
3. $\Gamma_{1}, \Gamma_{2}$ and $\Gamma_{3}$ are measurable pairwise disjoint subsets of $\partial\Omega$ such that $\Gamma_{1}\cup \Gamma_{2} \cup \Gamma_{3} = \partial\Omega$ and $\Gamma_{1}$ has a positive $(d-1)$-dimensional measure,
4. $f \in L^{2}(\Omega)$, $a\in L^2(\partial\Omega)$.
\[hyseepage\]
Under Hypothesis \[hyseepage\], the weak formulation of Problem – is $$\label{weseepage}
\left\{
\begin{array}{ll}
\dsp \mbox{Find}\; \bar{u} \in \mathcal{K}:=\{ v \in H^{1}(\Omega)\; : \; \gamma(v)=0\; \mbox{on}\; \Gamma_{1},\; \gamma(v) \leq a\; \mbox{on}\; \Gamma_{3}\}\; \mbox{ s.t.},
\\
\dsp \forall v\in \mathcal{K}\,,\quad
\int_\O \Lambda(x)\nabla \bar{u}(x) \cdot \nabla(\bar{u}-v)(x){\, \mathrm{d}}x
\leq \int_\O f(x)(\bar{u}(x)-v(x)){\, \mathrm{d}}x\,,
\end{array}
\right.$$ where $\gamma:H^1(\O) \mapsto H^{1/2}(\partial\O)$ is the trace operator. We refer the reader to [@A2-17] for the proof of equivalence between the strong and the weak formulations. It was shown in [@G10] that, if $\mathcal K$ is not empty (which is the case, for example, if $a\ge 0$ on $\partial\Omega$) then there exists a unique solution to Problem . In the sequel, we will assume that the barrier $a$ is such that $\mathcal K$ is not empty.
Our assumptions on the data in – are:
1. $\Omega$ and $\Lambda$ satisfy (1) and (2) in Hypothesis \[hyseepage\],
2. $f \in L^{2}(\O)$, $g \in L^{2}(\O)$.
\[hyobs\]
Under Hypotheses \[hyobs\], we consider the obstacle problem – under the following weak form: $$\label{weobs}
\left\{
\begin{array}{ll}
\dsp \mbox{Find}\; \bar{u} \in \mathcal{K}:=\{ v \in H_{0}^{1}(\Omega)\; : \; v \leq g\; \mbox{in}\; \Omega\}\;\mbox{ such that, for all $v\in \mathcal{K}$,}\\
\dsp\int_\O \Lambda(x)\nabla \bar{u}(x) \cdot \nabla(\bar{u}-v)(x){\, \mathrm{d}}x
\leq \int_\O f(x)(\bar{u}(x)-v(x)){\, \mathrm{d}}x.
\end{array}
\right.$$ It can be seen [@G11] that if $\bar{u}\in C^{2}(\overline{\O})$ and $\Lambda$ is Lipschitz continuous, then – and are indeed equivalent. The proof of this equivalence can be easily adapted to the case where the solution belongs to $H^{2}(\O)$. If $g$ is such that $\mathcal K$ is not empty, which we assume from here on, then has a unique solution.
Construction of gradient schemes {#gradient discretisation}
--------------------------------
Gradient schemes provide a general formulation of different numerical methods. Each gradient scheme is based on a gradient discretisation, which is a discrete space together with discrete operators used to discretise the weak formulation of the problem under study. In practice, a gradient scheme consists of replacing the continuous space and operators used in the weak formulations by the discrete counterparts provided by a gradient discretisation. In this part, we define gradient discretisations for the Signorini and the obstacle problems, and we list the properties that are required of a gradient discretisation to give rise to a converging gradient scheme.
### Signorini problem {#sec:signorini_problem}
(Gradient discretisation for the Signorini problem). A gradient discretisation $\mathcal{D}$ for Problem is defined by $\mathcal{D}=(X_{\mathcal{D}, \Gamma_{1}}, \Pi_{\mathcal{D}}, \mathbb{T}_{\mathcal{D}}, \nabla_{\mathcal{D}})$, where
1. the set of discrete unknowns $X_{\mathcal{D}, \Gamma_{1}}$ is a finite dimensional vector space on $\mathbb{R}$, taking into account the homogeneous boundary conditions on $\Gamma_{1}$,
2. the linear mapping $\Pi_{\mathcal{D}} : X_{\mathcal{D}, \Gamma_{1}} \longrightarrow L^{2}(\Omega)$ is the reconstructed function,
3. the linear mapping $\mathbb{T}_{\mathcal{D}} : X_{\mathcal{D}, \Gamma_{1}}\longrightarrow H_{\Gamma_{1}}^{1/2}(\partial\Omega)$ is the reconstructed trace, where $H^{1/2}_{\Gamma_1}(\partial\O)=\{v\in H^{1/2}(\partial\O)\,:\,v=0\mbox{ on }\Gamma_1\}$,
4. the linear mapping $\nabla_{\mathcal{D}} : X_{\mathcal{D}, \Gamma_{1}} \longrightarrow L^{2}(\Omega)^{d}$ is a reconstructed gradient, which must be defined such that $\|\nabla_{\mathcal{D}}\cdot\|_{L^{2}(\Omega)^{d}}$ is a norm on $X_{\mathcal{D},{\Gamma_1}}$.
\[gdseepage\]
As explained above, a gradient scheme for is obtained by simply replacing in the weak formulation of the problem the continuous space and operators by the discrete space and operators coming from a gradient discretisation.
(Gradient scheme for the Signorini problem). Let $\mathcal{D}$ be a gradient discretisation in the sense of Definition \[gdseepage\]. The gradient scheme for Problem is $$\label{gsseepage}
\left\{
\begin{array}{ll}
\dsp \mbox{Find}\; u \in \mathcal{K}_{\mathcal{D}}:=\{ v \in X_{\mathcal{D},\Gamma_1}\; : \; \mathbb{T}_{\mathcal{D}}v \leq a\; \mbox{on}\; \Gamma_{3}\}\; \mbox{ such that, $\forall v\in \mathcal{K}_{\mathcal{D}}$},\\
\dsp\int_\O \Lambda(x)\nabla_{\mathcal{D}}u(x) \cdot \nabla_{\mathcal{D}}(u-v)(x){\, \mathrm{d}}x
\leq \int_\O f(x)\Pi_{\mathcal{D}}{(u-v)(x)}{\, \mathrm{d}}x.
\end{array}
\right.$$
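Since $\Lambda(x)$ is symmetric and $\|\nabla_{\mathcal{D}}\cdot\|_{L^{2}(\Omega)^{d}}$ is a norm on $X_{\mathcal{D},\Gamma_1}$, the bilinear form of this gradient scheme is symmetric and coercive. By classical results on variational inequalities over non-empty closed convex sets, the scheme is therefore equivalent to the constrained minimisation $$u=\mathop{\rm argmin}_{v\in \mathcal{K}_{\mathcal{D}}} J_{\mathcal{D}}(v)\,,\qquad J_{\mathcal{D}}(v)=\frac{1}{2}\int_\O \Lambda(x)\nabla_{\mathcal{D}}v(x)\cdot\nabla_{\mathcal{D}}v(x){\, \mathrm{d}}x-\int_\O f(x)\Pi_{\mathcal{D}}v(x){\, \mathrm{d}}x.$$ In particular, whenever $\mathcal{K}_{\mathcal{D}}$ is non-empty the discrete problem has a unique solution, and it can in practice be computed by any solver for quadratic minimisation under linear constraints; this reformulation is not used in the proofs below.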
The accuracy of a gradient discretisation is measured through three indicators, related to its *coercivity*, *consistency* and *limit-conformity*. The good behaviour of these indicators along a sequence of gradient discretisations ensures that the solutions to the corresponding gradient schemes converge towards the solution to the continuous problem.
To measure the coercivity of a gradient discretisation $\mathcal{D}$ in the sense of Definition \[gdseepage\], we define the norm $C_{\mathcal{D}}$ of the linear mapping $\Pi_{\mathcal D}$ by $$\begin{aligned}
C_{\mathcal{D}} = {\displaystyle \max_{v \in X_{\mathcal{D},\Gamma_{1}}\setminus\{0\}}\Big(\frac{\|\Pi_{\mathcal{D}}v\|_{L^{2}(\Omega)}}{\|\nabla_{\mathcal{D}} v\|_{L^{2}(\Omega)^{d}}}}+ \frac{\|\mathbb{T}_{\mathcal{D}}v\|_{H^{1/2}(\partial\Omega)}}{\|\nabla_{\mathcal{D}} v\|_{L^{2}(\Omega)^{d}}}\Big).
\label{coercivityseepage}\end{aligned}$$
The consistency of a gradient discretisation $\mathcal{D}$ in the sense of Definition \[gdseepage\] is measured by $S_{\mathcal{D}} : \mathcal{K}\times \mathcal K_\disc \longrightarrow [0, +\infty)$ defined by $$\forall (\varphi,v) \in \mathcal{K}{\times}{\mathcal K_\disc}
, \; S_{\mathcal{D}}(\varphi,v)= \| \Pi_{\mathcal{D}} v - \varphi \|_{L^{2}(\Omega)}
+ \| \nabla_{\mathcal{D}} v - \nabla \varphi \|_{L^{2}(\Omega)^{d}}.
\label{consistencyseepage}$$
To measure the limit-conformity of the gradient discretisation $\mathcal{D}$ in the sense of Definition \[gdseepage\], we introduce $W_{\mathcal{D}} : H_{\rm{div}}(\Omega) \longrightarrow [0, +\infty)$ defined by $$\begin{gathered}
\forall \bpsi \in H_{\rm{div}}(\Omega),\\
W_{\mathcal{D}}(\bpsi)
= \sup_{v\in X_{\mathcal{D},\Gamma_{1}}\setminus \{0\}}\frac{1}{\|\nabla_{\mathcal{D}} v\|_{L^{2}(\Omega)^{d}}} \Big|\int_{\Omega}(\nabla_{\mathcal{D}}v\cdot \bpsi + \Pi_{\mathcal{D}}v \cdot\mathrm{div} (\bpsi)) {\, \mathrm{d}}x
-\langle \gamma_{\bfn}(\bpsi), \mathbb{T}_{\mathcal{D}}v \rangle \Big|,
\label{conformityseepage}\end{gathered}$$ where $H_{\rm{div}}(\Omega)= \{\bpsi \in L^{2}(\Omega)^{d}{\;:\;} \mathrm{div}\bpsi \in L^{2}(\Omega)\}$. We refer the reader to the appendix (Section \[traceoperator\]) for the definitions of the normal trace $\gamma_{\bfn}$ on $H_{\div}(\O)$, and of the duality product $\langle \cdot, \cdot\rangle$ between $(H^{1/2}(\partial\Omega))'$ and $H^{1/2}(\partial\Omega)$.
In order for a sequence of gradient discretisations $(\mathcal D_m)_{m\in{{\mathbb N}}}$ to provide converging gradient schemes for PDEs, it is expected that $(C_{\mathcal D_m})_{m\in{{\mathbb N}}}$ remains bounded and that $(\inf_{v\in \mathcal K_{\disc_m}}S_{\mathcal D_m}(\cdot,v))_{m\in{{\mathbb N}}}$ and $(W_{\mathcal D_m})_{m\in{{\mathbb N}}}$ converge pointwise to $0$. These properties are precisely called the *coercivity*, the *consistency* and the *limit-conformity* of the sequence of gradient discretisations $(\mathcal D_m)_{m\in{{\mathbb N}}}$, see [@B1].
The definition of $C_{\mathcal D}$ does not only include the norm of $\Pi_{\mathcal D}$, as in the obstacle problem case detailed below, but also quite naturally the norm of the other reconstruction operator $\mathbb{T}_{\mathcal D}$.
The definition takes into account the non-zero boundary conditions on a part of $\partial\Omega$. This was already noticed in the case of gradient discretisations adapted for PDEs with mixed boundary conditions, but a major difference must be highlighted here. $W_{\mathcal{D}}$ will be applied to $\bpsi = \Lambda\nabla\bar{u}$. For PDEs [@B1; @B2; @S1] the boundary conditions ensure that $\gamma_{\bfn}(\bpsi) \in L^{2}(\partial\Omega)$ and thus that $\langle \gamma_{\bfn}(\bpsi), \mathbb{T}_{\mathcal{D}}v \rangle$ can be replaced with $\int_{\Gamma_{3}}\gamma_{\bfn}(\bpsi)\mathbb{T}_{\mathcal{D}}{v} {\, \mathrm{d}}x$ in $W_\disc$. Hence, in the study of gradient schemes for PDEs with mixed boundary conditions, $\mathbb{T}_{\mathcal{D}}v$ only needs to be in $L^{2}(\partial\Omega)$. For the Signorini problem we cannot ensure that $\gamma_{\bfn}(\bpsi) \in L^{2} (\partial\Omega)$; we only know that $\gamma_{\bfn}(\bpsi) \in (H^{1/2}(\partial\Omega))'$. The definition of $\mathbb{T}_\disc$ therefore needed to be changed to ensure that this reconstructed trace takes values in $H^{1/2}(\partial\Omega)$ instead of $L^2(\partial\Omega)$.
### Obstacle problem
Gradient schemes for the obstacle problem already appear in [@Y]. We just recall the following definitions for the sake of legibility.
The definition of a gradient discretisation for the obstacle problem is not different from the definition of a gradient discretisation for elliptic PDEs with homogeneous Dirichlet conditions [@B1; @B2].
(Gradient discretisation for the obstacle problem). A gradient discretisation $\mathcal{D}$ for homogeneous Dirichlet boundary conditions is defined by a triplet $\mathcal{D}=(X_{\mathcal{D},0}, \Pi_{\mathcal{D}},\nabla_{\mathcal{D}})$, where
1. the set of discrete unknowns $X_{\mathcal{D},0}$ is a finite dimensional vector space over $\mathbb{R}$, taking into account the boundary condition .
2. the linear mapping $\Pi_{\mathcal{D}} : X_{\mathcal{D},0} \longrightarrow L^{2}(\Omega)$ gives the reconstructed function,
3. the linear mapping $\nabla_{\mathcal{D}} : X_{\mathcal{D},0} \longrightarrow L^{2}(\Omega)^{d}$ gives a reconstructed gradient, which must be defined such that $\|\nabla_{\mathcal{D}}\cdot\|_{L^{2}(\Omega)^{d}}$ is a norm on $X_{\mathcal{D},0}$.
\[gdobs\]
(Gradient scheme for the obstacle problem). Let $\mathcal{D}$ be a gradient discretisation in the sense of Definition \[gdobs\]. The corresponding gradient scheme for (\[weobs\]) is given by $$\label{gsobs}
\left\{
\begin{array}{ll}
\dsp \mbox{Find}\; u \in \mathcal{K}_{\mathcal{D}}:=\{ v \in X_{\mathcal{D},0}\; : \; \Pi_{\mathcal{D}} v \leq g\; \mbox{in}\; \Omega\}\;
\mbox{ such that, $\forall v\in \mathcal{K}_{\mathcal{D}}$},\\
\dsp\int_\O \Lambda(x)\nabla_{\mathcal{D}}u(x) \cdot \nabla_{\mathcal{D}}(u-v)(x){\, \mathrm{d}}x
\leq \int_\O f(x)\Pi_{\mathcal{D}}{(u-v)(x)}{\, \mathrm{d}}x.
\end{array}
\right.$$
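As for the Signorini problem, since $\Lambda$ is symmetric this discrete variational inequality is the optimality condition of a quadratic minimisation over $\mathcal{K}_{\mathcal{D}}$. When $\Pi_{\mathcal{D}}$ reconstructs piecewise constant functions and $g$ is approximated cell-wise (as for the HMM scheme of Section \[sec:examples\]), the constraint $\Pi_{\mathcal{D}}u\leq g$ becomes a componentwise bound, so that the scheme is a bound-constrained quadratic minimisation problem with a symmetric positive definite matrix. A minimal, illustrative sketch of a projected Gauss–Seidel solver for this algebraic form is given below (in Python, on toy data standing in for an assembled gradient scheme; this is not the monotony algorithm of [@A2-23] used in Section \[sec:numer\]).

```python
import numpy as np

def projected_gauss_seidel(A, b, ub, max_iter=20000, tol=1e-10):
    """Minimise 1/2 u^T A u - b^T u subject to u <= ub (componentwise),
    with A symmetric positive definite; each sweep updates one unknown
    and projects it back onto the constraint."""
    u = np.minimum(np.zeros_like(b), ub)          # feasible starting point
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(len(b)):
            r = b[i] - A[i].dot(u) + A[i, i] * u[i]
            u[i] = min(ub[i], r / A[i, i])
        if np.max(np.abs(u - u_old)) < tol:
            break
    return u

# Toy data: a 1D Laplacian standing in for an assembled discrete diffusion operator,
# constant load f = 4 and constant barrier g = 0.15 (the unconstrained maximum is 0.5).
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
b = np.full(n, 4.0 * h)
u = projected_gauss_seidel(A, b, np.full(n, 0.15))
print(u.max())   # the barrier is active in the middle of the domain: max(u) = 0.15
```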
The coercivity, consistency and limit-conformity of a gradient discretisation in the sense of Definition \[gdobs\] are defined through the following constant and functions:
$$\begin{aligned}
C_{\mathcal{D}} = {\displaystyle \max_{v \in X_{\mathcal{D},0}\setminus\{0\}}\frac{\|\Pi_{\mathcal{D}}v\|_{L^{2}(\Omega)}}{\|\nabla_{\mathcal{D}} v\|_{L^{2}(\Omega)^{d}}}},
\label{coercivityobs}\end{aligned}$$
$$\forall (\varphi,v) \in \mathcal{K}\times \mathcal K_\disc, \; S_{\mathcal{D}}(\varphi,v)= \| \Pi_{\mathcal{D}} v - \varphi \|_{L^{2}(\Omega)}
+ \| \nabla_{\mathcal{D}} v - \nabla \varphi \|_{L^{2}(\Omega)^{d}},
\label{consistencyobs}$$
and $$\forall \bpsi \in H_{\mathrm{div}}(\Omega),\;
W_{\mathcal{D}}(\bpsi)=
\sup_{v\in X_{\mathcal{D},0}\setminus \{0\}}\frac{1}{\|\nabla_{\mathcal{D}} v\|_{L^{2}(\Omega)^{d}}} \Big|\int_{\Omega}(\nabla_{\mathcal{D}}v\cdot \bpsi + \varPi_{\mathcal{D}}v \cdot\mathrm{div} (\bpsi)) {\, \mathrm{d}}x
\Big|.
\label{conformityobs}$$
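Once matrix representations of $\Pi_{\mathcal{D}}$ and $\nabla_{\mathcal{D}}$ are fixed, these indicators are computable algebraic quantities. For instance, if $B$ and $K$ are the symmetric matrices such that $\|\Pi_{\mathcal{D}}v\|^2_{L^{2}(\Omega)}=v^TBv$ and $\|\nabla_{\mathcal{D}}v\|^2_{L^{2}(\Omega)^{d}}=v^TKv$, then $C_{\mathcal{D}}$ is the square root of the largest generalised eigenvalue of the pencil $(B,K)$. A small sketch follows (in Python, for the conforming $\mathbb{P}_1$ discretisation of $H^1_0(0,1)$ on a uniform mesh, where this constant approaches the Poincaré constant $1/\pi$; the matrices below are the standard mass and stiffness matrices, assembled by hand for illustration).

```python
import numpy as np
from scipy.linalg import eigh

n = 100                                # interior nodes of a uniform mesh of (0,1)
h = 1.0 / (n + 1)
# ||nabla_D v||^2 = v^T K v (stiffness matrix), ||Pi_D v||^2 = v^T B v (mass matrix)
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
B = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h / 6.0

# C_D = max_v ||Pi_D v|| / ||nabla_D v|| = sqrt(largest eigenvalue of B x = lambda K x)
C_D = np.sqrt(eigh(B, K, eigvals_only=True)[-1])
print(C_D, 1.0 / np.pi)                # C_D is slightly below, and converges to, 1/pi
```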
The only indicator that changes with respect to [@B1; @B2] is $S_\disc$. For PDEs, the consistency requires considering $S_{\mathcal D}$ on $H^1_0(\Omega)\times X_{\mathcal D,0}$ and ensuring that $\inf_{v\in X_{{\mathcal D_m},0}}S_{\mathcal D_m}(\varphi,v)\to 0$ as $m\to\infty$ for any $\varphi\in H^1_0(\Omega)$. Here, the domain of $S_{\mathcal D}$ is adjusted to the set to which the solution of the variational inequality belongs (namely $\mathcal K$), and to the set in which we can pick the test functions of the gradient scheme (namely $\mathcal K_\disc$).
We note that this double reduction does not necessarily facilitate, with respect to the PDEs case, the proof of the consistency of the sequence of gradient discretisations. In practice, however, the proof developed for the PDE case also provides the consistency of gradient discretisations for variational inequalities.
Error estimates
---------------
We present here our main error estimates for the gradient schemes approximations of Problems and .
### Signorini problem {#signorini-problem}
Under Hypothesis \[hyseepage\], let $\bar{u} \in \mathcal{K}$ be the solution to Problem (\[weseepage\]). If $\mathcal{D}$ is a gradient discretisation in the sense of Definition \[gdseepage\] and $\mathcal{K}_{\mathcal{D}}$ is non-empty, then there exists a unique solution $u \in \mathcal K_\disc$ to the gradient scheme (\[gsseepage\]). Furthermore, this solution satisfies the following inequalities, for any $v_\disc\in\mathcal K_\disc$: $$\begin{gathered}
\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}} \leq\\
\sqrt{\frac{2}{\underline{\lambda}}G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^{+}}
+
\frac{1}{\underline{\lambda}}
[W_{\mathcal{D}}(\Lambda\nabla\bar{u})+(\overline{\lambda}+\underline{\lambda})
S_{\mathcal{D}}(\bar{u}, v_{\mathcal{D}})]
,
\label{errorseepage1}\end{gathered}$$ and $$\begin{gathered}
\|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)} \leq\\
C_{\mathcal{D}}\sqrt{\frac{2}{\underline{\lambda}}G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+}+
\frac{1}{\underline{\lambda}}
[{C_\disc}W_{\mathcal{D}}(\Lambda\nabla\bar{u})+
(C_{\mathcal{D}}\overline{\lambda}+\underline{\lambda})S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})],
\label{errorseepage2}\end{gathered}$$ where $$G_{\mathcal{D}}(\bar{u}, v_{\mathcal{D}})= \langle \gamma_\bfn(\Lambda\nabla\bar u), \mathbb{T}_{\mathcal{D}}v_{\mathcal{D}}-\gamma\bar{u} \rangle
\quad\mbox{ and }\quad G_{\disc}^{+}= {\rm max}(0,G_{\disc}).$$ Here, the quantities $C_{\mathcal{D}}$, $S_{\mathcal{D}}(\bar{u}$, $v_{\mathcal{D}})$ and $W_{\mathcal{D}}$ are defined by , and . \[errorseepage\]
### Obstacle problem
An error estimate for gradient schemes for the obstacle problem is already given in [@Y Theorem 1]. For low-order methods with piecewise constant approximations, such as the HMM schemes, this theorem provides an $\mathcal O(\sqrt{h})$ rate of convergence for the function and the gradient ($h$ is the mesh size). The following theorem improves [@Y Theorem 1] by introducing the free choice of interpolant $v_\disc$. This enables us, in Section \[improveobs\], to establish much better rates of convergence – namely $\mathcal O(h)$ for HMM, for example.
Under Hypothesis \[hyobs\], let $\bar{u} \in \mathcal{K}$ be the solution to Problem (\[weobs\]). If $\mathcal{D}$ is a gradient discretisation in the sense of Definition \[gdobs\] and if $\mathcal{K}_{\mathcal{D}}$ is non-empty, then there exists a unique solution $u \in \mathcal{K}_{\mathcal{D}}$ to the gradient scheme (\[gsobs\]). Moreover, if $\mathrm{div}(\Lambda\nabla\bar{u})\in L^{2}(\Omega)$ then this solution satisfies the following estimates, for any $v_\disc\in \mathcal K_\disc$: $$\begin{gathered}
\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}} \leq\\
\sqrt{\frac{2}{\underline{\lambda}}E_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^{+}}
+
\frac{1}{\underline{\lambda}}
[W_{\mathcal{D}}(\Lambda\nabla\bar{u})+(\overline{\lambda}+\underline{\lambda})
S_{\mathcal{D}}(\bar{u}, v_{\mathcal{D}})]
,
\label{errorobs1}\end{gathered}$$ $$\begin{gathered}
\|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)} \leq\\
C_{\mathcal{D}}\sqrt{\frac{2}{\underline{\lambda}}E_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+}+
\frac{1}{\underline{\lambda}}
[{C_\disc}W_{\mathcal{D}}(\Lambda\nabla\bar{u})+
(C_{\mathcal{D}}\overline{\lambda}+\underline{\lambda})S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})],
\label{errorobs2}\end{gathered}$$ where $$E_{\mathcal{D}}(\bar
u,v_{\mathcal{D}})=\int_{\Omega}(\mathrm{div}(\Lambda\nabla\bar{u})+f)(\bar{u}-\Pi_{\mathcal{D}}v_{\mathcal{D}}){\, \mathrm{d}}x \quad
\mbox{ and } \quad E_{\disc}^{+}= {\rm max}(0,E_{\disc}).$$ Here, the quantities $C_{\mathcal{D}}$, $S_{\mathcal{D}}(\bar{u}, v_{\mathcal{D}})$ and $W_{\mathcal{D}}$ are defined by , and . \[errorobs\]
We note that, if $\Lambda$ is Lipschitz-continuous, the assumption $\div(\Lambda\nabla\bar{u}) \in L^{2}(\Omega)$ is reasonable given the $H^{2}$-regularity result on $\bar u$ of [@A2-23-4].
Compared with the previous general estimates, such as Falk’s Theorem [@F1], our estimates seem to be simpler since they only involve one choice of interpolant $v_\disc$ while Falk’s estimate depends on choices of $v_{h} \in K_{h}$ and $v \in K$. Yet, our estimates provide the same final orders of convergence as Falk’s estimate. We also note that Estimates , , and are applicable to conforming and non-conforming methods, whereas the estimates in [@F1] seem to be applicable only to conforming methods.
Orders of convergence
---------------------
In what follows, we consider polytopal open sets $\Omega$ and gradient discretisations based on polytopal meshes of $\Omega$, as defined in [@B10; @S1]. The following definition is a simplification (it does not include the vertices) of the definition in [@B10], which will however be sufficient for our purpose.
\[def:polymesh\] Let $\Omega$ be a bounded polytopal open subset of ${{\mathbb R}}^d$ ($d\ge 1$). A polytopal mesh of $\O$ is given by ${{\mathcal T}}= (\mesh,\edges,{\mathcal{P}})$, where:
1. $\mesh$ is a finite family of non empty connected polytopal open disjoint subsets of $\O$ (the cells) such that $\overline{\O}= \dsp{\cup_{K \in \mesh} \overline{K}}$. For any $K\in\mesh$, $|K|>0$ is the measure of $K$ and $h_K$ denotes the diameter of $K$.
2. $\edges$ is a finite family of disjoint subsets of $\overline{\O}$ (the edges of the mesh in 2D, the faces in 3D), such that any $\edge\in\edges$ is a non empty open subset of a hyperplane of ${{\mathbb R}}^d$ and $\edge\subset \overline{\O}$. We assume that for all $K \in \mesh$ there exists a subset ${{{\edges}_K}}$ of $\edges$ such that $\dr K = \dsp{\cup_{\edge \in {{{\edges}_K}}}} \overline{\edge}$. We then denote by $\mesh_\edge = \{K\in\mesh\,:\,\edge\in{{{\edges}_K}}\}$. We then assume that, for all $\edge\in\edges$, $\mesh_\edge$ has exactly one element and $\edge\subset\partial\O$, or $\mesh_\edge$ has two elements and $\edge\subset\O$. We let ${{{\edges}_{\rm int}}}$ be the set of all interior faces, i.e. $\edge\in\edges$ such that $\edge\subset \O$, and ${{{\edges}_{\rm ext}}}$ the set of boundary faces, i.e. $\edge\in\edges$ such that $\edge\subset \dr\O$. For $\edge\in\edges$, the $(d-1)$-dimensional measure of $\edge$ is $|\edge|$, the centre of gravity of $\edge$ is ${\overline{x}_\edge}$, and the diameter of $\edge$ is $h_\edge$.
3. ${\mathcal{P}}= (x_K)_{K \in \mesh}$ is a family of points of $\O$ indexed by $\mesh$ and such that, for all $K\in\mesh$, $\xcv\in K$ ($\xcv$ is sometimes called the “centre” of $\cv$). We then assume that all cells $K\in\mesh$ are strictly $\xcv$-star-shaped, meaning that if $x$ is in the closure of $K$ then the line segment $[\xcv,x)$ is included in $K$.
For a given $K\in \mesh$, let $\bfn_{K,\sigma}$ be the unit vector normal to $\sigma$ outward to $K$ and denote by $d_{K,\sigma}$ the orthogonal distance between $x_K$ and $\sigma\in\mathcal E_K$. The size of the discretisation is $h_\mesh=\sup\{h_K\,:\; K\in \mesh\}$.
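In computations, such a polytopal mesh is conveniently stored as plain incidence data: faces as lists of vertices, cells as lists of faces, and one point $x_K$ per cell. The following minimal sketch (in Python, on the unit square split into two triangles; all names are ours and purely illustrative) mirrors the definition and checks that each boundary face has one neighbouring cell while the interior face has two.

```python
from dataclasses import dataclass

@dataclass
class PolytopalMesh:
    """Bare-bones container mirroring T = (M, E, P)."""
    vertices: list      # vertex coordinates
    faces: list         # each face: tuple of vertex indices
    cells: list         # each cell: list of face indices
    cell_points: list   # one point x_K strictly inside each cell

    def cells_of_face(self, s):
        return [K for K, face_ids in enumerate(self.cells) if s in face_ids]

# Unit square split into two triangles by the diagonal (0,0)-(1,1).
mesh = PolytopalMesh(
    vertices=[(0, 0), (1, 0), (1, 1), (0, 1)],
    faces=[(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)],
    cells=[[0, 1, 4], [2, 3, 4]],
    cell_points=[(2 / 3, 1 / 3), (1 / 3, 2 / 3)],
)
print([len(mesh.cells_of_face(s)) for s in range(len(mesh.faces))])   # [1, 1, 1, 1, 2]
```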
For most gradient discretisations for PDEs based on first order methods, including HMM methods and conforming or non-conforming finite elements methods, explicit estimates on $S_\disc$ and $W_\disc$ can be established [@S1]. The proofs of these estimates are easily transferable to the above setting of gradient discretisations for variational inequalities, and give $$W_{\mathcal{D}}(\bpsi) \leq Ch_{\mathcal M} \| \bpsi \|_{H^{1}(\Omega)^{d}}, \quad \forall \bpsi \in H^{1}(\Omega)^{d},$$ $${\rm{inf}}_{v_{\mathcal{D}}\in \mathcal{K}_{\mathcal{D}}}S_{\mathcal{D}}(\varphi,v_{\mathcal{D}}) \leq Ch_{\mathcal M} \| \varphi \|_{H^{2}(\Omega)}, \quad \forall \varphi \in H^{2}(\Omega) \cap \mathcal K.$$ Hence, if $\Lambda$ is Lipschitz-continuous and $\bar u\in H^2(\O)$, then the terms involving $W_\disc$ and $S_\disc$ in , , and are of order $\mathcal O(h_{\mathcal M})$, provided that $v_{\mathcal D}$ is chosen to optimise $S_{\disc}$. Since the terms $G_{\mathcal D}$ and $E_{\mathcal D}$ can be bounded above by $C||\gamma(\bar u)-
\mathbb{T}_\disc v_\disc||_{L^2(\partial\O)}$ and $C||\bar u-\Pi_\disc v_\disc||_{L^2(\O)}$, respectively, we expect these to be of order $h_{\mathcal M}$. Hence, the dominating terms in the error estimates are $\sqrt{G_\disc^+}$ and $\sqrt{E_\disc^+}$. To a first approximation, these terms seem to behave as $\sqrt{h_{\mathcal M}}$ for a first order conforming or non-conforming method. This initial crude estimate can however usually be improved, as we will show in Theorems \[theorem:improveseepage\] and \[theorem:improveobse\], even for non-conforming methods based on piecewise constant reconstructed functions, and lead to the expected $\mathcal O(h_{\mathcal M})$ global convergence rate.
### Signorini problem {#sec:improvesig}
For a first order conforming numerical method, the $\mathbb{P}_1$ finite element method for example, if $\bar u$ is in $H^2(\O)$, then the classical interpolant $v_\disc\in X_{\disc,\Gamma_1}$ constructed from the values of $\bar u$ at the vertices satisfies [@C3] $$\begin{aligned}
\| \Pi_{\mathcal{D}}v_{\mathcal{D}} - \bar{u} \|_{L^{2}(\Omega)} &\leq C h_{\mathcal M}^{2} \| D^{2}\bar{u}\|_{L^{2}(\Omega)^{d\times d}},\\
\| \nabla_{\mathcal{D}}v_{\mathcal{D}} - \nabla\bar{u} \|_{L^{2}(\Omega)^d} &\leq Ch_{\mathcal M} \| D^{2}\bar{u}\|_{L^{2}(\Omega)^{d\times d}},\\
||\mathbb{T}_\disc v_\disc-\gamma(\bar u)
||_{L^2(\partial\O)}&\leq Ch_{\mathcal M}^2 \| D^{2}\gamma(\bar{u})\|_{L^{2}(\partial\Omega)^{d\times d}}.\end{aligned}$$ If $a$ is piecewise affine on the mesh then this interpolant $v_\disc$ lies in $\mathcal K_\disc$ and can therefore be used in Theorem \[errorseepage\]. We then have $S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})= \mathcal{O}(h_\mesh)$ and $G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})= \mathcal{O}(h_\mesh^{2})$, and – give an order one error estimate on the $H^1$ norm. If $a$ is not linear, the definition of $\mathcal K_\disc$ is usually relaxed, see Section \[sec:approxbarriers\].
A number of low-order methods have piecewise constant approximations of the solution, e.g. finite volume methods or finite element methods with mass lumping. For those, there is no hope of finding an interpolant which gives an order $h_{\mathcal M}^2$ approximation of $\bar u$ in $L^2(\O)$ norm. We can however prove, for such methods, that $G_\disc$ behaves better than the expected $\sqrt{h_\mesh}$ order. In the following theorem, we denote by $W^{2,\infty}(\partial\O)$ the functions $v$ on $\partial\O$ such that, for any face $F$ of $\partial\O$, $v_{|F}\in W^{2,\infty}(F)$.
Under Hypothesis \[hyseepage\], let $\disc$ be a gradient discretisation in the sense of Definition \[gdseepage\], such that $\mathcal K_\disc\not=\emptyset$, and let ${{\mathcal T}}=(\mesh,\edges,{\mathcal{P}})$ be a polytopal mesh of $\O$. Let $\bar{u}$ and $u$ be the respective solutions to Problems and . We assume that the barrier $a$ is constant, that $\gamma_\bfn(\Lambda\nabla\bar u)\in L^2(\partial\O)$ and that $\gamma(\bar u)\in W^{2,\infty}(\partial\O)$. We also assume that there exists an interpolant $v_{\mathcal{D}} \in \mathcal{K}_{\mathcal{D}}$ such that $S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\le C_1h_{\mathcal M}$ and that, for any $\sigma\in{{{\edges}_{\rm ext}}}$, $||\mathbb{T}_{\mathcal{D}}v_{\mathcal{D}}-\gamma(\bar{u})(x_{\sigma})||_{L^2(\edge)}\le C_2 h_\sigma^2|\edge|^{1/2}$, where $x_{\sigma}\in\sigma$ (here, the constants $C_i$ do not depend on the edge or the mesh). Then, there exists $C$ depending only on $\O$, $\Lambda$, $\bar u$, $a$, $C_1$, $C_2$ and an upper bound of $C_\disc$ such that $$\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}}
+ \|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)}
\leq Ch_{\mathcal M}+CW_{\disc}(\Lambda\nabla\bar u).
\label{eq:improverrorseepage}$$ \[theorem:improveseepage\]
\[rem:makesense\] Since $\gamma_\bfn(\Lambda\nabla\bar u) \in L^2(\partial\O)$ the reconstructed trace $\mathbb{T}_\disc$ can be taken with values in $L^2(\partial\Omega)$ (see the discussion at the end of Section \[sec:signorini\_problem\]). In particular, $\mathbb{T}_\disc v$ can be piecewise constant. The $H^{1/2}(\partial\Omega)$ norm in the definition of $C_\disc$ is then replaced with the $L^2(\partial\O)$ norm, and the duality product in is replaced with a plain integral on $\partial\O$.
For the two-point finite volume method, an error estimate similar to is stated in [@A2], under the assumption that the solution is in $H^2(\O)$. It however seems to us that the proof in [@A2] uses an estimate which requires $\gamma(\bar u)\in W^{2,\infty}(\dr\O)$, as in Theorem \[theorem:improveseepage\].
We notice that the assumption that $a$ is constant is compatible with some models, such as an electrochemical reaction and friction mechanics [@A2-17]. This condition on $a$ can be relaxed or, as detailed in Section \[sec:approxbarriers\], $a$ can be approximated by a simpler barrier – the definition of which depends on the considered scheme.
### Obstacle problem {#improveobs}
As explained in the introduction of this section, to estimate the order of convergence for the obstacle problem we only need to estimate $E_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})$. This can be readily done for $\mathbb{P}_1$ finite elements, for example. If $g$ is linear or constant, letting $v_{\disc} = \bar u$ at all vertices (as in [@finite-element]) shows that $v_{\disc}$ is an element of $\mathcal K_{\disc}$, and that $S_{\disc}(\bar u, v_{\disc})= \mathcal O (h_\mesh)$ and $E_{\disc}(\bar u, v_{\disc})= \mathcal O (h_\mesh^{2})$. Therefore, Theorem \[errorobs\] provides an order one error estimate.
The following theorem shows that, as for the Signorini problem, the error estimate for the obstacle problem is of order one even for methods with piecewise constant reconstructed functions.
Under Hypothesis \[hyobs\], let $\disc$ be a gradient discretisation in the sense of Definition \[gdobs\], such that $\mathcal K_\disc\not=\emptyset$, and let ${{\mathcal T}}=(\mesh,\edges,{\mathcal{P}})$ be a polytopal mesh of $\O$. Let $\bar{u}$ and $u$ be the respective solutions to Problems and . We assume that $g$ is constant, that $\div(\Lambda\nabla\bar u)\in L^2(\O)$, and that $\bar u\in W^{2,\infty}(\O)$. We also assume that there exists an interpolant $v_{\mathcal{D}} \in \mathcal{K}_{\mathcal{D}}$ such that $S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\le C_1h_{\mathcal M}$ and, for any $K\in\mesh$, $||\Pi_{\mathcal{D}}v_{\mathcal{D}}-\bar{u}(x_{K})||_{L^2(K)}\le C_2h_K^2|K|^{1/2}$ (here, the various constants $C_i$ do not depend on the cell or the mesh). Then, there exists $C$ depending only on $\O$, $\underline{\lambda}$, $\overline{\lambda}$, $C_1$, $C_2$ and an upper bound of $C_\disc$ such that $$\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}}
+ \|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)}
\leq Ch_{\mathcal M}{+CW_{\disc}(\Lambda\nabla\bar u).}
\label{eq:improverrorobs}$$ \[theorem:improveobse\]
As for the Signorini problem, under regularity assumptions the condition that $g$ is constant can be relaxed, see Remark \[relax:gconst\]. However, if $g$ is not constant it might be more practical to use an approximation of this barrier in the numerical discretisation – see Section \[sec:approxbarriers\].
The various regularity assumptions made on the data or the solutions in all previous theorems are used to state optimal orders of convergence. The convergence of gradient schemes can however be established under the sole regularity assumptions stated in Hypotheses \[hyseepage\] and \[hyobs\], see [@Y].
Examples of gradient schemes {#sec:examples}
============================
Many numerical schemes can be seen as gradient schemes. We show here that two families of methods, namely conforming Galerkin methods and HMM methods, can be recast as gradient schemes when applied to variational inequalities.
Galerkin methods
----------------
Any Galerkin method, including conforming finite elements and spectral methods of any order, fits into the gradient scheme framework.
For the Signorini problem, if $V$ is a finite dimensional subspace of $\{v\in H^1(\O)\,:\,\gamma(v)=0\mbox{ on $\Gamma_1$}\}$, we let $X_{\disc,\Gamma_1}=V$, $\Pi_\disc={\rm Id}$, $\nabla_\disc=\nabla$ and $\mathbb{T}_\disc=\gamma$. Then $(X_{\disc,\Gamma_{1}},\Pi_\disc,\mathbb{T}_\disc,\nabla_\disc)$ is a gradient discretisation and the corresponding gradient scheme is the Galerkin approximation of . The constant $C_\disc$ defined by is then bounded above in terms of the Poincaré constant and the norm of the trace operator $\gamma:H^1(\O)\to H^{1/2}(\dr\O)$.
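As a concrete illustration of this conforming setting, consider the one-dimensional Signorini problem $-\bar u''=1$ on $(0,1)$ with $\bar u(0)=0$ ($\Gamma_1=\{0\}$) and $\bar u(1)\le a$, $\bar u'(1)\le 0$, $\bar u'(1)(a-\bar u(1))=0$ ($\Gamma_3=\{1\}$), discretised by $\mathbb{P}_1$ finite elements. Since only one discrete unknown is constrained, the complementarity condition can be resolved by first trying the Neumann branch and switching to the Dirichlet branch if the barrier is violated (with several constrained unknowns an active-set or projected iteration would be used instead). The following sketch, in Python with parameters chosen by us for illustration, reproduces the exact behaviour $\bar u(1)=a$ and $\bar u'(1)=-0.2$ for $a=0.3$.

```python
import numpy as np

N, a = 64, 0.3
h = 1.0 / N
# P1 stiffness matrix and load vector for -u'' = 1, nodes 1..N (node 0 carries u(0) = 0)
A = (np.diag(np.r_[2.0 * np.ones(N - 1), 1.0]) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h
b = np.r_[h * np.ones(N - 1), h / 2.0]

u = np.linalg.solve(A, b)                  # Neumann branch: impose u'(1) = 0
if u[-1] > a:                              # barrier violated: switch to u(1) = a
    u = np.zeros(N)
    u[-1] = a
    u[:-1] = np.linalg.solve(A[:-1, :-1], b[:-1] - A[:-1, -1] * a)
flux = A[-1].dot(u) - b[-1]                # discrete normal derivative u'(1)
print(u[-1], flux)                         # expect 0.3 and approximately -0.2 (<= 0)
```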
Conforming Galerkin methods for the obstacle problem consist in taking $X_{\disc,0}=V$ a finite dimensional subspace of $H_0^1(\O)$, $\Pi_\disc={\rm Id}$ and $\nabla_\disc=\nabla$. For this gradient discretisation, $C_\disc$ (defined by ) is bounded above by the Poincaré constant in $H^1_0(\O)$.
For both the Signorini and the obstacle problems, $W_\disc$ is identically zero and the error estimate is solely dictated by the interpolation error $S_\disc(\bar u,v)$ and by $G_\disc(\bar u,v)$ or $E_\disc(\bar u,v)$, as expected. For $\mathbb{P}_1$ finite elements, if the barrier ($a$ or $g$) is piecewise affine on the mesh then the classical interpolant $v$ constructed from the nodal values of $\bar u$ belongs to $\mathcal K$. As we saw, using this interpolant in Estimates and leads to the expected order 1 convergence.
If the barrier is more complex, then it is usual to consider some piecewise affine approximation of it in the definition of the scheme, and the gradient scheme is then modified to take into account this approximate barrier (see Section \[sec:approxbarriers\]); the previous natural interpolant of $\bar u$ then belongs to the (approximate) discrete set $\widetilde{\mathcal K}_\disc$ used to define the scheme, and gives a proper estimate of $S_\disc(\bar u,v)$ and $G_\disc(\bar u,v)$ or $E_\disc(\bar u,v)$.
Hybrid mimetic mixed methods (HMM) {#sec:hmm}
----------------------------------
HMM is a family of methods, introduced in [@B5], that contains the hybrid finite volume methods [@sushi], the (mixed-hybrid) mimetic finite difference methods [@bre-05-fam] and the mixed finite volume methods [@dro-06-mix]. These methods were developed for anisotropic heterogeneous diffusion equations on generic grids, as often encountered in engineering problems related to flows in porous media. So far, HMM schemes have only been considered for PDEs. The generic setting of gradient schemes enables us to develop and analyse HMM methods for variational inequalities.
The (conforming) mimetic methods studied in [@A-32] for obstacle problems are different from the mixed-hybrid mimetic method of the HMM family; they are based on nodal unknowns rather than cell and face unknowns. These conforming mimetic methods however also fit into the gradient scheme framework [@B10].
### Signorini problem {#sec:HMM.sig}
Let ${{\mathcal T}}$ be a polytopal mesh of $\Omega$. The discrete space to consider is $$\label{def.XDG}
\begin{aligned}
X_{\disc,\Gamma_{1}}=\{ &v=((v_{K})_{K\in \mathcal{M}}, (v_{\sigma})_{\sigma \in \mathcal{E}})\;:\; v_{K} \in {{\mathbb R}}, v_{\sigma} \in {{\mathbb R}},\\
& v_{\sigma}=0 \; \mbox{for all}\; \sigma \in {{{\edges}_{\rm ext}}}\mbox{ such that }\sigma\subset \Gamma_{1} \}
\end{aligned}$$ (we assume here that the mesh is compatible with $(\Gamma_i)_{i=1,2,3}$, in the sense that for any $i=1,2,3$, each boundary edge is either fully included in $\Gamma_i$ or disjoint from this set). The operators $\Pi_\disc$ and $\nabla_\disc$ are defined as follows: $$\forall v\in X_{\disc,\Gamma_1}\,,\;\forall K\in\mathcal M\,:\,\Pi_\disc v=v_K\mbox{ on $K$},$$ $$\forall v\in X_{\disc,\Gamma_1}\,,\;\forall K\in\mathcal M\,:\,\nabla_\disc v=\nabla_{K}v+
\frac{\sqrt{d}}{d_{K,\sigma}}(A_{K}R_{K}(v))_{\sigma}\mathbf{n}_{K,\sigma}\mbox{ on }{\rm co}(\{\sigma,x_K\})$$ where ${\rm co}(S)$ is the convex hull of the set $S$ and
- $\nabla_{K}v= \frac{1}{|K|}\sum_{\sigma\in {{{\edges}_K}}}|\sigma|(v_{\sigma}-v_K)\mathbf{n}_{K,\sigma}$,
- $R_{K}(v)=(v_{\sigma}-v_{K}-\nabla_{K}v\cdot (x_{\sigma}-x_{K}))_{\sigma\in\mathcal E_K}\in {{\mathbb R}}^{{\rm Card}({{{\edges}_K}})}$,
- $A_K$ is an isomorphism of the vector space ${\rm Im}(R_K)$.
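These local objects are straightforward to compute from the cell geometry. The sketch below (in Python, for a two-dimensional convex cell described by its counter-clockwise ordered vertices; the example pentagon is ours) evaluates $\nabla_K v$ and $R_K(v)$, taking $x_\sigma$ as the edge midpoint, and checks that $\nabla_K v$ reproduces the gradient of an affine function exactly, with $R_K(v)=0$, when the discrete values interpolate that function at $x_K$ and at the edge midpoints.

```python
import numpy as np

def cell_geometry(vertices, x_K):
    """Edge lengths |sigma|, midpoints, outward unit normals n_{K,sigma},
    distances d_{K,sigma} and cell measure |K| for a convex polygon (CCW vertices)."""
    V = np.asarray(vertices, dtype=float)
    m = len(V)
    lengths, mids, normals, dists, area = [], [], [], [], 0.0
    for k in range(m):
        p, q = V[k], V[(k + 1) % m]
        e = q - p
        le = np.linalg.norm(e)
        n = np.array([e[1], -e[0]]) / le            # outward normal for CCW ordering
        xs = 0.5 * (p + q)
        lengths.append(le); mids.append(xs); normals.append(n)
        dists.append(np.dot(n, xs - x_K))           # orthogonal distance x_K -> sigma
        area += 0.5 * (p[0] * q[1] - q[0] * p[1])   # shoelace formula
    return (np.array(lengths), np.array(mids), np.array(normals),
            np.array(dists), area)

def hmm_local_operators(vertices, x_K, v_K, v_edges):
    """Cell gradient nabla_K v and stabilisation residuals R_K(v)."""
    lengths, mids, normals, dists, area = cell_geometry(vertices, x_K)
    grad_K = np.sum((lengths * (v_edges - v_K))[:, None] * normals, axis=0) / area
    R_K = v_edges - v_K - (mids - x_K) @ grad_K
    return grad_K, R_K

# Exactness on affine functions v(x) = a.x + c: nabla_K v = a and R_K(v) = 0.
verts = [(0.0, 0.0), (1.0, 0.0), (1.2, 0.9), (0.5, 1.4), (-0.2, 0.8)]
x_K = np.mean(np.asarray(verts), axis=0)
a, c = np.array([2.0, -1.0]), 0.3
_, mids, _, _, _ = cell_geometry(verts, x_K)
grad_K, R_K = hmm_local_operators(verts, x_K, a @ x_K + c, mids @ a + c)
print(grad_K, np.max(np.abs(R_K)))    # expect [2., -1.] and ~0
```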
It is natural to consider a piecewise constant trace reconstruction: $$\label{def:TD}
\forall \sigma\in\mathcal E_{\rm ext}\,:\,\mathbb{T}_\disc v = v_\sigma\mbox{ on $\sigma$}.$$ This reconstructed trace operator does not take values in $H^{1/2}_{\Gamma_1}(\partial\O)$, and the corresponding gradient discretisation $\disc$ is therefore not admissible in the sense of Definition \[gdseepage\]. However, for sufficiently regular $\bar u$, we can consider reconstructed traces in $L^2(\partial\O)$ only, see Remark \[rem:makesense\].
It is proved in [@B1] that, if $\Gamma_1=\dr\O$, then $(X_{\disc,\Gamma_1},\Pi_\disc,\nabla_\disc)$ is a gradient discretisation for homogeneous Dirichlet boundary conditions, whose gradient scheme gives the HMM method when applied to elliptic PDEs.
With $ \mathcal{K}_{\disc}:= \{ v \in X_{\disc,\Gamma_{1}} \; : \; v_{\sigma} \leq a
\mbox{ on $\sigma$, for all $\sigma \in \mathcal{E}_{\rm ext}$ such that $\sigma\subset \Gamma_{3}$}\}$, the gradient scheme corresponding to this gradient discretisation can be recast as $$\label{HMM-ineq-sig}
\left\{
\begin{array}{l}
\dsp \mbox{Find}\; u \in \mathcal K_\disc \mbox{ such that, for all } v\in \mathcal{K}_{\disc}\,,\\
\dsp\sum_{K\in \mesh}|K|\Lambda_K\nabla_{K}u\cdot \nabla_{K}(u-v)+ \dsp\sum_{K\in \mesh}R_{K}(u-v)^{T}\mathbb{B}_{K}R_{K}(u) \\
~\hfill\leq
\dsp\sum_{K\in \mesh}(u_{K}-v_{K})\dsp\int_{K}f(x) {\, \mathrm{d}}x.
\end{array}
\right.$$ where $\Lambda_K$ is the value of $\Lambda$ in the cell $K$ (it is usual – although not mandatory – to assume that $\Lambda$ is piecewise constant on the mesh) and, for $K\in\mesh$, $\mathbb{B}_K$ is a symmetric positive definite matrix of size ${\rm Card}(\mathcal E_K)$. This matrix is associated to $A_K$, see [@B1] for the details.
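For illustration, the following sketch (in Python) assembles the local matrix of one particular member of the family, obtained by taking $A_K$ equal to the identity on ${\rm Im}(R_K)$ (the SUSHI-type choice); it simply integrates $\Lambda_K\nabla_{\mathcal{D}}u\cdot\nabla_{\mathcal{D}}v$ over the cones ${\rm co}(\{\sigma,x_K\})$, which is the definition of the gradient scheme bilinear form on the cell. This is a toy sketch with our own naming, not the implementation used for the numerical tests of Section \[sec:numer\].

```python
import numpy as np

def hmm_local_matrix(vertices, x_K, Lambda_K):
    """Local matrix A_loc (unknowns ordered [v_K, v_sigma_1, ...]) such that
    v^T A_loc u = int_K Lambda_K nabla_D u . nabla_D v, for the choice A_K = Id."""
    V = np.asarray(vertices, dtype=float)
    m, d = len(V), 2
    L, X, N, D, area = [], [], [], [], 0.0
    for k in range(m):                              # edge geometry (CCW vertices)
        p, q = V[k], V[(k + 1) % m]
        e = q - p
        le = np.linalg.norm(e)
        n = np.array([e[1], -e[0]]) / le
        xs = 0.5 * (p + q)
        L.append(le); X.append(xs); N.append(n); D.append(np.dot(n, xs - x_K))
        area += 0.5 * (p[0] * q[1] - q[0] * p[1])
    L, X, N, D = map(np.array, (L, X, N, D))

    def cone_gradients(v):                          # v = [v_K, v_sigma_1, ...]
        v_K, vs = v[0], v[1:]
        g_K = np.sum((L * (vs - v_K))[:, None] * N, axis=0) / area
        R = vs - v_K - (X - x_K) @ g_K
        return g_K[None, :] + (np.sqrt(d) / D * R)[:, None] * N   # one gradient per cone

    cone_meas = L * D / d                           # |co({sigma, x_K})| = |sigma| d_{K,sigma}/d
    grads = [cone_gradients(np.eye(m + 1)[i]) for i in range(m + 1)]
    A_loc = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        for j in range(m + 1):
            A_loc[i, j] = np.sum(cone_meas * np.einsum('kd,de,ke->k',
                                                       grads[j], Lambda_K, grads[i]))
    return A_loc

# Sanity check on the unit square with an anisotropic tensor: A_loc is symmetric
# and constants lie in its kernel (zero row sums).
A_loc = hmm_local_matrix([(0, 0), (1, 0), (1, 1), (0, 1)], np.array([0.5, 0.5]),
                         np.array([[2.0, 0.5], [0.5, 1.0]]))
print(np.allclose(A_loc, A_loc.T), np.allclose(A_loc.sum(axis=1), 0.0))
```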
Under classical assumptions on the mesh and on the matrices $\mathbb{B}_K$, it is shown in [@B1] that the constant $C_\disc$ can be bounded above by quantities not depending on the mesh size, and that $W_\disc(\bpsi)$ is of order $\mathcal O(h_\mesh)$. If $d\le 3$ and $\bar u\in H^2(\O)$, we can construct the interpolant $v_\disc=((v_K)_{K\in\mesh},(v_\edge)_{\edge\in\edges})$ with $v_K=\bar u(x_K)$ and $v_\edge=\bar u({\overline{x}_\edge})$ and, by [@S1 Propositions 13.14 and 13.15], we have $$S_\disc(\bar u,v_\disc)=||\Pi_\disc v_\disc-\bar u||_{L^2(\O)}+ ||\nabla_\disc v_\disc-\nabla\bar u||_{L^2(\O)^d}\le Ch_\mesh||\bar u||_{H^2(\O)}.$$ Under the assumption that $\bar u\in W^{2,\infty}(\O)$, this estimate is also proved in [@B10] (with $||\bar u||_{H^2(\O)}$ replaced with $||\bar u||_{W^{2,\infty}(\O)}$). If $\bar u\in\mathcal K$ and $a$ is constant, this interpolant $v_\disc$ belongs to $\mathcal K_\disc$. Moreover, given the definition of $\mathbb{T}_\disc$, we have $(\mathbb{T}_\disc v_\disc)_{|\sigma}-\gamma(\bar u)(\overline{x}_\sigma)=v_\sigma-\bar u(\overline{x}_\sigma)=0$. Hence, using this $v_\disc$ in Theorem \[theorem:improveseepage\], we see that the HMM scheme for the Signorini problem enjoys an order 1 rate of convergence. A non-constant barrier $a$ can be approximated by a piecewise constant barrier on ${{{\edges}_{\rm ext}}}$ and Theorem \[th:improvedapproxseepage\] shows that, with this approximation, we still have an order 1 rate of convergence. Note that since this convergence also involves the gradients, a first order convergence is optimal for a low-order method such as the HMM method.
### Obstacle problem {#obstacle-problem-2}
We still take ${{\mathcal T}}$ to be a polytopal mesh of $\O$, and we consider $\disc=(X_{\disc,0},\Pi_\disc,\nabla_\disc)$ given by $X_{\disc,0}=X_{\disc,\dr\O}$ (see ) and $\Pi_\disc$ and $\nabla_\disc$ as in Section \[sec:HMM.sig\]. Using $\disc$ in , we obtain the HMM method for the obstacle problem. Setting $\mathcal{K}_{\disc}:= \{ v \in X_{\disc,0} \; : \; v_{K} \leq g \; \mbox{on $K$,
for all } K \in \mesh\}$, this method reads $$\label{HMM-ineq-obs}
\left\{
\begin{array}{l}
\dsp \mbox{Find}\; u \in \mathcal{K}_{\disc}\mbox{ such that, for all } v\in \mathcal{K}_{\disc}\,,\\
\dsp\sum_{K\in \mesh}|K|\Lambda_K\nabla_{K}u\cdot \nabla_{K}(u-v)+ \dsp\sum_{K\in \mesh}R_{K}(u-v)^{T}\mathbb{B}_{K}R_{K}(u) \\
~\hfill\leq
\dsp\sum_{K\in \mesh}(u_{K}-v_{K})\dsp\int_{K}f(x) {\, \mathrm{d}}x
\end{array}
\right.$$
Using the same interpolant as in Section \[sec:HMM.sig\], we see that Theorem \[theorem:improveobse\] (for a constant barrier $g$) and Theorem \[th:improvedapproxobs\] (for a piecewise constant approximation of $g$) provide an order 1 convergence rate.
To our knowledge, the HMM method has never been considered before for the Signorini or the obstacle problem. These methods are however fully relevant for these models, since HMM schemes have been developed to deal with anisotropic heterogeneous diffusion processes in porous media (processes involved in seepage problems for example). The error estimates obtained here on the HMM method, through the development of a gradient scheme framework for variational inequalities, are therefore new and demonstrate that this framework has applications beyond merely giving back known results for methods such as $\mathbb{P}_1$ finite elements.
Numerical results {#sec:numer}
=================
We present numerical experiments to highlight the efficiency of the HMM methods for variational inequalities, and to verify our theoretical results. To solve the inequalities and , we use the monotony algorithm of [@A2-23]. We introduce the fluxes $(F_{K,\sigma}(u))_{K\in\mesh,\,\sigma\in\mathcal E_K}$ defined for $u\in H_\disc$ ($H_\disc = X_{\disc,\Gamma_1}$ for the Signorini problem, $H_\disc = X_{\disc,0}$ for the obstacle problem) through the formula [@B5 Eq. (2.25)] $$\begin{gathered}
\forall K \in \mesh,\;\forall v \in H_\disc :\\
\sum_{\sigma \in \mathcal{E}_K}|\sigma| F_{K,\sigma}(u)
(v_K-v_\sigma)=|K|\Lambda_K \nabla_K u \cdot \nabla_K v
+R_K^{T}(u)\mathbb{B}_K R_{K}(v).\end{gathered}$$ Note that the flux $F_{K,\sigma}$ thus constructed is an approximation of $-\frac{1}{|\sigma|}\int_\sigma \Lambda\nabla \bar u\cdot\mathbf{n}_{K,\sigma}$. The variational inequalities and can then be re-written in terms of the balance and conservativity of the fluxes in the following ways:
- Signorini problem: $$\begin{aligned}
\sum_{\sigma \in \mathcal{E}_K}|\sigma|F_{K,\sigma}(u)= m(K)f_K, &\quad \forall K \in \mesh,\\
F_{K,\sigma}(u)+F_{L,\sigma}(u)= 0, &\quad \forall \sigma\in\mathcal E_{\rm int}\mbox{ with }
\mesh_\sigma=\{K,L\},\\
u_\sigma = 0, &\quad \forall \sigma \in \mathcal{E}_{\rm ext} \mbox{ such that }\sigma\subset \Gamma_1,\\
F_{K,\sigma}(u)=0, &\quad \forall K \in \mesh\,,\forall \sigma \in \mathcal{E}_K \mbox{ such that }\sigma\subset \Gamma_2,\\
F_{K,\sigma}(u)(u_\sigma - a_\sigma)=0 , & \quad\forall K \in \mesh\,, \forall \sigma \in \mathcal{E}_K \mbox{ such that }\sigma\subset \Gamma_3,\\
-F_{K,\sigma}(u)\leq 0, &\quad \forall K \in \mesh\,, \forall \sigma \in \mathcal{E}_K \mbox{ such that }\sigma\subset \Gamma_3,\\
u_\sigma \leq a_\sigma,&\quad \forall \sigma \in \mathcal{E}_{\rm ext} \mbox{ such that }\sigma\subset \Gamma_{3} .
\end{aligned}$$ Here, $a_\sigma$ is a constant approximation of $a$ on $\sigma$.
- Obstacle problem: $$\begin{aligned}
\left(-\sum_{\sigma \in \mathcal{E}_K}|\sigma|F_{K,\sigma}(u)+ m(K)f_K\right)(g_K - u_K)=0 , &\quad \forall K \in \mesh,\\
F_{K,\sigma}(u)+F_{L,\sigma}(u)= 0, &\quad \forall \sigma\in\mathcal E_{\rm int}\mbox{ with }
\mesh_\sigma=\{K,L\},\\
\sum_{\sigma \in \mathcal{E}_K}|\sigma|F_{K,\sigma}(u)\leq m(K)f_K, &\quad \forall K \in \mesh,\\
u_K \leq g_K, &\quad \forall K \in \mesh,\\
u_\sigma=0, &\quad \forall \sigma \in \mathcal E_{\rm ext}.
\end{aligned}$$ Here, $g_K$ is an approximation of $g$ on $K$, see Section \[sec:approxbarriers\].
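In these reformulations, the fluxes are obtained directly from the local matrices of the scheme: testing the defining relation of $F_{K,\sigma}$ with the discrete function equal to $-1$ on one edge $\sigma$ and $0$ elsewhere shows that $|\sigma|F_{K,\sigma}(u)$ equals minus the $\sigma$-component of the product of the local bilinear form's matrix with the local vector of unknowns, while testing with the function equal to $1$ on the cell unknown recovers the flux balance. A minimal sketch follows (in Python, on a toy TPFA-like local matrix for which the fluxes reduce to two-point differences; the helper name is ours).

```python
import numpy as np

def local_fluxes(A_loc, u_loc, edge_lengths):
    """Fluxes F_{K,sigma}(u) from the local matrix A_loc of the bilinear form
    (unknowns ordered [u_K, u_sigma_1, ...]): |sigma| F_{K,sigma}(u) = -(A_loc u)_sigma."""
    r = A_loc @ u_loc
    F = -r[1:] / np.asarray(edge_lengths, dtype=float)
    # flux balance: sum_sigma |sigma| F_{K,sigma}(u) = (A_loc u)_K
    assert np.isclose(np.dot(edge_lengths, F), r[0])
    return F

# Toy check with a TPFA-like local form a_K(u,v) = sum_sigma w_sigma (u_K-u_sigma)(v_K-v_sigma),
# w_sigma = |sigma|/d_{K,sigma}; the fluxes then reduce to (u_K - u_sigma)/d_{K,sigma}.
lengths, dists = np.ones(4), np.full(4, 0.5)
w = lengths / dists
A_loc = np.zeros((5, 5))
A_loc[0, 0], A_loc[0, 1:], A_loc[1:, 0], A_loc[1:, 1:] = w.sum(), -w, -w, np.diag(w)
u_loc = np.array([1.0, 0.8, 1.0, 1.2, 1.0])      # cell value and four edge values
print(local_fluxes(A_loc, u_loc, lengths))       # expect [0.4, 0., -0.4, 0.]
```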
The iterative monotony algorithm detailed in [@A2-23] can be naturally applied to these formulations. We notice that by [@B-13 Section 4.2] the HMM method is monotone when applied to isotropic diffusion on meshes made of acute triangles; in that case, the convergence of the monotony algorithm can be established as in [@A2-23] for finite element and two-point finite volume methods. In our numerical tests, we noticed that, even when applied to meshes for which the monotony of HMM is unknown or fails, the monotony algorithm actually still converges.
All tests given below are performed with a MATLAB code using the PDE Toolbox.
Test 1 {#test-1 .unnumbered}
------
We investigate the Signorini problem from [@A28]: $$\begin{aligned}
-\Delta \bar u= 2\pi \sin (2\pi x) &\mbox{\quad in $\Omega=(0,1)^2$,} \nonumber\\
\bar{u} =0 &\mbox{\quad on $\Gamma_{1}$,} \nonumber\\
\nabla\bar{u}\cdot \mathbf{n}= 0 &\mbox{\quad on $\Gamma_{2}$,} \nonumber\\
\left.
\begin{array}{r}
\dsp \bar{u} \geq 0\\
\dsp\nabla\bar{u}\cdot \mathbf{n} \geq 0\\
\dsp\bar{u}\nabla\bar{u}\cdot \mathbf{n}= 0
\end{array} \right\}
& \mbox{\quad on}\; \Gamma_{3},\nonumber\end{aligned}$$ with $\Gamma_1 = [0,1]\times \{1\}$, $\Gamma_2 = (\{0\}\times [0,1]) \cup (\{1\} \times [0,1])$ and $\Gamma_3=[0,1]\times \{0\}$. Here, the domain is meshed with triangles produced by INITMESH.
Figure \[fig:A31\] presents the graph of the approximate solution obtained on a mesh of size $0.05$. The solution compares very well with the linear finite element solution from [@A-31]; the graph seems to perfectly capture the point where the condition on $\Gamma_3$ changes from Dirichlet to Neumann. Table \[tab:A31\] shows the number of iterations (NITER) of the monotony algorithm required to obtain the HMM solution for various mesh sizes. Herbin in [@A2-23] proves that this number of iterations is theoretically bounded by the number of edges included in $\Gamma_3$. We observe that NITER is much less than this worst-case bound. We also notice the robustness of this monotony algorithm: reducing the mesh size does not significantly affect the number of iterations.
![The HMM solution for Test 1 with $h=0.05$. The line represents where the boundary condition on $\Gamma_3$ changes from Neumann to Dirichlet.[]{data-label="fig:A31"}](hmmsA31.eps)
Mesh size $h$ 0.0625 0.0500 0.0250 0.0156
------------------------------------- -------- -------- -------- --------
$ \#\{ \sigma \subset \Gamma_{3}\}$ 16 20 40 46
NITER 4 5 5 6
: Number of iterations of the monotony algorithm in Test 1
\[tab:A31\]
Test 2 {#test-2 .unnumbered}
------
Most studies on variational inequalities, for instance [@A30; @A28], consider test cases for which the exact solutions are not available; rates of convergence are then assessed by comparing coarse approximate solutions with a reference approximate solution, computed on a fine mesh. To more rigorously assess the rates of convergence, we develop here a new heterogeneous test case for Signorini boundary conditions, which has an analytical solution with non-trivial one-sided conditions on $\Gamma_3$ (the analytical solution switches from homogeneous Dirichlet to homogeneous Neumann at the mid-point of this boundary).
In this test case, we consider – with the geometry of the domain presented in Figure \[fig-geomexact\], left.
[Two-panel figure (data-label="fig-geomexact"): left, the geometry of the domain for Test 2; right, one of the hexagonal meshes used ($h=0.07$).]
The exact solution is $$\begin{aligned}
\bar u(x,y)=\left\{
\begin{array}{ll}
P(y)h(x) & \mbox{for $y \in (0,\frac{1}{2})$ and $x \in (0,1)$},\\
x g(x)G(y) & \mbox{for $y \in (\frac{1}{2},1)$ and $ x \in (0,1)$}.
\end{array} \right.\end{aligned}$$ To ensure that holds on the lower part of $\Gamma_1$, and that the normal derivatives $\Lambda\nabla \bar u\cdot\mathbf{n}$ at the interface $y=\frac12$ match, we take $P$ such that $P(0)=P(\frac{1}{2})=P'(\frac{1}{2})=0$. Assuming that $P<0$ on $(0,\frac12)$, the conditions $\partial_n \bar u = -\partial_x \bar u= 0$ and $\bar u < 0$ on the lower half of $\Gamma_3$, as well as the homogeneous condition $\partial_n \bar u =0$ on the lower half of $\Gamma_2$, will be satisfied if $h$ is such that $h'(0)=h'(1)=0$ and $h(0)=1$. We choose $$P(y)=-y\left(y-\frac{1}{2}\right)^2\quad\mbox{ and }h(x)= \cos (\pi x).$$ Given our choice of $P$, ensuring that $\div(\Lambda\nabla \bar u)\in L^2(\Omega)$ (i.e. that $\bar u$ and the normal derivatives match at $y=\frac12$) can be done by taking $G$ such that $G(\frac12)=G'(\frac12)=0$. The boundary condition on the upper part of $\Gamma_1$ is satisfied if $G(1)=0$. The boundary conditions $\partial_n \bar u=0$ on the upper part of $\Gamma_2$, and $\bar u=0$ and $\partial_n \bar u<0$ on the upper part of $\Gamma_3$, are enforced by taking $g$ such that $g(1)+g'(1)=0$ and $g(0)>0$ (with $G>0$ on $(\frac12,1)$). We select here $$g(x)=\cos^2 \left(\frac{\pi}{2}x\right)=\frac{1+\cos(\pi x)}{2} \quad \mbox{and} \quad G(y)=(1-y)\left( y-\frac{1}{2}\right)^2.$$ This $\bar u$ is then the analytical solution to the following problem $$\begin{aligned}
-\mathrm{div}(\Lambda\nabla\bar{u})= f &\mbox{\quad in $\Omega$,} \\
\bar{u} =0 &\mbox{\quad on $\Gamma_{1}$,}\\
\Lambda\nabla\bar{u}\cdot \mathbf{n}= 0 &\mbox{\quad on $\Gamma_{2}$,}\\
\left.
\begin{array}{r}
\dsp \bar{u} \leq 0\\
\Lambda\nabla\bar{u}\cdot \mathbf{n} \leq 0\\
\bar{u}\Lambda\nabla\bar{u}\cdot \mathbf{n}= 0
\end{array} \right\}
& \mbox{\quad on}\; \Gamma_{3},\end{aligned}$$ with, for $y \le \frac{1}{2}$, $$f(x,y)=100\left[\pi^2y \cos(\pi x)\left(y - \frac{1}{2}\right)^2 - 2y \cos(\pi x)
- 2 \cos(\pi x)(2y - 1)\right],$$ and, for $y >\frac{1}{2}$, $$\begin{aligned}
f(x,y)={}&\frac{\pi^2 x}{2}\cos(\pi x)(y - 1)\left(y - \frac{1}{2}\right)^2 -
2x\cos^2\left(\frac{\pi x}{2}\right)(3y - 2)\\
& + \pi \sin(\pi x)(y - 1)\left(y - \frac12\right)^2.\end{aligned}$$
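The algebraic conditions imposed above on $P$, $h$, $g$ and $G$ are elementary to verify; the short script below (in Python with sympy, written by us as a sanity check, not part of the test definition) confirms them symbolically.

```python
import sympy as sp

x, y = sp.symbols('x y')
P = -y * (y - sp.Rational(1, 2))**2
h = sp.cos(sp.pi * x)
g = sp.cos(sp.pi * x / 2)**2
G = (1 - y) * (y - sp.Rational(1, 2))**2

checks = {
    "P(0) = P(1/2) = P'(1/2) = 0": [P.subs(y, 0), P.subs(y, sp.Rational(1, 2)),
                                    sp.diff(P, y).subs(y, sp.Rational(1, 2))],
    "h'(0) = h'(1) = 0, h(0) = 1": [sp.diff(h, x).subs(x, 0), sp.diff(h, x).subs(x, 1),
                                    h.subs(x, 0) - 1],
    "G(1/2) = G'(1/2) = G(1) = 0": [G.subs(y, sp.Rational(1, 2)),
                                    sp.diff(G, y).subs(y, sp.Rational(1, 2)), G.subs(y, 1)],
    "g(1) + g'(1) = 0":            [g.subs(x, 1) + sp.diff(g, x).subs(x, 1)],
}
for label, values in checks.items():
    print(label, all(sp.simplify(v) == 0 for v in values))
# The sign conditions g(0) = 1 > 0, P < 0 on (0,1/2) and G > 0 on (1/2,1) hold by inspection.
```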
![The HMM solution for Test 2 on a hexagonal mesh with $h=0.07$.[]{data-label="fig:approx"}](test2-sol)
We test the scheme on a sequence of meshes (mostly) made of hexagonal cells. The third mesh in this sequence (with mesh size $h=0.07$) is represented in Figure \[fig-geomexact\], right.
The relative errors on $\bar u$ and $\nabla\bar u$, and the corresponding orders of convergence (computed from one mesh to the next one), are presented in Table \[tab:exact3\]. For the HMM method, Theorem \[theorem:improveseepage\] predicts a convergence of order 1 on the gradient if the solution belongs to $H^2$ and $\Lambda\nabla\bar u\in H^1$; the observed numerical rates are slightly below this order, probably due to the fact that $\Lambda$ is not Lipschitz-continuous here (and thus the regularity $\Lambda\nabla\bar u\in H^1$ is not satisfied). As in the previous test, the number of iterations of the monotony algorithm remains well below the theoretical bound.
mesh size $h$ 0.24 0.13 0.07 0.03
-------------------------------------------------------------------------------------- -------- -------- -------- --------
$\frac{\| \bar u - \Pi_\disc u\|_{L^{2}(\O)}}{\|\bar u\|_{L^2(\O)}}$ 0.6858 0.2531 0.1355 0.0758
Order of convergence – 1.60 0.92 0.84
$\frac{\| \nabla \bar u - \nabla_\disc u\|_{L^{2}(\O)}}{\|\nabla\bar u\|_{L^2(\O)}}$ 0.4360 0.2038 0.1041 0.0542
Order of convergence – 1.22 0.99 0.95
$\#\{\sigma \subset \Gamma_{3}\}$ 20 40 80 160
NITER 5 5 6 7
: Error estimate and number of iterations for Test 2
\[tab:exact3\]
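For reference, the orders in Table \[tab:exact3\] are obtained from consecutive meshes as $\log(e_{k-1}/e_k)/\log(h_{k-1}/h_k)$; a minimal Python sketch using the rounded values of the first two meshes (the exact mesh sizes would reproduce the table more closely):

```python
import numpy as np

# Order of convergence between two consecutive meshes:
#   order = log(e_coarse / e_fine) / log(h_coarse / h_fine).
def order(h_coarse, h_fine, e_coarse, e_fine):
    return np.log(e_coarse / e_fine) / np.log(h_coarse / h_fine)

# First two meshes of Table [tab:exact3] (values rounded as in the table).
print(round(order(0.24, 0.13, 0.6858, 0.2531), 2))  # ~1.63, vs 1.60 reported
print(round(order(0.24, 0.13, 0.4360, 0.2038), 2))  # ~1.24, vs 1.22 reported
```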
The solution to the HMM scheme for the third mesh in the family used in Table \[tab:exact3\] is plotted in Figure \[fig:approx\]. On $\Gamma_3$, the saturated constraint for the exact solution changes from Neumann $\nabla\bar u\cdot\mathbf{n}=0$ to Dirichlet $\bar u=0$ at $y=0.5$. It is clear from Figure \[fig:approx\] that the HMM scheme captures this change of constraint well. The slight bump visible at $y=1/2$ is most probably due to the fact that the mesh is not aligned with the heterogeneity of $\Lambda$ (hence, in some cells the diffusion tensor takes two very different values, and its approximation by one constant value smears the solution).
When meshes are aligned with data heterogeneities, such bumps do not appear. This is illustrated in Figure \[fig:kershaw\], in which we represent the solution obtained when using a “Kershaw” mesh as in the FVCA5 benchmark [@HH08]. This mesh has size $h=0.16$, 34 edges on $\Gamma_3$, and the monotony algorithm converges in 7 iterations. The relative $L^2$ errors on $\bar u$ and $\nabla\bar u$ are respectively $0.017$ and $0.019$. As expected on these kinds of extremely distorted meshes, the solution has internal oscillations, but is otherwise qualitatively good. In particular, despite distorted cells near the boundary $\Gamma_3$, the transition between the Dirichlet and the Neumann boundary conditions is well captured.
--------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------
![The HHM solution (left) for Test 2 on a Kershaw mesh (right).[]{data-label="fig:kershaw"}](test2-sol_kershaw.png "fig:"){width=".55\linewidth"} ![The HHM solution (left) for Test 2 on a Kershaw mesh (right).[]{data-label="fig:kershaw"}](test2-grid_kershaw.png "fig:"){width=".4\linewidth"}
--------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------
Test 3 {#test-3 .unnumbered}
------
In this test case, we apply the HMM method to the homogeneous obstacle problem: $$\begin{aligned}
(\Delta \bar u+C)(\psi - \bar{u}) = 0, &\quad\mbox{in $\Omega=(0,1)^2$,} \\
-\Delta \bar u\geq C,&\quad \mbox{in $\Omega$,} \\
\bar{u}\geq \psi, &\quad\mbox{in $\Omega$,}\\
\bar{u} = 0, &\quad\mbox{on $\partial\Omega$.}\end{aligned}$$ The constant $C$ is negative, and the obstacle function $\psi(x,y)= - \min (x, 1-x, y, 1-y)$ satisfies $\psi =0$ on $\partial\O$.
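For intuition, this obstacle problem can also be approximated with a basic projected Gauss–Seidel iteration on a uniform finite-difference grid; the sketch below is purely illustrative and is not the HMM scheme (nor the monotony algorithm) used in this paper, and the grid size and tolerance are arbitrary choices.

```python
import numpy as np

# Projected Gauss-Seidel for: -Laplace(u) >= C, u >= psi, (Laplace(u)+C)(psi-u) = 0,
# u = 0 on the boundary of (0,1)^2, with C < 0 and psi = -min(x, 1-x, y, 1-y).
n, C = 32, -20.0
h = 1.0 / n
xs = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(xs, xs, indexing="ij")
psi = -np.minimum.reduce([X, 1 - X, Y, 1 - Y])   # obstacle, equal to 0 on the boundary

u = np.zeros((n + 1, n + 1))                     # initial guess; boundary values stay 0
for sweep in range(4000):
    delta = 0.0
    for i in range(1, n):
        for j in range(1, n):
            gs = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1] + h*h*C)
            new = max(gs, psi[i, j])             # projection onto the constraint u >= psi
            delta = max(delta, abs(new - u[i, j]))
            u[i, j] = new
    if delta < 1e-7:
        break

contact = np.isclose(u[1:-1, 1:-1], psi[1:-1, 1:-1]).sum()
print(f"sweeps = {sweep + 1}, min(u) = {u.min():.3f}, interior contact points = {contact}")
```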
Figure \[fig:A34\] shows the graph of the HMM solution to the above obstacle problem for $h=0.05$. Table \[tab:A34\] illustrates the performance of the method and the algorithm. Here again, the number of iterations required to obtain the solution is far less than the number of cells, which is a theoretical bound in the case of the obstacle problem [@A2-23]. Our results compare well with the results obtained by semi-iterative Newton-type methods in [@brugnano2009], which indicate that decreasing $|C|$ contributes to the difficulty of the problem (leading to an increased number of iterations). We note that, for a mesh of nearly 14,000 cells, we only require 29 iterations if $|C|=5$ and 14 iterations if $|C|=20$. On a mesh of 10,000 cells, the semi-iterative Newton-type method of [@brugnano2009] requires 32 iterations if $|C|=5$ and 9 iterations if $|C|=20$.
![The HMM solution for Test 3 with $h=0.05$, $C=-20$.[]{data-label="fig:A34"}](hmmo1A34.eps)
mesh size $h$ 0.016 0.025 0.050 0.062
----------------- ------- ------- ------- -------
$\# \mesh$ 14006 5422 1342 872
NITER $(C=-5)$ 29 20 12 11
NITER $(C=-10)$ 23 18 10 9
NITER $(C=-15)$ 19 13 9 8
NITER $(C=-20)$ 14 12 7 8
: The number of iterations for various $C$
\[tab:A34\]
Proofs of the theorems {#proof}
======================
[**Proof of Theorem \[errorseepage\].**]{} Since $\mathcal{K}_{\mathcal{D}}$ is a non-empty convex closed set, the existence and uniqueness of a solution to Problem (\[gsseepage\]) follows from Stampacchia’s theorem. We note that $\Lambda\nabla\bar{u}\in H_{\div}(\O)$ since $$\label{seep:distr}
-\mathrm{div}(\Lambda\nabla\bar{u}) = f\mbox{ in the sense of distributions}$$ (use $v=\bar u+\psi$ in , with $\psi\in C^\infty_c(\O)$). Taking $\bpsi=\Lambda\nabla\bar{u}$ in the definition of $W_\disc$, for any $z \in X_{\mathcal{D},\Gamma_{1}}$ we get $$\begin{gathered}
\int_{\Omega}\nabla_{\mathcal{D}} z \cdot \Lambda\nabla\bar{u}{\, \mathrm{d}}x + \int_{\Omega}\Pi_{\mathcal{D}} z \cdot \mathrm{div}(\Lambda\nabla\bar{u}){{\, \mathrm{d}}x}
\leq\\
\|\nabla_{\mathcal{D}} z\|_{L^{2}(\Omega)^{d}}
W_{\mathcal{D}}(\Lambda\nabla\bar{u})
+\langle \gamma_\bfn(\Lambda\nabla\bar u),
\mathbb{T}_{\mathcal{D}}z\rangle.
\label{seepagepr1}\end{gathered}$$ Let us focus on the last term on the right-hand side of the above inequality. For any $v_{\mathcal{D}} \in \mathcal K_\disc$, we have $$\begin{gathered}
\langle \gamma_\bfn(\Lambda\nabla\bar{u}), \mathbb{T}_{\mathcal{D}}(v_{\mathcal{D}}-u) \rangle=\\
\langle \gamma_\bfn(\Lambda\nabla\bar{u}), \mathbb{T}_{\mathcal{D}}v_{\mathcal{D}}-\gamma\bar{u} \rangle
+\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}-\mathbb{T}_{\mathcal{D}}u\rangle.
\label{seepagepr2}\end{gathered}$$ The definition of the space $H_{\Gamma_{1}}^{1/2}(\partial\Omega)$ shows that there exists $w \in H^1(\Omega)$ such that $\gamma w = \mathbb{T}_{\mathcal{D}}u$ (this implies that $w\in \mathcal K$). According to the definition of the duality product , we have $$\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}- \mathbb{T}_{\mathcal{D}}u\rangle
=\int_{\Omega}\Lambda\nabla\bar{u}\cdot\nabla(\bar{u}-w){\, \mathrm{d}}x+\int_{\Omega}\mathrm{div}(\Lambda\nabla\bar{u})(\bar{u}-w) {\, \mathrm{d}}x.$$ Since $\bar{u}$ satisfies , it follows that $$\label{seepagepr3}
\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}- \mathbb{T}_{\mathcal{D}}u\rangle
=\int_{\Omega}\Lambda\nabla\bar{u}\cdot\nabla(\bar{u}-w){\, \mathrm{d}}x
-\int_{\Omega}f(\bar{u}-w) {\, \mathrm{d}}x,$$ which leads to, taking $w$ as a test function in the weak formulation , $$\label{seepagepr3.1}
\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}- \mathbb{T}_{\mathcal{D}}u\rangle
\leq 0.$$ From this inequality, recalling and and setting $z = v_{\mathcal{D}}- u \in X_{\mathcal{D},\Gamma_{1}}$ in (\[seepagepr1\]), we deduce that $$\begin{gathered}
\label{takeover}
\int_{\Omega}\nabla_{\mathcal{D}}(v_{\mathcal{D}} - u)\cdot \Lambda\nabla\bar{u}{\, \mathrm{d}}x
+\int_{\Omega}f\Pi_{\mathcal{D}}(u-v_{\mathcal{D}}) {\, \mathrm{d}}x
\leq\\
\|\nabla_{\mathcal{D}}(v_\disc - u)\|_{L^{2}(\Omega)^{d}}
W_{\mathcal{D}}(\Lambda\nabla\bar{u})+G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}}).\end{gathered}$$ We use the fact that $u$ is the solution to Problem (\[gsseepage\]) to bound from below the term involving $f$ and we get $$\begin{gathered}
\int_{\Omega}\Lambda\nabla_{\mathcal{D}}(v_{\mathcal{D}} - u)\cdot(\nabla\bar{u}-\nabla_{\mathcal{D}}u) {\, \mathrm{d}}x \leq\\
\|\nabla_{\mathcal{D}}(v_{\mathcal{D}} - u)\|_{L^{2}(\Omega)^{d}}
W_{\mathcal{D}}(\Lambda\nabla\bar{u})+G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+\end{gathered}$$ (note that we also used the bound $G_\disc\le G_\disc^+$). Adding and subtracting $\nabla_{\mathcal{D}}v_{\mathcal{D}}$ in $\nabla\bar u-\nabla_\disc u$, and using Cauchy-Schwarz’s inequality, this leads to $$\begin{gathered}
\underline{\lambda}\|\nabla_{\mathcal{D}}v_{\mathcal{D}} - \nabla_{\mathcal{D}}u\|_{L^{2}(\Omega)^{d}}^{2}
\leq \\
\|\nabla_{\mathcal{D}}v_{\mathcal{D}} - \nabla_{\mathcal{D}}u\|_{L^{2}(\Omega)^{d}}
\bigg(W_{\mathcal{D}}(\Lambda\nabla\bar{u})
+\overline{\lambda}S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\bigg)
+G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+.\end{gathered}$$ Applying Young’s inequality gives $$\|\nabla_{\mathcal{D}}v_{\mathcal{D}} - \nabla_{\mathcal{D}}u\|_{L^{2}(\Omega)^{d}} \leq
\sqrt{\frac{2}{\underline{\lambda}}G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+
+\frac{1}{\underline{\lambda}^{2}}\left[W_{\mathcal{D}}(\Lambda\nabla\bar{u})+\overline{\lambda}S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\right]^{2}}.
\label{proof1obs}$$ Estimate (\[errorseepage1\]) follows from $ \| \nabla_{\mathcal{D}}u-\nabla\bar{u} \|_{L^{2}(\Omega)^{d}} \leq \| \nabla_{\mathcal{D}}u - \nabla_{\mathcal{D}}v_{\mathcal{D}}\|_{L^{2}(\Omega)^{d}} + \| \nabla_{\mathcal{D}}v_{\mathcal{D}}-\nabla\bar{u} \|_{L^{2}(\Omega)^{d}}
$ and $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$. Using (\[coercivityseepage\]) and , we obtain $$\|\Pi_{\mathcal{D}}v_{\mathcal{D}}-\Pi_{\mathcal{D}}u\|_{L^{2}(\Omega)} \leq
C_{\mathcal{D}}\sqrt{\frac{2}{\underline{\lambda}}G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+
+\frac{1}{\underline{\lambda}^{2}}\left[W_{\mathcal{D}}(\Lambda\nabla\bar{u})+\overline{\lambda}S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\right]^{2}
}.$$ By writing $\|\Pi_{\mathcal{D}}u-\bar{u}\|_{L^{2}(\Omega)} \leq
\|\Pi_{\mathcal{D}}u-\Pi_{\mathcal{D}}v_{\mathcal{D}}\|_{L^{2}(\Omega)}
+ \|\Pi_{\mathcal{D}}v_{\mathcal{D}}-\bar{u}\|_{L^{2}(\Omega)}$, the above inequality shows that $$\begin{gathered}
\|\Pi_{\mathcal{D}}u-\bar{u}\|_{L^{2}(\Omega)} \leq\\
C_{\mathcal{D}}\sqrt{\frac{2}{\underline{\lambda}}G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+
+\frac{1}{\underline{\lambda}^{2}}\bigg(W_{\mathcal{D}}(\Lambda\nabla\bar{u})+\overline{\lambda}S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\bigg)^{2}
}
+S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}}).\end{gathered}$$ Applying $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$ again, Estimate is obtained and the proof is complete.
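For the reader’s convenience, the Young-inequality step behind (\[proof1obs\]) can be spelled out with the shorthand (used only in this remark) $X=\|\nabla_{\mathcal{D}}v_{\mathcal{D}} - \nabla_{\mathcal{D}}u\|_{L^{2}(\Omega)^{d}}$, $A=W_{\mathcal{D}}(\Lambda\nabla\bar{u})+\overline{\lambda}S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})$ and $G^{+}=G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^{+}$: the penultimate inequality reads $\underline{\lambda}X^{2}\leq XA+G^{+}$, and $XA\leq \frac{\underline{\lambda}}{2}X^{2}+\frac{1}{2\underline{\lambda}}A^{2}$ yields $\frac{\underline{\lambda}}{2}X^{2}\leq \frac{1}{2\underline{\lambda}}A^{2}+G^{+}$, that is, $X\leq \sqrt{\frac{2}{\underline{\lambda}}G^{+}+\frac{1}{\underline{\lambda}^{2}}A^{2}}$, which is exactly (\[proof1obs\]).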
[**Proof of Theorem \[errorobs\].**]{}
The proof of this theorem is very similar to the proof of [@Y Theorem 1]. We nevertheless give some details for the sake of completeness.
As for the Signorini problem, the existence and uniqueness of the solution to Problem follows from Stampacchia’s theorem. Let us now establish the error estimates. Under the assumption that $\mathrm{div}(\Lambda\nabla \bar{u}) \in L^{2}(\Omega)$, we note that $\Lambda\nabla\bar{u} \in
H_{\mathrm{div}}(\Omega)$. For any $w\in X_{\mathcal{D},0}$, using $\bpsi=\Lambda\nabla \bar{u}$ in the definition (\[conformityobs\]) of $W_\disc$ therefore implies $$\int_{\Omega}\nabla_{\mathcal{D}} w \cdot \Lambda\nabla\bar{u} {\, \mathrm{d}}x + \int_{\Omega}\Pi_{\mathcal{D}} w \cdot \mathrm{div}(\Lambda\nabla\bar{u}) {\, \mathrm{d}}x \leq \|\nabla_{\mathcal{D}} w\|_{L^{2}(\Omega)^{d}}
W_{\mathcal{D}}(\Lambda\nabla\bar{u}).
\label{8}$$ For any $v_{\mathcal{D}} \in \mathcal{K}_{\mathcal{D}}$, one has $$\begin{gathered}
\label{th:error-init}
\int_{\Omega}\Pi_{\mathcal{D}}(u-v_{\mathcal{D}})\mathrm{div}(\Lambda\nabla\bar{u}){\, \mathrm{d}}x=
\int_{\Omega}(\Pi_{\mathcal{D}}u-g)(\mathrm{div}(\Lambda\nabla\bar{u})+f) {\, \mathrm{d}}x\\
+\int_{\Omega}(g-\Pi_{\mathcal{D}}v_{\mathcal{D}})(\mathrm{div}(\Lambda\nabla\bar{u})+f) {\, \mathrm{d}}x
-\int_{\Omega}f\Pi_{\mathcal{D}}(u-v_{\mathcal{D}}) {\, \mathrm{d}}x.\end{gathered}$$ It is well known that the solution to the weak formulation satisfies in the sense of distributions (use test functions $v=\bar u-\varphi$ in , with $\varphi\in C^\infty_c(\O)$ non-negative). Hence, under our regularity assumptions, holds a.e. and, since $u \in \mathcal K_\disc$, we obtain ${\displaystyle\int_{\Omega}(\Pi_{\mathcal{D}}u-g)(\mathrm{div}(\Lambda\nabla\bar{u})+f){\, \mathrm{d}}x} \leq 0$. Hence, $$\begin{aligned}
\int_{\Omega}\Pi_{\mathcal{D}}(u-v_{\mathcal{D}})&\mathrm{div}(\Lambda\nabla\bar{u}){\, \mathrm{d}}x\\
\le& \int_{\Omega}(g-\Pi_{\mathcal{D}}v_{\mathcal{D}})(\mathrm{div}(\Lambda\nabla\bar{u})+f){\, \mathrm{d}}x
-\int_{\Omega}f\Pi_{\mathcal{D}}(u-v_{\mathcal{D}}){\, \mathrm{d}}x\\
=& \int_{\Omega}(g-\bar{u})(\mathrm{div}(\Lambda\nabla\bar{u})+f){\, \mathrm{d}}x
+\int_{\Omega}(\bar{u}-\Pi_{\mathcal{D}}v_{\mathcal{D}})(\mathrm{div}(\Lambda\nabla\bar{u})+f){\, \mathrm{d}}x\\
& -\int_{\Omega}f\Pi_{\mathcal{D}}(u-v_{\mathcal{D}}) {\, \mathrm{d}}x.\end{aligned}$$ Our regularity assumptions ensure that $\bar{u}$ satisfies (\[obs1\]). Therefore, by definition of $E_\disc$, $$\label{obsinequality}
\int_{\Omega}\Pi_{\mathcal{D}}(u-v_{\mathcal{D}})\mathrm{div}(\Lambda\nabla\bar{u}) {\, \mathrm{d}}x
\le E_\disc(\bar u,v_\disc)-\int_{\Omega}f\Pi_{\mathcal{D}}(u-v_{\mathcal{D}}) {\, \mathrm{d}}x.$$ From this inequality and setting $w = v_{\mathcal{D}}- u \in X_{\mathcal{D},0}$ in (\[8\]), we obtain $$\begin{gathered}
\int_{\Omega}\nabla_{\mathcal{D}}(v_{\mathcal{D}}- u)\cdot \Lambda\nabla\bar{u}{\, \mathrm{d}}x +
\int_{\Omega}f\Pi_{\mathcal{D}}(u-v_{\mathcal{D}}) {\, \mathrm{d}}x
\leq\\
\|\nabla_{\mathcal{D}}({v_\disc} - u)\|_{L^{2}(\Omega)^{d}}
W_{\mathcal{D}}(\Lambda\nabla\bar{u})+E_{\mathcal{D}}(\bar{u},v_{\mathcal{D}}).\end{gathered}$$ The rest of the proof can be handled in much the same way as the proof of Theorem \[errorseepage\], taking over the reasoning from . See also [@Y] for more details.
[**Proof of Theorem \[theorem:improveseepage\].**]{} We follow the same technique used in [@A2]. According to Remark \[rem:makesense\] and due to the regularity of the solution, since $\mathbb{T}_\disc v_\disc=\gamma(\bar u)=0$ on $\Gamma_1$ and $\gamma_\bfn(\Lambda\nabla\bar u)=0$ on $\Gamma_2$, we can write $$G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})=
\int_{\Gamma_{3}}(\mathbb{T}_\disc v_{\mathcal{D}}-\gamma(\bar u))
\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x.$$
We notice first that the assumptions on $\bar u$ ensure that it is a solution of the Signorini problem in the strong sense. Applying Theorem \[errorseepage\] we have $$\begin{gathered}
\label{to.plug}
\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}}
+ \|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)}\\
\leq
{C}\left(W_{\mathcal{D}}(\Lambda\nabla\bar{u})
+ S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})+ \sqrt{G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+} \right)\end{gathered}$$ [with $C$ depending only on $\underline{\lambda}$, $\overline{\lambda}$ and an upper bound of $C_\disc$.]{} [Since $S_\disc(\bar u,v_\disc)\le C_1h_{\mathcal M}$ by assumption, it remains to]{} estimate the last term $G_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})^+$. We start by writing $$\begin{aligned}
G_\disc(\bar u,v_\disc)&=
\int_{\Gamma_3} (\mathbb{T}_\disc v_{\mathcal{D}}-a)
\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x
+\int_{\Gamma_3} (a-\gamma(\bar u))
\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x\\
&=\int_{\Gamma_3} (\mathbb{T}_\disc v_{\mathcal{D}}-a)
\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x\\
&=\sum_{\sigma\in\edges\,,\;\sigma\subset\Gamma_3}\int_\sigma (\mathbb{T}_\disc v_{\mathcal{D}}-a)\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x\\
&=:\sum_{\sigma\in\edges\,,\;\sigma\subset\Gamma_3} G_\sigma(\bar u,v_\disc),\end{aligned}$$ where the term involving $(a-\gamma(\bar u))
\gamma_\bfn(\Lambda\nabla\bar{u})$ has been eliminated by using . We then split the study on each $\edge$ depending on the cases: either (i) $\gamma\bar u<a$ a.e. on $\sigma$, or (ii) ${\rm meas}(\{ y\in \sigma\,:\, \gamma\bar{u}(y)-a=0\})>0$ (where ${\rm meas}$ is the $(d-1)$-dimensional measure).
In Case (i), we have $\gamma_\bfn(\Lambda\nabla\bar{u})=0$ in $\sigma$ since $\gamma\bar{u}$ satisfies . Hence, $G_\sigma(\bar u,v_\disc)=0$.
In Case (ii), let us denote $\nabla_\edge$ the tangential gradient to $\edge$, and let us recall that, as a consequence of Stampacchia’s lemma, if $w\in W^{1,1}(\edge)$ then $\nabla_\edge w=0$ a.e. on $\{y\in\edge\,:\,w(y)=0\}$ (here, “a.e.” is for the measure ${\rm meas}$). Hence, with $w=\gamma\bar u-a$ we obtain at least one $y_{0}\in \sigma$ such that $(\gamma\bar{u}-a)(y_{0})=0$ and $\nabla_\edge(\gamma\bar{u}-a)(y_{0})=0$. Let $F$ be the face of $\partial\Omega$ that contains $\sigma$. Using a Taylor’s expansion along a path on $F$ between $y_0$ and $x_\sigma$, we deduce $$\label{taylor-exp}
|(\gamma\bar{u}-a)(x_\sigma)|\le L_F h_\edge^2,$$ where $L_F$ depends on $F$ and on the Lipschitz constant of the tangential derivative $\nabla_{\partial\O} (\gamma\bar{u}-a)$ on $F$ (the Lipschitz-continuity of $\nabla_{\partial\O} (\gamma\bar{u}-a)$ on this face follows from our assumption that $\gamma\bar{u}-a\in W^{2,\infty}(\partial\Omega)$). Recalling that $||\mathbb{T}_\disc v_\disc-\gamma\bar u(x_\sigma)||_{L^2(\edge)}\le C_2h_\edge^2|\edge|^{1/2}$, we infer that $||\mathbb{T}_\disc v_\disc-a||_{L^2(\edge)} \leq Ch_{\mathcal M}^{2}|\edge|^{1/2}$, and therefore that $|G_\sigma(\bar{u},v_{\mathcal{D}})| \leq C
h_{\mathcal M}^{2}||\gamma_\bfn(\Lambda\nabla\bar{u})||_{L^2(\edge)}|\edge|^{1/2}$. Here, $C$ depends only on $\O$, $\bar u$ and $a$.
Gathering the upper bounds on each $G_\sigma$ and using the Cauchy–Schwarz inequality gives $G_\disc(\bar u,v_\disc) \le Ch_{\mathcal M}^2 ||\gamma_\bfn(\Lambda\nabla\bar{u})||_{L^2(\dr\O)}$, with $C$ depending only on $\O$, $\Lambda$, $\bar u$ and $a$. This implies $G_\disc(\bar u,v_\disc)^+
\le C h_\mesh^2$, and the proof is complete by plugging this estimate into .
[**Proof of Theorem \[theorem:improveobse\].**]{} The proof is very similar to the proof of Theorem \[theorem:improveseepage\]. As in this previous proof, we only have to estimate $E_\disc(\bar u,v_\disc)$, which can be re-written as $$\begin{aligned}
E_\disc(\bar u,v_\disc)&=
\int_\O (\div(\Lambda\nabla\bar u)+f)(\bar u-g){\, \mathrm{d}}x
+\int_\O (\div(\Lambda\nabla\bar u)+f)(g-\Pi_\disc v_\disc){\, \mathrm{d}}x\\
&=\int_\O (\div(\Lambda\nabla\bar u)+f)(g-\Pi_\disc v_\disc){\, \mathrm{d}}x\\
&=\sum_{K\in\mathcal M}\int_K (\div(\Lambda\nabla\bar u)+f)(g-\Pi_\disc v_\disc){\, \mathrm{d}}x
=:\sum_{K\in\mathcal M}E_K(\bar u,v_\disc),\end{aligned}$$ where the term involving $(\div(\Lambda\nabla\bar u)+f)(\bar u-g)$ has been eliminated using . Each term $E_K(\bar u,v_\disc)$ is then estimated by considering two cases, namely: (i) either $\bar u<g$ on $K$, in which case $E_K(\bar u,v_\disc)=0$ since $f+\div(\Lambda\nabla\bar u)=0$ on $K$, or (ii) $|\{y\in K\,:\,\bar u(y)=g\}|>0$, in which case $E_K(\bar u,v_\disc)$ is estimated by using a Taylor expansion and the assumption $||\Pi_\disc v_\disc -\bar u(x_K)||_{L^2(K)}\le C_2 h_K^2|K|^{1/2}$.
\[relax:gconst\] If $(x_K)_{K\in\mathcal M}$ are the centres of gravity of the cells, the assumption that $g$ is constant can be relaxed under additional regularity hypotheses. Namely, if $F:=\div(\Lambda\nabla\bar u)+f\in H^1(K)$ and $g\in H^2(K)$ for any cell $K$, then by letting $F_K$ and $g_K$ be the mean values on $K$ of $F$ and $g$, respectively, we can write $$\begin{aligned}
E_K(\bar u,v_\disc) =& \int_K F(g-\Pi_\disc v_\disc){\, \mathrm{d}}x\\
=& \int_K F(g(x_K)-\Pi_\disc v_\disc){\, \mathrm{d}}x + \int_K (F-F_K)(g-g_K){\, \mathrm{d}}x\\
&+ \int_K F(g_K-g(x_K)){\, \mathrm{d}}x=:T_{1,K}+T_{2,K}+T_{3,K}.\end{aligned}$$ The term $T_{1,K}$ is estimated as in the proof (since $\bar u(x_K) -g(x_K)$ can be estimated using a Taylor expansion about $y_0$). We estimate $T_{2,K}$ by using the Cauchy-Schwarz inequality and classical estimates between an $H^1$ function and its average: $$|T_{2,K}|\le ||F-F_K||_{L^2(K)}||g-g_K||_{L^2(K)}\le C h_K^2 ||F||_{H^1(K)}||g||_{H^1(K)}.$$ As for $T_{3,K}$, we use the fact that $g\in H^2(\O)$ and that $x_K$ is the center of gravity of $K$ to write $|g(x_K)-g_K|\le C h_K^2||g||_{H^2(K)}$. Combining all these estimates to bound $E_K(\bar u,v_D)$ and using Cauchy-Schwarz inequalities leads to an upper bound in $\mathcal O(h_{\mathcal M}^2)$ for $E_\disc(\bar u,v_D)$.
The case of approximate barriers {#sec:approxbarriers}
================================
For general barrier functions ($a$ in the Signorini problem, $g$ in the obstacle problem), it might be challenging to find a proper approximation of $\bar u$ inside $\mathcal K_\disc$. Consider for example the $\mathbb{P}_1$ finite element method; the approximation is usually constructed using the values of $\bar u$ at the nodes of the mesh, which only ensures that this approximation is bounded above by the barrier at these nodes, not necessarily everywhere else in the domain. It is therefore usual to relax the upper bound imposed on the solution to the numerical scheme, and to consider only approximate barriers in the schemes.
The Signorini problem
---------------------
We consider the gradient scheme with, instead of $\mathcal K_{\disc}$, the following convex set: $$\widetilde{\mathcal{K}}_{\disc} = \{ v \in X_{\disc,\Gamma_{1}}\;:\; \mathbb{T}_{\disc}v \leq a_{\disc} \; \mbox{on}\; \Gamma_3 \},$$ where $a_{\disc} \in L^2(\partial\O)$ is an approximation of $a$. The following theorems state error estimates for this modified gradient scheme.
\[errorseepage-approx\] Under the assumptions of Theorem \[errorseepage\], if $\widetilde{\mathcal K}_\disc$ is not empty then there exists a unique solution $u$ to the gradient scheme in which $\mathcal K_\disc$ has been replaced with $\widetilde{\mathcal K}_\disc$. Moreover, if $a-a_\disc\in H^{1/2}_{\Gamma_1}(\partial\O)$, Estimates and hold for any $v_\disc \in \widetilde{\mathcal K}_\disc$, provided that $G_{\disc}(\bar{u},v_{\disc})$ is replaced with $$\widetilde{G}_{\disc}(\bar u,v_\disc)= G_\disc(\bar u,v_\disc)+
\langle \gamma_\bfn(\Lambda\nabla\bar{u}), a-a_{\disc}\rangle$$
The proof is identical to that of Theorem \[errorseepage\], provided we can control the left-hand side of . For any $v_{\disc} \in \widetilde{\mathcal{K}}_{\disc}$, we write $$\begin{gathered}
\langle \gamma_\bfn(\Lambda\nabla\bar{u}), \mathbb{T}_{\mathcal{D}}(v_{\mathcal{D}}-u) \rangle=
\langle \gamma_\bfn(\Lambda\nabla\bar{u}), \mathbb{T}_{\mathcal{D}}v_{\mathcal{D}}-\gamma\bar{u} \rangle\\
+\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}-(\mathbb{T}_{\mathcal{D}}u+a-a_{\disc})\rangle
+\langle \gamma_\bfn(\Lambda\nabla\bar{u}), a-a_{\disc}\rangle.\end{gathered}$$ We note that the first term in the right-hand side is $G_\disc(\bar u,v_\disc)$; hence, the first and last terms in this right-hand side sum up to $\widetilde{G}_\disc(\bar u,v_\disc)$ and we only need to prove that the second term is non-positive. We take $w \in H^1(\O)$ such that $\gamma(w)= \mathbb{T}_{\disc}u+a-a_{\disc}$, and we notice that $w$ belongs to $\mathcal K$ since $\mathbb{T}_\disc u$ and $a-a_\disc$ both belong to $H^{1/2}_{\Gamma_1}(\partial\O)$, and since $\mathbb{T}_\disc u+a-a_\disc\le a$ on $\Gamma_3$. Similarly to , the definition of $w$ shows that $$\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}-(\mathbb{T}_{\mathcal{D}}u+a-a_{\disc})\rangle
=\int_\O \Lambda\nabla\bar u\cdot\nabla(\bar u-w){\, \mathrm{d}}x-\int_\O f(\bar u-w){\, \mathrm{d}}x.$$ Hence, by , $\langle \gamma_\bfn(\Lambda\nabla\bar{u}),\gamma\bar{u}-(\mathbb{T}_{\mathcal{D}}u+a-a_{\disc})\rangle\le 0$ and, as required, the left-hand side of is bounded above by $\widetilde{G}_\disc(\bar u,v_\disc)$.
In the case of the $\mathbb{P}_1$ finite element method, $a_\disc$ is the $\mathbb{P}_1$ approximation of $a$ on $\dr\O$ constructed from its values at the nodes. The interpolant $v_\disc$ of $\bar u$ mentioned in Section \[sec:improvesig\] is bounded above by $a=a_\disc$ at the nodes, and therefore everywhere by $a_\disc$; $v_\disc$ therefore belongs to $\widetilde{\mathcal K}_\disc$ and can be used in Theorem \[errorseepage-approx\]. Under $H^2$ regularity of $a$, we have $||a_\disc-a||_{L^2(\dr\O)}=\mathcal O(h_{\mathcal M}^2)$. Hence, if $\gamma_\bfn(\Lambda\nabla\bar u)\in L^2(\dr\O)$, we see that $\widetilde{G}_\disc(\bar u,v_\disc)=\mathcal O(h_{\mathcal M}^2)$ and Theorem \[errorseepage-approx\] thus gives an order one estimate.
For several low-order methods (e.g. HMM, see [@B1]), the interpolant $v_\disc$ of the exact solution $\bar u$ is constructed such that $\mathbb{T}_\disc v_\disc=\gamma(\bar u)({\overline{x}_\edge})$ on each $\edge\in{{{\edges}_{\rm ext}}}$. The natural approximate barrier is then piecewise constant, and an order one error estimate can be obtained as shown in the next result.
Under Hypothesis \[hyseepage\], let $\disc$ be a gradient discretisation, in the sense of Definition \[gdseepage\], and let ${{\mathcal T}}=(\mesh,\edges,{\mathcal{P}})$ be a polytopal mesh of $\O$. Let $a_\disc\in L^2(\partial\O)$ be such that $a_\disc-a=0$ on $\Gamma_1$ and, for any $\sigma\in{{{\edges}_{\rm ext}}}$ such that $\sigma\subset \Gamma_3$, $a_\disc=a(x_\sigma)$ on $\sigma$, where $x_\sigma\in\sigma$. Let $\bar u$ be the solution to , and let us assume that $\gamma_\bfn(\Lambda\nabla\bar u)\in L^2(\partial\O)$ and that $\gamma(\bar u)-a\in W^{2,\infty}(\partial\O)$. We also assume that there exists an interpolant $v_{\mathcal{D}} \in X_{\disc,\Gamma_{1}}$ such that $S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\le C_1h_{\mathcal M}$ with $C_1$ not depending on $\disc$ or ${{\mathcal T}}$, and, for any $\sigma\subset \Gamma_3$, $\mathbb T_{\mathcal{D}}v_{\mathcal{D}}=\gamma(\bar{u})(x_{\sigma})$ on $\sigma$.
Then $\widetilde{\mathcal K}_\disc\not=\emptyset$ and, if $u$ is the solution to with $\mathcal K_\disc$ replaced with $\widetilde{\mathcal K}_\disc$, it holds $$\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}}
+ \|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)}
\leq Ch_{\mathcal M}+CW_{\disc}(\Lambda\nabla\bar u)
\label{eq:improverrorseepage2}$$ where $C$ depends only on $\O$, $\Lambda$, $\bar u$, $a$, $C_1$ and an upper bound of $C_\disc$. \[th:improvedapproxseepage\]
Clearly, $v_{\disc}\in \widetilde{\mathcal K}_\disc$. The conclusion follows from Theorem \[errorseepage-approx\] if we prove that $\widetilde{G}_\disc(\bar u,v_\disc)^+=\mathcal O(h_{\mathcal M}^2)$. Note that, following Remark \[rem:makesense\], it makes sense to consider a piecewise constant reconstructed trace $\mathbb{T}_\disc v$.
With our choices of $a_\disc$ and $\Pi_\disc v_\disc$, and using the fact that $\bar u$ is the solution to the Signorini problem in the strong sense (and satisfies therefore ), we have $$\begin{aligned}
\widetilde{G}_\disc(\bar u,v_\disc)&=
\int_{\Gamma_3} (\mathbb T_\disc v_\disc-\gamma\bar u)\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x
+\int_{\Gamma_3} (a-a_{\disc})\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x\\
&=
\int_{\Gamma_3} \gamma_\bfn(\Lambda\nabla\bar{u})(\mathbb T_\disc v_\disc-a_\disc){\, \mathrm{d}}x\\
&=\sum_{\sigma\in{{{\edges}_{\rm ext}}},\,\sigma\subset \Gamma_3}\int_\sigma (\gamma\bar u (x_{\sigma})-a(x_\edge))\gamma_\bfn(\Lambda\nabla\bar{u}){\, \mathrm{d}}x\\
&=:\sum_{\sigma\in{{{\edges}_{\rm ext}}},\,\sigma\subset \Gamma_3}\widetilde G_\sigma(\bar u,v_\disc).\end{aligned}$$ We then deal edge by edge, considering two cases as in the proof of Theorem \[theorem:improveseepage\]. In Case (i), where $\gamma\bar u<a$ a.e. on $\sigma$, we have $\widetilde G_\sigma(\bar{u},v_{\mathcal{D}})=0$ since $\gamma_\bfn(\Lambda\nabla\bar{u})=0$ in $\sigma$. In Case (ii), we estimate $\widetilde G_\sigma(\bar u,v_\disc)$ using the Taylor expansion .
The obstacle problem {#sec:approxbarriers obs}
--------------------
With $g_\disc\in L^2(\O)$ an approximation of $g$, we consider the new convex set $$\widetilde{\mathcal{K}}_{\disc} = \{ v \in X_{\disc,0}\;:\; \Pi_{\disc}v \leq g_{\disc} \; \mbox{in}\; \O \}$$ and we write the gradient scheme with $\widetilde{\mathcal K}_\disc$ instead of $\mathcal K_\disc$. The following theorems are the equivalent for the obstacle problem of Theorems \[errorseepage-approx\] and \[th:improvedapproxseepage\].
\[errorobs-approx\] Under the assumptions of Theorem \[errorobs\], if $\widetilde{\mathcal K}_\disc$ is not empty then there exists a unique solution $u$ to the gradient scheme in which $\mathcal K_\disc$ has been replaced with $\widetilde{\mathcal K}_\disc$. Moreover, Estimates and hold for any $v_\disc \in \widetilde{\mathcal K}_\disc$, provided that $E_{\disc}(\bar{u},v_{\disc})$ is replaced with $$\widetilde{E}_{\disc}(\bar u,v_\disc)= E_\disc(\bar u,v_\disc)+ \int_{\O} (\div(\Lambda\nabla\bar{u})+f)(g_\disc-g) {\, \mathrm{d}}x.$$
We follow exactly the proof of Theorem \[errorobs\], except that we introduce $g_\disc$ instead of $g$ in . The first term in the right-hand side of this equation is then still bounded above by $0$, and the second term is written $$\begin{gathered}
\int_\O (g_\disc-\Pi_\disc v_\disc)(\div(\Lambda\nabla \bar u)+f){\, \mathrm{d}}x
=\int_\O (g_\disc-g)(\div(\Lambda\nabla \bar u)+f){\, \mathrm{d}}x\\
+\int_\O (g-\Pi_\disc v_\disc)(\div(\Lambda\nabla \bar u)+f){\, \mathrm{d}}x.\end{gathered}$$ The first term in this right-hand side corresponds to the additional term in $\widetilde{E}_\disc(\bar u,v_\disc)$, whereas the second term is the one handled in the proof of Theorem \[errorobs\].
Let $\disc$ be a gradient discretisation, in the sense of Definition \[gdobs\], and let ${{\mathcal T}}=(\mesh,\edges,{\mathcal{P}})$ be a polytopal mesh of $\O$. Let $g_\disc\in L^2(\O)$ be defined by $$\forall K\in\mesh\,,\; g_\disc=g(x_K)\mbox{ on $K$}.$$ Let $\bar u$ be the solution to , and let us assume that $\div(\Lambda\nabla\bar u)\in L^2(\O)$ and that $\bar u-g \in W^{2,\infty}(\O)$. We also assume that there exists an interpolant $v_{\mathcal{D}} \in X_{\disc,0}$ such that $S_{\mathcal{D}}(\bar{u},v_{\mathcal{D}})\le C_1h_{\mathcal M}$, with $C_1$ not depending on $\disc$ or ${{\mathcal T}}$, and that $\Pi_{\mathcal{D}}v_{\mathcal{D}}=\bar{u}(x_{K})$ on $K$, for any $K\in\mesh$.
Then $\widetilde{\mathcal K}_\disc\not=\emptyset$ and, if $u$ is the solution to with $\mathcal K_\disc$ replaced with $\widetilde{\mathcal K}_\disc$, $$\|\nabla_{\mathcal{D}}u - \nabla\bar{u}\|_{L^{2}(\Omega)^{d}}
+ \|\Pi_{\mathcal{D}}u - \bar{u}\|_{L^{2}(\Omega)}
\leq Ch_{\mathcal M}+CW_{\disc}(\Lambda\nabla\bar u)
\label{eq:improverrorobs2}$$ where $C$ depends only on $\O$, $\Lambda$, $\bar u$, $g$, $C_1$ and an upper bound of $C_\disc$. \[th:improvedapproxobs\]
The proof can be conducted by following the same ideas as in the proof of Theorem \[th:improvedapproxseepage\].
Let us now compare our results with previous studies. In [@C3; @F1], $\mathcal O(h_\mesh)$ error estimates are established for $\mathbb{P}_1$ and mixed finite elements applied to the Signorini problem and the obstacle problem. This order was obtained under the assumptions that $\bar u \in W^{1,\infty}(\O) \cap H^{2}(\O)$ and $a$ is constant for the Signorini problem, and that $\Lambda\equiv I_{d}$ and $\bar u$, $g \in H^{2}(\O)$ for the obstacle problem. Our results generalise these orders of convergence to the case of a Lipschitz-continuous $\Lambda$ and a non-constant $a$ (note that mixed finite elements are also part of the gradient schemes framework, see [@B10; @S1]).
Studies of non-conforming methods for variational inequalities are more scarce. We cite [@A30], which applies Crouzeix–Raviart methods to the obstacle problem and obtains an order $\mathcal O (h_\mesh)$ under strong regularity assumptions, namely $f \in L^{\infty}(\O)$, $\bar u -g \in W^{2,\infty}(\O)$, $g \in H^{2}(\O)$, $\Lambda\equiv I_{d}$ and the free boundary has a finite length. For the Signorini problem, under the assumptions that $a$ is constant, $\bar u \in W^{2,\infty}(\O)$ and $\Lambda\equiv I_{d}$, Wang and Hua [@A-31] give a proof of an order $\mathcal O (h_\mesh)$ for Crouzeix–Raviart methods. The natural Crouzeix–Raviart interpolants are piecewise linear on the edges and the cells, and if we therefore use this interpolant as $a_\disc$ for the Signorini problem, we see that under an $H^2$ regularity assumption on $a$, we have $||a-a_\disc||_{L^2(\dr\O)}\le Ch_\mesh^2$. Hence, the additional term in $\widetilde{G}_\disc(\bar u,v_\disc)$ in Theorem \[errorseepage-approx\] is of order $h_\mesh^2$ and this theorem therefore gives back the known $\mathcal O(h_\mesh)$ order of convergence for the Crouzeix–Raviart scheme applied to the Signorini problem. This is obtained under slightly more general assumptions, since $\Lambda$ does not need to be $I_{d}$ here. The same applies to the obstacle problem through Theorem \[theorem:improveobse\].
As shown in Section \[sec:examples\], our results also allowed us to obtain new orders of convergence for methods not previously studied for (but relevant to) the Signorini or the obstacle problem.
We finally notice that most previous research investigates the Signorini problem under the assumption that the barrier $a$ is a constant. In contrast, Theorems \[errorseepage\], \[errorseepage-approx\] and \[th:improvedapproxseepage\] presented here are valid for non-constant barriers.
Conclusion
==========
We developed a gradient scheme framework for two kinds of variational inequalities, the Signorini problem and the obstacle problem. We showed that several classical conforming or non-conforming numerical methods for diffusion equations fit into this framework, and we established general error estimates and orders of convergence for all methods in this framework, both for exact and approximate barriers.
These results present new, and simpler, proofs of optimal orders of convergence for some methods previously studied for variational inequalities. They also allowed us to establish new orders of convergence for recent methods, designed to deal with anisotropic heterogeneous diffusion PDEs on generic grids but not yet studied for the obstacle or the Signorini problem.
We designed an HMM method for variational inequalities, and showed through numerical tests (including a new test case with analytical solution) the efficiency of this method.
**Acknowledgement**: the authors thank Claire Chainais-Hillairet for providing the initial MATLAB HMM code.
Appendix {#traceoperator}
=========
We recall here some classical notions on the normal trace of a function in $H_{\rm{div}}(\Omega)$. Let $\Omega \subset \mathbb{R}^{d}$ be a Lipschitz domain. Then there exists a surjective trace mapping $\gamma: H^{1}(\Omega) \rightarrow H^{1/2}(\partial\Omega)$. For $\bpsi \in H_{\rm{div}}(\Omega)$, the normal trace $\gamma_{\bfn}(\bpsi) \in (H^{1/2}(\partial\Omega))'$ of $\bpsi$ is defined by $$\langle \gamma_{\bfn}(\bpsi), \gamma (w) \rangle=
\int_{\Omega}\bpsi\cdot\nabla w {\, \mathrm{d}}x + \int_{\Omega}{\rm{div}}\bpsi\, w {\, \mathrm{d}}x, \quad \mbox{for any}\; w \in H^{1}(\Omega), \label{trace}$$ where $\langle \cdot, \cdot\rangle$ denotes the duality product between $H^{1/2}(\partial\Omega)$ and $(H^{1/2}(\partial\Omega))'$. Stokes’ formula in $H^1_0(\O)\times H_{\div}(\O)$ shows that this definition makes sense, i.e. the right-hand side is unchanged if $w$ is replaced with $\widetilde{w}\in H^1(\O)$ such that $\gamma(w)=\gamma(\widetilde{w})$.
---
abstract: 'The companions of evaporating binary pulsars (black widows and related systems) show optical emission suggesting strong heating. In a number of cases large observed temperatures and asymmetries are inconsistent with direct radiative heating for the observed pulsar spindown power and expected distance. Here we describe a heating model in which the pulsar wind sets up an intrabinary shock (IBS) against the companion wind and magnetic field, and a portion of the shock particles duct along this field to the companion magnetic poles. We show that a variety of heating patterns, and improved fits to the observed light curves, can be obtained at expected pulsar distances and luminosities, at the expense of a handful of model parameters. We test this ‘IBS-B’ model against three well observed binaries and comment on the implications for system masses.'
author:
- 'Nicolas Sanchez & Roger W. Romani'
title: 'B-ducted Heating of Black Widow Companions'
---
Introduction
============
When PSR J1959+2048 was found to have a strongly heated low mass companion, visible in the optical, it was realized that this heating was an important probe of the millisecond pulsar (MSP) wind [@de88; @cvr95] and that study of the companion dynamics gave a tool to probe the MSP mass [@arc92; @vKBK11]. The discovery that many [*Fermi*]{} LAT sources were such evaporating pulsars alerted us to the fact that this wind-driving phase was not a rare phenomenon, but a common important phase of close binary pulsar evolution. It has also provided a range of examples from the classical black widows (BW) like PSR J1959+2048 with $0.02-0.04M_\odot$ companions, to the redbacks (RB) with $0.2-0.4M_\odot$ companions, to the extreme ‘Tiddarens’ (Ti) with $\le 0.01 M_\odot$ He companions [@rgfz16]. Several of these systems are near enough and bright enough to allow detailed optical photometry and spectroscopy. The results have been puzzling: the simple models that imply direct radiative pulsar heating of the companions’ day sides, while sufficient for the early crude measurements of simple optical modulation, often do not suffice to explain the colors, asymmetries [@stap01; @sh14] and heating efficiency revealed by higher quality observations [@rfc15].
These binaries are expected to sport an IntraBinary Shock (IBS) between the baryonic companion wind and the relativistic pulsar wind. This shock certainly reprocesses the relativistic particle/$B$ wind of the pulsar and Romani & Sanchez (2016, hereafter RS16) showed how radiation from the IBS could create the characteristic peak structures seen in RB and BW X-ray light curves [@robet14] and, in some cases, asymmetric surface heating.
However extreme asymmetries and high required efficiency were a challenge for this model, and it was suspected that direct particle heating of the companion surface might play an important role. Here we describe such heating, where a portion of the IBS shock particles cross the contact discontinuity and are ‘ducted’ to the companion surface along the open pole of a companion-anchored magnetic field. Such fields are expected since companion tidal locking ensures rapid rotation in these short-period binaries, while the presence of emission from the night side of the orbit suggests sub-photospheric convective heat transport; rotation and convection being the traditional ingredients for a stellar magnetic dynamo. Theoretical [e.g. @a92] and observational [e.g. @rfc15; @vsa16] evidence has been presented for such companion fields in black widows and related binaries. The idea that these fields could enhance BW heating was first described by @eich92 for PSR J1959+2048, where it was suggested that Alfvén wave propagation in the companion magnetosphere might, in analogy with the Sun-Earth system, increase the heating power by as much as $100\times$ over direct radiation.
We present a simple model for such ‘IBS-B’ heating assuming a dominant dipole field and implement it in a binary light curve fitting code. We show that this model explains a number of peculiarities in high-quality BW light curves. Several example fits are shown. In some cases extension beyond the simple dipole estimate may be required. However, the fits are good enough so that conclusions drawn from direct heating fits should be viewed with caution. Precision neutron star mass measurements will therefore require high quality companion observations and robust heating models.
Basic Model
===========
In direct heating the pulsar spindown power radiatively heats the facing side of the companion, raising the characteristic temperature from the unheated (‘Night’ side) $T_N$ to $$T_D^4=\eta {\dot E}/4\pi a^2 \sigma_{SB} +T_N^4
\eqno (1)$$ with $a=x_1 (1+q)/{\rm sin}\,i$ the orbital separation, $x_1$ the projected semi-major axis of the pulsar orbit, ${\dot E}=I\Omega{\dot \Omega}$ the pulsar spindown power for moment of inertia $I$ (spin angular frequency and derivative $\Omega$ and ${\dot \Omega}$), Stefan-Boltzmann constant $\sigma_{SB}$ and $\eta$ a heating efficiency. Effectively $\eta=1$ corresponds to isotropic pulsar emission of the full spindown power, with the impinging radiation fully absorbed by the companion. This model has been implemented in several light curve modeling codes, e.g. the ELC code [@oh00] and its descendant ICARUS [@bet13]. Such direct heating certainly describes the effect of the pulsar photon irradiation. Yet the observed radiative flux, peaking in GeV $\gamma$-rays, represents only a fraction of the pulsar spin-down power, with a heuristic scaling $L_\gamma \approx ({\dot E} \cdot 10^{33}{\rm erg/s})^{1/2}$. The bulk of this power is instead carried in an $e^+/e^-/B$ pulsar wind.
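As a quick numerical illustration of Eq. (1), with illustrative values (not those of any particular system), the day-side temperature for full efficiency and negligible $T_N$ is:

```python
import numpy as np

# Day-side temperature from Eq. (1) with eta = 1, T_N ~ 0 (illustrative values only).
sigma_SB = 5.6704e-5                  # erg cm^-2 s^-1 K^-4
Edot, a = 1e34, 1e11                  # erg/s, cm
T_D = (Edot / (4 * np.pi * a**2 * sigma_SB))**0.25
print(f"T_D ~ {T_D:.0f} K")           # ~6100 K
```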
If the MSP wind terminates in an IBS, this may contribute to the surface heating (RS16). In that paper it was assumed that the beamed emission from the oblique relativistic shock was the primary heating source. Such radiation does indeed produce the double-peaked X-ray light curves of BW, RB and related binaries, with the X-ray peaks being caustics from synchrotron emission beamed along the IBS surface. Two parameters control the IBS shape: the ratio between the companion and MSP wind momentum fluxes $\beta$, and the speed of the companion wind relative to its orbital velocity $v_w=f_v v_{orb}$. Here $\beta = {\dot m}_W v_w c /{\dot E}$ sets the scale of the IBS shock with a characteristic standoff $$r_0= {{\beta^{1/2}} \over {(1+\beta^{1/2})}} a;
\eqno (2)$$ with the IBS wrapping around the companion for $\beta<1$ and surrounding the pulsar for $\beta>1$. When $f_v \sim 1$ or smaller, the IBS is swept back in an Archimedean spiral and the X-ray light curves and surface heating are asymmetric.
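For orientation, Eq. (2) gives the following standoff fractions (a minimal numeric illustration):

```python
import numpy as np

# Standoff distance of the IBS from Eq. (2): r0/a = sqrt(beta) / (1 + sqrt(beta)).
for beta in (0.01, 0.1, 1.0):
    print(f"beta = {beta:4}:  r0/a = {np.sqrt(beta) / (1 + np.sqrt(beta)):.2f}")
# beta = 0.01 -> 0.09, beta = 0.1 -> 0.24, beta = 1.0 -> 0.50
```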
Although simple estimates indicate that the post-shock particles can cool on the IBS flow time scale and the shocked particles have a modest bulk $\Gamma$ [@boget12] so that some of this cooling radiation can reach the companion, the general effect of an IBS shock is to deflect pulsar power [*away*]{} from the companion. In addition, the observed IBS radiation, while a substantial portion of the X-ray band flux, is only $L_X \sim 10^{-3}-10^{-4} {\dot E}$ [@apg15]. Since IBS X-rays are emitted closer to the companion than the pulsar gamma-rays, they can be more effective at companion heating by $\sim \beta ^{-1}$. However, the observed ratios are typically $f_\gamma/f_X \sim 300$ (Table 1), so it seems difficult for IBS emission to dominate companion heating, unless $\beta$ is very small, or the IBS emission is primarily outside the X-ray band or is beamed more tightly to the companion than the pulsar emission. So while IBS radiation is doubtless useful for a detailed treatment of companion heating (and may contribute needed asymmetry), it is unlikely to dominate. The problem is particularly severe for systems whose observed $T_D$ indicates a higher heating power than that emitted by the pulsar into the solid angle subtended by the companion.
The solution appears to be more direct utilization of the pulsar wind. We will assume here that as the pulsar wind impacts (either directly or at a wind-induced IBS) a companion magnetic field a fraction of the particles thread the companion field lines and are ducted to the surface. Since the field and flow structure in the IBS are poorly understood, we do not attempt here to follow the details of how these particles load onto the companion magnetosphere. As noted in RS16, companion fields of plausible strength can be dynamically important at the IBS. In the small $\beta$ limit a companion with radius $r_\ast$ will have a surface dipole field $B_c$ dominating at the IBS for $B_c > (2 \beta {\dot E}/c) ^{1/2} \beta a^2/r_\ast^3$. With typical $a\sim 10^{11}$cm and $r_\ast \sim 0.1 R_\odot$ this is $B_c \sim 24 \beta^{3/2}$kG, which seems plausible for the large scale dipole of such rapidly rotating stars. However, given the evidence for a strong evaporative wind, we envision that this contributes significantly to the IBS-forming momentum flux, so the dipole field may be somewhat lower, and the wind controls the IBS shape. For larger $B_c$ the strength and orientation of the companion field will be important to the details of the IBS shape, although the basic energetics should be close to that of the geometry considered here. Note that even a weaker field can still duct the energetic $e^+/e^-$ of the shocked pulsar wind as long as the gyroradius is $< r_0$. For $r_0 \sim \beta^{1/2} a$ the condition on particle energy for ducting is $\gamma < r_0B_c(r_\ast/r_0)^3/1.7\times 10^3$cm $\sim 2 \times 10^4 B_c/\beta$, with $B_c$ in Gauss. At the IBS-controlling $B_c$ above this is $\gamma < 4 \times 10^8 \beta^{1/2}$, so if the post-shock energy flux is dominated by, say $\gamma < 10^6$, then fields $\sim 100\times$ weaker can still effectively duct.
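An order-of-magnitude check of the two scalings quoted in this paragraph, assuming ${\dot E}=10^{34}\,{\rm erg\,s^{-1}}$, $a=10^{11}$cm and $r_\ast=0.1R_\odot$ (these reference values are our assumptions for the check, chosen to match the "typical" numbers above):

```python
import numpy as np

# Order-of-magnitude check of the field scalings above (cgs), for beta = 1.
c, R_sun = 2.998e10, 6.957e10
Edot, a, r_star, beta = 1e34, 1e11, 0.1 * R_sun, 1.0

# Surface dipole needed to dominate at the IBS: B_c > (2 beta Edot / c)^(1/2) beta a^2 / r_star^3.
B_c = np.sqrt(2 * beta * Edot / c) * beta * a**2 / r_star**3
print(f"B_c ~ {B_c/1e3:.0f} kG")                 # ~24 kG (times beta^(3/2) in general)

# Ducting condition: gyroradius ~1.7e3 gamma / B cm smaller than r0 ~ beta^(1/2) a,
# with the dipole field evaluated at the IBS, B = B_c (r_star / r0)^3.
r0 = np.sqrt(beta) * a
gamma_max = r0 * B_c * (r_star / r0)**3 / 1.7e3
print(f"gamma_max ~ {gamma_max:.1e}")            # a few 1e8, i.e. ~2e4 B_c / beta
```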
A more immediate issue is whether these ducted particles can deposit the bulk of their energy at the field foot points. In general the fields are too weak for efficient synchrotron cooling on the inflow timescale with a mirror to synchrotron ratio $\zeta \sim 10^{15}\beta^{5/2}B_c/(a/10^{11}{\rm cm})$ [@ho86]. Thus even with $B_c$ important for the IBS structure this is $\zeta \sim 10^6 \beta^{5/2} (B_c/24{\rm kG})^{-2} (a/10^{11}{\rm cm})^{-1}$, so in an empty magnetosphere the $e^+/e^-$ will mirror above the pole unless $\beta$ is very small. Mirroring particles can radiate energy transverse to the companion surface and may be lost from the magnetosphere. However, if the postshock distribution is anisotropic, the fraction entering with small perpendicular momentum and precipitating to the pole may not be small. Also mirrored particles may be captured and mirror between poles until they precipitate. Finally other damping may be important, especially since the wind should flow out along the heated companion pole. A simple estimate for the column density traversed by the precipitating particles is $\beta {\dot E}/(\theta_c^2 c v_w^2 r m_P)$ $\sim 10^{22} \beta/[\theta_c^2 (v/500 {\rm km/s})^2 (r/10^{10} {\rm cm})] {\rm cm^{-2}}$. With a modest $\beta$ the (forward beamed) energy losses will slow a significant fraction of the precipitating particles, enhancing capture, especially if mirroring makes multiple reflections to the forward pole with $\theta_c < 1$. In any event, our geometrical model computes the effect of precipitating particle energy thermalized in the magnetic cap. If this is not a large fraction of the spindown power incident on the open zone above these caps, the particle heating effect appears plausible. More detailed sums will be needed to establish the true efficacy.
To obtain the basic scaling in this model we start with a simplified symmetric bow shock geometry $$r_D(\theta) = r_0 \theta/{\rm sin}\theta
\eqno (3)$$ [@dy75], and describe the heating on the pulsar-facing side of the companion which has a spherically symmetric pulsar wind and a dipole field $$r_B(\theta) = r_\ast {\rm sin}^2\theta/{\rm sin}^2\theta_C
\eqno (4)$$ whose axis is aligned with the line of centers and which is anchored in a Roche-lobe filling companion of size $r_\ast = 0.46 q^{-1/3} a$, with $q=m_{P}/M_{c}$ the mass ratio and $a$ the orbital separation. The cap half angle $\theta_c$ describes the footpoints of the open field lines. This open zone is determined by the intersection of the field lines and the IBS; these are tangent when $$\theta_t = ({\rm tan}\,\theta_t)/3, \qquad \qquad i.e.\quad \theta_t=1.324
\eqno (5)$$ which implicitly determines the portion of the pulsar wind striking the open zone. If a fraction $f_{\dot E}$ of the power into this zone reaches the cap we have a heating luminosity $$L_h = f_{\dot E} {\dot E} (1-{\rm cos}\theta_P)/2,
\eqno (6)$$ where from the MSP the open zone subtends ${\rm cos}\theta_P= ({\rm sin}\theta_t -r_0\theta_t{\rm cos}\theta_t)/
(r_0^2\theta_t^2+{\rm sin}^2\theta_t-r_0 \theta_t {\rm sin}2\theta_t)^{1/2}$ (Figure 1). The area of the heated surface is $$A_c=2\pi r_\ast^2 (1-{\rm cos}\theta_C)
\eqno (7)$$ with $1-{\rm cos}\theta_C=1-[1-(r_\ast {\rm sin}^3 \theta_t)/(r_0\theta_t)]^{1/2}$. This gives the heating power, cap size and the average cap temperature $T_{eff}= (L_h/\sigma_B A_c)^{1/4}$. These quantities are shown for this simple model in Figure 1, with an assumed $f_{\dot E} =0.01$ and $T_{eff}$ in units of $$(L_h/\sigma_B a^2)^{1/4} = 13,800\,K\, {\dot E}_{34}^{1/4} a_\odot^{-1/2}
\eqno (8)$$ for characteristic MSP power ${\dot E} = {\dot E}_{34} 10^{34} {\rm erg\,s^{-1}}$, and orbital separation $a_\odot R_\odot$.
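Two of the numbers quoted above are easy to verify numerically (a minimal check; scipy is used only for the root find):

```python
import numpy as np
from scipy.optimize import brentq

# Eq. (5): tangency angle, the nonzero root of theta - tan(theta)/3 = 0 below pi/2.
theta_t = brentq(lambda t: t - np.tan(t) / 3.0, 1.2, 1.45)
print(f"theta_t = {theta_t:.3f} rad")            # ~1.324

# Eq. (8): characteristic cap temperature scale for Edot = 1e34 erg/s, a = 1 R_sun.
sigma_SB, R_sun = 5.6704e-5, 6.957e10            # cgs
T = (1e34 / (sigma_SB * R_sun**2))**0.25
print(f"T ~ {T:.0f} K")                          # ~13,800 K
```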
Note that the heating power increases and the cap size decreases with $\beta$, leading to an increase in $T_{eff}$ for strong companion winds. Indeed, it is believed that the heating drives the companion wind and so magnetically induced winds may in fact have positive feedback, with stronger heating leading to larger outflow and larger $\beta$. Note also that high mass ratio systems and systems where the Roche lobe is under-filled will in general have smaller companions and higher effective temperatures. Of course all of these factors also scale with the pulsar power and inversely with orbital separation $a$.
Numerical Model
===============
Although this simple model illustrates some basic scaling we do not use it for direct data comparison. We instead assume an axially symmetric, equatorially concentrated pulsar wind, typically $$f_P(r,\theta) = {\dot E(\theta)}/(4\pi r^2c) = 3I\Omega{\dot \Omega}{\rm sin}^2\theta/(8\pi r^2 c).
\eqno (9)$$ but alternatively distributed as ${\rm sin}^4\theta$ [@tsl13]. We also use the more detailed IBS shape of @crw96, $r(\theta) = d ~ {\rm sin}\theta_1/{\rm sin}(\theta + \theta_1)$ with $\theta$ the angle subtended from the pulsar and $$\theta_1=\left [ {15\over 2} \left ( \left [
1+{4\over 5}\beta (1-\theta {\rm cot}\theta)\right ]^{1/2} -1 \right ) \right ]^{1/2}
\eqno (10)$$ that from the companion star. For finite $f_V$ the IBS is swept back along the orbit. See RS16 for details. Recently @wet17 have presented a similar discussion of this basic IBS geometry.
The orientation of the companion magnetic dipole axis is specified by $\theta_B$, $\phi_B$ (with 0,0 toward the pulsar). Since the pulsar heating may be a significant driver of the dynamo-generated field, we allow the field origin to be offset from the companion center. Here we only consider offsets along the line of centers by $\lambda_B$, where $\lambda_B=+1.0$ places the field origin at the companion ‘nose’ closest to the $L_1$ point. If we displace the field, we can also assume that the companion momentum flux, whether from a stellar wind or the dipole $B$ itself, has a similar offset. Of course in general an offset $B$ might be centered anywhere within the companion, and might well include higher order multipoles. Dipole dominance becomes an increasingly good approximation as the standoff distance grows beyond the companion radius ($r_0 > r_\ast$).
In the numerical model we consider both direct photon heating and ducted particle heating. The direct heating is assumed to be from the SED-dominating observed $\gamma$ ray flux $f_\gamma$, so that at a point on the companion surface a distance $r$ from the pulsar with surface normal inclination $\xi$ the direct heating flux is $$f_{\rm D} = L_{\rm D} {\rm cos}\,\xi/(4\pi r^2) = f_\gamma (d/r)^2 {\rm cos}\,\xi.
\eqno (11)$$ One caveat is that the observed $f_\gamma$ is measured on the Earth line-of-sight while the heating $\gamma$-rays are directed near the orbital plane. In @rw10 and @phgg16 $\gamma$-ray beaming calculations were presented that in principle let one correct the observed flux to the heating flux. The beaming models are less certain for the MSP treated here and we do not attempt such correction, but note that for binaries observed at high inclination, the (presumably spin-aligned) MSP likely directs more flux toward its companion than we observe at Earth.
For the particle heating, we compute the IBS geometry and find the dipole field lines that are tangent to this complex, possibly asymmetric, surface. These divide the IBS particles into those inside the curve of tangent intersection, which may couple to the ‘forward’ pole, and those outside the line, which can in principle reach the opposite ‘back side’ magnetic pole. In fact, for each patch of the IBS surface we determine the threading field line’s footpoint on the polar cap and deposit the appropriate fraction of the pulsar wind power on the star. $L_P$ is the normalization for the total particle power (with a ${\rm sin}^2\theta$ distribution) that reaches the companion surface; this can be compared with ${\dot E}$. In general, dipole field line divergence ensures that the edges of the cap collect more MSP spindown power per unit area and are hence hotter, although the details depend on the IBS geometry. Thus the generic geometry is an edge-brightened cap offset from the companion nose (below the $L_1$ point). Some IBS regions connect to the pole on the ‘night’ side of the star. The central field lines of this pole extend down-stream, away from the pulsar and so do not intersect the IBS. The result is a partial ring, and the back pole is in general much more weakly heated. Although this back-side heating is more sensitive to the details of the IBS geometry and the topology of the magnetotail (so our simple approximate solution is more likely to miss the detailed behavior), this backside illumination often appears to improve light curve fits and serves, in some cases, to dominate the nighttime flux. This is interesting as it provides a non-hydrodynamic mechanism to transport heat to the night side of the star.
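The core geometric ingredient, mapping an IBS patch to its polar-cap footpoint along a centred, aligned dipole as in Eq. (4), is simple enough to sketch; the numbers below are purely illustrative:

```python
import numpy as np

# Footpoint of the dipole field line through a point at distance r from the companion
# centre and magnetic colatitude theta: the line satisfies r = L sin^2(theta) (cf. Eq. 4),
# so its footpoint on a star of radius r_star has sin^2(theta_f) = r_star sin^2(theta) / r.
def footpoint_colatitude(r, theta, r_star):
    s = np.sqrt(r_star * np.sin(theta)**2 / r)
    return np.arcsin(np.clip(s, 0.0, 1.0))

r_star = 7.0e9                                        # cm, illustrative companion radius
theta_f = footpoint_colatitude(5 * r_star, np.radians(40.0), r_star)
print(f"footpoint colatitude ~ {np.degrees(theta_f):.1f} deg")   # ~16.7 deg
```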
This IBS-B model follows only the geometrical effects, the gross energetics of the surface heating and the thermal re-radiation of the multi-temperature atmosphere. Both the $\gamma$-rays and the $e^+/e^-$ are penetrating so the deep heating approximation is good. We have ignored here the additional X-ray heating from the IBS (or any Compton component), as it seems energetically sub-dominant (Table 1). We also do not follow the detailed particle propagation or response of the companion field to the pulsar wind, beyond a simple cut-off at the IBS location. Thus many details which might be captured by a relativistic MHD numerical simulation are missing. However, our intent is to allow comparison with observational data and so the present simplification, which allows a full model to be computed in seconds on a modern workstation, is essential to allow model fitting and the exploration of parameter space. The prime IBS-B feature is that it [*collects*]{} pulsar spindown power, focusing it onto the companion, and does so in a way that can have strong asymmetries controlled by the magnetic field geometry. This gives it the potential to explain puzzling high temperatures and light curve asymmetries of black widow heating patterns.
Figure 2 shows the components of this model starting with the IBS, then the dipole ducting field, the heated surface and the resultant light curves. Figure 3 shows an initial model with little particle heating compared with some sample magnetic geometries and their resulting light curves.
  J1301+0833    6.7   6.53   0.078   13.2   26.9   1.23/0.67   0.053/0.053/0.053   1.1   0.3    259
  J1959+2048   16.0   9.17   0.089   29.1   30.4   1.73/2.50   0.806/1.054/1.364   1.7   0.55   324
  J2215+5135    7.4   4.14   0.468   69.2    —     2.77/3.01   0.403/0.434/0.744   1.2   0.7    353
Light Curve Fits
================
The basic IBS structure is set by the dimensionless parameters $\beta$ and $f_v$, the magnetic field geometry by $\theta_B$ and $\phi_B$. In practice the optical light curves are rather insensitive to $f_v$, so we fix this parameter at a large value, leaving the bow shock symmetric. As noted in RS16 X-ray light curves are much more sensitive to $f_v$-induced asymmetry. We also have the option of offsetting the field and wind centers from the companion center by $\lambda_B$. In these computations we always apply the direct heating of the observed $\gamma$-ray flux, as determined by fluxes from the 2nd [*Fermi*]{} Pulsar catalog (or the 4th [*Fermi*]{} source catalog, when pulse fluxes have not been published). This corresponds to a direct heating power $$L_D=4\pi f_\gamma d^2 = 1.2 \times 10^{33}{\rm erg\, s^{-1}} f_{\gamma,-11} d_{\rm kpc}^2
\eqno (12)$$ for typical parameters. This is often only a few percent of the characteristic spin-down power. We chose here examples where there are published multi-color optical light curves and radial velocity curves. Table 1 lists observed properties for our sample pulsars.
As it happens, one critical parameter in the modeling is the source distance. Early modeling work was largely based on the ELC code, where only the shapes of the light curves in the individual filters were considered. This provides immunity to zero point errors at the cost of most color information and, with arbitrary normalization, is sensitive to the source size only through shape (i.e. Roche lobe and tidal distortion) effects. In our ICARUS-based model, the observed fluxes are used, making the source size and distance important to the model fitting. For these MSP the primary distance estimate is from dispersion measure models. The NE2001 model [@ne2001] has become a standard reference, but the more recent @ymw17 [hereafter YMW17] model provides an updated comparison. The model uncertainties are hard to determine, but a 20% uncertainty is generally quoted. These MSP also have substantial proper motions and the distance is an important factor in determining the Shklovskii-effect decrease of the spindown power $${\dot E_{\rm Shk}}=4\pi^2 I v^2 /(c\,d\,P^2)
=9.6 \times 10^{33} I_{45} \mu_{10}^2 d_{\rm kpc} P_{\rm ms}^{-2}
\eqno (13)$$ with $\mu_{10}$ the proper motion in units of 10 mas yr$^{-1}$.
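Equation (13) can be checked the same way; the sketch below (constants and naming are ours) reproduces the quoted coefficient for $\mu=10$ mas yr$^{-1}$, $d=1$ kpc and $P=1$ ms:

```python
import numpy as np

KPC_CM = 3.086e21                    # cm per kpc
MASYR_TO_RADS = 4.848e-9 / 3.156e7   # 1 mas/yr in rad/s
C_CGS = 2.998e10                     # speed of light, cm/s

def shklovskii_edot(I_45, mu_masyr, d_kpc, P_ms):
    """Apparent spin-down power from the Shklovskii effect, Eq. (13), in erg/s."""
    I = I_45 * 1e45                  # moment of inertia, g cm^2
    mu = mu_masyr * MASYR_TO_RADS    # proper motion, rad/s
    d = d_kpc * KPC_CM               # distance, cm
    P = P_ms * 1e-3                  # spin period, s
    return 4.0 * np.pi**2 * I * mu**2 * d / (C_CGS * P**2)

# Should print ~9.6e33 erg/s, matching the coefficient in Eq. (13).
print(f"{shklovskii_edot(1.0, 10.0, 1.0, 1.0):.2e} erg/s")
```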
A final factor important to the optical modeling is the intervening extinction. All of our sample objects are in the north, so we can use the 3-D dust maps of @gsf15 to infer $A_V$ at the source distance, and a maximum $A_V$ along the pulsar line of sight.
Application to PSR J1301+0833
-----------------------------
This $P=1.84$ms BW has a $P_b=6.53$hr orbit with a $\sim 0.03 M_\odot$ companion. We fit the 131 $g^\prime r^\prime i^\prime z^\prime$ photometric points generated from a combination of imaging at MDM [@lht14], Keck and Gemini archival data and integration of Keck LRIS spectra (calibrated to a stable neighboring star on the slit) described in RGFZ16. The dispersion measure $DM= 13.2\,{\rm cm^{-3} pc}$ implies a distance of 1.2 kpc (YMW17) or 0.67 kpc (NE2001). As pointed out by @rgfz16 [RGFZ16], the combination of faint magnitude and modest $\sim 4500$K effective temperature requires a large $\sim 6$kpc distance for direct full-surface heating of a Roche-lobe filling star. However, the pulsar has a substantial proper motion. The inferred space velocity and Shklovskii-corrected ${\dot E}=6.7 I_{45} (1-0.31 d_{\rm kpc}) \times 10^{34}{\rm erg\, s^{-1}}$ are only reasonable for $\sim$kpc distances. So while the unconstrained direct model has a plausible $\chi^2=184$ ($\chi^2/DoF=1.46$), the distance is $\sim 4\times$ larger than is compatible with the proper motion (and DM).
Thus, to match the observed faint magnitude at the $\sim$kpc distance, the visible companion surface size must be substantially smaller than the volume-equivalent Roche lobe radius $R_{\rm RL}$, [*i.e.*]{} fill factor $f_c = R_c/R_{\rm RL} \ll 1$. For direct heating, at $d=1.2$kpc we need $f_c=0.137$ ($\chi^2=207$); at $d=0.67$kpc the size is $f_c=0.076$ ($\chi^2=216$). These are smaller than the fill factor of a cold, $e^-$-degeneracy-supported object of the companion mass, and so direct heating cannot provide a good solution unless $d>1.5$kpc.
A magnetically ducted model provides a viable alternative, illuminating a small cap of a larger star for a fixed $d=1.2$kpc. For example, with a dipole field directed $\sim 25^\circ$ from the line of centers, offset $\lambda_B = -0.5$, we find a plausible $f_c=0.19$ model with $\chi^2=195$. Only a small fraction of the spin-down power is required for this heating, with $\beta L_p = 1.0 \times 10^{30} {\rm erg\, s^{-1}}$. The light curve errors are too large for a tight independent constraint on $\beta$. The area of the open zone above the caps scales with $\beta$, so $L_p \sim 1/\beta$. For this small $\theta_B$ the back-pole field lines are far off axis, and we do not illuminate this pole as it overproduces flux at binary minimum. The existing photometry strongly constrains the heated cap size and temperature, but the errors are too large to pin down the cap shape and location. The spectroscopic points, in particular, may have substantial systematic errors away from companion maximum. Improved photometry, especially in the near-IR, will help in understanding this system, and a good X-ray light curve may help to measure $\beta$ and $f_v$.
Application to PSR J1959+2048
-----------------------------
The original BW pulsar with $P=1.61$ms in a 9.2hr orbit is a prime target of these studies, especially since @vKBK11 measured a large radial velocity that, with existing inclination estimates, implies a large neutron star mass. Here we use the photometry published in @ret07; these 7 B, 38 V, 44 R, 2 I and 4 K$_s$ points were kindly supplied by M. Reynolds in tabular form. The I points and 2 R points at binary minimum are from [*HST*]{} WFPC photometry.
Again the source distance is a crucial uncertainty. The NE2001 and YMW17 distances differ by 45%. In fact the range may be even larger since @vKBK11 estimated $d>2{\rm kpc}$ from spectral studies of the reddening, while @arc92 estimated $d\sim 1.2\,{\rm kpc}$ based on spectroscopy of the H$\alpha$ bow shock. A direct heating fit produces a good match to the multi-band light curves, with $\chi^2=256$. The error is dominated by the 4 $K_s$ points, which might have a zero point error with respect to our model. A shift of -0.3mag brings these into agreement, dropping $\chi^2$ to 164. However, for this best fit the distance is $3.3$ kpc and the required extinction is larger than the maximum $A_V=1.36$ in this direction, which is only reached by $\sim$7kpc. Also, at this distance the Shklovskii correction decreases the spin-down power by 2/3 to $4.7\times 10^{34} I_{45}{\rm erg\, s^{-1}}$. The observed gamma-ray flux corresponds to an isotropic luminosity of $L_\gamma = 2.2 \times 10^{34} {\rm erg\, s^{-1}}$, a large fraction of the spin-down power, but even more disturbingly the sky-integrated luminosity of the required direct heating flux is $9 \times 10^{34} {\rm erg\, s^{-1}}$, $4\times$ larger than the observed flux and twice the inferred spindown power. Of course, if $I_{45} > 2$ and the equatorial beaming increases the $\gamma$-ray flux above the Earth line of sight value by 4$\times$, then an efficient $\gamma$-ray pulsar can supply the required direct heating. Such a large $I_{45}$ would be of interest for the neutron star EoS and such high beaming efficiency would be of interest for pulsar magnetosphere models. As an alternative the source might be closer.
If we fix the distance at the YMW17 value of 1.73kpc and the corresponding $A_V=0.81$, the fit is much worse, with $\chi^2=935$ (although the $K_s$ fluxes match!). However, this model requires a relatively small fill factor $f_c=0.31$ and an unacceptably small inclination $i=47^\circ$. Again the preference for smaller areas at larger temperatures implies concentrated heating and suggests that an IBS-B model can help.
We have indeed found an adequate ($\chi^2=295$) model for this distance/$A_V$ with the magnetic field pointing near the angular momentum axis, $\sim 90^\circ$ from the companion nose, and offset $\lambda_B= 0.3$ toward this nose. This fit has a more reasonable $f_c=0.48$ and $i=76^\circ$, and, in addition to the $6.1\times 10^{33} {\rm erg\, s^{-1}}$ of direct heating, it uses $3.8\times 10^{33} {\rm erg\, s^{-1}}$ of particle flux. This is $\sim (30I_{45})^{-1}$ of the available spindown power at this distance, suggesting that this fraction of the pulsar wind from the open zone reaches the companion surface. The heated cap needs to be rather large, with $\beta \approx 0.005$ placing the IBS standoff at $\sim 7\%$ of the companion separation. Our dipole approximation for the field geometry is suspect for such a small standoff. Also, the X-ray orbital light curve of @huet12 allows a larger $\beta$. Figure 5 compares the direct (large $d$) and IBS-B models. As might be expected, the largest differences lie in the infrared bands that detect more of the unilluminated surface. Unfortunately the limited $K_s$ points here are not sufficient to control the model. Happily, van Kerkwijk and Breton have collected JHK observations of PSR J1959+2048 (private communication); these will be very useful in choosing between the Direct-Free and IBS-B scenarios.
Application to PSR J2215+5135
-----------------------------
PSR J2215+5135 is a redback (RB) system, a $P=2.6$ms ${\dot E}=7.4I_{45} \times 10^{34}{\rm erg\, s^{-1}}$ millisecond pulsar in a $P_b=4.14$hr orbit with a $\sim 0.25 M_\odot$ companion. The LAT timing model includes a $189\pm 23$ mas/y proper motion, but this is almost certainly spurious, due to the typical redback timing noise, as the space velocity would exceed 100km/s at 110 pc. For this pulsar the two DM models give similar distances.
As in RS16 we fit the BVR light curves of @sh14 (103 $B$, 55 $V$, and 113 $R$ magnitudes). The companion radial velocity was measured in @rgfk15. As noted in RS16, direct fits are very poor unless an (arbitrary) phase shift of $\Delta \phi \sim 0.01$ is imposed on the model. With such a phase shift the direct fit has a minimum $\chi^2=901$ ($\chi^2=2268$ with no phase shift). The overall light curve shape and colors are quite good, so this large $\chi^2$ is evidently the result of low level stochastic flaring or underestimation of errors in the SH14 photometry (RS16). The observed maximum is quite wide and to match this shape, the model prefers $f_c \sim 0.9$ so that the ellipsoidal terms broaden the orbital light curve. In turn this requires a large distance $d_{\rm kpc}=5.6$. The best fit extinction is $A_V$=1.4, about twice the maximum in this direction. At this distance the required heating luminosity $3.8 \times 10^{35} {\rm erg\,s^{-1}}$ is $5\times$ larger than the standard spindown power.
If we fix to the YMW17 distance and extinction, we find that the best fit direct heating model adopts an unreasonably small inclination (and high irradiation power) to keep the light curve maximum wide. In addition the fill factor $f_c=0.19$ is very small, inconsistent with a main sequence companion. The $\chi^2=2787$ is poor (and with no imposed phase shift is $\chi^2=4208$), but larger inclination minima have $\chi^2$ many times worse.
IBS-B models do better and can naturally produce the phase shift. At 2.77kpc we reach $\chi^2=3030$, but we find that the $T_N$ determined from the colors (and spectral type) at minimum, coupled with the faint magnitude, limits the emission area so that the fill factor is $\sim 0.41$, still small for a main sequence companion. It seems quite robust that larger companions require $d_{\rm kpc}$ larger than the DM estimate. If we free the distance, we find a $\chi^2=1318$ minimum at 4.2kpc ($A_V=0.59$). The direct heating ($\gamma$-ray) power is $2.6\times 10^{34} {\rm erg\,s^{-1}}$ while the particle heating is only $2\times 10^{32} {\rm erg\,s^{-1}}$. The $f_c=0.64$ implies a companion radius slightly inflated from the main sequence expectation. In RS16, IBS illumination models did produce lower $\chi^2$, but at $\sim 5$kpc distances with large heating powers of $\sim 3 \times 10^{35} {\rm erg\,s^{-1}}$, even with 100% efficient IBS reprocessing to companion-illuminating radiation. The present solution, while statistically worse, seems physically preferable. The existing X-ray light curve of PSR J2215+5135 is rather poor and the $\beta$ fit here is acceptable (although finite $f_v$ is preferred by the X-ray data).
Table 2: Fit parameters. Columns give, for each of PSR J1301+0833, J1959+2048 and J2215+5135, the Direct-Free, Direct-Fixed $d$ and IBS-B fits.

$\chi^2$, DoF & 184/126 & 207/127 & 195/125 & 256/90 & 935/91 & 295/89 & 901$^d$/266 & 2787$^d$/267 & 1318/263 \\
$f_{\rm cor}$ & 1.095 & 1.017 & 1.026 & 1.098 & 1.035 & 1.041 & 1.077 & 1.033 & 1.052 \\
$i$ (deg) & 51(3) & 52(2) & 43(3) & 65(3) & 47.1(4) & 75.8(9) & 78(3) & 28.0(2) & 82(1) \\
$f_c$ & 0.68(8) & 0.137(8) & 0.19(1) & 0.86(3) & 0.308(3) & 0.476(4) & 0.868(4) & 0.187(1) & 0.64(1) \\
$T_N$ (K) & 2656(89) & 2674(62) & 2410(90) & 2989(310) & 2050(131) & 2670(16) & 7872(100) & 4902(36) & 6120(9) \\
$L_D^b$ & 0.70(8) & 0.77(5) & 0.2 & 9.0(9) & 6.1(1) & 0.61 & 37.6(48) & 29.6(4) & 2.6 \\
$d$ (kpc) & 4.7(2) & 1.23 & 1.23 & 3.3(2) & 1.73 & 1.73 & 5.9(1) & 2.8 & 4.25(4) \\
$\beta^c$ & & & 0.18\[3\] & & & 0.005(1) & & & 0.47(1) \\
$\theta_B$ (deg) & & & 25(4) & & & 89.9(8) & & & 87.5(9) \\
$\phi_B$ (deg) & & & 90 & & & 90 & & & 135(2) \\
$L_P^b$ & & & 5.3\[4\]$\times 10^{-4}$ & & & 0.38(6) & & & 0.019(2) \\
$q$ & 45.3 & 42.1 & 42.4 & 70.0 & 66.0 & 66.3 & 6.42 & 6.16 & 6.27 \\
$M_c(M_\odot)$ & 0.032(4) & [*0.026(2)*]{} & 0.041(7) & 0.035(3) & [*0.059(1)*]{} & 0.026(1) & 0.22(1) & [*1.83(4)*]{} & 0.20(1) \\
$M_P(M_\odot)$ & 1.43(18) & [*1.10(9)*]{} & 1.75(30) & 2.46(18) & [*3.90(8)*]{} & 1.71(2) & 1.40(5) & [*11.3(2)*]{} & 1.27(1) \\
Models and Masses
=================
For our example fits we have chosen systems with published optical radial velocity amplitudes, so we have additional kinematic constraints on the models and can use the photometric fit parameters to probe the system sizes and masses. It is important to remember that the observed radial velocity amplitude $K_{\rm obs}$ (Table 1) is not the companion center-of-mass radial velocity. This needed quantity is $K_{\rm obs} f_{\rm cor}$, with $f_{\rm cor}$ an illumination-dependent correction. We find values ranging from $1.01 < f_{\rm cor} < 1.16$, where the largest values are for heating concentrated to the companion nose near the $L_1$ point. Even for a given illumination model $f_{\rm cor}$ depends weakly on the effective wavelength of the lines dominating the radial velocity measurement. Values appropriate to the model fits and the spectral types identified in the radial velocity studies are listed in Table 2.
With $f_{\rm cor}$ in hand we can determine the mass ratio as $$q=M_P/M_c= f_{\rm cor} K_{\rm obs} P_B/(2\pi x_1).
\eqno (14)$$ The companion mass is then $$M_c=4 \pi^2 x_1^3 (1+q)^2/(G\, P_B^2 {\rm sin}^3 i)
\eqno (15)$$ where $i$ is also determined from the model fits. If we know the nature of the companion we can then predict the minimum fill factor since we can compare with the volume equivalent Roche lobe radius $$R_L=0.46 (1+q) x_1 q^{-1/3}/{\rm sin}\, i.
\eqno (16)$$ For redbacks like PSR J2215+5135, we expect a main sequence companion with $R\approx (M/M_\odot ) R_\odot$. For black widows, we might assume that the companions are solar abundance planetary objects with radii $R \approx 0.135 (M/10^{-3} M_\odot)^{-1/8} R_\odot$, but if they are cold degeneracy pressure supported evolved stellar remnants, they would have an unperturbed radius $R=0.0126 (2/\mu_e)^{5/3} (M/M_\odot)^{-1/3} R_\odot$, with $\mu_e=2$ for a hydrogen-free composition. In practice these radii should be viewed as lower limits to the companion size, since it is widely believed that radiative and tidal heating can inflate the companion stars.
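A compact sketch of Eqs. (14)-(16) follows (a helper of our own, in cgs units); reading the fourth and final columns of Table 1 as $x_1$ in light-seconds and $K_{\rm obs}$ in km s$^{-1}$, the J1301+0833 'Direct-Free' entries of Table 2 ($f_{\rm cor}=1.095$, $i=51^\circ$) recover $q\simeq 45$ and $M_c\simeq 0.03\,M_\odot$:

```python
import numpy as np

G, C = 6.674e-8, 2.998e10      # cgs
MSUN = 1.989e33                # g

def binary_masses(K_obs_kms, f_cor, P_B_hr, x1_lts, i_deg):
    """Eqs. (14)-(16): mass ratio, component masses and Roche-lobe radius."""
    K = f_cor * K_obs_kms * 1e5               # corrected companion velocity, cm/s
    P_B = P_B_hr * 3600.0                     # orbital period, s
    x1 = x1_lts * C                           # projected pulsar semi-major axis, cm
    sini = np.sin(np.radians(i_deg))
    q = K * P_B / (2.0 * np.pi * x1)                                      # Eq. (14)
    M_c = 4.0 * np.pi**2 * x1**3 * (1.0 + q)**2 / (G * P_B**2 * sini**3)  # Eq. (15)
    R_L = 0.46 * (1.0 + q) * x1 * q**(-1.0 / 3.0) / sini                  # Eq. (16), cm
    return q, M_c / MSUN, q * M_c / MSUN, R_L

# Illustrative J1301+0833 'Direct-Free' values: expect q ~ 45, M_c ~ 0.03 Msun.
q, M_c, M_P, R_L = binary_masses(259.0, 1.095, 6.53, 0.078, 51.0)
```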
Given $P_B$, $x_1$ and $K_{\rm obs}$, for each $i$ and $f_{\rm cor}$ there is a solution for the binary component masses and for the companion Roche lobe size. Thus we can compare the model-fit $f_c$ with the minimum expected value given the companion type. We show this comparison in Figure 7, where the curves give the expected size for $f_{\rm cor} = 1.02,1.08,1.12$. For J2215 we show the main sequence prediction, while for J1301 and J1959 we show both the solar composition planetary size and the H-free degenerate object size. The H-free degenerate curves for the Tiddaren system PSR J1311$-$3430 are also plotted, to show that large inclinations and near Roche-lobe filling are required for this binary. The points with error flags show the model fits discussed above. The direct unconstrained fits (triangle points) are all near Roche-lobe filling and correspond to plausible neutron star masses, so these simple models would be attractive if it were not for the substantial distance and power problems discussed above. Direct fits locked to the DM-estimated distances (error bars without points) are all at small $f_c$ and small $i$. The fill-factor error bars are quite small as the source size is fixed given the temperature and flux, for a given distance. For J1959 and J2215 the fixed-distance inclinations $i$ are so small that the required neutron star mass is unphysical. For J1301 the mass is low and the companion size is below even that of the cold degenerate model. The IBS-B solutions described here are plotted as circle points. Note that all have plausible radii – J1301 and J1959 are larger than the cold degenerate size and J2215 is just above the main sequence size. The J2215 asymmetry is naturally produced by the offset magnetic poles and the required particle heating is less than 1% of the spin-down power. While the fit in Table 2 is the best found consistent with the physical constraints it does require a distance somewhat larger than the $DM$-estimated value. With the large number of J2215 fit parameters it is not surprising that other (poorer) local minima exist in the fit, so the model conclusions must be considered preliminary. For example larger $f_c$ can be accommodated if $d_{\rm kpc}$ is further increased.
At the bottom of Table 2 we give the mass ratios and masses for these models, assuming the parameters given in Table 1. We have put the ‘Direct-Fixed d’ mass values in italics since, as noted above, these are not good physical solutions. Both the ‘Direct-Free’ masses and the ‘IBS-B’ masses are plausible. The J2215 ‘IBS-B’ mass estimate is fairly low for a neutron star but the J1301 and J1959 masses suggest at least modest accretion.
Conclusions
===========
We have shown that the common picture of black widow heating, in which the companion is irradiated directly by high energy pulsar photons, often requires system distances substantially larger than allowed by other pulsar measurements and heating powers that exceed the pulsar spindown luminosity. Note that this problem only comes to the fore when one uses models that depend on the absolute band fluxes and is not obvious in other (e.g. ELC) model fits. This leads us to investigate a model in which the direct heating by the pulsar $\gamma$-rays is supplemented by particle heating, where the particles arise from pulsar wind reprocessing in an intrabinary shock and then are ducted to the companion at its magnetic poles. This IBS-B ducting model has the potential to solve both the luminosity and distance problems as a larger fraction of the spindown power is collected and concentrated to companion hot spots, allowing a subset of the surface to dominate the observed radiation. This gives smaller distances and more reasonable energetic demands. This is all very appealing but one must ask how well it matches the data before trusting the fit parameters and before embarking on detailed studies to assess the physical viability of this (primarily) ‘geometry+energetics’ model.
We have applied this IBS-B model to several companion-evaporating pulsars. Intriguingly, the best fits still occur for the simple direct heating models as long as the distance is unconstrained. In some cases it may simply be that the DM-estimated distances are greatly in error and that the pulsar $\gamma$-ray radiation illuminates the companion much more strongly than along the Earth line-of-sight. But in others such large distances cannot be tolerated. If we require the distance to be consistent with DM estimates, then our magnetic model can always provide a better fit than the direct heating picture. In addition it can produce light curve asymmetries and required heating powers consistent with those expected from the pulsar spindown. The pulsar masses implied by the model fits are also much more reasonable if the close DM-determined distances are adopted. But the fits are still imperfect and one may question whether this more complex magnetic model is warranted by the data.
Additional data can, of course, select between these models. Most important are robust, independent distance determinations that set the size of the emitting area. Unfortunately black widows and redbacks generally display too much timing noise to allow timing parallaxes. However, VLBI/GAIA parallaxes and, for systems like J1959, kinematic parallaxes using the bow shock velocities and proper motions will be very valuable. In the X-ray band, good measurements of caustics from the relativistic IBS particles can independently constrain the $\beta$ and $f_v$ parameters, restricting the IBS-B model space. Improved photometry of the companions, especially in the infrared, where the weakly heated backside can contribute, will certainly help model fits. These factors will be particularly helpful in modeling the important case of PSR J1959+2048. Finally, very high quality phase-resolved spectroscopy is sensitive to the distribution of the absorption lines over the visible face of the companion (e.g. RGFK15). Sufficiently high quality companion observations must reveal the details of the heating distribution.
Our efforts have, so far, mainly increased the range of viable heating models. We have highlighted problems with the standard direct heating assumption and have produced a code that allows the IBS-B model to be compared with multiwavelength data. This new modeling is important to black widow evolution and Equation of State (EoS) studies. For example, while the direct heating model implies a very large mass for PSR J1959+2048, with dramatic EoS implications, the IBS-B fit value is more conventional. On the other hand, the IBS-B fit for PSR J1301+0833 shows a larger, but uncertain, neutron star mass.
As we apply this model to more black widow pulsars it should become clear what aspects of this picture are robust and lead to improved understanding of the pulsar wind energy deposition. Since the picture seems viable a more detailed analysis of the coupling between the pulsar wind and the companion magnetosphere is needed; the examples in this paper suggest typical efficiencies of $\sim 1$%. The appeal of a good understanding of the wind’s interaction with the companion is large, since this provides a bolometric monitor of pulsar outflow many times closer than the X-ray PWN termination shocks. This modeling also makes it clear that we need to get a clean understanding of companion heating before we can make high confidence assertions about the dense matter EoS.
We thank Paul Callanan and Mark Reynolds for allowing us to re-fit their photometry for PSR J1959+2048, Alex Filippenko and his colleagues for their continuing interest in the optical properties of black widow binaries, Hongjun An for discussions about IBS shock physics, Rene Breton for advice on the ICARUS code and the anonymous referee whose requests for clarification improved the paper. This work was supported in part by NASA grant NNX17AL86G.
Aldcroft, T., Romani, R. V. & Cordes, M. 1992, ApJ, 400, 638
Applegate, J. H. 1992, ApJ, 382, 621
Arumugasamy, P., Pavlov, G. G. & Garmire, G. P. 2015, ApJ, 814, 90
Breton, R. P., et al. 2013, ApJ, 769, 108
Bogovalov, S. V., Khangulya, K., Koldoba, A. V., Ustyugova, G. V. & Aharonian, F. A. 2012, MNRAS, 419, 4326
Callanan, P. J., van Paradijs, J. & Rengelink, R. 1995, ApJ, 439, 928
Cantó, J., Raga, A. C., & Wilkin, F. P. 1996, ApJ, 469, 729
Cordes, J. M. & Lazio, T. J. W. 2002, arXiv:astro-ph/0207156
Djorgovski, S., & Evans, C. R. 1988, ApJ, 335, L61
Dyson, J. E. 1975, ApSS, 35, 299
Eichler, D. 1992, MNRAS, 254, 11P
Foight, D. R., Güver, T., Özel, F. & Slane, P. O. 2015, arXiv:1504.07274
Gentile, P. A., Roberts, M. S. E., McLaughlin, M. A., et al. 2014, ApJ, 783, 69
Green, G. M., Schlafly, E. F., Finkbeiner, D. P., et al. 2015, ApJ, 810, 25
He, C., Ng, C.-Y., & Kaspi, V. M. 2013, ApJ, 768, 64
Ho, C. 1986, MNRAS, 221, 523
Huang, R. H. H., Kong, A. H. K., Takata, J., et al. 2012, ApJ, 760, 92
Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, AA, 533, A6
Li, M., Halpern, J. P. & Thorstensen, J. R. 2014, ApJ, 795, L115
Orosz, J. A., & Hauschildt, P. H. 2000, AA, 364, 265
Parkin, E. R., & Pittard, J. M. 2008, MNRAS, 388, 1047
Pierbattista, M., Harding, A. K., Gonier, P. L. & Grenier, I. A. 2016, AA, 538, 137
Reynolds, M. T., Callanan, P. J., Fruchter, A. A., et al. 2007, MNRAS, 379, 1117
Roberts, M. S. E., McLaughlin, M. A., Gentile, P. A., et al. 2014, AN, 335, 315
Roberts, M. S. E., McLaughlin, M. A., Gentile, P. A., et al. 2015, arXiv:1502.07208
Romani, R. W. & Watters, K. P. 2010, ApJ, 714, 810
Romani, R. W., Filippenko, A. V., & Cenko, S. B. 2015, ApJ, 804, 115
Romani, R. W., Graham, M. L., Filippenko, A. V., & Kerr, M. 2015, ApJ, 809, 10
Romani, R. W., Graham, M. L., Filippenko, A. V., & Zheng, W. 2016, ApJ, 833, 138
Romani, R. W. & Sanchez, N. 2016, ApJ, 828, 7
Schroeder, J., & Halpern, J. P. 2014, ApJ, 793, 78
Stappers, B. W., van Kerkwijk, M. H., Bell, J. F. & Kulkarni, S. R. 2001, ApJ, 548, 183
Tchekhovskoy, A., Spitkovsky, A. & Li, J. G. 2013, MNRAS, 435, L1
van Kerkwijk, M. H., Breton, R. P., & Kulkarni, S. R. 2011, ApJ, 728
van Staden, A. D. & Antoniadis, J. 2016, ApJ, 833, L12
Wadiasingh, Z., Harding, A. K., Venter, C., Böttcher, M. & Baring, M. G. 2017, arXiv:1703.09560
Yao, J. M., Manchester, R. N. & Wang, N. 2017, ApJ, 835, 29
---
author:
- 'James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, and Jimbo Wilson'
bibliography:
- 'template.bib'
title: 'The What-If Tool: Interactive Probing of Machine Learning Models'
---
People working with machine learning (ML) often need to inspect and analyze the models they build or use. Understanding when a model performs well or poorly, as well as how perturbations to inputs affect the output, are key steps in improving it. A practitioner may want to ask questions such as: *How would changes to a data point affect my model’s prediction? Does my model perform differently for various groups—for example, historically marginalized people? How diverse is the dataset I am testing my model on?*
We present the What-If Tool (WIT), an open-source, model-agnostic interactive visual tool for model understanding. Running the tool requires only trained models and a sample dataset. The tool is available as part of TensorBoard, the front-end for the TensorFlow [@tensorflow] machine learning framework; it can also be used in Jupyter and Colaboratory notebooks.
WIT supports iterative “what-if” exploration through a visual interface. Users can perform counterfactual reasoning, investigate decision boundaries, and explore how general changes to data points affect predictions, simulating different realities to see how a model behaves. It supports flexible visualizations of input data and of model performance, letting the user easily switch between different views.
An important motivation for these visualizations is to enable easy slicing of the data by a combination of different features from the dataset. We refer to this as *intersectional analysis*, a term we chose in reference to Crenshaw’s concept of “intersectionality” [@crenshaw2018demarginalizing]. This type of analysis is critical for understanding issues surrounding model fairness investigations [@pmlr-v81-buolamwini18a].
The tool supports both local (analyzing the decision on a single data point) and global (understanding model behavior across an entire dataset) model understanding tasks [@DoshiKim2017Interpretability]. It also supports a variety of data and model types.
This paper describes the design of the tool, and walks through a set of scenarios showing how it has been applied in practice to analyze ML systems. Users were able to discover surprising facts about real-world systems–often systems that they had previously analyzed with other tools. Our results suggest that supporting exploration of hypotheticals is a powerful way to understand the behavior of ML systems.
Related Work
============
WIT relates directly to two different areas of research: ML model understanding frameworks and flexible visualization platforms.
Model understanding frameworks
------------------------------
The challenge of understanding ML performance has inspired many systems (see [@hohman2018] for a sample survey, or [@patel2008investigating] for a general analysis of the space). Often the goal has been to illuminate the internal workings of a model. This line of work has been especially common in relation to deep neural networks, whose workings remain somewhat mysterious, e.g. Torralba’s DrawNet [@drawnet] or Strobelt’s LSTMVis [@DBLP:journals/corr/StrobeltGHPR16].
WIT, in contrast, falls into the category of “black box” tools; that is, it does not rely on the internals of a model, but is designed to let users probe only inputs and outputs. This limitation reflects real-world constraints (it is not always possible to access model internals), but also means that the tool is extremely general in application.
Several other systems have taken a black-box approach. Uber’s Manifold [@Zhang_2019], a tool that has been used for production ML, offers some of the same functionality as WIT. For instance, it features a sophisticated set of visualizations that allow comparison between two models. While Manifold has interesting preset visualizations, our tool implements a flexible framework that can be configured to reproduce the same kind of display as well as supporting many other arrangements through different configurations. Many other systems are highly specialized, focusing on just one aspect of model understanding. EnsembleMatrix [@2009-ensemblematrix] is designed for comparing models that make up ensembles, while RuleMatrix [@DBLP:journals/corr/abs-1807-06228] attempts to explain ML models in terms of simple rules.
One of the systems closest in spirit to WIT is ModelTracker [@modeltracker], which provides a rich visualization of model behavior on sample data, and allows users easy access to key performance metrics. Like our tool, ModelTracker surfaces discrepancies between classification results on similar points. WIT differs from ModelTracker in several respects. Most importantly, our tool has a strong focus on testing hypothetical outcomes, intersectional analysis [@crenshaw2018demarginalizing], and machine learning fairness issues. It is also usable as a standalone tool which can be applied to third-party systems.
Another similar system is Prospector[@prospector], a tool that enables visual inspection of black-box models by providing interactive partial dependence diagnostics. Like WIT, Prospector allows the user to drill down into specific data points to manipulate feature values to determine their effect on model predictions. But while Prospector relies on the orthogonality of input features for single-class predictive models, WIT affords intersectional analysis of multiple, potentially correlated or confounded features.
Other similar systems include another from Microsoft, GAMut [@gamut-a-design-probe-to-understand-howdata-scientists-understand-machine-learning-models], which allows rich testing of hypotheticals. By design, however, GAMut aims only at the analysis of a particular type of machine learning algorithm (generalized additive models). Another system, iForest [@zhaoiforest], has capabilities similar to those of WIT, but is solely for use with random forest models.
One significant motivation for our tool comes from the desire to generalize prior visualization work on model understanding and fairness by Hardt, Viégas & Wattenberg [@hardt2016equality]. A key feature of WIT is that it can calculate ML fairness metrics on trained models. Many current tools offer a similar capability: IBM AI Fairness 360 [@aif360-oct-2018], Audit AI [@auditai], and GAMut [@gamut-a-design-probe-to-understand-howdata-scientists-understand-machine-learning-models]. A distinguishing element of our tool is its ability to interactively apply optimization procedures to make post-training classification threshold adjustments to improve those metrics.
Flexible visualization platform
-------------------------------
A key element of WIT, used in the Datapoint Editor tab described in more detail below, is the Facets Dive [@facets] visualization. This component allows users to create custom views of input data and model results, with an emphasis on exploring the intersection of multiple attributes.
The Facets Dive visualization is related to several previous systems. Its ability to rapidly switch between encodings for X-axis, Y-axis, and color may be viewed as a simplified version of some of the functionality in Tableau’s offering [@tableau]. It also relates to Microsoft’s Pivot tool [@pivot], both for this same ability and the use of smooth animations to help users understand the transitions between encodings.
Unlike these purely generalist tools, however, Facets Dive is tightly integrated into WIT, and designed for data exploration related to ML. These design choices range from details (e.g., picking defaults that are likely to be helpful to typical ML practitioners) to global architectural decisions. Most importantly, Facets Dive is based entirely on local, in-memory storage and calculation. This protects sensitive data (which may be access restricted, so that sending it to a remote server for analysis is impossible–a common issue with ML data sets). It also allows smooth exploration, at the cost of placing a practical limit on data size.
Background and Overall Design
=============================
We built WIT to empower a larger set of users to probe ML systems, allowing non ML-experts to engage with the technology in an effective way. At first, we had a fairly broad audience in mind, including data journalists, activists, and civil society groups among others. However, after testing the tools with a variety of users, it became clear that given some of the technical prerequisites (having a trained model and a dataset), our audience ended up being more technical. In practice, we restricted ourselves to those with some existing ML experience such as CS students, data scientists, product managers, and ML practitioners.
Initially, we built a simple proof-of-concept application in which users could interact with a dataset and a trained model to edit datapoints and visualize inference results on the TensorBoard platform. Over the course of 15 months, we conducted internal and external studies and designed WIT based on the feedback we received. Our internal study recruited four teams and eight individuals within our company who were already in the process of building TensorFlow models and utilizing TensorBoard to monitor them. Participants’ use cases represented a variety of example formats (numerical, image-based and text-based), model types (classification and regression) and tasks (predictions and ranking). This evaluation consisted of three parts. First, we evaluated the setup of the proof-of-concept application with each participant’s model and data. Here, we provided participants with a set of instructions and evaluated the technical suitability and compatibility of model and examples. Once participants had a working proof-of-concept with their own model and data, we conducted a think-aloud evaluation in which participants freely explored their model’s performance using the system. Finally, participants interacted with the proof-of-concept over a period of two to three weeks in their daily jobs, following which we conducted a semi-structured interview. Feedback from each study was incorporated before the next usability study.
Once WIT was open sourced, we ran an external workshop with five novice machine learning users who provided their own data that they wished to model and evaluate. We introduced participants to WIT through a discussion on model understanding and fairness explorations using interactive demos. We then observed how participants used WIT in the context of their models and data. We concluded the workshop with a discussion reflecting on their use of WIT and a post-workshop offline survey to capture their overall expectations and experience.
A second external WIT workshop was held at the Institute for Applied Computational Science at Harvard University. This workshop was open to the public and was attended by over 70 members of the local data science and machine learning community to maximize diversity of the participant pool. Following a brief introduction to WIT, participants were pointed to demos that they could run using Colaboratory notebooks, along with prompts for investigations. Participants queried the WIT team on the capabilities available and any confusion they faced. Following the workshop, participants were sent an optional offline survey.
The application, as described in this paper, benefits from these multiple rounds of user feedback.
User Needs {#requirementAnalysis}
----------
Usability studies conducted with early versions of WIT identified a variety of requirements and desired features, which we distilled into five user needs:
**N1. Test multiple hypotheses with minimal code.** \[N1\] Users should be able to interact with a trained model through a graphical interface instead of having to write custom code. The interface should support rapid iteration and exploration of multiple hypotheses. This user need corroborates results from [@patel2008investigating], which suggests that tools for ML practitioners should support (1) an iterative and exploratory process; (2) comprehension of the relationships between data and models; and (3) evaluation of model performance in the context of an application. In the same vein, while visualizing model internals would be useful for hypothetical testing, this would require users to write model-specific code which was generally less favored during usability studies.
**N2. Use visualizations as a medium for model understanding.**\[N2\] Understanding a model can depend on a user’s ability to generate explanations for model behavior at an instance-, feature- and subgroup-level. Despite attempts to use visualizations for generating these explanations via rules [@DBLP:journals/corr/abs-1807-06228], analogies, and layer activations [@cnnvis], most approaches run into issues of visual complexity, the need to support multiple exploratory workflows, and reliance on simple, interpretable data for meaningful insights [@krause2017workflow; @cnnvis; @DBLP:journals/corr/abs-1807-06228; @Zhang_2019]. WIT should be able to provide multiple, complementary visualizations through which users can arrive at and validate generalized explanations for model behavior locally and globally.
**N3. Test hypotheticals without having access to the inner workings of a model.** \[N3\] WIT should treat models as black boxes to help users generate explanations for end-to-end model behavior using hypotheticals to answer questions such as *“How would increasing the value of age affect a model’s prediction scores?”* or *“What would need to change in the data point for a different outcome?”*. Hypotheticals allow users to test model performance on perturbations of data points along one or more specified dimensions. Without access to model internals, explanations generated using hypotheticals remain model-agnostic and generalize to perturbations. This increases explanation and representation flexibility, particularly when the models being evaluated are very complex and therefore cannot be meaningfully interpretable [@Ribeiro:2016:WIT:2939672.2939778]. Hypotheticals are particularly potent when testing conditions of model inference results for new conjectures and what-if scenarios.
**N4. Conduct exploratory intersectional analysis of model performance.** \[N4\] Users are often interested in subsets of data on which models perform unexpectedly. For example, Buolamwini and Gebru [@pmlr-v81-buolamwini18a] examined accuracy on intersectional subgroups instead of features alone and revealed that three commercially available image classifiers had the poorest performance on the darker female subgroup in test data. Even if a model achieves high accuracy on a subgroup, the false positive and false negative rates can be wildly different, leading to real-world consequences [@compas]. Since there are multiple ways to define these subsets, visualizations for exploratory data analysis should retain flexibility, scalability and customization for input data types, model task and optimization strategies. Complementary views at instance-, feature-, subset- and outcome-levels are particularly insightful when comparing the performance of multiple models on the same dataset.
**N5. Evaluate potential performance improvements for multiple models.** \[N5\] In model development, it can be hard to track the impact of changes, such as changing a classification threshold. For example, optimizing thresholds for demographic parity may ensure that all groups receive an equal fraction of “advantaged” classification, while putting sub-groups with lower true positive rates at a disadvantage [@hardt2016equality]. One may want to test different optimization strategies for a variety of costs locally and globally before changing the training data or model hyperparameters to improve performance. Users should be able to interactively debug model performance by testing strategies that mitigate undesirable model behaviors in an exploratory space on smaller datasets without needing to take the analysis offline. A closely related way that users test changes to a model is by comparing the performance of a *“control”* model to *“experimental”* versions on benchmarks.
Overall Design {#visualizationInterface}
--------------
WIT is part of the TensorBoard application [@tensorboard], as a code-free, installation-free dashboard. It is also available as a standalone notebook extension for Jupyter and Colaboratory notebooks.
In TensorBoard mode, users can configure the tool through a dialog box to load a dataset from disk and to query a model being served through TensorFlow Serving. In notebook mode, setup options are provided through the Python invocation that displays the tool. In addition to being useful for model understanding efforts on a single model, WIT can be used to compare results across two models for the same dataset. In this case, all standard features apply to both models for meaningful comparisons.
Out-of-the-box support, with no need for any custom coding by the user in alignment with user need **N1**, is provided for those making use of the TensorFlow Extended (TFX) pipeline [@tfx], through WIT’s support of TFRecord files, TensorFlow Estimator models, and model serving through TensorFlow Serving. In notebook mode, the tool can be used with models and datasets outside of this paradigm, through use of a user-provided Python function for performing model prediction.
WIT’s interface consists of two main panels: a visualization panel on the right and a control panel on the left. The control panel has three tabs designed to support exploratory workflows: Datapoint Editor, Performance + Fairness, and Features.
Task-Based Functionality {#functionality}
========================
Here we discuss the implementation of WIT, and how it builds on user needs outlined in \[requirementAnalysis\], in the context of different user tasks.
We utilize the UCI Census dataset [@Dua2019] throughout this section as a running example to describe and contextualize WIT’s features. We analyze two models trained on the UCI dataset and prediction task [@Dua2019] which classify individuals as high ($\geq$\$50K/year) or low income from their census data. *Model 1* is a multi-layer neural network and *Model 2* is a simple linear classifier. We test both models on 500 data points from the UCI test dataset[^1]. In particular, we examine a selected data point, **D** (highlighted in yellow in Figure \[fig:teaser\]), where the two models disagree on the prediction. We explore how the *capital-gain* feature disparately influences the prediction for the two models.
Exploring Your Data {#exploringYourData}
-------------------
In this section we describe how users can explore their data as well as perform customizable analyses of their data and model results.
### Customizable Analysis {#customizableAnalysis}
The Datapoint Editor tab of WIT shows all loaded data points and their model inference values. Data points can be binned, positioned, colored and labeled by values of any of their features. Each data point is represented by a circle or a thumbnail image. The visualization allows users to zoom and pan around the data points, while describing details such as the categorical color scheme in a legend.
To support interactive evaluations of model results on intersectional subgroups, users can create visualizations using performance metrics from the model, in addition to combinations of the features in the dataset and model inference values. With respect to performance metrics, in classification models users can color, label or bin data points by the correctness of a classification. In regression models, users can color, label or bin data points by inference error, inference mean error, or inference absolute error, plus position data points in a scatterplot by those same error measures.
The Datapoint Editor enables users to create custom visualizations from both the dataset and model results for a variety of different model understanding tasks. Some useful visualizations include:
- **Confusion matrices** \[Figure \[fig:confMatrix\]\] for binary and multi-class classifiers. In the figure, binning is set to the ground truth on the X-axis and model prediction on the Y-axis. Datapoints are colored by the correctness of the prediction.
- **Histograms and bar or column charts** \[Figure \[fig:ageHistogram\]\] for categorical and numeric features. For numeric features, binning is done with uniform-width bins and the number of bins is adjustable, defaulting to 10. In this figure, datapoints are colored by their model 1 classification (positive or negative).
- **2-dimensional histograms/bar/column charts** \[Figure \[fig:2dHistogram\]\] with binning on the X-axis and Y-axis by separate features.
- **2D histogram of prediction error** \[Figure \[fig:manifoldStyle\]\] of data points, mimicking performance visualizations used in the Manifold system [@Zhang_2019]. Here, datapoints are colored by sex.
- **Small multiples of scatterplots** \[Figure \[fig:smallMultiples\]\], created by binning by one feature or the intersection of two features, and creating scatterplots within those bins using numeric features.
In our Census example, since we compare two binary classification models, the visualization defaults to a scatterplot that compares scores for the positive classes for the linear classifier and the deep neural network on the Y- and X- axis respectively (See Figure \[fig:teaser\]). Data points that fall along a diagonal represent individuals for whom the models tend to agree. Those further away from the diagonal indicate disagreement between the two models.
### Features Analysis: Dataset Summary Statistics
Understanding the data used to train and test a model is critical to understanding the model itself. The Features tab of WIT builds on the Facets Overview [@facets] visualization to automatically provide users with summary statistics and charts of distributions of all features in the loaded dataset. A user may find these statistics and visualizations particularly useful when validating explanations originating from model performance on subsets of the loaded dataset, in line with user need **N2**. For example, the NIST gender report [@NIST] evaluates the impact of ethnicity on gender classification, using country of origin as a proxy for ethnicity. However, Africa and the Caribbean, which have significant Black populations are not represented in the 10 locations used in the report. By visualizing the distributions of features, users can gain a sense for unique feature values which may not be discernible through dataset metadata alone.
For numeric features, minimum, maximum, mean, and standard deviation are displayed along with a 10-bin equal width histogram describing the distribution of feature values. For categorical features, statistics such as number of unique values and most frequent value are displayed along with a visualization of the distribution of these values. Users can toggle between reading a distribution as a chart or as a table. Features with few distinct values ($\leq$20) are displayed as histograms; for others, to avoid clutter, WIT shows a cumulative distribution function as a line chart. Exact values are provided on tooltips, and users can control the size of the charts and switch between linear or log scale.
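To make the above concrete, a minimal sketch of the kind of per-feature summary involved (illustrative pandas code, not the Facets Overview implementation):

```python
import numpy as np
import pandas as pd

def feature_summaries(df, max_categorical=20):
    """Per-feature summary statistics in the spirit of the Features tab."""
    out = {}
    for col in df.columns:
        s = df[col]
        if pd.api.types.is_numeric_dtype(s):
            counts, _ = np.histogram(s.dropna(), bins=10)   # 10 equal-width bins
            out[col] = {"min": s.min(), "max": s.max(), "mean": s.mean(),
                        "std": s.std(), "histogram": counts.tolist()}
        else:
            vc = s.value_counts()
            out[col] = {"num_unique": s.nunique(), "top_value": vc.index[0],
                        "show_as_histogram": s.nunique() <= max_categorical}
    return out
```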
WIT offers multiple ways to sort features. For instance, a user may sort by non-uniformity to see features that have severely unbalanced distributions, or sort by number of zeros or missing values to see features that have abnormally large counts of empty feature values.
For our Census example, Figure \[fig:overview\] shows a huge imbalance in the distribution of values for capital gains, with 90% of the values being 0, but having a small set of values ranging all the way up to 100,000. This helps explain the results in section \[investigatingWhatIfHypotheses\], where the neural network model learns to essentially ignore zero values for capital gains.
Investigating What-If Hypotheses {#investigatingWhatIfHypotheses}
--------------------------------
Here we show how users can generate and test hypotheses about how their model treats data by identifying counterfactuals and observing partial dependence plots.
### Data Point Editing {#dataEditing}
While it has been shown that in some circumstances providing model explanations (answering “Why?” or “Why Not?” questions) can increase user performance and trust[@lim_2009], we’ve found–like others[@prospector]–that one of the most powerful ways to analyze a model is in terms of *hypotheticals*: probing how an output would change based on carefully chosen input modifications (see section \[mitStudentProject\] for one case study). For example, in the context of a model making bank loan determinations, one might ask questions such as, “Would person X have gotten a loan if she were male instead of female?” or “How much would a small increase in person X’s income have affected the result?”
WIT makes it easy to edit data and see the effect on the model’s inferences. The user can select a data point in the Datapoint Editor tab and view its feature values and results of model predictions. WIT shows prediction scores for each available model, either as regression scores or as class scores for the top returned classes in classification models.
To conduct iterative what-if analyses, the user can edit, add, or delete individual feature values or entire features within an instance in the editor module and see the effect those changes have on the model prediction for that datapoint. Additionally, new features can be added to the datapoints solely for visualization purposes; for example, tagging a set of examples as “high priority” and then visualizing separate confusion matrices for normal and “high priority” datapoints. Image-type features can be edited by replacing existing images. Edited data points are then re-inferred by the model and the inference module is updated with new scores, as well as the delta and direction of the change. Users can duplicate or delete selected data points, useful when comparing multiple small changes in a single data point to the original. For our Census example, Figure \[fig:datapointEditing\] shows editing values for the yellow-bordered, selected data point from Figure \[fig:teaser\]. This data point initially has opposite classifications from the two models. Editing this data point to increase its capital gains and re-inferring shows that the neural network model then changes the prediction to the positive class, so it now matches the prediction from the linear model, which predicted the positive class both before and after the edit. We will continue to explore the effect of changing capital gains on this data point in the sections \[counterfactuals\] and \[pdp\].
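The core of this workflow is simply "edit, re-infer, compare". A sketch, where `predict_fn`, `model_predict` and `person` are placeholders for whatever inference function and data point the user has loaded:

```python
def what_if(predict_fn, datapoint, feature, new_value):
    """Edit one feature of a data point, re-run inference, and report the change."""
    original = predict_fn([datapoint])[0]             # e.g. positive-class score
    edited = dict(datapoint, **{feature: new_value})  # copy with the single edit applied
    updated = predict_fn([edited])[0]
    return updated, updated - original                # new score and its delta

# Hypothetical usage for the Census example: raise capital-gain and re-infer.
# score, delta = what_if(model_predict, person, "capital-gain", 20000)
```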
![Datapoint editor view showing an edited value for “capital-gain” (A), which the user has changed from 3,411 to 20,000. After rerunning inference (B), this edit causes the score for label 1 ($\geq$50K) to change from 0.336 to 0.991, essentially flipping the prediction for this datapoint.[]{data-label="fig:datapointEditing"}](datapointEdited2.png){width="\linewidth"}
### Counterfactual Reasoning {#counterfactuals}
It can also be helpful to explore an inverse type of hypothetical question. Consider, for the bank loan example, that for a given person the model predicts that the loan would be denied. A user of WIT who is interrogating this model might ask, *“What would have to change so that the person X would get the loan?”*. In cases like this there are often many possible answers, but typically practitioners are most interested in comparing differences to data points upon which the model predicted a different outcome. In the loan example, this means answering the question, *“Who is the person most similar to person X who got a loan?”*
Such data points have been termed *counterfactual examples* by Wachter [@wachter2017counterfactual] (Note that this terminology can cause some confusion, since it is not identical in usage to that of Judea Pearl and collaborators [@pearl2014probabilistic]). Our tool provides users a way of automatically identifying such counterfactual examples for any selected data point.
To sort possible counterfactual examples for a given selected data point, without the need for the user to provide their own code or specification, the tool uses a simple distance metric which aggregates the differences between data points’ feature values across all input features. To do this, it treats numeric and categorical feature values slightly differently to generate distances between data points:
- **Numeric features:** Absolute value of the difference between the two data points divided by the standard deviation of that feature across the entire dataset.
- **Categorical features:** Data points with the same value have a distance of 0, and other distances are set to the probability that any two examples across the entire dataset would share the same value for that feature.
To compute the total distance between two data points, the tool individually assesses the distance for each numeric and categorical feature, and then aggregates these using either an $L^1$ or $L^2$ norm, as chosen by the user. The default measure uses $L^1$ norm as it is a more intuitive measure to non-expert users. We provide the option to use $L^2$ norm as early users requested this flexibility to help them further explore possible counterfactuals.
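A sketch of this distance computation (our own illustrative NumPy/pandas code, not WIT's internal implementation):

```python
import numpy as np
import pandas as pd

def counterfactual_distances(df, idx, numeric_cols, categorical_cols, norm="L1"):
    """Distance from row `idx` to every row of `df`, per the metric described above."""
    target = df.loc[idx]
    per_feature = []
    for col in numeric_cols:
        # numeric: |difference| in units of the feature's standard deviation
        per_feature.append(np.abs(df[col] - target[col]).to_numpy() / df[col].std())
    for col in categorical_cols:
        # categorical: 0 if equal, else P(two random rows share a value of this feature)
        p_match = float((df[col].value_counts(normalize=True) ** 2).sum())
        per_feature.append(np.where(df[col] == target[col], 0.0, p_match))
    D = np.vstack(per_feature)                            # features x rows
    dist = D.sum(axis=0) if norm == "L1" else np.sqrt((D ** 2).sum(axis=0))
    dist[df.index.get_loc(idx)] = np.inf                  # exclude the point itself
    return pd.Series(dist, index=df.index)

# The nearest counterfactual is then the closest point with the opposite prediction:
# d = counterfactual_distances(X, idx=7, numeric_cols=num, categorical_cols=cat)
# nearest = d[predictions != predictions[7]].idxmin()
```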
For our Census example, Figure \[fig:teaser\] shows the currently selected, negatively classified data point (blue dot with yellow border), and the most similar, positively classified, counterfactual data point (red dot with green border), for classifications from the neural network model. There are only a few differences between the feature values of the selected data point and its counterfactual, shown in the left-hand panel of Figure \[fig:teaser\], highlighted in green. Specifically, the counterfactual has a slightly lower age, higher hours worked per week, and a reported capital gain of \$0 (as opposed to \$3,411). That last value might seem surprising, as one might expect that having a significantly lower value for capital gains would indicate a lower income (negative classification) than non-zero capital gains would. Especially considering that in section \[dataEditing\], it was shown that raising the capital gains for the selected data point would also cause the neural network model to change its classification to the positive class.
In addition to inspecting the most similar counterfactual example for a given selected data point, the user can elevate the $L^1$ or $L^2$ distance measure to become a feature in its own right. Such a distance feature can then be used in custom visualizations for more nuanced counterfactual reasoning, such as a scatterplot in which one axis represents the similarity to a selected data point.
### Partial Dependence Plots {#pdp}
Often practitioners have questions about the effect of a feature across an entire range of values. To address these questions, the tool offers partial dependence plots, which show how model predictions change as the value of a specific feature is adjusted for a given data point.
The partial dependence plots are line charts for numeric features and column charts for categorical features. In the numeric case, we visualize class scores (or regression scores for regression models) for the positive class (or top-N specified classes, when there are more than two classes) at 10 equally distributed points between the minimum and maximum observed values for a feature. Users can also manually specify custom ranges. For categorical features, class scores or regression scores are displayed as column or multi-series column charts, where the X-axis consists of the most common values of the given feature across the loaded dataset.
No partial dependence plots are generated for images or for features with unique values (typically these represent something like IDs, so the chart would be difficult to read and not meaningful). When comparing multiple models, inference results for both models are shown on the same plot. In each plot, we indicate original feature values and classification thresholds for all models wherever applicable. Users can hover for tooltips that contain legends and exact values along the plots.
WIT also provides instance-agnostic *global* partial dependence plots, which are computed by averaging inference results on all data points in the dataset. This is particularly useful when checking if the relationship between a feature and model’s performance is consistent locally and globally.
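Both the local and the global plots reduce to a simple sweep over feature values; a sketch, assuming `predict_fn` returns the positive-class score for each example passed to it:

```python
import numpy as np

def partial_dependence(predict_fn, datapoint, feature, vmin, vmax, n=10):
    """Local PD curve: scores as `feature` sweeps n values in [vmin, vmax]."""
    grid = np.linspace(vmin, vmax, n)
    probes = [dict(datapoint, **{feature: v}) for v in grid]
    return grid, np.asarray(predict_fn(probes))

def global_partial_dependence(predict_fn, dataset, feature, vmin, vmax, n=10):
    """Global PD curve: the local curves averaged over every data point."""
    grid = np.linspace(vmin, vmax, n)
    curves = [partial_dependence(predict_fn, d, feature, vmin, vmax, n)[1]
              for d in dataset]
    return grid, np.mean(curves, axis=0)
```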
![Partial Dependence (PD) plot view. Top: categorical PD plot for “education level.” Bottom: numeric PD plot for “capital gains” feature. The neural network has learned that low but non-zero capital gains are indicative of lower income, high capital gains are indicative of high income, but zero capital gains is not necessarily indicative of either. In both plots, the horizontal gray line represents the positive classification threshold for the models.[]{data-label="fig:pd"}](joint-pdp.png){width="\linewidth"}
For our Census example, Figure \[fig:pd\] depicts partial dependence plots for the *education* and *capital-gain* features. We find that the inference score and capital-gain feature have a monotonic relationship in the linear model (as expected), whereas the neural network has learned a more nuanced relationship, where both having zero capital gains and having higher capital gains are indicative of high income rather than having a small amount of capital gains. This is consistent with the observed counterfactual behavior, discussed in \[counterfactuals\].
Evaluating Performance and Fairness {#evaluatingPerformanceAndFairness}
-----------------------------------
To meet needs **N4** and **N5**, WIT provides tools for analyzing performance, available under a “Performance + Fairness” tab. The tools can be used to analyze aggregate model performance as well as compare performance on slices of data. Following the same layout paradigm as the Datapoint Editor tab, users can interact with configuration controls on the left side panel to specify ground truth features, select features to slice by, and set cost ratios for performance measures; the resulting performance table appears on the right side. An initial view of the performance table offers users performance measures tailored to the type of prediction task. Users can slice their data into subgroups by a single feature available in the dataset, or by the intersection of two features, in order to enable intersectional performance comparisons. In our UCI Census example, users could slice by both sex and race to view intersectional performance, similar to the type of analysis done in [@pmlr-v81-buolamwini18a]. Slicing by features calculates these performance measures for subgroups, which are defined by the unique values in the selected features.
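A rough sketch of such intersectional slicing for a binary classifier is shown below; `slice_accuracy` and its arguments are hypothetical names, and the actual performance table reports more measures per slice than accuracy alone.

```python
from collections import defaultdict

def slice_accuracy(points, labels, predictions, slice_features):
    # Group data points by the tuple of selected feature values and report
    # the size and accuracy of each resulting slice.
    slices = defaultdict(list)
    for point, y, y_hat in zip(points, labels, predictions):
        key = tuple(point[f] for f in slice_features)
        slices[key].append(y == y_hat)
    return {key: {"count": len(hits), "accuracy": sum(hits) / len(hits)}
            for key, hits in slices.items()}

# e.g., intersectional slicing as in the UCI Census example:
# slice_accuracy(test_points, y_true, y_pred, ["sex", "race"])
```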
### Performance Measures
When applied, slices are initially sorted by their data point count and can also be sorted alphabetically or by performance measures such as accuracy for a classification model or mean error for a regression model. Table \[tab:performanceMeasuresTable\] describes the different performance measures available for the supported model types.
**Model Type** **Performance Measures**
---------------------------- ---------------------------
Binary classification Accuracy
False positive percentage
False negative percentage
Confusion matrix
ROC curve
Multi-class classification Accuracy
Confusion matrix
Regression Mean error
Mean absolute error
Mean squared error
: Performance measures reported for different model types in the Performance + Fairness tab.[]{data-label="tab:performanceMeasuresTable"}
For binary classification models, receiver operating characteristic (ROC) curves plot the true positive rate and false positive rate at each possible classification threshold for all models in a single chart. For both binary and multiclass classification models, confusion matrices are provided for each model. Error counts are emphasized graphically via color opacity, as shown in Figure \[fig:perf\].
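For concreteness, a minimal sketch of how the ROC points for a single model could be derived from its scores on the loaded dataset is given below (the tool’s own implementation may differ in detail).

```python
import numpy as np

def roc_points(scores, labels):
    # One (false positive rate, true positive rate) pair per candidate threshold.
    scores, labels = np.asarray(scores), np.asarray(labels)
    n_pos = max(np.sum(labels == 1), 1)
    n_neg = max(np.sum(labels == 0), 1)
    pts = [(0.0, 0.0)]  # threshold above the highest score
    for t in np.unique(scores):
        pred = scores >= t
        tpr = np.sum(pred & (labels == 1)) / n_pos
        fpr = np.sum(pred & (labels == 0)) / n_neg
        pts.append((fpr, tpr))
    return sorted(pts)
```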
In the rest of this section, we explain details specific to binary classification, such as cost ratios and positive classification threshold values.
### Cost Ratio
ML systems can make different kinds of mistakes: false positives and false negatives. The cost ratio determines how “expensive” these different kinds of errors might be. In a sense, by setting the cost ratio, users decide how conservative their system should be. For example, in a medical context, it might be preferable for a system to be rather sensitive, so that it is much more likely to give false positives (i.e., a screening test reports cancer when there is no cancer) than false negatives (e.g., the screening test misses signs of cancer in a patient).
In WIT, users can change the cost ratio, in order to automatically find the optimal positive classification threshold for a model given the results of model inference on the loaded dataset, in line with user needs **N1** and **N5**. The positive classification threshold is the minimum score that a binary classification model must give for the positive class before the model labels a data point as being in the positive class, as opposed to the negative class.
When no cost ratio is specified, it defaults to 1.0, at which false positives and false negatives are considered equally undesirable. When optimizing the positive classification threshold with this default cost ratio, WIT will find the threshold which achieves the highest accuracy on the loaded dataset. Upon setting a threshold, it displays the performance measures when using that threshold to classify data points as belonging to the positive class or the negative class.
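The following sketch shows one way such a threshold search could work. The convention that the cost ratio weights false negatives relative to false positives, and the use of the observed scores as candidate thresholds, are assumptions for illustration only.

```python
import numpy as np

def optimal_threshold(scores, labels, cost_ratio=1.0):
    # Minimize (false positives + cost_ratio * false negatives);
    # with cost_ratio == 1.0 this is equivalent to maximizing accuracy.
    scores, labels = np.asarray(scores), np.asarray(labels)
    best_t, best_cost = 0.5, np.inf
    for t in np.unique(np.concatenate(([0.0, 1.0], scores))):
        pred = scores >= t
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        cost = fp + cost_ratio * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```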
For our Census example, Figure \[fig:perf1\] shows the performance of the two models, broken down by sex of the people represented by the data points, with the positive classification thresholds set to their default values of 0.5. It can be seen that the accuracy is higher for women and, from the first columns of the confusion matrices, that there is a large disparity between men and women in the percentage of data points predicted as being in the positive class. This disparity also exists between the two sexes in the actual ground truth classes of the test dataset, as shown in the first rows of the confusion matrices.
### Thresholds and Fairness Optimization Strategies
In addition to evaluating metrics such as accuracy and error rates for specified slices in the performance table, users can explore different fairness optimization strategies. Classification thresholds can be modified individually for each slice, so that the positive classification threshold differs depending on the slice a data point belongs to; the performance table and visualization instantly update to reflect the resulting changes in performance.
Applying a fairness optimization strategy through the tool automatically updates the classification thresholds for each slice individually in order to satisfy a particular fairness definition. Again, the results can be seen across the performance table with updates to thresholds, performance metrics and visualizations. Table \[tab:fairnessDefinitions\] describes the fairness optimization strategies for binary classification available in the tool.
In our Census example (Figure \[fig:perf2\]), when optimizing the models’ classification thresholds for demographic parity between sexes, the threshold for male data points is raised and the threshold for female data points is lowered. This adjustment accounts for the large imbalance in the UCI Census dataset between sexes, where men are much more likely to be labeled as high income than women.
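As an illustration, the sketch below adjusts per-slice thresholds toward demographic parity, i.e. equal positive-prediction rates across slices. The choice of the common target rate (here, the overall positive rate at a 0.5 threshold) is our assumption; the tool’s strategies may pick it differently and also fold in the cost ratio.

```python
import numpy as np

def demographic_parity_thresholds(scores_by_slice, target_rate=None):
    # scores_by_slice maps a slice name to the model scores of its data points.
    grid = np.linspace(0.0, 1.0, 101)
    if target_rate is None:
        all_scores = np.concatenate([np.asarray(s) for s in scores_by_slice.values()])
        target_rate = float(np.mean(all_scores >= 0.5))
    thresholds = {}
    for name, scores in scores_by_slice.items():
        scores = np.asarray(scores)
        rates = np.array([np.mean(scores >= t) for t in grid])
        # Threshold whose positive-prediction rate is closest to the target.
        thresholds[name] = float(grid[np.argmin(np.abs(rates - target_rate))])
    return thresholds
```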
Comparing Two Models {#comparingModels}
--------------------
It is often desirable to compare the performance of one model against another. For instance, a practitioner may make changes to their datasets or models based on the feedback they receive from our tool, train a new model, and compare it to the old one to see if the fairness metrics have improved. To facilitate this workflow, all performance measures and visualizations, such as ROC curves and partial dependence plots, support comparison of two models. For example, when the tool is used with two models, it immediately plots an inference score comparison between them, allowing one to pick individual data points that differ in the scores and analyze what caused this difference. One can use partial dependence plots to compare whether the new model is more or less sensitive to a feature of interest than the old model.
Data Scaling {#DataScaling}
------------
Here we provide approximate numbers on how many data points can be loaded into WIT for a standard laptop from the last several years. For tabular data with 10-100 numeric or string inputs (such as the UCI Census dataset), WIT can handle $\sim 100k$ points. For small images (78x64 pixels), WIT can load $\sim 2000$ points. In general, the number of features and the size of each feature are the main factors for memory constraints. For example, it would be possible to load more images using smaller image sizes.
Case studies
============
We present three case studies to show how WIT addresses user needs and can provide insights into ML model performance. The first two cases come from a large technology company. The third involves a set of undergraduate engineering/computer science students analyzing models to gain insights about a particular dataset.
Regression Model from an ML Researcher
--------------------------------------
An ML researcher at a large technology company used our tool to analyze a regression model used in a production setting. In a post-use interview, they said they wanted to better understand “how different features affect the predictions” (aligning with user need **N3**). They loaded a random sample of 2,000 data points from the test dataset and the model used in production.
To understand how different features affect the model’s output, the researcher picked individual points from the Datapoint Editor tab, manually edited feature values, and then re-ran inference to see how regression values changed. After doing this a number of times, they moved over to partial dependence plots to further understand how altering each feature in a data point would change the model’s regression score, validating the tool’s ability to satisfy user need **N2**. They immediately noticed something odd: the partial dependence plot for a certain feature was completely flat for every data point they looked at. This led to the hypothesis that the model was never taking that feature into account during prediction–a surprise, since they expected this feature to have a significant effect on the final regression score for a data point.
With this new information, they investigated the code in their ML pipeline, and found a bug preventing that feature from being ingested by the model at serving time. WIT helped them uncover and fix an instance of training-serving skew that was unknowingly affecting their model’s performance. Note that this bug had existed for some time and had not been detected until this researcher used our tool to analyze their model.
Model Comparison by a Software Engineer
---------------------------------------
A software engineer at a large technology company used WIT to compare two versions of a regression model. The models predict a specific health metric for a medical patient, given a previous measurement of that metric, the time since that first measurement, and some other features. The first model was trained on an initial dataset and the second model was trained on a newer dataset which was generated with slightly different feature-building code to create the input features for the model to train and evaluate on.
The engineer used our tool to compare the performance of the two models. They noticed immediately through the Datapoint Editor visualization and inference results of selected data points that the first model’s predictions were almost always larger than the second model’s. They then plotted the target value of each test data point’s prediction against the initial health metric measurement across data from the training sets of both models. This exemplifies the tool’s capability to satisfy user need **N2**. For the first model’s training data, the target value was almost always larger than the initial measurement. They did not expect to see this relationship, and did not see it for data points from the second model’s dataset. An investigation uncovered a bug in the code that builds the input features for the first model, causing it to swap the first and last value of the measurements. In the engineer’s words, “before using the What-If Tool we didn’t suspect any bug because the same feature building code was used for all training/validation/test sets. So the error metrics were reasonable. The model was just solving a different problem.” The success of the tool in uncovering a previously-unnoticed issue directly relates to user need **N5**.
Binary Classification, Counterfactual and Fairness Metrics Exploration by University Students {#mitStudentProject}
---------------------------------------------------------------------------------------------
A group of MIT students working on a project for the *Foundations of Information Policy* course made extensive use of WIT. Their goal was to analyze the fairness of the Boston Police Department’s stop-and-frisk practices. They took the Boston Police Department Field Interrogation and Observation (FIO) Dataset [@bpd], which provides over 150,000 records of stop and frisk encounters from 2011 to 2015, and created a linear classifier model to emulate those decisions using a training set sampled from that dataset. This model could then be queried to reason about the department’s decision making process, beyond just looking at the raw data. Their model takes as input information about a police encounter, such as the age, race, gender, and criminal record of an individual being stopped by police, along with the date, time, and location of the encounter and the name of the officer, and outputs whether that person was searched or frisked (positive class) or not (negative class) during the encounter. After training a model to perform this classification, they used WIT to analyze the model on the held-out test set in order to reason about the department’s stop-and-frisk practices. By analyzing their model, as opposed to the raw dataset, the students were able to ask a variety of “what-if” questions, aligning with user need **N1**.
The group made extensive use of the Datapoint Editor tab to explore individual data points and perform counterfactual reasoning on them. They explored data points where the model’s positive classification score was close to 0.5, meaning the model was very unsure of which class the data point belonged to. They further tested these points by altering features about the data point and re-inferring to see how the prediction changed, both with manual edits and through the partial dependence plots.
The students were not originally planning on doing much analysis around the effect of the police officer ID on prediction, but by using the nearest counterfactual comparison tool, they quickly realized that predictions on many data points could have their classification flipped just by changing the officer ID. This was immediately apparent when seeing that on many data points the closest counterfactual was identical (ignoring encounter date and time) other than the officer ID.
They were able to conclude that although age, gender and race were important characteristics in predicting the likelihood of being frisked, as they had theorized, the most salient input feature was the officer involved in the encounter, which was something they did not anticipate. This use of WIT illustrates user need **N3**.
The students were also interested in analyzing their model for equality of opportunity for different slices of the dataset, probing for fairness across different groups. Using WIT’s Performance + Fairness tab, they sliced the dataset by the features they were interested in analyzing and observed the confusion matrices for the different groups when using the same positive classification threshold. The students then used different optimization strategies to check how the thresholds and model performance would change when adjusting the thresholds per slice to achieve different fairness constraints, such as equality of opportunity.
They concluded that in their model, certain groups of people were disadvantaged when it came to the chance of being frisked during a police encounter: *“Our equal opportunity analysis concluded that White people had the most advantage and Hispanic had slight advantage, while being Asian gave you significant disadvantage and being Black gave you slight disadvantage.”* This points to the tool successfully accomplishing user need **N4**.
Lessons from an iterative design process
========================================
Our foundational research revealed that WIT’s target users typically rely on cumbersome model-specific code to conduct their analyses. Our initial design focused on a single user journey in which users could–with no coding–edit a datapoint and re-run prediction. Over the course of 15 months, we added new interaction capabilities and refined the interface to better support other common tasks.
To that end, we streamlined WIT’s functionality into three core workflows. In addition to the original workflow of testing hypothetical scenarios, we supported (a) general sense-making around data and (b) evaluation of performance and fairness. A key challenge in designing interfaces for these tasks was that many users were encountering certain components (e.g., partial dependence plots) and concepts (e.g., counterfactuals) for the first time. To address this issue, we provided copious in-application documentation.
Feedback from users also led us to an interesting and counterintuitive design decision: to *reduce* certain types of interactivity. When working with controls in the Performance + Fairness view, we initially made a direct connection between classification threshold controls and the data points visualization. A change in the threshold would lead to immediate rearrangement of items in the visualizations–a type of tight linking that is conventionally viewed as effective design. However, our users told us they found the constant changes (as well as an increased latency) confusing. To address this issue, we replaced the data points visualization (in this tab) with the confusion matrices and ROC curves. This simpler and more stable view proved much more satisfactory.
Conclusion and Directions for Future Research
=============================================
We have presented the What-If Tool (WIT), which is designed to let ML practitioners explore and probe ML models. WIT enables users to analyze ML system performance on real data and hypothetical scenarios via a graphical user interface. The set of provided visualizations allows users to see a statistical overview of input data and model results, then zoom in to perform intersectional analysis. The tool lets users experiment with hypothetical conditions, with a variety of options that range from direct editing of data points to a novel type of automatic identification of counterfactual examples. It also provides capabilities for assessing and optimizing metrics related to ML fairness, again without any coding.
Real-world usage of the tool indicates that these features are valuable not only to machine learning novices, but also to experienced ML engineers, and can help highlight issues with models that would be otherwise hard to see. In practical use we have seen WIT uncover nontrivial, long-lived bugs, and provide practitioners with new insights into their models.
One natural future direction is to enhance the tool to make use of information about the internals of an ML model, when available. For instance, for models which represent differentiable functions, information about gradients can be informative about the saliency of different features [@sundararajan2017axiomatic] [@smilkov2017smoothgrad]. For feedforward neural networks, techniques such as TCAV [@kim2018tcav] can help lay users understand a model’s output.
A particularly important future direction is to decrease the level of ML and data science expertise necessary to use this type of tool. For example, some users have suggested automating the process of finding outliers and subsets of data on which a model underperforms. Additionally, some users have requested the ability to add their own model performance metrics and fairness constraints into the tool through the UI. These and other directions have the potential to greatly broaden the set of stakeholders able to participate in ML model understanding and fairness efforts.
[^1]: This particular demo, along with a link to the tool’s source code, can be found on the What-If Tool website (https://pair-code.github.io/what-if-tool/).
|
---
abstract: 'Evidence for orbital modulation of the very high energy (VHE) $\gamma$-ray emission from the high-mass X-ray binary and microquasar LS 5039 has recently been reported by the HESS collaboration. The observed flux modulation was found to go in tandem with a change in the GeV – TeV spectral shape, which may partially be a result of $\gamma\gamma$ absorption in the intense radiation field of the massive companion star. However, it was suggested that $\gamma\gamma$ absorption effects alone can not be the only cause of the observed spectral variability since the flux at $\sim 200$ GeV, which is near the minimum of the expected $\gamma\gamma$ absorption trough, remained essentially unchanged between superior and inferior conjunction of the binary system. In this paper, a detailed parameter study of the $\gamma\gamma$ absorption effects in this system is presented. For a range of plausible locations of the VHE $\gamma$-ray emission region and the allowable range of viewing angles, the de-absorbed, intrinsic VHE $\gamma$-ray spectra and total VHE photon fluxes and luminosities are calculated and compared to luminosity constraints based on Bondi-Hoyle limited wind accretion onto the compact object in LS 5039. Based on these arguments, it is found that (1) it is impossible to choose the viewing angle and location of the VHE emission region in a way that the intrinsic (deabsorbed) fluxes and spectra in superior and inferior conjunction are identical; consequently, the intrinsic VHE luminosities and spectral shapes must be fundamentally different in different orbital phases, (2) if the VHE luminosity is limited by wind accretion from the companion star and the system is viewed at an inclination angle of $i \gtrsim 40^o$, the emission is most likely beamed by a larger Doppler factor than inferred from the dynamics of the large-scale radio outflows, (3) the still poorly constrained viewing angle between the line of sight and the jet axis is most likely substantially smaller than the maximum of $\sim 64^o$ inferred from the lack of eclipses. (4) Consequently, the compact object is more likely to be a black hole rather than a neutron star. (5) There is a limited range of allowed configurations for which the expected VHE neutrino flux would actually anti-correlate with the observed VHE $\gamma$-ray emission. If hadronic models for the $\gamma$-ray production in LS 5039 apply, a solid detection of the expected VHE neutrino flux and its orbital modulation with km$^3$ scale water-Cherenkov neutrino detectors might require the accumulation of data over more than 3 years.'
author:
- Markus Böttcher
title: 'Constraints on the Geometry of the VHE Emission in LS 5039 from Photon-Photon Deabsorption'
---
\[intro\]Introduction
=====================
The recent detections of VHE ($E \gtrsim 250$ GeV) $\gamma$-rays from the high-mass X-ray binary jet sources LS 5039 with the High Energy Stereoscopic System [HESS; @aharonian05] and LS I +61$^o$303 with the Major Atmospheric Gamma-Ray Imaging Cherenkov Telescope [MAGIC; @albert06] establish these sources (termed “microquasars” if they are accretion powered) as a new class of $\gamma$-ray emitting sources. These results confirm the earlier tentative identification of LS 5039 with the EGRET source 3EG J1824-1514 [@paredes00] and LS I +61$^o$303 with the COS B source 2CG 135+01 [@gregory78; @taylor92] and the EGRET source 3EG J0241+6103 [@kniffen97]. Both of these objects show evidence for variability of the VHE emission, suggesting an association with the orbital period of the binary system. In the case of LS I +61$^o$303, the association with the orbital period is not yet firmly established since the MAGIC observations covered only a few orbital periods, and the orbital period [$P = 26.5$ d; @gregory02] is very close to the sidereal period of the moon, which also sets a natural windowing period for VHE observations [@albert06]. In contrast, the HESS observations of LS 5039 provide rather unambiguous evidence for an orbital modulation of both the VHE $\gamma$-ray flux and spectral shape with the orbital period of $P = 3.9$ d [@aharonian06b]. Specifically, [@aharonian06b] found that around inferior conjunction (i.e., the compact object being located in front of the companion star), the VHE $\gamma$-ray spectrum could be well fitted with an exponentially cut-off, hard power-law of the form $\Phi_E \propto E^{-1.85} \, e^{-E/E_0}$ with a cut-off energy of $E_0 = 8.7$ TeV. In contrast, the VHE spectrum at superior conjunction is well represented by a pure power-law ($\Phi_E \propto E^{-2.53}$) with a much steeper slope, but identical differential flux at $E_{\rm norm} \approx 200$ GeV. The original data from [@aharonian06b], together with these spectral fit functions, are shown in Fig. \[observed\_spectrum\].
A variety of different models for the high-energy emission from high-mass X-ray binaries have been suggested. These range from high-energy processes in neutron star magnetospheres [e.g., @moskalenko93; @moskalenko94; @bednarek97; @bednarek00; @chernyakova06; @dubus06b], via models with the inner regions of microquasar jets being the primary high-energy emission sites [e.g. @romero03; @bp04; @bosch05a; @gupta06; @db06; @gb06] and interactions of microquasar jets with the ISM [@bosch05b], to models involving particle acceleration in shocks produced by colliding stellar winds [e.g., @reimer06]. In addition to involving a variety of different leptonic and hadronic emission processes, these models also imply vastly different locations of the $\gamma$-ray production site with respect to the compact object in LS 5039 and the companion star. However, independent of the emission mechanism responsible for the VHE $\gamma$-rays, the intense radiation field of the high-mass (stellar type O6.5V) companion will lead to $\gamma\gamma$ absorption of VHE $\gamma$-rays in the $\sim 100$ GeV – TeV photon energy range. The characteristic features of the $\gamma\gamma$ absorption of VHE $\gamma$-rays by the companion star light in LS 5039 have been investigated in detail by [@bd05] and [@dubus06a]. It was found that, if the $\gamma$-ray emission originates within a distance of the order of the orbital separation of the binary system ($s \sim 2 \times
10^{12}$ cm), this effect should lead to a pronounced $\gamma\gamma$ absorption trough, in particular near superior conjunction, while $\gamma\gamma$ absorption tends to be almost negligible near inferior conjunction. In LS 5039, the minimum of the absorption trough is expected to be located around $E_{\rm min} \sim
300$ GeV and should shift from higher to lower energies as the orientation of the binary system changes from inferior to superior conjunction.
Most of the relevant parameters of LS 5039, except for the inclination angle $i$ of the line of sight with respect to the normal to the orbital plane, are rather well determined (see §\[parameters\]). This allows for a detailed parameter study of the $\gamma\gamma$ absorption effects, leaving the inclination angle and the distance of the VHE emission site from the compact object as free parameters. As will be shown in §\[cascades\], the effect of electromagnetic cascades will lead to a re-deposition of the absorbed $\gtrsim 100$ GeV luminosity almost entirely at photon energies $E \lesssim
100$ GeV. For that reason, one can easily correct for the $\gamma\gamma$ absorption effect at energies $E \gtrsim 100$ GeV by multiplying the observed fluxes by a factor $e^{\tau_{\gamma\gamma}(E)}$ (where $\tau_{\gamma\gamma}(E)$ is the $\gamma\gamma$ absorption depth along the line of sight) in order to find the intrinsic, deabsorbed VHE spectra for any given choice of $i$ and the height $z_0$ of the VHE $\gamma$-ray emission site above the compact object. Results of this procedure will be presented in §\[deabsorbed\]. In §\[constraints\], these results will then be used to estimate the total flux and luminosity in VHE $\gamma$-rays, which can be compared to limits on the available power under the assumption that the VHE emission is powered by Bondi-Hoyle limited wind accretion onto the compact object. This leads to important constraints on the location of the VHE emision site and the geometry of the system, which will be discussed in § \[summary\].
\[parameters\]Parameters of the LS 5039 System
==============================================
LS 5039 is a high-mass X-ray binary in which a compact object is in orbit around an O6.5V type stellar companion with a mass of $M_{\ast} = 23 \, M_{\odot}$, a bolometric luminosity of $L_{\ast}
= 10^{5.3} \, L_{\odot} \approx 7 \times 10^{38}$ ergs s$^{-1}$ and an effective surface temperature of $T_{\rm eff} = 39,000 \, ^o$K. The mass function of the system is $f(M) = (M_{\rm c.o.} \sin i)^3
/ (M_{\rm c.o.} + M_{\ast})^2 \approx 5 \times 10^{-3} \, M_{\odot}$. The binary orbit has an eccentricity of $e = 0.35$ and an orbital period of $P = 3.9$ d. The inclination angle $i$ is only poorly constrained in the range $13^o \lesssim i \lesssim 64^o$. Under the assumption of co-rotation of the star with the orbital motion, a preferred inclination angle of $i = 25^o$ could be inferred, leading to a compact-object mass of $M_{\rm c.o.} = 3.7^{+1.3}_{-1.0}
\, M_{\odot}$ [@casares05]. However, since there is no clear evidence for co-rotation, the inclination angle is left as a free parameter in this analysis. Within the entire range of allowed values, $13^o \le i \le 64^o$, the mass of the compact object is substantially smaller than the mass of the companion, so that the orbit can very well be approximated by a stationary companion star, orbited by the compact object for our purposes. The orbit has a semimajor axis of length $a = 2.3 \times 10^{12}$ cm, and the projection of the line of sight onto the orbital plane forms an angle of $\approx 45^o$ with the semimajor axis [see, e.g., Fig. 4 of @aharonian06b]. Consequently, the orbital separation at superior conjunction is found to be $s_{\rm s.c.} = 1.6 \times 10^{12}$ cm, while at inferior conjunction, it is $s_{\rm i.c.} = 2.7 \times 10^{12}$ cm.
The mass outflow rate in the stellar wind of the companion has been determined as $\dot M_{\rm wind} \approx 10^{-6.3} M_{\odot}$/yr, and the terminal wind speed is $v_{\infty} \approx 2500$ km s$^{-1}$ [@mg02]. EVN and MERLIN observations of the radio jets of LS 5039 suggest a mildly relativistic flow speed of $\beta \sim 0.2$ on the length scale of several hundred AU [@paredes02]. This would correspond to a bulk Lorentz factor of the flow of $\Gamma \approx 1.02$. However, it is plausible to assume that near the base of the jet, where the VHE $\gamma$-ray emission may arise (according to some of the currently most actively discussed models), the flow may have a substantially higher speed. Therefore, we will also consider bulk flow speeds of $\Gamma \sim 2$, more typical of other Galactic microquasar jets.
For the analysis in this paper, we use a source distance of $d = 2.5$ kpc, corresponding to $4 \pi d^2 = 7.1 \times 10^{44}$ cm$^2$ [@casares05].
Inferred compact object masses and Doppler boosting factors $D =
(\Gamma [1 - \beta \cos i])^{-1}$ for representative values of $i = 20^o$, $40^o$, and $60^o$ and $\Gamma = 1.02$ and $\Gamma = 2$ are listed in Table \[parameter\_table\].
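As a quick numerical cross-check, the short script below evaluates $D = (\Gamma [1 - \beta \cos i])^{-1}$ for the representative angles and bulk Lorentz factors, assuming (as in the text) that the jet is viewed at the inclination angle $i$; the results agree with the Doppler factors listed in Table \[parameter\_table\] to within a few per cent.

```python
import numpy as np

def doppler_factor(gamma, inclination_deg):
    # D = 1 / (Gamma * (1 - beta * cos i)), with the viewing angle set to i.
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    i = np.radians(inclination_deg)
    return 1.0 / (gamma * (1.0 - beta * np.cos(i)))

for gamma in (1.02, 2.0):
    print(gamma, [round(doppler_factor(gamma, i), 2) for i in (20, 40, 60)])
```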
\[cascades\]The Role of Electromagnetic Cascades
================================================
The absorption of VHE $\gamma$-ray photons will lead to the production of relativistic electron-positron pairs, which will lose energy via synchrotron radiation and inverse-Compton scattering on starlight photons, and initiate synchrotron or inverse-Compton supported electromagnetic cascades. For photon energies of $E_{\gamma} \gtrsim
100$ GeV, the produced pairs will have Lorentz factors of $\gamma
\gtrsim 10^5$. Given the surface temperature of the companion star of $T_{\rm eff} = 39,000$ K, starlight photons have a characteristic photon energy of $\epsilon_{\ast} \equiv h \nu_{\ast} / (m_e c^2)
\sim 7 \times 10^{-6}$; hence, electrons and positrons with energies of $\gamma \gtrsim 1/\epsilon_{\ast} \sim 1.5 \times 10^5$ will interact with these star light photons in the Klein-Nishina limit and, thus, very inefficiently. $\gamma$-rays produced through upscattering of star light photons by secondary electrons and positrons in the Thomson regime will have energies of $E_{\rm IC} \lesssim E_{\rm IC}^{\rm max}
\equiv m_e c^2 / \epsilon_{\ast} \sim 75$ GeV.
In addition to Compton scattering, secondary electron-positron pairs will suffer synchrotron losses. Magnetic fields even near the base of microquasar jets are unlikely to exceed $B \sim 10^3$ G. Consequently, along essentially the entire trajectory of VHE photons one may safely assume $B \equiv 10^3 \, B_3$ G with $B_3 \lesssim 1$. Secondary electrons and positrons traveling through such magnetic fields will produce synchrotron photons of characteristic energies $E_{\rm sy} \sim 1.2
\times B_3 \, \gamma_6^2$ GeV, where $\gamma_6 = \gamma / 10^6$ parametrizes the secondary electron’s/positron’s energy. Thus, even for extreme magnetic-field values of $B = 10^3$ G, synchrotron photons at energies $\gtrsim 100$ GeV could only be produced by secondary pairs with $\gamma \gtrsim 10^7$, resulting from primary $\gamma$-ray photons of $E_{\gamma} \gtrsim 10$ TeV, where only a negligible portion of the total luminosity from LS 5039 is expected to be liberated. Consequently, electromagnetic cascades initiated by the secondary electrons and positrons from $\gamma\gamma$ absorption in the stellar photon field will re-emit the absorbed VHE $\gamma$-ray photon energy essentially entirely at photon energies $\lesssim 100$ GeV. This conclusion is fully consistent with the more detailed consideration of the pair cascades resulting from $\gamma\gamma$ absorption of the VHE emission from LS 5039 by @aharonian06a.
As demonstrated in [@bd05], $\gamma Z$ absorption in the stellar wind of the companion as well as $\gamma\gamma$ absorption on star light photons reprocessed in the stellar wind are negligible compared to the direct $\gamma\gamma$ absorption effect for LS 5039.
Consequently, the only relevant effect modifying the VHE $\gamma$-ray spectrum of LS 5039 at energies above $\sim 100$ GeV between the emission site and the observer on Earth is $\gamma\gamma$ absorption by direct star light photons. This effect can easily be corrected for by multiplying the observed spectra by a factor $e^{\tau_{\gamma\gamma}(E)}$, where $\tau_{\gamma\gamma}(E)$ is the $\gamma\gamma$ absorption depth along the line of sight.
\[deabsorbed\]Deabsorbed VHE $\gamma$-ray spectra
=================================================
In this section, we present a parameter study of the deabsorbed photon spectra and integrated VHE $\gamma$-ray fluxes for representative values of the viewing angle of $i = 20^o$, $40^o$, and $60^o$ and a range of locations of the emission site, characterized by a height $z_0$ above the compact object in the direction perpendicular to the orbital plane. The absorption depth $\tau_{\gamma\gamma} (E)$ along the line of sight is evaluated as described in detail in [@bd05]. The observed spectra are represented by the best-fit functional forms quoted in the introduction.
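A minimal sketch of this deabsorption step is given below. The spectral forms are the HESS best fits quoted in the introduction (with arbitrary normalization), while the line-of-sight opacity $\tau_{\gamma\gamma}(E)$ for a chosen $i$ and $z_0$ must be supplied externally, computed as in [@bd05]; it is not reproduced here.

```python
import numpy as np

def observed_spectrum(E_TeV, phase):
    # Differential photon flux, arbitrary normalization.
    if phase == "inferior":
        return E_TeV**-1.85 * np.exp(-E_TeV / 8.7)  # cutoff at E_0 = 8.7 TeV
    return E_TeV**-2.53                              # superior conjunction

def deabsorbed_spectrum(E_TeV, phase, tau_gg):
    # tau_gg(E) is the gamma-gamma opacity along the line of sight for the
    # chosen inclination i and emission height z_0 (not computed here).
    return observed_spectrum(E_TeV, phase) * np.exp(tau_gg(E_TeV))
```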
Figures \[i20\] — \[i60\] show the inferred intrinsic, deabsorbed VHE $\gamma$-ray spectra from LS 5039 for a range of $10^{12} \,
{\rm cm} \le z_0 \le 1.5 \times 10^{13}$ cm. At larger distances from the compact object, $\gamma\gamma$ absorption due to the stellar radiation field becomes negligible for all inclination angles. Furthermore, models which assume a distance greatly in excess of the characteristic orbital separation (i.e., $z_0 \gtrsim 10^{13}$ cm) may be hard to reconcile with the periodic modulation of the emission on the orbital time scale since the $\gamma$-ray production site would be distributed over a large volume, with primary energy input episodes from different orbital phases overlapping and thus smearing out the original orbital modulation. As the height $z_0$ approaches values of the order of the characteristic orbital separation, $s \sim 2 \times 10^{12}$ cm, the intrinsic spectra would have to exhibit a significant excess towards the threshold of the HESS observations at $E_{\rm thr} \sim 200$ GeV in order to compensate for the $\gamma\gamma$ absorption trough with its extremum around $\sim 300$ GeV [@bd05]. At even smaller distances from the compact object, $z_0 \ll 10^{12}$ cm, the absorption features would again become essentially independent of $z_0$ since the overall geometry would not change significantly anymore with a change of $z_0$.
A first, important conclusion from Figures \[i20\] – \[i60\] is that there is no combination of $i$ and $z_0$ for which the de-absorbed VHE $\gamma$-ray spectra in the inferior and superior conjunction could be identical. Thus, the intrinsic VHE $\gamma$-ray spectra must be fundamentally different in the different orbital phases corresponding to superior conjunction (near periastron) and inferior conjunction (closer to apastron).
Second, there is a large range of instances in which the deabsorbed, differential photon fluxes at 200 GeV $\lesssim E \lesssim$ 10 TeV during superior conjunction would have to be substantially higher than during inferior conjunction, opposite to the observed trend. This could have interesting consequences for strategies of searches for high-energy neutrinos. To a first approximation, the intrinsic, unabsorbed VHE $\gamma$-ray flux is roughly equal to the expected high-energy neutrino flux [see, e.g., @lipari06]. Therefore, our results suggest that phases with lower observed VHE $\gamma$-ray fluxes from LS 5039 may actually coincide with phases of larger neutrino fluxes. This is confirmed by a plot of the integrated 0.2 – 10 TeV photon fluxes as a function of height of the emission region as shown in Figure \[flux02\]. An anti-correlation of the VHE neutrino and $\gamma$-ray fluxes would require emission region heights of $z_0 \lesssim 2.5 \times 10^{12}$ cm ($i = 20^o$), $4 \times 10^{12}$ cm ($i = 40^o$), and $7 \times 10^{12}$ cm ($i = 60^o$), respectively. As we will see in the next section, for $i = 60^o$, these configurations can be ruled out because of constraints on the available power from wind accretion onto the compact object.
The possibility of a neutrino-to-photon flux ratio largely exceeding one has also recently been discussed for the case of LS I +61$^o$303 by @th06. Those authors also compared the existing AMANDA upper limits to the MAGIC VHE $\gamma$-ray fluxes of that source. They find that a neutrino-to-photon flux ratio up to $\sim 10$ for a $\gamma\gamma$ absorption modulated photon signal would still be consistent with the upper limits on the VHE neutrino flux from that source.
In order to assess the observational prospects of detecting VHE neutrinos with the new generation of km$^3$ neutrino detectors, such as ANTARES, NEMO, or KM3Net, one can estimate the expected neutrino flux assuming a characteristic ratio of produced neutrinos to unabsorbed VHE photons at the source of $\sim 1$ [@lipari06]. Under this assumption, the resulting neutrino fluxes should be comparable to the deabsorbed photon fluxes plotted in Figures \[i20\] – \[i60\]. As a representative example of the sensitivity of km$^3$ water-Cherenkov neutrino detectors, those figures also contain the anticipated sensitivity limits of the NEMO detector for a steady point source with a power-law of neutrino number spectral index $2.5$ for 1 and 3 years of data taking, respectively [@distefano06]. Given the likely orbital modulation of the neutrino flux of LS 5039, our results indicate that neutrino data accumulated over substantially longer than 3 years might be required in order to obtain a firm detection of the neutrino signal from LS 5039 and its orbital modulation.
\[constraints\]Constraints from Accretion Power Limits
======================================================
The inferred intrinsic (deabsorbed) VHE $\gamma$-ray fluxes calculated in the previous section imply apparent isotropic 0.1 – 10 TeV luminosities in the range of $\sim 2 \times 10^{34}$ – $7 \times 10^{34}$ ergs/s ($i = 20^o$), $\sim 1.3 \times 10^{34}$ – $7 \times 10^{35}$ ergs/s ($i = 40^o$), and $\sim 1.2 \times 10^{34}$ – $\gg 10^{37}$ ergs/s ($i = 60^o$), respectively, for emission regions located at $z_0
\ge 10^{12}$ cm (see Figures \[luminosities1\] and \[luminosities2\]). These should be confronted with constraints on the available power from wind accretion from the stellar companion onto the compact object.
For this purpose, we assume that for a given directed wind velocity of $v_{\rm wind} \approx 2.5 \times 10^8$ cm/s, all matter with $(1/2) v_{\rm wind}^2
< GM_{\rm c.o.}/R_{\rm BH}$ will be accreted onto the compact object. An absolute maximum on the available power is then set by $L_{\rm max} \approx
(1/12) \, \dot M_{\rm wind} \, c^2 \, (R_{\rm BH}^2 / [4 s^2])$. We furthermore allow for Doppler boosting of this accretion power along the microquasar jet with the Doppler boosting factors $D$ listed in Table \[parameter\_table\]. This yields an available power corresponding to an inferred, apparent isotropic luminosity of
$$L_{\rm max}^{\rm iso} \approx 2.8 \times 10^{33} \, \left( {M_{\rm c.o.}
\over M_{\odot}} \right)^2 \, D^4 \, \left( {s \over 2 \times 10^{12} \,
{\rm cm}} \right)^{-2} \, \left( {v_{\rm wind} \over 2.5 \times 10^8
\, {\rm cm/s}} \right)^{-4} \; {\rm ergs / s}.
\label{Lmax}$$
The resulting luminosity limits are also included in Table \[parameter\_table\] and indicated by the horizontal lines in Figures \[luminosities1\] and \[luminosities2\] for $\Gamma = 1.02$ and $\Gamma = 2$, respectively. Allowed configurations are those for which the inferred, apparent isotropic 0.1 – 10 TeV luminosity is substantially below the respective luminosity limit.
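For convenience, eq. (\[Lmax\]) can be evaluated numerically as in the sketch below; inserting the compact-object masses, Doppler factors, and orbital separations quoted above reproduces the tabulated luminosity limits to within rounding.

```python
def l_max_iso(m_co_msun, doppler, s_cm=2.0e12, v_wind_cms=2.5e8):
    # Apparent isotropic luminosity limit of eq. (Lmax), in ergs/s.
    return (2.8e33 * m_co_msun**2 * doppler**4
            * (s_cm / 2.0e12)**-2 * (v_wind_cms / 2.5e8)**-4)

# Example: i = 20 deg at superior conjunction (s = 1.6e12 cm), Gamma = 1.02:
# l_max_iso(4.5, 1.21, s_cm=1.6e12)  ->  ~1.9e35 ergs/s
```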
Additional restrictions come from the inferred luminosity of the EGRET source 3EG J1824-1514 whose 90 % confidence contour includes the location of LS 5039 [@aharonian05]. The observed $> 100$ MeV $\gamma$-ray flux from this source corresponds to an inferred isotropic luminosity of $L_{\rm EGRET} \sim 7 \times
10^{34}$ ergs s$^{-1}$. This level is also indicated in Figures \[luminosities1\] and \[luminosities2\]. However, the association of the EGRET source with LS 5039 is uncertain since the 90 % confidence contour also contains the pulsar PSR B1822-14 as a possible counterpart. Thus, even if the available power according to eq. \[Lmax\] is lower than $L_{\rm EGRET}$, this would not provide a significant problem since the EGRET flux could be provided by the nearby pulsar. However, the EGRET flux does provide an upper limit on the $\sim 30$ MeV – 30 GeV flux from LS 5039. As shown in @aharonian06a, the bulk of the VHE $\gamma$-ray flux that is absorbed by $\gamma\gamma$ absorption on companion star photons and initiates pair cascades, will be re-deposited in $\gamma$-rays in the EGRET energy range. Thus, any scenario that requires an unabsorbed source luminosity in excess of the EGRET luminosity would be very problematic. The resulting limits inferred from both eq. \[Lmax\] and the EGRET flux on the height $z_0$ are summarized in Table \[z\_limits\].
These figures indicate that for $i = 20^o$, virtually all configurations are allowed, even if the emission is essentially unboosted ($\Gamma = 1.02$) and originates very close to the compact object.
For $i = 40^o$ at the time of superior conjunction, one can set a limit of $z_0 \gtrsim 1.7 \times 10^{12}$ cm. For this viewing angle at inferior conjunction, the limit implied for $\Gamma = 1.02$ is always below the inferred value, indicating that this configuration is ruled out. This implies that, if $i \sim 40^o$, the emission at the time of inferior conjunction must be substantially Doppler enhanced.
The situation is even more extreme for $i = 60^o$. Under this viewing angle, only the $\Gamma = 1.02$ scenario at superior conjunction leads to an allowed model, and it requires $z_0 \gtrsim 5 \times 10^{12}$ cm. Since there is no allowed scenario to produce the observed flux at inferior conjunction, one can conclude that a large inclination angle of $i \sim 60^o$ may be ruled out.
Note, however, that there are alternative models for the very-high-energy emission from X-ray binaries in which the source of power for the high-energy emission lies in the rotational energy of the compact object [in that case, most plausibly a neutron star, see, e.g. @chernyakova06]. The constraints discussed above will obviously not apply to such models. Instead, in the case of a rotation-powered pulsar wind/jet, one can estimate the total available power as
$$L_{\rm rot} \sim {8 \over 5} \pi^2 \, M \, R^2 \, {\dot P \over P^3} \sim
4.4 \cdot 10^{37} \, {\dot P_{-15} \over P_{-2}^3} \; {\rm ergs \; s}^{-1},
\label{L_rotation}$$
where a $1.4 \, M_{\odot}$ neutron star with $R = 10$ km is assumed and the spin-down rate and spin period are parametrized as $\dot P = 10^{-15} \,
\dot P_{-15}$ and $P = 10 \, P_{-2}$ ms, respectively. Consequently, the power requirements of even the extreme scenarios for $i = 60^o$, as discussed above, could, in principle, be met by a rotation-powered pulsar. However, such a luminosity could only be sustained over a spin-down time of
$$t_{\rm sd} \sim 3 \times 10^5 \, {P_{-2} \over \dot P_{-15}} \; {\rm years}
\label{t_spindown}$$
and should therefore be rare.
\[summary\]Summary and Discussion
=================================
In this paper, a detailed study of the intrinsic VHE $\gamma$-ray emission from the Galactic microquasar LS 5039 after correction for $\gamma\gamma$ absorption by star light photons from the massive companion star is presented. This system had shown evidence for an orbital modulation of the VHE $\gamma$-ray flux and spectral shape. A range of observationally allowed inclination angles, $13^o \le i \le 64^o$ (specifically, $i = 20^o$, $i = 40^o$, and $i = 60^o$) as well as plausible distances $z_0$ of the VHE $\gamma$-ray emission region above the compact object ($10^{12} \, {\rm cm} \le z_0 \le 1.5 \times
10^{13}$ cm) was explored. Deabsorbed, intrinsic VHE $\gamma$-ray spectra as well as integrated fluxes and inferred, apparent isotropic luminosities were calculated and contrasted with constraints from the available power from wind accretion from the massive companion onto the compact object. The main results are:
- [It is impossible to choose the viewing angle and location of the VHE emission region in a way that the intrinsic (deabsorbed) fluxes and spectra in superior and inferior conjunction are identical within the range of values of $i$ and $z_0$ considered here. For values of $z_0$ much smaller and much larger than the characteristic orbital separation ($s \sim 2 \times 10^{12}$ cm), the absorption features would become virtually independent of $z_0$. Furthermore, models assuming a VHE $\gamma$-ray emission site at a distance greatly in excess of the orbital separation might be difficult to reconcile with the orbital modulation of the VHE emission. Consequently, the intrinsic VHE luminosities and spectral shapes must be fundamentally different in different orbital phases.]{}
- [It was found that the luminosity constraints for an inclination angle of $i = 60^o$ at inferior conjunction could not be satisfied at all, and for $i = 40^o$, there is no allowed configuration in agreement with the luminosity constraint for $\Gamma = 1.02$. From this, it may be concluded that, if the VHE luminosity is limited by wind accretion from the companion star and the system is viewed at an inclination angle of $i \gtrsim 40^o$, the emission is most likely beamed by a larger Doppler factor than inferred from the dynamics of the large-scale radio outflows on scales of several hundred AU.]{}
- [Since it was found to be impossible to satisfy the luminosity constraint for inferior conjunction at a viewing angle of $i = 60^o$, one can constrain the viewing angle to values substantially smaller than the maximum of $\sim 64^o$ inferred from the lack of eclipses.]{}
- [The previous two points as well as the fact that the luminosity limits can easily be satisfied for $i = 20^o$, indicate that a rather small inclination angle $i \sim 20^o$ may be preferred. Thus, our results confirm the conjecture of [@casares05] that the compact object might be a black hole rather than a neutron star.]{}
- [Under the assumption of a photon-to-neutrino ratio of $\sim 1$ (before $\gamma\gamma$ absorption), the detection of the neutrino flux from LS 5039 and its orbital modulation might require the accumulation of data over more than 3 years with km$^3$ scale neutrino detectors like ANTARES, NEMO, or KM3Net.]{}
- [Comparing the ranges of allowed configurations from Table \[z\_limits\] to the plot of intrinsic VHE fluxes in Figure \[flux02\], one can see that there is a limited range of allowed configurations for which the expected VHE neutrino flux would actually anti-correlate with the observed VHE $\gamma$-ray emission. Specifically, for a preferred viewing angle of $i \sim 20^o$, models in which the emission originates within $z_0 \lesssim
2.5 \times 10^{12}$ cm would predict that the VHE neutrino flux at superior conjunction is larger than at inferior conjunction, opposite to the orbital modulation trend seen in VHE $\gamma$-ray photons. Thus, strategies for the identification of high-energy neutrinos from microquasars based on a positive correlation with observed VHE fluxes may fail if models with $z_0 \lesssim
2.5 \times 10^{12}$ cm apply.]{}
- [The luminosity limitations discussed above could, in principle, be overcome by a rotation-powered pulsar. However, this would require a rather short spin-down time scale of $t_{\rm sd} \lesssim 10^6$ yr. Consequently, such objects should be rare.]{}
The author thanks F. Aharonian and V. Bosch-Ramon for stimulating discussions and hospitality during a visit at the Max-Planck-Institute for Nuclear Physics, and M. De Naurois for providing the HESS data points. I also thank the referee for very insightful and constructive comments, which have greatly helped to improve this manuscript. This work was partially supported by NASA INTEGRAL Theory grant no. NNG 05GP69G and a scholarship at the Max-Planck Institute for Nuclear Physics in Heidelberg.
Aharonian, F. A., et al., 2005, Science, 309, 746
Aharonian, F. A., Anchordoqui, L. A., Khangulyan, D., & Montaruli, T., 2006a, in proc. of 9th International Conference on Topics in Astrophysics and Underground Physics, “TAUP 2005”, Journal of Physics, Conf. Series, in press (astro-ph/0508658)
Aharonian, F. A., et al., 2006b, A&A, submitted (astro-ph/0607192)
Albert, J., et al., 2006, Science, vol. 312, issue 5781, p. 1771
Bednarek, W., 1997, A&A, 322, 523
Bednarek, W., 2000, A&A, 362, 646
Böttcher, M., & Dermer, C. D., 2005, ApJ, 634, L81
Bosch-Ramon, V., & Paredes, J. M., 2004, A&A, 417, 1075
Bosch-Ramon, V., Romero, G. E., & Paredes, J. M., 2005a, A&A, 429, 267
Bosch-Ramon, V., Aharonian, F. A., & Paredes, J. M., 2005b, A&A, 432, 609
Casares, J., Ribó, M., Ribas, I., Paredes, J. M., Martí, J., & Herrero, A., 2005, MNRAS, 364, 899
Chernyakova, M., Neronov, A., & Walter, R., 2006, MNRAS, in press (astro-ph/0606070; early release under 2006MNRAS.tmp.1051C)
Dermer, C. D., & Böttcher, M., 2006, ApJ, 643, 1081
Distefano, C., 2006, Astroph. & Space Science, submitted (astro-ph/0608514)
Gregory, P. C., 2002, ApJ, 575, 427
Dubus, G., 2006a, A&A, 451, 9
Dubus, G., 2006b, A&A, 456, 801
Gregory, P. C., & Taylor, A. R., 1978, Nature, 272, 704
Gupta, S., Böttcher, M., & Dermer, C. D., 2006, ApJ, 644, 409
Gupta, S., & Böttcher, M., 2006, ApJ, 650, L123
Kniffen, D. A., et al. 1997, ApJ, 486, 126
Lipari, P., in proc. of “Very Large Volume Neutrino Telescopes”, Catania, Italy, 2005, in press (astro-ph/0605535)
McSwain, M. V., & Gies, D. R., 2002, ApJ, 568, L27
Moskalenko, I. V., Karakula, S., & Tkaczyk, W., 1993, MNRAS, 260, 681
Moskalenko, I. V., & Karakula, S., 1994, ApJS, 92, 567
Paredes, J. M., Martí, J., Ribó, M., & Massi, M., 2000, Science, 288, 2340
Paredes, J. M., Ribó, M., Ros, E., Martí, J., & Massi, M., 2002, A&A, 393, L99
Reimer, A., Pohl, M., & Reimer, O., 2006, ApJ, 644, 1118
Romero, G. E., Torres, D. F., Kaufman Bernadó, M. M., & Mirabel, I. F., 2003, A&A, 410, L1
Taylor, A. R., Kenny, H. T., Spencer, R. E., & Tzioumis, A., 1992, ApJ, 395, 268
Torres, D., & Halzen, F., 2006, A&A, submitted (astro-ph/0607368)
![VHE $\gamma$-ray spectra of LS 5039 around superior (blue triangles) and inferior (red circles) conjunction [from @aharonian06b]. The curves indicate the best-fit straight power-law (superior) and exponentially cut-off power-law (inferior) representations of the spectra, on which the analysis in this paper is based.[]{data-label="observed_spectrum"}](fig1_color.eps){width="14cm"}
![Deabsorbed VHE $\gamma$-ray spectra of LS 5039 for $i = 20^o$ and a range of heights of $z_0$ of the emission region above the compact object. The vertical dashed line indicates the energy threshold of the HESS observations. The flat power-laws correspond to the anticipated neutrino detection sensitivities of the planned NEMO km$^3$ detector for a steady point source with an underlying power-law spectrum of neutrino number spectral index 2.5 after 1 and 3 years of data taking, respectively [@distefano06]. These are relevant for comparison to the deabsorbed photon fluxes for a characteristic ratio of unabsorbed photons to neutrinos of $\sim 1$.[]{data-label="i20"}](fig2_color.eps){width="14cm"}
![Same as Fig. \[i20\], but for $i = 40^o$.[]{data-label="i40"}](fig3_color.eps){width="14cm"}
![Same as Fig. \[i20\], but for $i = 60^o$.[]{data-label="i60"}](fig4_color.eps){width="14cm"}
![Deabsorbed, integrated 0.2 – 10 TeV $\gamma$-ray fluxes of LS 5039 as a function of height $z_0$ of the emission region above the compact object. These numbers would also approximately equal to the expected neutrino fluxes in the same energy range.[]{data-label="flux02"}](fig5_color.eps){width="14cm"}
![Inferred apparent isotropic luminosities for LS 5039 (thick curves), compared to the luminosity limits from Bondi-Hoyle limited wind accretion (horizontal lines), assuming Doppler boosting of the emission corresponding to $\Gamma = 1.02$. Allowed configurations are those for which the inferred luminosities are substantially below the respective luminosity limits.[]{data-label="luminosities1"}](fig6_color.eps){width="14cm"}
![Same as Fig. 6, but for $\Gamma = 2$.[]{data-label="luminosities2"}](fig7_color.eps){width="14cm"}
[ccccccc]{} $i$ (deg) & conjunction & $M_{\rm c.o.}$ ($M_{\odot}$) & $D$ ($\Gamma = 1.02$) & $D$ ($\Gamma = 2$) & $L_{\rm max}^{\rm iso}$ ($\Gamma = 1.02$) (ergs/s) & $L_{\rm max}^{\rm iso}$ ($\Gamma = 2$) (ergs/s)\
20 & i.c. & 4.5 & 1.21 & 2.68 & $6.7 \times 10^{34}$ & $1.6 \times 10^{36}$\
20 & s.c. & 4.5 & 1.21 & 2.68 & $1.9 \times 10^{35}$ & $4.6 \times 10^{36}$\
40 & i.c. & 2.3 & 1.13 & 1.49 & $1.3 \times 10^{34}$ & $4.0 \times 10^{34}$\
40 & s.c. & 2.3 & 1.13 & 1.49 & $3.8 \times 10^{34}$ & $1.1 \times 10^{35}$\
60 & i.c. & 1.6 & 1.09 & 0.88 & $5.6 \times 10^{33}$ & $2.4 \times 10^{33}$\
60 & s.c. & 1.6 & 1.09 & 0.88 & $1.6 \times 10^{34}$ & $6.7 \times 10^{33}$\
\[parameter\_table\]
[cccc]{} $i$ (deg) & conjunction & $z_{0,\rm min}$ ($10^{12}$ cm), $\Gamma = 1.02$ & $z_{0,\rm min}$ ($10^{12}$ cm), $\Gamma = 2$\
20 & i.c. & $\ll 1$ & $\ll 1$\
20 & s.c. & $\ll 1$ & $\ll 1$\
40 & i.c. & forbidden & $\ll 1$\
40 & s.c. & $2.0$ & $1.7$\
60 & i.c. & forbidden & forbidden\
60 & s.c. & 5 & forbidden\
\[z\_limits\]
|
---
abstract: 'Highest weight representations of $U_q(su(1,1))$ with $q=\exp \pi i/N$ are investigated. The structures of the irreducible highest weight modules are discussed in detail. The Clebsch-Gordan decomposition for the tensor product of two irreducible representations is discussed. By using the results, a representation of ${SL(2,{\Bbb R})}\otimes{U_q(su(2)) }$ is also presented in terms of holomorphic sections which also carry a ${U_q(su(2)) }$ index. Furthermore we realise $Z_N$-graded supersymmetry in terms of the representation. An explicit realization of $Osp(1 \vert 2)$ via the highest weight representation of ${U_q(su(1,1)) }$ with $q^2=-1$ is given.'
---
\
\[2em\]\
Department of Mathematics and Statistics\
University of Edinburgh\
Edinburgh, U.K.
Introduction
============
Many works on quantum groups or $q$-deformations of universal enveloping algebras ($q$UEA) have revealed a variety of fascinating features in both mathematics and physics. In particular, from the physical viewpoint, they are of great significance with their connection to exactly solvable systems and $2D$ conformal field theories. The connections between the $q$UEA of compact Lie algebras, $e.g.$, ${U_q(su(2)) }$, with $q$ a root of unity and rational conformal field theories (RCFT)[@MoS] whose central charges are given by rational numbers have been discussed extensively [@TK]-[@PS]. In RCFT quantum group structures appear essentially in the monodromy properties of conformal blocks. The authors of Refs.[@GS; @RRR] have shown more transparent connections between quantum groups and RCFT by constructing good representation spaces of $q$UEA in terms of the Coulomb gas representations of RCFT. An important feature of the construction is that the highest weight module of ${U_q(su(2)) }$ emerges as the family of screened vertex operators, that is, each highest weight vector $e^j_m$ corresponds to the screened vertex operator with $j-m$ screening operators, and the generators of ${U_q(su(2)) }$, $X_+$ and $X_-$, are represented as contour creation and annihilation operators. Thus, the quantum groups associated with the [*compact*]{} Lie algebras when $q$ is a root of unity act as a relevant symmetry of RCFT.
In contrast, the $q$UEA, ${U_q(su(1,1)) }$, of the non-compact Lie algebra, $su(1,1)$, has not been discussed as thoroughly. Several dynamical models where ${U_q(su(1,1)) }$ appears as a symmetry or a dynamical algebra are known [@CEK]-[@KS], and it is likely that ${U_q(su(1,1)) }$ will play a significant role in models with non-compact spaces. The representation theories of ${U_q(su(1,1)) }$ were given in [@KD]-[@BK] for generic $q$, $i.e.$, $q$ not a root of unity. As in the compact case, however, we can expect to extract new and illuminating features from ${U_q(su(1,1)) }$ with $q$ a root of unity. In Ref.[@MS], Matsuzaki and the author have investigated highest weight representations of ${U_q(su(1,1)) }$ when $q$ is a root of unity and revealed a remarkable feature: for unitary representation spaces, ${U_q(su(1,1)) }$ has the structure $${U_q(su(1,1)) }= U(su(1,1))\otimes{U_q(su(2)) }.$$ The important point is that the non-compact nature appears through the ‘classical’ Lie algebra $su(1,1)$ and the $q$-deformed effects are only in ${U_q(su(2)) }$. This relation means that ${U_q(su(1,1)) }$ with $q$ a root of unity is the unified algebra of the quantum algebra ${U_q(su(2)) }$ and the non-compact $su(1,1)\simeq{sl(2,{\Bbb R})}$. The importance of this observation is recognized by noticing that the Lie group ${SL(2,{\Bbb R})}$ has a deep connection with the theory of topological $2D$ gravity [@MoS]-[@VV]. The spirit of this is that ${SL(2,{\Bbb R})}$ plays the role of the gauge group on the Riemann surface $\Sigma_g$ with constant negative curvature, [*i.e.*]{}, with genus $g$ ($\ge2$), and the moduli space of the complex structure of $\Sigma_g$ is identified with the moduli space of ${SL(2,{\Bbb R})}$ flat connections. Along this line, we can expect that ${U_q(su(1,1)) }$ can be regarded as the symmetry of the theory of topological gravity coupled to RCFT as matter. Here the ${sl(2,{\Bbb R})}$ sector of ${U_q(su(1,1)) }$ gives rise to the deformation of the complex structure of the base manifold $\Sigma_g$. With this perspective in mind, the aim of this paper is to further investigate ${U_q(su(1,1)) }$ when ${q=\exp \pi i /N}$ as the first step toward this goal. We will obtain a holomorphic representation of ${SL(2,{\Bbb R})}\otimes{U_q(su(2)) }$ in terms of holomorphic vectors, which are holomorphic sections of a line bundle over the homogeneous space ${SL(2,{\Bbb R})}/U(1)$ and which carry an index with respect to ${U_q(su(2)) }$.
Another novel aspect of $q$UEA at a root of unity appears in the connection with generalized ($Z_N$-graded) supersymmetry or paragrassmann algebras [@OK] as discussed in Refs.[@ABL]-[@FIK]. Here we will present a different and concrete realization of this connection as another remarkable consequence of the relation (1). By using the highest weight representations, we will explicitly construct $N$ holomorphic functions on the homogeneous space. The $N$-th powers of the generators of ${U_q(su(1,1)) }$ act on them and give rise to infinitesimal transformations of these functions under the transformation of the homogeneous space. Furthermore we will see that these $N$ functions can be classified into two sets according to their dimensions or so-called ${sl(2,{\Bbb R})}$-spins. One of them is the set of functions which have dimension $\zeta$, and the other is the set of functions with dimension $\zeta+\frac{1}{2}$.
The organization of this paper is as follows: In the next chapter, we briefly review the highest weight representations of ${U_q(su(1,1)) }$ in order to make this article self-contained, although this has already been done in Ref.[@MS]. Chapter 3, which is the main part of this paper, looks in detail at the structure of the ${U_q(su(1,1)) }$ highest weight modules when ${q=\exp \pi i /N}$. We also give the Clebsch-Gordan decomposition for the tensor product of two highest weight representations. In addition, representations of ${SL(2,{\Bbb R})}\otimes{U_q(su(2)) }$ are discussed in terms of holomorphic sections over the homogeneous space ${SL(2,{\Bbb R})}/U(1)$ which carry a ${U_q(su(2)) }$ index, and $Z_N$-graded supersymmetry is discussed. In chapter 4, we explicitly derive $Osp(1\vert 2)$ via the highest weight representation of ${U_q(su(1,1)) }$ with $q^2=-1$. We conclude with some discussion.
It is convenient to summarize here the conventions and notation we will make use of in this paper; ${\Bbb Z}_+$ stands for the non-negative integers, $i.e.$, ${\Bbb Z}_+ = \{0, 1, 2, \cdots\}$. $[n]$ for $n\in {\Bbb Z}$ describes a $q$-integer defined by $[n]=(q^n-q^{-n})/(q-q^{-1})$. This convention is useful in our later discussion because, for ${}^\forall n \in {\Bbb Z}$, $[n] \in {\Bbb R}$ if $q \in {\Bbb R}$ or $\vert q \vert =1$. Finally $\left[ \begin{array}{c} n \\ r \end{array} \right]_q = [n]!/([n-r]![r]!)$ is a $q$-binomial coefficient, where $[n]!=[n][n-1]\cdots[1]$ and $[0]!=1$.
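Since $q$-integers and $q$-binomials recur throughout, a minimal numerical sketch of these conventions may be useful; the helper names below are ours, not the paper's.

```python
# q-integer [x], q-factorial [n]! and q-binomial, following the conventions above.
import cmath

def qint(x, q):
    """[x] = (q^x - q^{-x})/(q - q^{-1}); real for real q or |q| = 1."""
    return (q**x - q**(-x)) / (q - 1/q)

def qfact(n, q):
    """[n]! = [n][n-1]...[1], with [0]! = 1."""
    out = 1.0
    for k in range(1, n + 1):
        out *= qint(k, q)
    return out

def qbinom(n, r, q):
    """q-binomial [n choose r]_q = [n]! / ([n-r]! [r]!)."""
    return qfact(n, q) / (qfact(n - r, q) * qfact(r, q))

q = cmath.exp(1j * cmath.pi / 7)          # q = exp(i pi/N) with N = 7, so |q| = 1
assert abs(qint(5, q).imag) < 1e-12       # [n] is real on the unit circle
assert abs(qint(5, 1.000001) - 5) < 1e-4  # [n] -> n as q -> 1
assert abs(qint(7, q)) < 1e-12            # [N] = 0 at q = exp(i pi/N)
```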
Highest Weight Representations of $U_q(su(1,1))$
================================================
To begin with, it is helpful for our discussions to give a brief review of unitary representations of the classical Lie algebra $su(1,1)$. This appears as a non-compact real form of the Lie algebra ${sl(2,{\Bbb C})}$ generated by $E_+, E_-$ and $H$. The relations among them are $$[E_+, E_-] = 2H, \quad\quad [H, E_\pm] = \pm E_\pm$$ The difference between the compact real form $su(2)$ and the non-compact one $su(1,1)$ will appear only through the definitions of Hermitian conjugations. They are defined by, $$\begin{array}{lll}
E_\pm^\dagger = E_\mp, &\quad\quad H^\dagger = H,
&\quad\quad\quad{\rm for}\quad su(2), \\
E_\pm^\dagger = - E_\mp, &\quad\quad H^\dagger = H,
&\quad\quad\quad{\rm for}\quad su(1,1).
\end{array}$$ Via the substitution $E_\pm\rightarrow \mp G_\mp, \, H\rightarrow G_0$ we obtain another formulation of $su(1,1)\simeq {sl(2,{\Bbb R})}$. Now the relations and Hermitian conjugations are $$\begin{array}{ll}
[G_n, G_m]=(n-m)G_{n+m}, & n, m=0, \pm 1, \\
\rule{0mm}{.7cm}
\quad G_\pm^\dagger = G_\mp, \quad G_0^\dagger =G_0,
& {\rm for} \quad su(1,1), \\
\rule{0mm}{.5cm}
\quad G_\pm^\dagger = -G_\mp, \quad G_0^\dagger =G_0,
& {\rm for} \quad su(2)
\end{array} \label{eq:another}$$
We will use the latter formulation in chapter 3.
Representations are classified by means of the eigenvalues of the Cartan operator $H$ and the second Casimir operator. It is well known that there are four classes of unitary irreducible representations: (a) Identity $I$; The trivial representation of the form $I= \{ \vert 0 \rangle \}$. (b) Discrete series ${\cal D}_n^+$ ; ${\cal D}_n^+ = \{\vert k+ \phi \rangle
\;\vert\; k = n, n+1, \cdots \}$ with $n \in {\Bbb Z}_+$, that is, each representation ${\cal D}_n^+$ is bounded below such that $E_- \vert n+ \phi \rangle = 0$. The state $\vert n + \phi \rangle$ is called a highest weight state. ${\cal D}_n^+$ is referred to as the highest weight representation and we will concentrate only on this type. (c) Discrete series ${\cal D}_n^-$ ; ${\cal D}_n^- = \{\vert k-\phi \rangle
\; \vert \; k = -n, -n-1, \cdots\}$ for $n \in {\Bbb Z}_+$, that is, each representation ${\cal D}_n^-$ is bounded above such that $E_+ \vert -n- \phi \rangle = 0$. This type of representation is called the lowest weight representation. (d) Continuous series ${\cal B}$ : Representations of the form ${\cal B} = \{ \vert k + \phi \rangle \; \vert \; k \in {\Bbb R} \}$. In the cases (b)$\sim$(d), $\phi$ takes its value in $0<\phi\le 1$ and it cannot be further determined. In particular, the representations ${\cal D}_n^\pm$ are not really discrete in this sense. Only by a consideration of the Lie group action of $SU(1,1)$ on the highest or lowest weight representations does the discreteness arise. Then $\phi$ is determined to be $1/2$ or $1$. It will turn out that the highest weight representation of ${U_q(su(1,1)) }$ with $q$ a root of unity is actually a discrete series without any other considerations. Let us proceed to the quantum cases.
The case when $q$ is not a root of unity
----------------------------------------
We briefly summarize the highest weight representation of the non-compact real form of $U_q({sl(2,{\Bbb C})})$ when $q$ is not a root of unity in order to make the differences from the case when $q$ is a root of unity clear. Generators and relations of $U_q({sl(2,{\Bbb C})})$ are as follows.
[**Definition 2.1**]{}$\quad$ $U_q({sl(2,{\Bbb C})})$ is generated by $X_+,\,
X_-,\, K$ with relations among them given by $$[X_+, X_-] = \frac{K^2-K^{-2}}{q-q^{-1}}, \quad KX_\pm = q^{\pm1} X_\pm K.
\label{eq:relation}$$
As in the classical case, the Hermitian conjugation rule determines whether its real form is compact or non-compact. Conjugations consistent with the relations exist if and only if $q$ is real or $\vert q\vert =1$. The non-compact real form $U_q(su(1,1))$ can be obtained by defining the conjugation as follows: $$\begin{array}{clll}
{}& K^\dagger = K, &\quad X_\pm^\dagger = -X_\mp, &\quad \quad
{\rm when} \; q\in {\Bbb R} \\
{}& K^\dagger=K^{-1}, &\quad X_\pm^\dagger = -X_\mp, &\quad \quad
{\rm when} \; \vert q\vert =1
\end{array}
\label{eq:qhermite}$$ If we take the conjugation of $X_\pm$ to be $X_\pm^\dagger = X_\mp$ instead of the above, we obtain the compact real form $U_q(su(2))$. A Hopf algebraic structure results upon defining a coproduct $\Delta$, a counit $\epsilon$ and an antipode $\gamma$ by $\Delta(K) = K \otimes K, \; \Delta(X_\pm) = X_\pm \otimes K^{-1} + K \otimes
X_\pm$, $\epsilon(K) = 1, \; \epsilon(X_\pm) = 0$, $\gamma(K) = K^{-1}, \; \gamma(X_\pm) = -q^{\mp1} X_\pm$.
One can represent $U_q(su(1,1))$ by constructing a highest weight module as in the classical case. Each module is characterized by a positive parameter $h$ which is the highest weight. The highest weight $h$ is specified by the second order Casimir operator. The highest weight module over a highest weight vector $\vert h; 0 \rangle$ is given by
$$V_h=\{\vert h; r\rangle \; \vert\;
\vert h; r\rangle := \frac{(X_+)^r}{[\,r\,]!} \vert h; 0\rangle, \;
r\in {\Bbb Z}_+\}$$
where the highest weight vector is characterized by
$$X_-\vert h; 0\rangle =0,
\quad K\vert h; 0\rangle = q^h \vert h; 0\rangle.$$
Using the definition of weight vectors $\vert h; r\rangle$ and the relations (\[eq:relation\]), the action of $U_q(su(1,1))$ on the highest weight module $V_h$ is given as follows, $$\begin{aligned}
X_+\vert h; r\rangle &=& [r+1] \vert h; r+1 \rangle, \label{eq:xpaction} \\
X_-\vert h; r\rangle &=& -[2h +r -1] \vert h; r-1 \rangle,
\label{eq:xmaction}\\
K \vert h; r\rangle &=& q^{h+r} \vert h; r\rangle. \label{eq:kaction}\end{aligned}$$ With the Hermitian conjugation (\[eq:qhermite\]), the norm of the state $\vert h; r \rangle$ is given as the inner product $\parallel \vert h;r \rangle \parallel^2 :=\langle h;r\vert h;r\rangle$ and is $$\parallel \vert h;r \rangle \parallel^2 = \left[
\begin{array}{c} 2h+r-1 \\ r \end{array}
\right]_q, \label{eq:norm}$$ where we have normalized the norm of the highest weight vector as $\parallel \vert h; 0 \rangle \parallel^2 =1.$ Since the $q$-integer $[2h+r-1]$ on the r.h.s. of eq.(\[eq:xmaction\]) is always non-zero for $r\ge 1$, we see that no other submodules appear, that is, the highest weight module $V_h$ is irreducible. Furthermore, if $q \in {\Bbb R}$, we can also see that all weight vectors have positive definite norm, $i.e.$, $V_h$ is unitary. Notice that every highest weight module $V_h$ is isomorphic to the highest weight module $V_h^{cl}$ of the classical $su(1,1)$.
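As a cross-check of eq.(\[eq:norm\]), one can build the norms recursively from the defining relation (\[eq:relation\]) and the conjugation rule $X_\pm^\dagger=-X_\mp$, and compare with the $q$-binomial; the sketch below (with our own helper names) does this numerically for a generic real $q$.

```python
# ||| h;r >||^2 built recursively from |h;r> = X_+|h;r-1>/[r] and
# X_- X_+ = X_+ X_- - (K^2 - K^{-2})/(q - q^{-1}), compared with the q-binomial of eq. (norm).

def qint(x, q):
    return (q**x - q**(-x)) / (q - 1/q)

def norm_recursive(h, r, q):
    n = 1.0
    for s in range(1, r + 1):
        # <h;s|h;s> = -(1/[s]^2) <h;s-1| X_- X_+ |h;s-1>
        n *= (qint(2*h + s - 2, q)*qint(s - 1, q) + qint(2*h + 2*s - 2, q)) / qint(s, q)**2
    return n

def norm_closed(h, r, q):
    # the q-binomial [2h+r-1 choose r]_q, written as a product so that h need not be an integer
    n = 1.0
    for s in range(1, r + 1):
        n *= qint(2*h + s - 1, q) / qint(s, q)
    return n

q, h = 1.3, 0.7                          # generic real q, generic highest weight
for r in range(8):
    assert abs(norm_recursive(h, r, q) - norm_closed(h, r, q)) < 1e-9
```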
The case when $q$ is a root of unity
------------------------------------
In the following we set the deformation parameter $q$ to be $q=\exp \pi i /N$. Then $q^N=-1$ and $[N]=0$. In this case the situation is drastically different from both the classical case and the case with generic $q$ and some difficulties appear.
The first problem is that the norm of the $N$-th state diverges due to the $[N]$ in the denominator of eq.(\[eq:norm\]). The same problem occurs in the $U_q(su(2))$ case as well. Fortunately in that case the problem could be resolved by utilizing the fact that highest and lowest weight states exist for each module. Namely, we can always choose a value of the highest weight such that the module does not have the state whose norm diverges. This is the origin of the finiteness of the number of the highest weight states in the unitary representation of $U_q(su(2))$. However in the non-compact case, we cannot remedy this problem in such a manner because the unitary representation is always of infinite dimension. Instead, in order for the $N$-th state to have finite norm, we have to impose the condition that there exists an integer $\mu$ satisfying $$[ 2h + \mu - 1 ] = 0.$$ This factor appearing in the numerator cancels the factor $[N]$ at the $N$-th state. Of course, it is necessary that $\mu\le N$. Thus we are led to the following proposition:
[**Proposition 2.2**]{}$\quad$ A highest weight module of $U_q(su(1,1))$ with $q=\exp \pi i / N$ is well-defined if and only if the highest weight is labelled by two integers $\mu$ and $\nu$ as
$$h_{\mu\nu} =\frac{1}{2} (N\nu -\mu +1), \quad\quad \mu= 1, 2, \cdots N,
\quad \nu \in {\Bbb N}. \label{eq:hmn}$$
The restriction $\nu\in {\Bbb N}$ (equivalently $h_{\mu\nu}>0$) follows because we are considering the highest weight representations. The important point coming from the proposition is that the highest weight representations of $U_q(su(1,1))$ with ${q=\exp \pi i /N}$ are actually a discrete series with the highest weights taking values in $\{\frac{1}{2}, 1, \frac{3}{2}, \cdots\}$. Unlike the classical case, no consideration of a group action is needed to show this discreteness. As we will see in section 3.2, these values of $h_{\mu\nu}$ are compatible with representations of ${SL(2,{\Bbb R})}\otimes{U_q(su(2)) }$.
In the construction of a highest weight module we come upon further difficulties. First, noticing (\[eq:xpaction\]),(\[eq:xmaction\]), the generators $X_\pm$ are nilpotent on the module due to $[N]=0$ and $ [ 2h_{\mu\nu} + \mu - 1 ] = 0$, [*i.e.*]{},
$$(X_\pm)^N \vert \psi\rangle =0,$$
where $\vert\psi\rangle$ is an arbitrary state in the highest weight module on $\vert h_{\mu\nu};0\rangle$. Therefore one cannot move from one state to an arbitrary other state by acting with $X_+$ or $X_-$ successively. However $(X_\pm)^N / [N]!$ is well-defined on the module, and one can reach every state with the set $\{X_\pm, (X_\pm)^N/[N]!\}$. Secondly, the operator $K$ is not sufficient to specify the weight of a state due to the relation $K^{2N}=1$. Indeed, two states $\vert h; r \rangle$ and $\vert h; r+2N \rangle$ have the same eigenvalue $q^{h_{\mu\nu}+r}$ with respect to the operator $K$. This fact means that we need another operator in addition to $K$ to specify the weight completely. These problems have already appeared in the compact case and Lusztig resolved them by adding generators and redefining $U_q({sl(2,{\Bbb C})})$ [@Lu]. His method is applicable to our non-compact case as well, and so we add new generators
$$L_{1}:= -\frac{(-X_-)^N}{[N]!}, \quad
L_{-1}:= \frac{(X_+)^N}{[N]!}, \quad
L_0:=\frac{1}{2}
\left[ \begin{array}{c} 2H+N-1 \\
N \end{array} \right]_q,
\label{eq:ldef}$$
where $K=q^H$. In order to obtain non-compact representations, Hermitian conjugations are given by the second line of (\[eq:qhermite\]) for the operators $X_\pm, K$ and, therefore, those for the new operators are $$L_{\pm1}^\dagger = -L_{\mp1}, \quad L_0^\dagger = L_0.$$
Now we are ready to investigate the highest weight representations of $U_q(su(1,1))$ with $q$ a root of unity. Let us construct the highest weight module, $V_{\mu,\nu}$, on the highest weight vector $\vert h_{\mu\nu}; 0 \rangle$,
$$V_{\mu,\nu}= \{ \vert h_{\mu\nu}; r\rangle \>\vert \> r\in {\bf Z}_+\}.$$
The highest weight vector is characterized by $$\begin{aligned}
X_-\vert h_{\mu\nu}; 0 \rangle &=&
L_{-1} \vert h_{\mu\nu}; 0 \rangle = 0, \\
K \vert h_{\mu\nu}; 0 \rangle
&=& q^{h_{\mu\nu}} \vert h_{\mu\nu}; 0 \rangle, \quad
L_0 \vert h_{\mu\nu}; 0 \rangle = \ell\, \vert h_{\mu\nu}; 0 \rangle,\end{aligned}$$ where $\ell := \frac{1}{2} \left[ \begin{array}{c}
2h_{\mu\nu} +N -1 \\
N
\end{array} \right]_q $. Although this $q$-binomial coefficient includes a factor $\frac{0}{0}$, we can estimate it by using $[kN] / [N] = (-)^{k-1} k$ and obtain $\ell = (-)^{N\nu+\mu}\frac{1}{2}\nu$. In the discussions of the highest weight module, it is convenient to consider two cases (I) $1 \le \mu \le N-1$ and (II) $\mu = N$ separately, since zero-norm states appear only in the case (I).
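Before turning to the two cases, the $0/0$ limit just used can be checked numerically by moving $q$ slightly off the root of unity; the regularization below is our own device for the check, not the paper's notation.

```python
# [kN]/[N] -> (-1)^(k-1) k as q -> exp(i pi/N); this is the limit used to evaluate l above.
import cmath

def qint(x, q):
    return (q**x - q**(-x)) / (q - 1/q)

N, eps = 5, 1e-6
q = cmath.exp(1j * cmath.pi * (1 + eps) / N)    # slightly off the root of unity
for k in range(1, 6):
    assert abs(qint(k * N, q) / qint(N, q) - (-1)**(k - 1) * k) < 1e-4
```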
\(I) $1\le \mu\le N-1$: $\quad$ The drastic difference from both the classical and generic cases is in the fact that the highest weight module is not itself irreducible, owing to the state $\vert h_{\mu\nu}; \mu \rangle$. By the definition of the parameter $\mu$, this state has zero norm. The appearance of the zero-norm state is indispensable for obtaining a well-defined representation. The state is not only a zero-norm state but also a highest weight state (we will call such a state a null state) due to the relations,
$$X_- \vert h_{\mu\nu}; \mu \rangle = 0, \quad\quad
L_{-1} \vert h_{\mu\nu}; \mu \rangle = 0.$$
The first equation comes from the definition of $\mu$, and the second equation follows from the fact that there is no corresponding state in $V_{\mu,\nu}$ because $\mu \le N-1$. The weight of $\vert h_{\mu\nu}; \mu \rangle$ is $h_{\mu\nu}+\mu = h_{-\mu\nu}$ and therefore the state $\vert h_{\mu\nu}; \mu \rangle$ can be regarded as the highest weight state $\vert h_{-\mu\nu}; 0 \rangle$. Thus the original highest weight module $V_{\mu,\nu}$ has the submodule $V_{-\mu,\nu}$ on the null state $\vert h_{\mu\nu}; \mu \rangle$ and, therefore, the module $V_{\mu,\nu}$ is not irreducible. Further, we can easily show that $V_{-\mu,\nu}$ in turn has the submodule $V_{\mu,\nu+2}$, which has the submodule $V_{-\mu,\nu+2}$, and so on. Finally we obtain the following embedding of the submodules in the original highest weight module $V_{\mu,\nu}$; $$V_{\mu,\nu}\rightarrow V_{-\mu,\nu}\rightarrow V_{\mu,\nu+2}\rightarrow
V_{-\mu,\nu+2}\rightarrow \cdots \rightarrow V_{\mu,\nu+2k}\rightarrow
V_{-\mu,\nu+2k} \rightarrow\cdots$$ An irreducible highest weight module $V_{\mu,\nu}^{irr}$ on the highest weight state $\vert h_{\mu\nu}; 0 \rangle$ is constructed by subtracting all the submodules, and finally we obtain $$V_{\mu,\nu}^{irr} = \sum_{k \in {\bf Z}_+} V_{\mu,\nu}^{(k)},
\label{eq:block}$$ where $V_{\mu,\nu}^{(k)} := V_{\mu,\nu+2k} - V_{-\mu,\nu+2k}$. Notice that all the zero-norm states that lie on the levels from $(kN+\mu)$ to $((k+1)N-1)$ disappear by the subtraction. The remarkable point is that the irreducible highest weight module has a [*block structure*]{}, that is to say, $V_{\mu,\nu}^{irr}$ consists of an infinite series of blocks $V_{\mu,\nu}^{(k)}, \;k=0, 1, \cdots$, and each block has a finite number of states, $\vert h_{\mu\nu}; kN+r \rangle, \; r=0, 1, \cdots, \mu-1$. The operators $X_\pm$ move states in each block according to (\[eq:xpaction\]), (\[eq:xmaction\]) together with the conditions $X_-\vert h_{\mu\nu}; kN \rangle=X_+\vert h_{\mu\nu}; kN+\mu-1 \rangle=0$. On the other hand, the operators $L_{\pm1}$ map the state $\vert h_{\mu\nu}; kN+r \rangle$ to $ \vert h_{\mu\nu}; (k\mp1)N+r \rangle. $
\(II) $\mu =N$: $\quad$ In this case, no null state appears in $V_{N,\nu}$. The module is, therefore, irreducible by itself. The only difference from the classical and generic cases is that the actions of $X_+$ and $X_-$, respectively, on the $(kN-1)$-th state and the $kN$-th state vanish, [*i.e.*]{}, $X_+\vert h_{N,\nu};kN-1 \rangle=0$ due to $[N]=0$ and $X_-\vert h_{N,\nu};kN \rangle=0$ due to $[2h_{N,\nu}+N-1]=0$. Instead, the state $\vert h_{N,\nu};kN \rangle$ can be generated by the operation of $L_{-1}$ on the state $\vert h_{N,\nu};(k-1)N \rangle$. Thus the irreducible highest weight module also has a [*block structure*]{} as in case (I),
$$V_{N,\nu}^{irr} = \sum_{k \in {\bf Z}_+} V_{N,\nu}^{(k)},
\label{eq:blockb}$$
where $V_{N,\nu}^{(k)}$ is the $k$-th block which consists of $\vert h_{N\nu}; kN+r\rangle, \; r=0, 1, \cdots, N-1.$
Now we have obtained irreducible highest weight modules of ${U_q(su(1,1)) }$ when ${q=\exp \pi i /N}$. The crucial point is that, unlike the classical or generic cases, every irreducible module has the block structure (\[eq:block\]) or (\[eq:blockb\]), which can be written as $V_k \otimes V_r$, where $V_k$ is the space which consists of an infinite number of blocks and $V_r$ stands for a block having a finite number of states $\vert h_{\mu\nu} ; kN+r\rangle, \; r=0 \sim \mu-1$. It will turn out that the block structure is the very origin of the novel features of ${U_q(su(1,1)) }$ at a root of unity. Furthermore we should notice that the irreducible module ${V_{\mu, \nu}^{irr}}$ is not necessarily unitary, because $[ x ]$ for a positive integer $x$ is not always positive.
Finally we present the character formula of the representation. $$\begin{aligned}
\chi_{\mu\nu}(x) &:=& {\rm Tr}_{{V_{\mu, \nu}^{irr}}} x^H = \sum_{k=0}^\infty
\left( \frac{x^{h_{\mu\,\nu+2k}}}{1-x} -
\frac{x^{h_{-\mu\,\nu+2k}}}{1-x}\right) \nonumber \\
&=&\frac{x^{h_{\mu\,\nu}}}{1-x} \frac{1 - x^\mu}{1 - x^N}.
\label{eq:character}\end{aligned}$$ This holds for $1\le\mu\le N$. In the $\mu=N$ case, the character $\chi_{N\nu}(x)$ is the same as that of classical highest weight representation of $su(1,1)$.
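The character formula (\[eq:character\]) can also be verified directly by summing $x^{\rm weight}$ over the states $\vert h_{\mu\nu}; kN+r\rangle$, $r=0,\cdots,\mu-1$, of the irreducible module; a small numerical sketch (the cutoff and variable names are ours):

```python
# Brute-force trace over the block structure versus the closed form of eq. (character).

def char_brute(N, mu, nu, x, kmax=2000):
    h = 0.5 * (N * nu - mu + 1)
    return sum(x**(h + k*N + r) for k in range(kmax) for r in range(mu))

def char_closed(N, mu, nu, x):
    h = 0.5 * (N * nu - mu + 1)
    return x**h / (1 - x) * (1 - x**mu) / (1 - x**N)

x = 0.73                                          # any 0 < x < 1 gives a convergent trace
for (N, mu, nu) in [(3, 1, 1), (3, 2, 2), (5, 3, 1), (5, 5, 2)]:
    assert abs(char_brute(N, mu, nu, x) - char_closed(N, mu, nu, x)) < 1e-8
```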
Structure of $U_q(su(1,1))$ with ${q=\exp \pi i /N}$
====================================================
The aim of this chapter is to elaborate on the irreducible highest weight module $V_{\mu,\nu}^{irr}, \; \mu=1, 2, \cdots N$, and its attendant block structure. It is convenient to write the level of a state in $V_{\mu,\nu}^{irr}$ as $kN+r$ with $r=0, 1, \cdots, \mu-1$ rather than $r = 0, 1, \cdots$, and we always adopt this notation hereafter.
Main Theorem
------------
In this section we prove the following theorem which states the structure of the irreducible highest weight module of ${U_q(su(1,1)) }$.
[**Theorem 3.1**]{}$\quad$ When ${q=\exp \pi i /N}$, the irreducible highest weight $U_q(su(1,1))$-module is isomorphic to a tensor product of two spaces as follows; $$V_{\mu,\nu}^{irr} \simeq V_\zeta^{cl} \otimes \mho_j,$$ where $\zeta=\frac{1}{2}\nu$ and $j=\frac{1}{2}(\mu-1)$. These two spaces $V_\zeta^{cl}$ and $\mho_j$ are understood as follows;
\(1) if ${V_{\mu, \nu}^{irr}}$ is unitary (see proposition 3.2 below), then\
$V^{cl}_\zeta$ : unitary irreducible infinite dimensional $su(1,1)$-module\
$\mho_j$ : unitary irreducible finite dimensional $U_q(su(2))$-module.
\(2) if ${V_{\mu, \nu}^{irr}}$ is not unitary, there are three cases as follows;\
(2-1) $\nu\,\in 2{\Bbb N}+1$ and $\nu N+\mu\,\in 2{\Bbb N}+1$\
$V^{cl}_\zeta$ : non-unitary irreducible infinite dimensional $su(2)$-module\
$\mho_j$ : unitary irreducible $U_q(su(2))$-module.\
(2-2) $\nu\,\in 2{\Bbb N} $ and $\nu N+\mu\,\in 2{\Bbb N} $\
$V^{cl}_\zeta$ : unitary irreducible infinite dimensional $su(1,1)$-module\
$\mho_j$ : non-unitary irreducible finite dimensional $U_q(su(1,1))$-module.\
(2-3) $\nu\,\in 2{\Bbb N} $ and $\nu N+\mu\,\in 2{\Bbb N}+1$\
$V^{cl}_\zeta$ : non-unitary irreducible infinite dimensional $su(2)$-module\
$\mho_j$ : non-unitary irreducible finite dimensional $U_q(su(1,1))$-module.
To prove this theorem we first find an isomorphism $\rho$ between ${V_{\mu, \nu}^{irr}}$ and $V_k\otimes V_r$, and then discuss what $V_k$ and $V_r$ are. (For the time being, we will denote the two spaces $V^{cl}_\zeta$ and $\mho_j$ in the theorem respectively by $V_k$ and $V_r$ in accordance with the previous chapter.) We begin with the observation that the operator $(X_\pm)^{kN+r}/[kN+r]!$ can be rewritten in terms of $X_\pm$ and $L_{\pm1}$. By the straightforward calculation, one finds
$$\frac{X_\pm^{kN+r}}{[kN+r]!} = (-)^{\frac{1}{2} k(k-1)N+kr}
\frac{X_\pm^r}{[r]!}\frac{(L_{\mp1})^k}{k!}.$$
Furthermore one should notice that the generators $L_n, \;n=0, \pm1$, (anti-)commute with the generators $X_\pm, K$ on the module $V_{\mu, \nu}^{irr}$. That is, for $ \vert \psi \rangle \in V_{\mu,\nu}^{irr}$, $$[L_n, X_\pm]\vert \psi \rangle =0,
\quad K^{-1} L_{\pm1} K\vert \psi \rangle =-L_{\pm1} \vert \psi \rangle,
\quad\quad n=0, \pm1.$$ These equations mean that up to a sign we can reach the $(kN+r)$-th state by letting $L_{-1}^k/ k!$ and $X_+^r/[r]!$ operate separately on the highest weight state. These facts indicate that there exists a map $\rho\; : \;
{V_{\mu, \nu}^{irr}}\rightarrow V_k \otimes V_r$ and the map $\rho$ induces another map $\hat \rho\; : \; U_q(su(1,1)) \rightarrow U_k \otimes U_r$ where $U_r$ and $U_k$ are the universal enveloping algebras generated by $X_\pm, K$ and $L_n, \; n=0, \pm1$, respectively. In order to define the maps $\rho$ and $\hat\rho$, we observe the actions of $X_{\pm}, K$ and $L_n$ on $V_{\mu, \nu}^{irr}$. The actions of $U_r$ are generated by $$\begin{aligned}
X_+ {\vert h_{\mu\nu} ; kN+r \rangle}&=& (-)^k [r+1] {\vert h_{\mu\nu} ; kN+r+1 \rangle}, \label{eq:qxpaction}\\
X_- {\vert h_{\mu\nu} ; kN+r \rangle}&=& -(-)^{\nu+1}(-)^k [\mu - r] {\vert h_{\mu\nu} ; kN+r-1 \rangle}, \label{eq:qxmaction}\\
K {\vert h_{\mu\nu} ; kN+r \rangle}&=& (-)^{\frac{\nu}{2} +k} q^{-j+r} {\vert h_{\mu\nu} ; kN+r \rangle}, \label{eq:qkaction}\end{aligned}$$ and as far as the operators $L_n$ are concerned, it is sufficient to consider the actions on ${\vert h_{\mu\nu} ; kN \rangle}$. They are as follows;
$$\begin{aligned}
L_1 {\vert h_{\mu\nu} ; kN \rangle}&=& (-)^{kN} (k+1) {\vert h_{\mu\nu} ; (k+1)N \rangle}, \label{eq:lpaction}\\
L_{-1} {\vert h_{\mu\nu} ; kN \rangle}&=& -(-)^{\nu N + \mu}(-)^{kN} (\nu+k-1) {\vert h_{\mu\nu} ; (k-1)N \rangle}, \label{eq:lmaction}\\
L_0 {\vert h_{\mu\nu} ; kN \rangle}&=& (-)^{\nu N+\mu} \left( \frac{1}{2} \nu +k \right) {\vert h_{\mu\nu} ; kN \rangle}.
\label{eq:lzaction}\end{aligned}$$
The map $\rho$ can be defined as follows,
$$\rho ({\vert h_{\mu\nu} ; kN+r \rangle}) = (-)^{\frac{1}{2}k(k-1)N+kr}{\vert \zeta ; k\rangle \otimes \vert j ; -j+r \rangle}, \label{eq:defiso}$$
where $\zeta=\frac{1}{2}\nu$ and $j=\frac{1}{2}(\mu-1)$. This is an isomorphism between ${V_{\mu, \nu}^{irr}}$ and $V_k \otimes V_r$. From (\[eq:qxpaction\])-(\[eq:lzaction\]) together with (\[eq:defiso\]) we obtain the map $\hat\rho$: ${U_q(su(1,1)) }\rightarrow U_k\otimes U_r$, such that $ \rho ({\cal O} \vert\psi\rangle)
= \hat\rho({\cal O})\rho(\vert\psi\rangle)$ for $\vert\psi \rangle
\in{V_{\mu, \nu}^{irr}}, \; {\cal O}\in \{X_\pm, K, L_{0, \pm1}\}$. Explicitly we find $$\begin{aligned}
&& \begin{array}{l} \hat\rho(X_+) = {\bf 1} \otimes (-)^{\nu+1} J_+, \\
\quad\quad\quad\quad\quad \mbox{where}\quad
J_+ {\vert j ; -j+r\rangle}= (-)^{\nu+1} [r+1]{\vert j ; -j+r+1\rangle}, \end{array} \label{eq:defxp} \\
&& \begin{array}{l} \hat\rho(X_-) = {\bf 1} \otimes (-J_-), \\
\quad\quad\quad\quad\quad
\mbox{where}\quad J_-{\vert j ; -j+r\rangle}= (-)^{\nu+1}[2j-r+1]{\vert j ; -j+r-1\rangle}, \end{array}
\label{eq:defxm}\\
&& \begin{array}{l} \hat\rho(K) = (-)^{L_0}{\bf 1} \otimes {{\cal K}}, \\
\quad\quad\quad\quad\quad \mbox{where}\quad {{\cal K}}{\vert j ; -j+r\rangle}= q^{-j+r}{\vert j ; -j+r\rangle}\end{array}
\label{eq:defk}\\
&& \begin{array}{l} \hat\rho(L_1) = (-G_1)
\otimes {\bf 1}, \\
\quad\quad\quad\quad\quad \mbox{where}\quad
G_1{\vert \zeta ; k\rangle}= (-)^{\nu N+\mu} (2\zeta+k-1){\vert \zeta ; k-1\rangle}, \end{array} \label{eq:deflp}\\
&& \begin{array}{l} \hat\rho(L_{-1}) = (-)^{\nu N + \mu} G_{-1}
\otimes {\bf 1}, \\
\quad\quad\quad\quad\quad \mbox{where}\quad
G_{-1} {\vert \zeta ; k\rangle}= (-)^{\nu N + \mu}(k+1) {\vert \zeta ; k+1\rangle}, \end{array}
\label{eq:deflm}\\
&& \begin{array}{l} \hat\rho(L_0) = (-)^{\nu N + \mu}G_0 \otimes {\bf 1}, \\
\quad\quad\quad\quad\quad \mbox{where}\quad G_0{\vert \zeta ; k\rangle}= (\zeta +k){\vert \zeta ; k\rangle},
\end{array} \label{eq:deflz}\end{aligned}$$ Here we have included some sign factors $(-)^{\nu+1}, \; (-)^{\nu N+\mu}$ for later convenience. The next step to complete the proof of the theorem is to examine in detail the modules $V_k$ and $V_r$ which are spanned by ${\vert \zeta ; k\rangle}$ and ${\vert j ; -j+r\rangle}$, respectively. We calculate the commutation relations among $J_+, J_-, {{\cal K}}$ and among $G_1, G_{-1}, G_0$. As for $V_k$ and $U_k$, one finds from the equations (\[eq:deflp\])-(\[eq:deflz\]) that, $$[G_n, G_m] = (n-m) G_{n+m}, \quad\quad\quad n,m=0, \pm1.$$ They are just the relations in (\[eq:another\]) of the classical ${sl(2,{\Bbb C})}$ and we set the Hermitian conjugations as $$G_{\pm1}^\dagger= (-)^{\nu N + \mu} G_{\mp1}, \quad\quad G_0^\dagger = G_0.
\label{eq:ghermite}$$ Upon using the conjugations, the norm of the state $\vert \zeta ; k\rangle
:=((-)^{\nu N + \mu}G_{-1})^k \vert \zeta; 0 \rangle / k!$ is $$\parallel \vert\zeta ; k \rangle \parallel^2 =
((-)^{\nu N+\mu})^k \left( \begin{array}{c} 2\zeta +k-1 \\ k
\end{array} \right). \label{eq:gnorm}$$ The Hermitian conjugations and the signature of the norm depend on the value of $\nu N+\mu$. When $\nu N+\mu$ is even, eqs.(\[eq:ghermite\],\[eq:gnorm\]) say $U_k$ is the classical $su(1,1)$ and $V_k$ is the unitary infinite dimensional $su(1,1)$-module. On the other hand, when $\nu N+\mu$ is odd, $U_k$ is the classical $su(2)$ and $V_k$ is a non-unitary infinite dimensional $su(2)$-module. Hereafter we write $V_k$ as $V_\zeta^{cl}$.
We turn to $U_r$ and $V_r$. Let us define $\mho_j$ to be the finite dimensional module by rewriting the level $r=0,1,\cdots,2j$ in $V_r$ by means of $m=-j+r$, that is $$\mho_j= \{\vert j; m \rangle \;\vert\; \vert j; m\rangle :=
\frac{((-)^{\nu+1}J_-)^{j-m}}{[j-m]!}\vert j; j\rangle, \quad m=-j,
\cdots, j \} \label{eq:defjmodule}$$ Instead of (\[eq:defxp\])-(\[eq:defk\]), the actions of $J_\pm,\, {{\cal K}}$ on the module $\mho_j$ are written as $$J_\pm \vert j; m\rangle =(-)^{\nu+1}[j\pm m+1] \vert j; m\pm 1\rangle,
\quad {{\cal K}}\vert j; m\rangle =q^m \vert j; m\rangle
\label{eq:jaction}$$ with $J_+\vert j; j\rangle=J_- \vert j; -j\rangle =0$. Upon using (\[eq:jaction\]), one finds the relations among $J_\pm, {{\cal K}}$ to be $$[J_+, J_-] = \frac{{{\cal K}}^2 - {{\cal K}}^{-2}}{q-q^{-1}}, \quad
{{\cal K}}J_\pm = q^{\pm1}J_\pm {{\cal K}},
\label{eq:jrelation}$$ and Hermitian conjugations are defined by
$$J_\pm^\dagger = (-)^{\nu+1} J_\mp, \quad\quad {{\cal K}}^\dagger ={{\cal K}}^{-1}.
\label{eq:jhermite}$$
With the Hermitian conjugations and the definition of $\vert j; m\rangle$ given in (\[eq:defjmodule\]), the norm of the state $\vert j ; m \rangle$ can be calculated as $$\parallel\vert j; m \rangle \parallel^2 = ((-)^{\nu+1})^{j-m}
\left[ \begin{array}{c} 2j \\ j-m \end{array} \right]_q
\label{eq:jnorm}$$ Notice that the $q$-binomial coefficient in eq.(\[eq:jnorm\]) is always positive because $[x]=\frac{\sin (\pi x/N)}{\sin (\pi /N)}>0$ for $x<N$. Relation (\[eq:jrelation\]) and the conjugation (\[eq:jhermite\]) together with the norm (\[eq:jnorm\]) say that when $\nu+1$ is even, $U_r$ is $U_q(su(2))$ and $\mho_j$ is a $(2j+1)$-dimensional unitary $U_q(su(2))$-module, whereas when $\nu +1$ is odd $U_r$ is $U_q(su(1,1))$ and $\mho_j$ is a $(2j+1)$-dimensional non-unitary $U_q(su(1,1))$-module.
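It is straightforward to verify numerically that the actions (\[eq:jaction\]) indeed close into the relation (\[eq:jrelation\]): on every weight vector one has $[J_+,J_-]\vert j;m\rangle=[2m]\vert j;m\rangle$, the sign $(-)^{\nu+1}$ squaring away. A short check (our own sketch):

```python
# [J_+, J_-] |j;m> = [2m] |j;m> = (K^2 - K^{-2})/(q - q^{-1}) |j;m> from eq. (jaction).
import cmath

def qint(x, q):
    return (q**x - q**(-x)) / (q - 1/q)

def commutator_eigenvalue(j, m, q):
    """<j;m|[J_+,J_-]|j;m>; the factors [0] = 0 take care of the edges m = +-j."""
    return qint(j + m, q)*qint(j - m + 1, q) - qint(j + m + 1, q)*qint(j - m, q)

N = 7
q = cmath.exp(1j * cmath.pi / N)
for two_j in range(N - 1):                 # j = 0, 1/2, ..., (N-2)/2
    j = two_j / 2
    for r in range(two_j + 1):
        m = -j + r
        assert abs(commutator_eigenvalue(j, m, q) - qint(2*m, q)) < 1e-12
```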
Now we have understood the two spaces $V_\zeta^{cl}$ and $\mho_j$ together with the unitarity conditions of them. We then wish to ask how the unitarity of the original module ${V_{\mu, \nu}^{irr}}$ relates to the unitarity of $V_\zeta^{cl}$ and $\mho_j$. It is easy to answer the question by noticing that the norm of the state $\vert h_{\mu\nu} ; kN+r \rangle$ is written as
$$\parallel \vert h_{\mu\nu} ; kN+r \rangle \parallel^2 =\,
\parallel \vert j; m \rangle \parallel^2 \cdot
\parallel \vert \zeta ; k \rangle \parallel^2.$$
Therefore ${V_{\mu, \nu}^{irr}}$ is unitary if and only if both $V_\zeta^{cl}$ and $\mho_j$ are unitary and we have shown the following proposition:
[**Proposition 3.2**]{}$\quad$ The irreducible highest weight module ${V_{\mu, \nu}^{irr}}$ is unitary if and only if $$\nu \in 2{\Bbb N}-1, \quad \mbox{and}\quad
\mu\in \left\{ \begin{array}{cl} 2{\Bbb N}-1 &
\mbox{if}\quad N\in 2{\Bbb N}-1 \\
2{\Bbb N} & \mbox{if}\quad N\in 2{\Bbb N}
\end{array} \right.$$
Now the proof of Theorem 3.1 has been completed.
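Proposition 3.2 can also be confirmed by brute force: computing the factorized norms (\[eq:gnorm\]) and (\[eq:jnorm\]) for a range of $(N,\mu,\nu)$ and checking positivity against the stated parity condition. A sketch (the finite cutoff in $k$ is ours):

```python
# "all norms positive" (eqs. (gnorm), (jnorm)) versus the parity condition of Proposition 3.2.
import cmath
from math import comb

def qint(x, q):
    return (q**x - q**(-x)) / (q - 1/q)

def all_norms_positive(N, mu, nu, kmax=6):
    q = cmath.exp(1j * cmath.pi / N)
    j = (mu - 1) / 2
    ok = True
    for k in range(kmax):                      # sl(2,R)-sector norms, eq. (gnorm), with 2*zeta = nu
        ok &= ((-1)**(nu*N + mu))**k * comb(nu + k - 1, k) > 0
    for r in range(mu):                        # U_q-sector norms, eq. (jnorm), with r = j - m
        qb = 1.0
        for s in range(1, r + 1):              # [2j choose r]_q as a product
            qb *= (qint(2*j - s + 1, q) / qint(s, q)).real
        ok &= ((-1)**(nu + 1))**r * qb > 0
    return ok

for N in (2, 3, 4, 5, 6):
    for mu in range(1, N + 1):
        for nu in range(1, 7):
            assert all_norms_positive(N, mu, nu) == ((nu % 2 == 1) and (mu % 2 == N % 2))
```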
We end this section by presenting the irreducible decomposition of the completely reducible ${U_q(su(1,1)) }$-module ${\bf V}^{su_q(1,1)}$. Basically ${\bf V}^{su_q(1,1)}$ is a direct sum of ${V_{\mu, \nu}^{irr}}\simeq V_\zeta^{cl}
\otimes \mho_j$, and the values $\zeta$ and $j$ run over $\zeta\in{\Bbb N}/2$ and $j\in\{0, \frac{1}{2}, 1, \frac{3}{2}, \cdots, \frac{N-1}{2}\}$. However the maximal value of $j,\;i.e.,\: j=(N-1)/2$ is problematic as in the ${U_q(su(2)) }$ case. In ${U_q(su(2)) }$ with $q=\exp\pi i/N$, the highest weight state is restricted so that $\vert j; j \rangle \in {\rm Ker}\,J_+ / {\rm Im}\,J_+^{N-1}$ [@PS]. Hence the value $j=(N-1)/2$ is excluded. The quantum dimension defined by ${\rm Tr}_{R_j} {{\cal K}}^2$ for the highest weight representation $R_j$ of ${U_q(su(2)) }$ is zero for $R_{(N-1)/2}$. In our case, using the character formula (\[eq:character\]), the quantum dimension is calculated as follows: $$d_q:={\rm Tr}_{{V_{\mu, \nu}^{irr}}}K^2 = \chi_\zeta\cdot\chi_j,$$ where $$\chi_\zeta= \frac{q^{2N\zeta}}{1-q^{2N}}, \quad
\chi_j = \frac{q^{2j+1}-q^{-2j-1}}{q-q^{-1}}.$$ When $j=(N-1)/2$, $\chi_j=0$ as in the ${U_q(su(2)) }$ case, but $d_q$ is finite. In contrast, when $j\le (N-2)/2$, $\chi_j = [2j+1]$ and the quantum dimension $d_q$ diverges as expected. Therefore we have shown that $${\bf V}^{su_q(1,1)} = \left(\bigoplus_{\zeta\in{\bf N}/2}V_\zeta^{cl}\right)
\otimes \left(\bigoplus_{j\in{\cal A}}\mho_j\right),$$ where ${\cal A}=\{0, \frac{1}{2}, 1, \frac{3}{2}, \cdots, \frac{N-2}{2}\}$.
Clebsch-Gordan Decomposition
----------------------------
Let us study the Clebsch-Gordan (CG) decomposition for the tensor product of two irreducible highest weight representations of ${U_q(su(1,1)) }$, $$V_1 \otimes V_2\;\longrightarrow\; V_3,$$ where $V_i:=V_{\mu_i, \nu_i}^{irr}\simeq V_{\zeta_i}^{cl}\otimes \mho_{j_i}$. The quantum CG coefficient, known as the $q$-$3j$ symbol, for ${U_q(su(1,1)) }$ has been obtained in Refs.[@LK; @Ai] when $q$ is generic. In this section, in order to derive the CG decomposition rule for ${U_q(su(1,1)) }$ with ${q=\exp \pi i /N}$ we will make full use of the result obtained by Liskova and Kirillov [@LK]. The CG coefficient they have given is, $$\begin{aligned}
&&\left[ \begin{array}{ccc} h_1 & h_2 & h_3 \\
M_1 & M_2 & M_3 \end{array}\right]^{su_q(1,1)}_q
=C(q)\delta_{M_1+M_2,\,M_3}\tilde{\Delta}(h_1, h_2, h_3) \nonumber \\
&&\times \left\{
\frac{[2j-1][M_3-h_3]![M_1-h_1]![M_2-h_2]![M_1+h_1-1]![M_2+h_2-1]!}
{[M_3+h_3-1]!}\right\}^{1/2} \nonumber \\
&&\times\sum_{R\ge 0}(-)^R q^{\frac{R}{2}(M_3+h_3-1)}
\frac{1}{[R]![M_3-h_3-R]![M_1-h_1-R]![M_1+h_1-R-1]!}
\nonumber\\
&&\quad\cdot\frac{1}{[h_3-h_1-M_1+R]![h_3+h_2-M_1+R-1]!},\end{aligned}$$ where $C(q)$ is a factor which is not important for our analysis below and $$\begin{aligned}
&{}&\tilde{\Delta}(h_1, h_2, h_3) \\
&{}&\quad =\{[h_3-h_1-h_2]![h_3-h_1+h_2-1]!
[h_3+h_1-h_2-1]![h_1+h_2+h_3-2]!\}^{1/2}. \nonumber\end{aligned}$$ The notations $h_i=h_{\mu_i\,\nu_i}=\zeta_i N-j_i, \;
M_i=h_i+k_iN+r_i=(\zeta_i+k_i)N-m_i$ are used. Of course for our case, $i.e.$, ${q=\exp \pi i /N}$, the CG coefficient is not necessarily well-defined due to the factor $[N]=0$. What we will do is count the factors of $[N]$ and look for the condition under which the numbers of $[N]$ in the numerator and in the denominator are equal. A lengthy examination yields the following result: Finite CG coefficients exist if and only if $$\begin{array}{l}
\zeta_1 + \zeta_2 \le \zeta_3, \\
{}\\
\vert j_1 - j_2 \vert -1 < j_3 \le {\rm min}\,(j_1+j_2, N-2-j_1-j_2).
\end{array}$$ It should be noticed that, on the module ${V_{\mu, \nu}^{irr}}$, the coproducts of the operators $K$ and $L_0$ are $\Delta(K)=K\otimes K$ and $\Delta(L_0)=L_0\otimes 1 +
1\otimes L_0$. The coproduct of the Cartan operator yields the conservation law of the highest weights, physically speaking, the conservation of the spin or angular momentum along the $z$-axis. Now, from the above coproducts, we have the conservation laws $(\zeta_1+k_1) + (\zeta_2+k_2) = (\zeta_3+k_3)$ and $m_1 + m_2 = m_3$ (mod $N$). Therefore the minimum value of $j_3$ is just $\vert j_1-j_2 \vert$ because the difference between $\vert j_1-j_2 \vert$ and $j_3$ is always an integer. We then obtain the following decomposition rule for the tensor product of two representations of ${U_q(su(1,1)) }$, $$\left( V_{\zeta_1}^{cl}\otimes \mho_{j_1}\right) \otimes
\left( V_{\zeta_2}^{cl}\otimes \mho_{j_2}\right)
= \left( \bigoplus_{\zeta_1+\zeta_2\le\zeta_3}V_{\zeta_3}^{cl}\right)
\otimes \left(\bigoplus_{j_3=\vert j_1-j_2\vert}^{{\rm min}\,
\{j_1+j_2, N-2-j_1-j_2\}}\mho_{j_3}\right).$$ The decomposition rules for the tensor products of $V_\zeta^{cl}$ and of $\mho_j$ are the same as those for the tensor products of the non-compact representations of ${sl(2,{\Bbb C})}$ and of the compact representations of $U_q({sl(2,{\Bbb C})})$, respectively. However, taking the unitarity conditions for $V_\zeta^{cl}$ and $\mho_j$ and the conservation laws of the highest weights into account, we find that the classical modules $V_{\zeta_i}^{cl},\,(i=1,2,3)$ cannot all be simultaneously unitary, and similarly for the modules $\mho_{j_i}, \, (i=1,2,3)$.
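For concreteness, the selection rule above can be spelled out by listing the pairs $(\zeta_3, j_3)$ allowed for given inputs; the little sketch below does this (the helper name and the finite cutoff on $\zeta_3$ are ours).

```python
# Enumerate the (zeta_3, j_3) allowed by the decomposition rule at level N.
from fractions import Fraction as F

def allowed_targets(N, zeta1, j1, zeta2, j2, zeta_max):
    zetas = [z / F(2) for z in range(1, 2*zeta_max + 1) if z / F(2) >= zeta1 + zeta2]
    jmin, jmax = abs(j1 - j2), min(j1 + j2, F(N - 2) - j1 - j2)
    js = [jmin + k for k in range(int(jmax - jmin) + 1)] if jmax >= jmin else []
    return [(z, j) for z in zetas for j in js]

# Example: N = 5, (zeta_1, j_1) = (1/2, 1), (zeta_2, j_2) = (1, 1/2):
# j_3 = 1/2, 3/2 (since min(3/2, 5-2-3/2) = 3/2) and zeta_3 = 3/2, 2, 5/2, ... (cut off at 3 here).
for target in allowed_targets(5, F(1, 2), F(1), F(1), F(1, 2), zeta_max=3):
    print(target)
```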
Representation of $SL(2,{\bf R}) \otimes U_q(su(2))$
-----------------------------------------------------
Throughout this section we suppose that the ${U_q(su(1,1)) }$-module ${V_{\mu, \nu}^{irr}}$ is unitary. Namely, the integers $\mu$ and $\nu$ take their values in accordance with proposition 3.2. Now we have found the classical sector ${sl(2,{\Bbb R})}\simeq su(1,1)$ in $U_q(su(1,1))$ and, therefore, we can construct a representation of ${SL(2,{\Bbb R})}\otimes U_q(su(2))$ via the representations we have obtained.
The basic strategy we will follow is to represent the ${SL(2,{\Bbb R})}$ sector, roughly speaking, as follows: Let $G$ be a semi-simple Lie group, and $T$ a maximal torus. The homogeneous space $G/T$ has a complex homogeneous structure, $i.e.$, the group $G$ acts on $G/T$ by means of holomorphic transformations. Then we can interpret the unitary irreducible representations of $G$ as spaces of holomorphic sections of holomorphic line bundles over $G/T$. In our case, the group is $G={SL(2,{\Bbb R})}$ and $T=U(1)$. The homogeneous space $D={SL(2,{\Bbb R})}/U(1)$ can be identified with the complex upper half plane or alternatively with the Poincaré disk $\vert w \vert < 1$. Now we wish to obtain a representation not only of ${SL(2,{\Bbb R})}$ but also of $U_q(su(2))$, that is to say, the holomorphic sections also form representations of the quantum algebra $U_q(su(2))$. This means that each section should carry an index with respect to $U_q(su(2))$ as well.
Let $L$ be a line bundle over ${SL(2,{\Bbb R})}/U(1)$, and define $L_j:=L\otimes \mho_j$. We then construct $\Psi_{j,m}^\zeta\in L_j$, that is, $\Psi_{j,m}^\zeta$ is a holomorphic section of $L$ and also a vector in $\mho_j$. Let ${\cal H}$ be the Hilbert space spanned by ${\vert \zeta ; k\rangle}$. Given an element $\langle \Psi \vert \in {\cal H}^\dagger$, we define the section by $$\Psi_{j,m}^\zeta(w):=\langle \Psi \vert \sum_{k=0}^\infty w^k
\vert \zeta; k \rangle
\otimes \vert j; m \rangle.$$ Under the action of ${SL(2,{\Bbb R})}$, $w\,\rightarrow\, w'= (aw+b)/(cw+d)$, ${\Psi^\zeta_{j,m}(w)}$ transforms as $${\Psi^\zeta_{j,m}(w)}\;\longrightarrow\; {\Psi'}^\zeta_{j,m}(w')=
\left(\frac{1}{cw+d}\right)^{2\zeta}
\Psi^\zeta_{j,m}\left( \frac{aw+b}{cw+d} \right) \label{eq:sectrans}$$ Note that the action of $t\in U(1)$ is $t\cdot{\Psi^\zeta_{j,m}(w)}=\pi(t){\Psi^\zeta_{j,m}(w)}$ with $\pi(t)\in {\Bbb C}$. It is well known that, on $L$, we can pick a hermitian metric $(f, g)_\zeta= e^\eta f^*\cdot g$ by choosing the symplectic (Kähler) potential $\eta=2\zeta\log(1-\vert w\vert^2)$, $i.e.$, the curvature form on $L$ is given by $\omega=-i \bar{\partial}\partial \eta(w^*,w)$. With this hermitian metric, the inner product on the space of sections of $L$ is given by $$\langle f, g \rangle_\zeta = \int_{\vert w\vert < 1}
d\mu\,e^{\eta(w^*,w)} f^*\cdot g,$$ where $d\mu =\frac{2\zeta - 1}{\pi}\, dw^*\, dw\,(1-\vert w\vert^2)^{-2}$ is the ${SL(2,{\Bbb R})}$ invariant measure. Hence the inner product on $L_j$ is $$\begin{aligned}
&{}&\langle \Psi^\zeta_{j,m}, \Psi^\zeta_{j,m'} \rangle_\zeta \nonumber \\
&{}&=\delta_{m,m'}\,
\left[\begin{array}{c} 2j \\ j-m \end{array}\right]_q
\frac{2\zeta -1}{\pi} \int_{\vert w\vert < 1} dw^* dw
(1-\vert w \vert^2)^{2\zeta-2}\psi_\zeta^*(w)\cdot \psi_\zeta(w),\end{aligned}$$ where $\psi^\zeta(w)=\langle\Psi\vert \sum_{k=0}^\infty w^k {\vert \zeta ; k\rangle}$. It is worthwhile to notice that, from equation (\[eq:sectrans\]), the group ${SL(2,{\Bbb R})}$ now acts in a single-valued way because $2\zeta\in{\Bbb N}$. In the classical representation theory, the condition that the ${SL(2,{\Bbb R})}$-spin $\zeta$ should be a half-integer stems from the requirement of the group action to be single-valued, while in the representation theory of $U_q(su(1,1))$ at a root of unity, the condition originates from the requirement that all states have definite norms. The action of the Lie algebra ${sl(2,{\Bbb R})}$ on ${\Psi^\zeta_{j,m}(w)}$ is obtained upon defining the action of $g\in{sl(2,{\Bbb R})}$ on the sections of $L$ as $g\cdot\psi^\zeta(w)=\langle\Psi\vert \sum_{k=0}^\infty w^k g{\vert \zeta ; k\rangle}$. By using the $G_n$ actions (\[eq:deflp\])-(\[eq:deflz\]), we obtain
$$\hat G_n {\Psi^\zeta_{j,m}(w)}= \left( w^{n+1} \partial_w + \zeta(n+1)w^n \right){\Psi^\zeta_{j,m}(w)},
\quad\quad n= 0, \pm1, \label{eq:hatgaction}$$
where we have denoted the ${sl(2,{\Bbb R})}$-actions on the space of holomorphic vectors as $\hat G_n$. The right hand side of (\[eq:hatgaction\]) spans infinitesimal transformations of ${SL(2,{\Bbb R})}$, $w\rightarrow w+\epsilon(w)$ with $\epsilon(w)=\alpha w^2+\beta w +\gamma$.
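Returning to the inner product $\langle\,,\,\rangle_\zeta$ above, one quick consistency check is that, reading $dw^*dw$ as the Euclidean area element, the monomials $w^k$ satisfy $\langle w^k, w^k\rangle_\zeta = 1\big/\left(\begin{array}{c}2\zeta+k-1\\ k\end{array}\right)$, $i.e.$, the reciprocal of the norm (\[eq:gnorm\]) of $\vert\zeta;k\rangle$, as one expects for such a coherent-state measure. A numerical sketch (plain quadrature, our own choice):

```python
# (2 zeta - 1)/pi * Integral_{|w|<1} |w|^{2k} (1-|w|^2)^{2 zeta - 2} d^2w  =  1 / C(2 zeta + k - 1, k)
from math import comb

def monomial_norm(zeta, k, nsteps=100000):
    # after the angular integration and t = |w|^2:  (2 zeta - 1) * Integral_0^1 t^k (1-t)^(2 zeta - 2) dt
    total = sum(((i + 0.5)/nsteps)**k * (1 - (i + 0.5)/nsteps)**(2*zeta - 2) for i in range(nsteps))
    return (2*zeta - 1) * total / nsteps

for zeta in (1, 2, 3):
    for k in range(5):
        assert abs(monomial_norm(zeta, k) - 1/comb(2*zeta + k - 1, k)) < 1e-4
```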
On the other hand, the holomorphic vector $\Psi^\zeta_{j,m}$ transforms under ${U_q(su(2)) }$ as follows; $$\begin{array}{l}
J_\pm \Psi_{j,m}^\zeta(w) =
[j \pm m + 1] \Psi_{j,m\pm 1}^\zeta(w), \\
{}\\
{\cal K} \Psi_{j,m}^\zeta(w) = q^m \Psi_{j,m}^\zeta(w),
\end{array}$$ with $ J_+\Psi_{j,j}^\zeta(w) = J_-\Psi_{j,-j}^\zeta(w) = 0$. In the above we have given only the highest weight representations for the ${U_q(su(2)) }$ sector. However, it is possible to represent this sector in other ways which are more useful for physical applications. In particular, the construction given in [@GS; @RRR] of the representation space of ${U_q(su(2)) }$ is important for further investigations, especially in connection with 2$D$ conformal field theories.
Let us turn our attention to another remarkable feature of ${U_q(su(1,1)) }$ at a root of unity, that is, the connection with generalized supersymmetry. Of course we can easily guess this connection from (\[eq:ldef\]) together with the fact that $L_{0,\pm1}$ generate ${sl(2,{\Bbb R})}$ when the representation is unitary. In the following we will explicitly show that the operators $L_{0,\pm1}$ in (\[eq:ldef\]) can be written as the infinitesimal transformations of the Poincaré disk and therefore the operators $X_\pm$ can be interpreted as the $N$-th roots of the transformations. Our discussion proceeds as follows. First of all, let us find suitable functions on the disk so that the operators $L_{0,\pm1}$ act on them as the infinitesimal transformations. In the previous part of this section, we constructed the holomorphic vector ${\Psi^\zeta_{j,m}(w)}$ by summing ${\vert \zeta ; k\rangle}\otimes{\vert j ; m\rangle}$ over the level $k$ in the ${sl(2,{\Bbb R})}$ sector. Noticing that on ${\Psi^\zeta_{j,m}(w)}$, $\hat G_n$ rather than $L_n$ played the roles of such transformations, we should change our standpoint and obtain other holomorphic vectors on which $L_n$ naturally act. Therefore, in this case, we have to construct the vectors by means of the original states ${\vert h_{\mu\nu} ; kN+r \rangle}\in V_{\mu,\nu}$ instead of ${\vert \zeta ; k\rangle}\otimes{\vert j ; m\rangle}$. Notice here that we do not restrict the highest weight modules to the irreducible modules and so the level $r$ runs from $0$ to $N-1$. Let us define the holomorphic vectors as follows:
$${\Phi^\zeta}_r(w) :=\langle\Phi\vert \sum_{k \in {\Bbb Z}_+}(-)^{\frac{1}{2}k(k-1)+kr}
w^k {\vert h_{\mu\nu} ; kN+r \rangle}, \label{eq:defphi}$$
where $\vert w\vert < 1$. ${\Phi^\zeta}_r(w)$ also behaves as a holomorphic section of the line bundle over ${SL(2,{\Bbb R})}/U(1)$. Now we have $N$ holomorphic functions ${\Phi^\zeta}_0(w),\,{\Phi^\zeta}_1(w),\cdots, {\Phi^\zeta}_{N-1}(w)$. In contrast, we had $2j+1$ functions $\Psi^\zeta_{jj}, \Psi^\zeta_{jj-1},
\cdots, \Psi^\zeta_{j-j}$ in the previous case. From eq.(\[eq:qxpaction\]) one can easily calculate the actions of $X_+$, denoted as $\hat X_+$, on these functions as follows; $${\hat X_+}^{\varepsilon}{\Phi^\zeta}_r(w)=\left\{\begin{array}{ll}
{[r+1]} {\Phi^\zeta}_{r+1}(w), & \quad{\rm for}\quad 0\leq r \leq N-2 \\
\rule{0mm}{.7cm}
{[N]}_{\varepsilon}\partial_w {\Phi^\zeta}_0(w), & \quad{\rm for}\quad r=N-1.
\end{array}\right.
\label{eq:hxpact}$$
Similarly upon using eq.(\[eq:qxmaction\]) we obtain $$-{\hat X_-}^{\varepsilon}{\Phi^\zeta}_r(w)=\left\{\begin{array}{ll}
w{[\mu]} {\Phi^\zeta}_{N-1}(w), & \quad{\rm for}\quad r=0 \\
\rule{0mm}{.5cm}
{[\mu-r]} {\Phi^\zeta}_{r-1}(w), & \quad{\rm for}\quad 1\leq r\leq \mu-1 \\
\rule{0mm}{.5cm}
{[N]}_{\varepsilon}(w\partial_w +2\zeta){\Phi^\zeta}_{\mu-1}(w),
&\quad{\rm for}\quad r=\mu \\
\rule{0mm}{.5cm}
{[N+\mu-r]} {\Phi^\zeta}_{r-1}(w), & \quad{\rm for}\quad \mu+1\leq r\leq N-1.
\end{array}\right.
\label{eq:hxmact}$$
Here we have introduced the symbols $[N]_{\varepsilon}$ and ${\hat X_\pm}^{\varepsilon}$ such that $[N]_{\varepsilon}\neq 0$ and $\lim_{{\varepsilon}\rightarrow 0} [N]_{\varepsilon}= [N]=0$, and $\lim_{{\varepsilon}\rightarrow 0} {\hat X_\pm}^{\varepsilon}= {\hat X_\pm}$. Now let us calculate the $N$-th powers of ${\hat X_\pm}^{\varepsilon}$. We obtain
$$\begin{aligned}
\lim_{{\varepsilon}\rightarrow0}\, \frac{({\hat X_+}^{\varepsilon})^N}{[N]_{\varepsilon}!} {\Phi^\zeta}_r(w) &=&
\partial_w{\Phi^\zeta}_r(w) \label{eq:XNpact} \\
\lim_{{\varepsilon}\rightarrow0}\, \frac{(-{\hat X_-}^{\varepsilon})^N}{[N]_{\varepsilon}!} {\Phi^\zeta}_r(w) &=&
\left\{\begin{array}{lc}
\left(w^2\partial_w +2\zeta w\right){\Phi^\zeta}_r(w), & 0\leq r\leq \mu-1 \\
\rule{0mm}{.5cm}
\left(w^2\partial_w +2(\zeta+\frac{1}{2}) w\right){\Phi^\zeta}_r(w), &
\mu\leq r\leq N-1
\end{array}\right.
\label{eq:XNmact}\end{aligned}$$
Thus we have shown that ${\hat X_\pm}^{\varepsilon}$, which move the function ${\Phi^\zeta}_r(w)$ to ${\Phi^\zeta}_{r\pm 1}(w)$ according to (\[eq:hxpact\],\[eq:hxmact\]), are related to the $N$-th roots of infinitesimal transformations of the Poincaré disk. Furthermore, eq.(\[eq:XNmact\]) tells us that the functions ${\Phi^\zeta}_r(w)$ for $0\leq r\leq \mu-1$ have dimension $\zeta$ and the functions ${\Phi^\zeta}_r(w)$ for $\mu \leq r\leq N-1$ have dimension $\zeta+\frac{1}{2}$. Let ${\widetilde}{\Phi}_r^\zeta(w)$ be the former functions, [*i.e.*]{}, ${\Phi^\zeta}_r$ with dimension $\zeta$, and $\Xi_r^{\zeta+\frac{1}{2}}(w)$ be the latter with dimension $\zeta+\frac{1}{2}$. In the above, we examined ${\hat X_\pm}$ and their $N$-th powers only. Of course we can also obtain $$\lim_{{\varepsilon}\rightarrow0}\, \frac{1}{2}
\left[ \begin{array}{c} 2\hat{H}+N-1 \\ N \end{array} \right]_q
= (w\partial_w + h ). \label{eq:znm}$$
where $\hat K=q^{\hat H}$ and $h$ is the dimension, [*i.e.*]{} $h=\zeta$ for the functions ${\widetilde}{\Phi}^\zeta_r(w)$ and $h=\zeta+\frac{1}{2}$ for the functions $\Xi^{\zeta+\frac{1}{2}}_r(w)$. Thus we have obtained all the generators which span the infinitesimal holomorphic transformations of the homogeneous space ${SL(2,{\Bbb R})}/U(1)$ and obtained two kinds of functions. One is the set of functions whose dimensions are $\zeta$ and the other is the set of functions whose dimensions are $\zeta+\frac{1}{2}$. Moreover the generators ${\hat X_\pm}$ mix between them. Now we can conclude that ${U_q(su(1,1)) }$ with the deformation parameter ${q=\exp \pi i /N}$ may be viewed in terms of a $Z_N$-graded supersymmetry with the upper half plane or Poincaré disk interpreted as an external space.
We have constructed holomorphic vectors over the Poincaré disk in two ways and obtained two sets, ${\Psi^\zeta_{j,m}(w)}$ and ${\Phi^\zeta}_r(w)=\{{\widetilde}{\Phi}_r^\zeta(w), \Xi^{\zeta+\frac{1}{2}}_r(w)\}$. We end this section with a discussion of the connection between them. Upon the substitution $m=-j+r$, the actions of ${\hat X_\pm}$ on ${\widetilde}{\Phi}^\zeta_r(w)$ coincide with those of $J_\pm$ on ${\Psi^\zeta_{j,m}(w)}$ except for the actions ${\hat X_+}{\widetilde}{\Phi}^\zeta_{\mu-1}$ and ${\hat X_-}{\widetilde}{\Phi}^\zeta_0$ corresponding to $J_+\Psi^\zeta_{jj}$ and $J_-\Psi^\zeta_{j-j}$, respectively. The latter vanish because $\Psi^\zeta_{jj}$ is the highest weight vector and $\Psi^\zeta_{j-j}$ is the lowest weight vector with respect to ${U_q(su(2)) }$. However ${\hat X_+}{\widetilde}{\Phi}^\zeta_{\mu-1}$ and ${\hat X_-}{\widetilde}{\Phi}^\zeta_0$ do not vanish but yield the functions $\Xi^{\zeta+\frac{1}{2}}_{\mu}$ and $\Xi^{\zeta+\frac{1}{2}}_{N-1}$. In other words, through these two actions the two classes of functions, ${\widetilde}{\Phi}^\zeta_r$ and $\Xi^{\zeta+\frac{1}{2}}_r$, mix with each other after taking the limit ${\varepsilon}\rightarrow0$ in eqs.(\[eq:hxpact\],\[eq:hxmact\]). Noticing that the functions $\Xi^{\zeta+\frac{1}{2}}_r$ have zero norms because they correspond to the states lying from the $(kN+\mu)$-th to the $(kN+N-1)$-th level in the original module $V_{\mu,\nu}$, we see that they are proportional to $\sqrt{[N]_{\varepsilon}}$ and may rescale them as $\Xi^{\zeta+\frac{1}{2}}_r=\sqrt{[N]_{\varepsilon}}\,{\widetilde}{\Xi}^{\zeta+\frac{1}{2}}_r$. By the rescaling, the set of functions ${\widetilde}{\Xi}^{\zeta+\frac{1}{2}}_r$ completely decouples from the set ${\widetilde}{\Phi}^\zeta_r$ after taking the limit ${\varepsilon}\rightarrow0$. We can then identify ${\widetilde}{\Phi}^\zeta_r(w)$ with ${\Psi^\zeta_{j,m}(w)}$, and we have another set of functions ${\widetilde}{\Xi}^{\zeta+\frac{1}{2}}_r$ whose dimensions differ by $1/2$ from those of ${\widetilde}{\Phi}^\zeta_r$.
$Osp(1\vert 2)$ and $U_q(su(1,1))$ with $q^2=-1$
================================================
In this section we devote ourselves only to the case $N=2$, that is, $[2]=0$, and find that $Osp(1\vert 2)$ can be represented in terms of the representation of $U_q(su(1,1))$. To this end it is convenient to introduce operators ${{\cal L}_1}$ and ${{\cal L}_{-1}}$, which are related to $X_\pm$ and $K$ by the relations $${{\cal L}_1}= iq^{-\frac{1}{2}}KX_-, \quad\quad {{\cal L}_{-1}}= iq^{-\frac{1}{2}}X_+K.$$ Further we define vectors $\phi_r^\zeta(w), \; r= 0, 1$, in terms of the highest weight representations of $U_q(su(1,1))$ as $$\phi_r^\zeta(w) = \sum_{k=0}^\infty w^k \vert h_{\mu\nu};
2k+r ), \quad\quad r=0, 1$$ where we have introduced new weight vectors $$\vert h_{\mu\nu}; r ) := \frac{{{\cal L}_{-1}}^r}{\langle r \rangle !}
\vert h_{\mu\nu}; 0 \rangle,$$ with ${\langle r \rangle }=q^{r-1}{[r]}$. The new weight vector $\vert h_{\mu\nu}; r )$ coincides with the original one $\vert h_{\mu\nu}; r\rangle$ up to a phase factor. We can, therefore, deal with the highest weight modules spanned by the new weight vectors in the same fashion as $V_{\mu,\nu}$. In particular, the operator ${{\cal L}_1}$ and ${{\cal L}_{-1}}$ act as $$\begin{array}{l}
{{\cal L}_1}\vert h_{\mu\nu}; r ) = {\langle 2h_{\mu\nu}+r-1 \rangle }
\vert h_{\mu\nu}; r-1 ), \\
\rule{0mm}{.5cm}
{{\cal L}_{-1}}\vert h_{\mu\nu}; r ) = {\langle r+1 \rangle }
\vert h_{\mu\nu}; r+1 ).
\end{array}$$ From these actions it is easily seen that the $\mu$-th state has zero norm as in the previous case. Indeed, in $N=2$ case, the highest weight is given by (see eq.(\[eq:hmn\])), $$h_{\mu\nu}=\frac{1}{2}(2\nu - \mu +1). \label{eq:hmnt}$$ Because $1\le\mu\le N$, there are two cases, $\mu=1$ and $\mu=2$; in the case when $\mu=1$, ${\phi_1^\zeta(w)}$ has zero norm, while neither ${\phi_0^\zeta(w)}$ nor ${\phi_1^\zeta(w)}$ has zero norm in the case when $\mu=2$. We treat these cases separately.
We first examine the case when $\mu=1$. The highest weight is given by $h_{1\nu}=\nu$. Upon using the relation $\langle 2n \rangle=\langle 2 \rangle n$ for an integer $n$, the actions of ${{\cal L}_{-1}}$ on the vectors ${\phi_0^\zeta(w)}, \; {\phi_1^\zeta(w)}$ are easily obtained, $${{\cal L}_{-1}}{\phi_0^\zeta(w)}= {\phi_1^\zeta(w)}, \quad\quad
{{\cal L}_{-1}}{\phi_1^\zeta(w)}= \langle 2 \rangle \partial_w {\phi_0^\zeta(w)}, \label{eq:ospp}$$ and ${{\cal L}_1}$ acts on them as $${{\cal L}_1}{\phi_0^\zeta(w)}= w {\phi_1^\zeta(w)}, \quad\quad
{{\cal L}_1}{\phi_1^\zeta(w)}= \langle 2 \rangle ( \nu+ w\partial_w) {\phi_0^\zeta(w)}. \label{eq:ospm}$$ At first sight of eqs.(\[eq:ospp\]) and (\[eq:ospm\]), one might suspect that ${\phi_0^\zeta(w)}$ and ${\phi_1^\zeta(w)}$ are superpartners of each other and that ${\cal L}_{\pm1}$ are the generators of supersymmetry transformations. However we must be more cautious because the actions of ${\cal L}_{\pm1}$ on ${\phi_1^\zeta(w)}$ vanish due to the factor $\langle2\rangle$. Note that the vector ${\phi_1^\zeta(w)}$ must be proportional to $\sqrt{\langle 2 \rangle }$ since the norm of the vector is proportional to $\langle 2 \rangle $. Fortunately we can remedy the situation by scaling the operators and the vector ${\phi_1^\zeta(w)}$ as follows: $$\begin{array}{l}
\phi^\zeta(w) = {\phi_0^\zeta(w)},\quad\quad \sqrt{\langle 2 \rangle }\,
\psi^\zeta(w) = {\phi_1^\zeta(w)}, \\
\sqrt{\langle 2 \rangle }\,{\cal G}_{-1}
= {{\cal L}_{-1}}, \quad\quad \sqrt{\langle 2 \rangle }\,{\cal G}_1 = {{\cal L}_1}.
\end{array}
\label{eq:scaling}$$ The actions of ${\cal G}_\pm$ on $\phi^\zeta$ and $\psi^\zeta$ are now non-vanishing and $\psi^\zeta$ has definite norm.
We are ready to discuss the connection between two-dimensional supersymmetry and $U_q(su(1,1))$. Let us define an infinitesimal transformation ${\delta_\varepsilon}$ as $${\delta_\varepsilon}:= a\,{\cal G}_1 + b\,{\cal G}_{-1},$$ where $a, b$ are infinitesimal Grassmann numbers. Under the transformation the fields ${\phi^\zeta (w)}$ and ${\psi^\zeta (w)}$ transform into each other according to $$\begin{array}{l}
{\delta_\varepsilon}{\phi^\zeta (w)}= \varepsilon(w){\psi^\zeta (w)}, \\
\rule{0mm}{.5cm}
{\delta_\varepsilon}{\psi^\zeta (w)}= \left( \nu(\partial_w {\varepsilon}(w)) + {\varepsilon}(w) \partial_w \right) {\phi^\zeta (w)},
\end{array}$$ where ${\varepsilon}(w)=aw +b$ is an anticommuting analytic function which parametrises an infinitesimal holomorphic transformation. The commutation relations of two transformations ${\delta_{\varepsilon_1}}$ and ${\delta_{\varepsilon_2}}$ are $$\begin{array}{l}
{[{\delta_{\varepsilon_1}}, {\delta_{\varepsilon_2}}]}{\phi^\zeta (w)}= \left( \zeta (\partial_w \xi(w))
+\xi(w)\partial_w \right) {\phi^\zeta (w)}, \\
\rule{0mm}{.5cm}
{[{\delta_{\varepsilon_1}}, {\delta_{\varepsilon_2}}]}{\psi^\zeta (w)}= \left( (\zeta+\frac{1}{2})(\partial_w \xi(w))
+\xi(w)\partial_w \right) {\psi^\zeta (w)},
\end{array}
\label{eq:sltr}$$ with $\xi(w)=2{\varepsilon}_1(w){\varepsilon}_2(w)$. The right hand sides of equations (\[eq:sltr\]) are just the transformations of the fields ${\phi^\zeta (w)}$ and ${\psi^\zeta (w)}$ having dimensions $\zeta$ and $\zeta+\frac{1}{2}$, respectively, under the infinitesimal transformation of ${SL(2,{\Bbb R})}$, $w\,\rightarrow\,w+\xi(w)$. We can therefore conclude that the infinitesimal transformation ${\delta_\varepsilon}$, which is written in terms of generators of $U_q(su(1,1))$, is just the ‘square root’ of the infinitesimal $SL(2,{\bf R})$ transformation, that is to say, ${\delta_\varepsilon}$ is an infinitesimal supersymmetry transformation. Further, the fields ${\phi^\zeta (w)}$ and ${\psi^\zeta (w)}$, which are constructed in terms of the highest weight representations of $U_q(su(1,1))$, can be regarded as superpartners of each other. Finally, the $Osp(1\vert 2)$ algebra is obtained as follows: let $L_{\pm1}=({\cal G}_{\pm1})^2$ and $F_{\pm\frac{1}{2}}={\cal G}_{\pm1}$; then the following commutation relations are easily checked on the fields $\phi^\zeta(w)$ and $\psi^\zeta(w)$: $$\begin{array}{ll}
{[L_n, L_m]}=(n-m)L_{n+m}, & n, m=0, \pm1, \\
{[L_n, F_r]}=\left( \frac{1}{2}n-r \right)F_{n+r},
& r=\frac{1}{2}, -\frac{1}{2}, \\
{\{F_r, F_s\}}= 2L_{r+s}, & r, s =\frac{1}{2}, -\frac{1}{2}.
\end{array}$$ Thus we have succeeded in building the super-algebra $Osp(1\vert 2)$ and its representation in terms of the representation of $U_q(su(1,1))$ when $q^2=-1$ and $\mu=1$, $i.e.$, all states at the $(2{\Bbb N}-1)$-th levels are zero-norm states.
Next we turn to the $\mu=2$ case. By eq.(\[eq:hmnt\]), the highest weight is given by $\zeta-\frac{1}{2}$. In contrast to the case $\mu=1$, no zero-norm state appears. The actions of ${{\cal L}_1}$ and ${{\cal L}_{-1}}$ on $\phi_{0,1}$ are as follows: $$\begin{array}{l}
{{\cal L}_1}{\phi_0^\zeta(w)}= \langle 2 \rangle (\nu w + w^2\partial_w){\phi_1^\zeta(w)}, \quad
{{\cal L}_1}{\phi_1^\zeta(w)}= {\phi_0^\zeta(w)}, \\ {} \\
{{\cal L}_{-1}}{\phi_0^\zeta(w)}= {\phi_1^\zeta(w)}, \quad
{{\cal L}_{-1}}{\phi_1^\zeta(w)}= \langle 2 \rangle \partial_w {\phi_0^\zeta(w)}.
\end{array}$$ Unfortunately, we cannot find any consistent way to remove the factor $\langle 2 \rangle$ as was done in the previous case (\[eq:scaling\]).
Discussion
==========
In this article the highest weight representations of ${U_q(su(1,1)) }$ when ${q=\exp \pi i /N}$ have been investigated in detail. We have shown that the highest weight module ${V_{\mu, \nu}^{irr}}$ is isomorphic to the tensor product of two highest weight modules $V_\zeta^{cl}$ and $\mho_j$. This fact played a key role in this work, and novel features of ${U_q(su(1,1)) }$ originated from this structure of ${V_{\mu, \nu}^{irr}}$. The module ${V_\zeta^{cl}}$ is a classical non-compact ${sl(2,{\Bbb C})}$-module, while ${\mho_j}$ is a $(2j+1)$-dimensional module of the quantum universal enveloping algebra $U_q({sl(2,{\Bbb C})})$. Theorem 3.1 states what ${V_\zeta^{cl}}$ and ${\mho_j}$ are. In particular, when the original ${U_q(su(1,1)) }$-module ${V_{\mu, \nu}^{irr}}$ is unitary, $V_\zeta^{cl}$ is the unitary highest weight module of $su(1,1)\simeq {sl(2,{\Bbb R})}$ and ${\mho_j}$ is the unitary highest weight ${U_q(su(2)) }$-module. In the following we restrict our discussions to this case, [*i.e.*]{}, ${V_{\mu, \nu}^{irr}}$ is unitary.
We summarize here the novel features of ${U_q(su(1,1)) }$ when ${q=\exp \pi i /N}$: First we should notice that the non-compact nature appears only through the classical module ${V_\zeta^{cl}}$, and the effects of $q$-deformation arise only from the compact sector ${\mho_j}$. Since the non-compact sector ${V_\zeta^{cl}}$ is classical, a representation of the Lie group ${SL(2,{\Bbb R})}$ is naturally induced. Indeed, we gave a representation of ${SL(2,{\Bbb R})}\otimes{U_q(su(2)) }$ by means of the holomorphic vector ${\Psi^\zeta_{j,m}(w)}=\psi_\zeta(w)\otimes\vert j; m\rangle$. Here we used holomorphic sections $\psi_\zeta(w)$ of a line bundle over the homogeneous space ${SL(2,{\Bbb R})}/U(1)$ in order to represent the ${SL(2,{\Bbb R})}$ sector and $\vert j; m\rangle\in{\mho_j}$ is a weight vector with respect to ${U_q(su(2)) }$. With our deformation parameter $q$, $i.e.$, ${q=\exp \pi i /N}$, we have shown that the value of the highest weight $j$ lies in ${\cal A}=\{0, \frac{1}{2}, 1,
\cdots, \frac{N-2}{2}\}$. Notice that this finiteness of the number of the highest weight states for the ${U_q(su(2)) }$ sector comes from the condition that the original highest weight representations ${V_{\mu, \nu}^{irr}}$ of ${U_q(su(1,1)) }$ be well-defined, that is, every state in them has finite norm. The representation ${\Psi^\zeta_{j,m}(w)}$ shows that every point on the homogeneous space ([*i.e.*]{}, the upper half plane or the Poincaré disk) carries the representation space of ${U_q(su(2)) }$. In this sense, we suggest that the non-compact homogeneous space can be viewed as a base space or an external space and the representation space of ${U_q(su(2)) }$ as an internal space.
We have also discussed the connection between ${U_q(su(1,1)) }$ with ${q=\exp \pi i /N}$ and $Z_N$-graded supersymmetry by presenting, in another way, $N$ holomorphic vectors, denoted as ${\Phi^\zeta}_r(w),\,r=0, 1, \cdots, N-1$. The generators $X_\pm, K$ of ${U_q(su(1,1)) }$ act on them and map ${\Phi^\zeta}_r$ to ${\Phi^\zeta}_{r\pm1}$. On the other hand, the operators $L_n$, which are related to the $N$-th powers of $X_\pm, K$ by the relations (\[eq:ldef\]), generate the holomorphic transformations of the functions under the infinitesimal transformations of the homogeneous space ${SL(2,{\Bbb R})}/U(1)$. In this sense, we may say that the generators of ${U_q(su(1,1)) }$ give rise to $Z_N$-graded supersymmetry transformations and that their $N$-th powers are related to the infinitesimal transformations with respect to the external space. Furthermore, by observing the transformations under $L_n$, we have shown that these $N$ functions separate into two classes. One of them is the set of functions ${\widetilde}{\Phi}^\zeta_r(w)$, $r=0\sim 2j$, with dimension $\zeta$, and the other is the set of functions $\Xi_r^{\zeta+\frac{1}{2}}(w)$, $r=2j+1\sim N-1$, which have dimension $\zeta+\frac{1}{2}$ and zero norm. That is to say, the functions ${\widetilde}{\Phi}^\zeta_r$ and $\Xi_r^{\zeta+\frac{1}{2}}$ behave as covariant vectors with dimensions $\zeta$ and $\zeta+\frac{1}{2}$, respectively, under ${sl(2,{\Bbb R})}$. In particular, we have shown the explicit realization of two-dimensional supersymmetry $Osp(1\vert2)$ via the representation of ${U_q(su(1,1)) }$ when the deformation parameter satisfies $q^2=-1$.
We have also discussed the Clebsch-Gordan decomposition for the tensor product of two irreducible highest weight modules and found that the decomposition rules for the two sectors $V_\zeta^{cl}$ and $\mho_j$ coincide with those for the classical non-compact representations of ${sl(2,{\Bbb R})}$ and the representations of ${U_q(su(2)) }$.
Finally, we would like to mention future issues to be investigated. As mentioned in chapter 1, it is quite interesting to pursue the expected relationship between ${U_q(su(1,1)) }$ and topological $2D$ gravity coupled with RCFT. In order to make this expectation come true, we have to find a good representation space of ${U_q(su(1,1)) }$ with which such a physical theory is associated [@MS2]. The $Z_N$-graded supersymmetry implies that the internal symmetry, ${U_q(su(2)) }$, is not independent of the base manifold $\Sigma_g$ but yields the deformation of the metric of $\Sigma_g$ through the $N$-th powers of the action.
Second, it is also interesting to investigate the geometrical aspects of our results. As for the geometrical viewpoint of quantum groups, it is widely expected that quantum groups will shed light on the concept of “quantum” space-time. In particular, quantum groups in the sense of $A_q(G)$, the $q$-deformation of the functional ring over the group $G$, rather than $U_q({\Got{g}})$, play the central role in the noncommutative geometry initiated by Manin[@Ma], and Wess and Zumino [@WZ]. Further, Wess and Zumino have studied a $q$-deformed quantum mechanics in terms of the noncommutative differential geometry based on $A_q(G)$. The phenomena observed in this paper suggest that, by the quantization of the Poincaré disk, a certain “$q$-deformed space” appears as a ($q$-deformed) fiber at each point on the disk, which itself remains classical. This observation is reminiscent of the result obtained in Ref.[@Su]. Actually, in order to construct $q$-deformed mechanics, a $q$-deformed phase space was introduced in [@Su] by attaching an internal space at each point on the phase space of classical mechanics, and all effects of $q$-deformation stemmed only from the internal space.
I am grateful to Dr. T. Matsuzaki for fruitful collaborations and discussions. I would also like to thank Dr. H. W. Braden for carefully reading this manuscript and valuable comments.
[99]{} G. Moore, and N. Seiberg. Lectures on RCFT, in Superstrings ’89. ed. M. Green, R. Iengo, S. Randjbar-Deami, E. Sezgin, and A. Strominger (World Scientific, Singapore, 1990) p.1, and references therein,
A. Tsuchiya and Y. Kanie. in Conformal Field Theory and Solvable Lattice Model. Advanced Studies in Pure Mathematics. [**16**]{} (Kinokuniya, Tokyo, 1987); Lett. Math. Phys. [**13**]{} (1987) 303,
G. Moore, and N. Seiberg. Commun. Math. Phys. [**123**]{} (1989) 177,
L. Alvarez-Gaume, C. Goméz, and G. Sierra. Nucl. Phys. [**B319**]{} (1989) 155; Phys. Lett. [**B200**]{} (1989) 142; Nucl. Phys. [**B330**]{} (1990) 347,
G. Moore, and N. Yu Reshetikhin. Nucl. Phys. [**328**]{} (1989) 557,
V. Pasquier, and H. Saleur. Nucl. Phys. [**B330**]{} (1990) 523,
A. Alekseev, and S. Shatashvili. Commun. Math. Phys. [**133**]{} (1990) 353,
C. Gómez, and G. Sierra. Phys. Lett. [**B240**]{} (1990) 149; Nucl. Phys. [**B352**]{} (1991) 791,
C. Ramirez, H. Ruegg, and M. Ruiz-Altaba. Phys. Lett. [**B247**]{} (1990) 499; Nucl. Phys. [**B364**]{} 195,
M. Chaichian, D. Ellinas, and P. Kulish. Phys. Rev. Lett. [**65**]{} (1990) 980,
F. Narganes-Quijano. J. Phys. [**A24**]{} (1991) 1699,
V. Spiridonov. Phys. Rev. Lett. [**69**]{} (1992) 398,
Z. Chang. J. Phys. [**A25**]{} (1992) L707,
T. Kobayashi, and T. Suzuki. [*Quantum Mechanics with q-Deformed Commutators and Periodic Variables*]{}, UTHEP-254 (1993), J. Phys. [**A**]{} in press,
P. P. Kulish, and E. V. Damaskinsky. J. Phys. [**A23**]{} (1990) L415,
H. Ui, and N. Aizawa. Mod. Phys. Lett. [**A5**]{} (1990) 237,
I. M. Burban, and A. U. Klimyk. J. Phys. [**A26**]{} (1993) 2139,
T. Matsuzaki, and T. Suzuki. J. Phys. [**A26**]{} (1993) 4355,
D. Montano, and J. Sonnenschein. Nucl. Phys. [**324**]{} (1989) 348,
A. H. Chamseddine, and D. Wyler. Nucl. Phys. [**B340**]{} (1990) 595; Phys. Lett. [**B228**]{} (1989) 75,
E. Verlinde, and H. Verlinde. Nucl. Phys. [**B348**]{} (1991) 457,
Y. Ohnuki, and S. Kamefuchi. Quantum Field Theory and Parastatistics. (Univ. of Tokyo Press, Tokyo, 1982),
C. Ahn, D. Bernard, and A. LeClair. Nucl. Phys. [**B346**]{} (1990) 409,
R. Floreanini, and L. Vinet. J. Phys. [**A23**]{} (1990) L1019,
A. P. Polychronakos. Mod. Phys. Lett. [**A5**]{} (1990) 2325,
A. T. Filippov, A. P. Isaev, and A. B. Kurdikov. Mod. Phys. Lett. [**A7**]{} (1992) 2129,
G. Lusztig. Contemp. Math. [**82**]{} (1989) 237,
N. A. Liskova, and A. N. Kirillov. Int. J. Mod. Phys. [**A7**]{} (1992) 611,
N. Aizawa. J. Math. Phys. [**34**]{} (1993) 1937,
T. Matsuzaki, and T. Suzuki. in progress,
Yu. I. Manin. Quantum Groups and Noncommutative Geometry. (Centre de Recherche Math., Montreal, 1989),
J. Wess, and B. Zumino. Nucl. Phys. (Proc. Suppl.) [**18**]{} (1990) 302,
T. Suzuki. J. Math. Phys. [**34**]{} (1993) 3453.
---
abstract: 'We propose a nonparametric sequential test that aims to address two practical problems pertinent to online randomized experiments: (i) how to do a hypothesis test for complex metrics; (ii) how to prevent type $1$ error inflation under continuous monitoring. The proposed test does not require knowledge of the underlying probability distribution generating the data. We use the bootstrap to estimate the likelihood for blocks of data followed by mixture sequential probability ratio test. We validate this procedure on data from a major online e-commerce website. We show that the proposed test controls type $1$ error at any time, has good power, is robust to misspecification in the distribution generating the data, and allows quick inference in online randomized experiments.'
author:
- |
Vineet Abhishek\
Walmart Labs\
CA, USA\
`[email protected]`\
Shie Mannor\
Technion\
Haifa, Israel\
`[email protected]`\
bibliography:
- '/Users/vabhish/work/reports/reference/misc.bib'
title: A nonparametric sequential test for online randomized experiments
---
Introduction {#sec:intro}
============
Most major online websites continuously run several randomized experiments to measure the performance of new features or algorithms and to determine their successes or failures; see [@kohavi-2009] for a comprehensive survey. A typical workflow consists of: (i) defining a success metric; (ii) performing a statistical hypothesis test to determine whether or not the observed change in the success metric is significant. Both of these steps, however, are nontrivial in practice.
Consider typical product search performance data collected by an online e-commerce website. Such data is multidimensional, containing dimensions such as the number of search queries, the number of conversions, time to the first click, total revenue from the sale, etc., for each user session. The success metric of interest could be a complicated scalar function of this multidimensional data rather than some simple average; e.g., a metric of interest could be the ratio of the expected number of successful queries per user session to the expected number of total queries per user session, an empirical estimate of which is a ratio of two sums. Furthermore, some commonly used families of probability distributions such as the Bernoulli distribution or the normal distribution are almost never suitable to model this data. The search queries in a user session are highly correlated; using the Bernoulli distribution to model even simple metrics such as the click-through rate per impression is inaccurate.[^1] Another example is revenue-based metrics, with a considerable mass at zero and a heavy-tailed distribution for the nonzero values. The use of the normal distribution for modeling revenue per user session is inaccurate and sensitive to extreme values unless the sample size is huge. As a result, the binomial proportion test and the t-test, respectively, are unsuitable for hypothesis testing on these metrics. It requires considerable effort to find an appropriate model for each of these metrics, and it is implausible to come up with a suitable parametrized family of probability distributions that could be used to model different complex success metrics.
Additionally, classical statistical tests such as the t-test assume that the number of samples is given a priori. This assumption easily breaks in an online setting where an end user with not much formal training in statistics has near real-time access to the performance of ongoing randomized experiments: the decision to stop or continue collecting more data is often based on whether or not the statistical test on the data collected so far shows significance. This practice of chasing significance leads to highly inflated type $1$ error.[^2] Several recent articles have highlighted this aspect of data collection as one of the factors contributing to the statistical crisis in science; see, e.g., [@simmons:2011] and [@goodson-2014]. Using a sample size calculator to determine the number of samples to be collected, and waiting until the required number of samples has been collected, is hard to enforce in practice, especially in the do-it-yourself type platforms commonly used to set up and monitor online randomized experiments. Additionally, the sample size calculation requires an estimate of the minimum effect size and the variance of the metric. These estimates are often crude and result in waiting longer than necessary in case the true change is much larger than the estimate.
The above two challenges point to two separate fields of statistics: the bootstrap and sequential tests. The bootstrap is frequently used to estimate the properties of complicated statistics; see [@efron-1993] and [@hall-1997] for a comprehensive review. Sequential tests provide an alternative to committing to a fixed sample size by providing type $1$ error control at any time under continuous monitoring; see [@Lai-2001] for a survey on sequential tests. Sequential tests, however, are still uncommon in practice partly because of the information they require about the data generation process.
Our main contribution is in bringing together the ideas from the bootstrap and sequential tests and applying them to construct a nonparametric sequential test for performing a hypothesis test on complex metrics. We provide empirical evidence using the data from a major online e-commerce website demonstrating that our nonparametric sequential test is suitable for diverse types of metrics, ensures that type $1$ error is controlled at any time under continuous monitoring, has good power, is robust to misspecification in the distribution generating the data, and enables quick inference in online randomized experiments. We advocate the use of this general approach in the absence of knowledge of the right distribution generating the data, which typically is the case.
Briefly, we compute a studentized statistic for the metric of interest on blocks of data and use the bootstrap to estimate its likelihood in a nonparametric manner. The likelihood ratios can then be used to perform mixture sequential probability ratio test (henceforth, mixture SPRT) [@robbins-1970]. We highlight theoretical results from existing literature justifying the use of the bootstrap based likelihood estimation and the mixture SPRT.
**Related work**: The bootstrap based nonparametric approach we take distinguishes our work from the related literature on sequential testing. Most of the existing literature assumes the knowledge of a parametrized family of distributions from which data (typically scalar) is generated; ours does not. The idea of the sequential probability ratio test (SPRT) for testing a simple hypothesis and its optimality originates from the seminal work of [@wald-1945]. Mixture SPRT [@robbins-1970], described in Section \[sec:m-sprt\], extends this to a composite hypothesis. For the exponential family of distributions, mixture SPRT is shown to be a test with power $1$ and almost optimal with respect to the expected time to stop; see [@robbins-1970] and [@pollak-1978]. [@johari-2016] formalizes the notion of an always valid p-value and uses mixture SPRT for the sequential test. MaxSPRT [@kulldorff], [@kharitonov-2015] is another popular sequential test where the unknown parameter is replaced by its maximum likelihood estimate computed from the observed data so far and the rejection threshold is determined (typically using numerical simulations) to ensure type $1$ error control. The computation of the maximum likelihood estimate as well as the rejection threshold requires knowing the parametrized family of distributions; consequently, MaxSPRT has been studied only for a few simple distributions. Group sequential tests [@pocock-1977] divide observations into blocks of data for inference. They, however, assume the normal distribution and use the mean as the metric of interest. The theoretical foundations of the bootstrap likelihood estimation can be found in [@hall-1987]; it shows that the bootstrap based likelihood estimation of the studentized statistic provides a fast rate of convergence to the true likelihood. [@davison-1992] uses a nested bootstrap to compute the likelihood, which can be computationally expensive. [@efron-1993b] uses confidence intervals to compute the likelihood and shows that, for the exponential family of distributions, the confidence interval based approach has a rate of convergence of the same order as in [@hall-1987].
Model, notation, and assumptions {#sec:model}
================================
Let $\mb{X} \in \R^d$ be a random vector with the CDF $F(\mb{x}; \theta, \eta)$ and corresponding pdf $f(\mb{x}; \theta, \eta)$. Here $\theta$ is the scalar parameter of interest and $\eta$ is a nuisance parameter (possibly null or multidimensional). Denote a realized value of the random vector $\mb{X}$ by $\mb{x}$. For $j < k$, let $\mb{X}_{j:k} \triangleq (\mb{X}_j, \mb{X}_{j+1}, \ldots, \mb{X}_{k})$ be a collection of i.i.d. random vectors indexed from $j$ to $k$; define $\mb{x}_{j:k}$ similarly.
We are interested in testing the null hypothesis that $\theta = \theta_0$ against the alternate hypothesis $\theta \neq \theta_0$ such that the type $1$ error is controlled at any given time and the power of the test is high; i.e., we are looking for a function $D_{\theta_0}$ mapping sample paths $\mb{x}_1, \mb{x}_2, \ldots $ to $[0, 1]$ such that for any $\alpha \in (0, 1)$: $$\left\{
\begin{array}{l l}
\mathbb{P}_{\theta_0}\!\left[\, \exists\, n \mbox{ such that } D_{\theta_0}(\mb{X}_{1:n}) \leq \alpha \,\right] \leq \alpha, & \\
\mathbb{P}_{\theta}\!\left[\, \exists\, n \mbox{ such that } D_{\theta_0}(\mb{X}_{1:n}) \leq \alpha \,\right] \approx 1, & \mbox{for } \theta \neq \theta_0,
\end{array}
\right.$$ where $\mb{X}$’s are assumed to be realized i.i.d. from $F(\mb{x}; \theta, \eta)$ and $\mathbb{P}_{\theta}$ is the probability under $F(\mb{x}; \theta, \eta)$. The first part of this requirement is a statement on type $1$ error control: if the null hypothesis is true and our tolerance for incorrect rejection of the null hypothesis is $\alpha$, then the probability that the test statistic ever drops to $\alpha$ or below and rejects the null hypothesis (*type $1$ error*) is at most $\alpha$. The second part is a statement on the power of the test: if the null hypothesis is false then the probability that the test statistic drops to $\alpha$ or below and rejects the null hypothesis (*the power of the test*) is close to one.
Let $\wh{F}(\mb{x}_{1:n})$ be the empirical CDF constructed from $n$ i.i.d. samples. We make the following assumption about the parameter of interest $\theta$:
\[assume:normal\] Assume that $\theta$ is a functional statistic with asymptotically normal distribution, i.e., $\theta = T(F)$ for some functional $T$ and $\sqrt{n}(T(\wh{F}(\mb{X}_{1:n})) - \theta) \rightarrow \mc{N}(0, \zeta^2)$ in distribution as $n \rightarrow \infty$, where $T(\wh{F}(\mb{x}_{1:n}))$ is the plug-in estimate of $\theta$. Furthermore, assume that the asymptotic variance $\zeta^2$ does not depend on the specific choice of $\theta$.
Asymptotic normality of plug-in statistic in Assumption \[assume:normal\] holds under fairly general conditions and captures several statistics of interest; see [@lehmann-2004] for a formal discussion on this.
Preliminaries {#sec:preliminaries}
=============
Our work leverages ideas from the bootstrap and mixture SPRT. This section provides their brief overview.
The bootstrap {#sec:bs}
-------------
Let $Z_{1:n}$ be $n$ i.i.d. random variables and let $H_n(z) \triangleq \Prob{T(Z_{1:n}) \leq z}$ be the sampling distribution of a functional $T$. The CDF of $Z$’s is usually unknown and hence $H_n$ cannot be computed directly. The (nonparametric) bootstrap is used to approximate the sampling distribution $H_n$ using a resampling technique. Let $Z^*_{1:n}$ be $n$ i.i.d. random variables each with the CDF equal to the empirical CDF constructed from $Z_{1:n}$. Let $H^*_n(z) \triangleq \CProb{T(Z^*_{1:n}) \leq z}{Z_{1:n}}$ be the bootstrap distribution of the functional $T$. Notice that $H_n^*$ is a random CDF that depends on the realized values of $Z_{1:n}$. The idea is to use $H_n^*$ as an approximation for $H_n$.[^3]
The bootstrap distribution is computed using the Monte Carlo method by drawing a large number of samples, each of size $n$, from the empirical distribution. This is identical to sampling with replacement and provides a convenient computational method to deal with complex statistics. Even in the cases where the statistics are asymptotically normal, we might have to rely on the bootstrap to compute their asymptotic variance. Moreover, for asymptotically normal statistics, the bootstrap typically provides an order improvement in the convergence rate to the true distribution (see [@hall-1997] and [@hall-1987]). This motivates the use of the bootstrap in our nonparametric sequential test.
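As an illustration of this resampling step, the following sketch (our own illustrative code, not from the paper; it takes the mean as the functional $T$ purely for concreteness, whereas the metrics discussed above would need their own plug-in estimate and standard error) approximates the distribution of a studentized statistic by sampling with replacement:

```python
import numpy as np

def bootstrap_studentized_mean(x, n_boot=2000, seed=0):
    """Approximate the sampling distribution of the studentized mean,
    t*_b = (mean(x*_b) - mean(x)) / se(x*_b), by resampling x with replacement."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_hat = x.mean()                              # plug-in estimate T(F_hat)
    t_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = x[rng.integers(0, n, size=n)]            # bootstrap resample
        se_b = xb.std(ddof=1) / np.sqrt(n)            # its plug-in standard error
        t_star[b] = (xb.mean() - theta_hat) / se_b
    return theta_hat, t_star                          # t_star approximates H*_n

# toy usage on skewed data
theta_hat, t_star = bootstrap_studentized_mean(np.random.default_rng(1).exponential(size=500))
print(theta_hat, np.percentile(t_star, [2.5, 97.5]))
```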
Mixture SPRT {#sec:m-sprt}
------------
In a general form, mixture SPRT works as follows. Let $\rho_n(z_{1:n}; \theta)$ be the joint pdf parametrized by $\theta$ of a sequence of random variables $Z_{1:n}$. Let $\theta \in \mc{I}$ and $\pi(\theta)$ be any prior on $\theta$ with $\pi(\theta) > 0$ for $\theta \in \mc{I}$. Define: $$L(z_{1:n};\theta_0) \triangleq \frac{\int_{\mc{I}} \rho_n(z_{1:n};\theta)\,\pi(\theta)\,d\theta}{\rho_n(z_{1:n};\theta_0)}.$$ If $\theta = \theta_0$, the sequence $L(Z_{1:n};\theta_0)$’s forms a martingale with respect to the filtration induced by the sequence $Z_n$’s.[^4] Using Doob’s martingale inequality, $$\mathbb{P}\left[\exists\, n \mbox{ such that } L(Z_{1:n};\theta_0) \geq \nicefrac{1}{\alpha}\right] \leq \alpha$$ for any $\alpha > 0$, where the probability is computed assuming $\theta = \theta_0$. An immediate consequence of this is control of type $1$ error at any time. The optimality of mixture SPRT with respect to power and the expected time to stop ([@robbins-1970] and [@pollak-1978]) and its ease of implementation motivates its use in our nonparametric sequential test.
The p-value at time $n$ is defined as: $$p_{value}(z_{1:n};\theta_0) \triangleq \min\left\{ 1,\; \min_{k \leq n} \frac{1}{L(z_{1:k};\theta_0)} \right\}.$$ It follows immediately from this definition and the inequality above that $p_{value}(z_{1:n};\theta_0)$ controls the type $1$ error at any time and is nonincreasing in $n$. In this sense, this is an always valid p-value sequence, as defined in [@johari-2016]. The duality between the p-value and the confidence interval can be used to construct an always valid (though possibly larger than the exact) confidence interval.
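To make the mechanics concrete, here is a minimal sketch (our own illustrative code) of mixture SPRT and its always valid p-value for the textbook case of i.i.d. $\mc{N}(\theta,1)$ observations with a $\mc{N}(\theta_0,\tau^2)$ prior, for which the mixture integral has a closed form; the procedure developed in this paper replaces this closed form with bootstrap estimates.

```python
import numpy as np

def mixture_sprt_normal(z, tau=1.0, theta0=0.0):
    """Mixture SPRT for i.i.d. N(theta, 1) data against H0: theta = theta0,
    with a N(theta0, tau^2) prior on theta.  Returns the running mixture
    likelihood ratio L_n and the always valid p-value min(1, 1/max_k L_k)."""
    z = np.asarray(z, dtype=float) - theta0
    n = np.arange(1, len(z) + 1)
    s = np.cumsum(z)                                   # running sum of centred data
    log_L = -0.5 * np.log1p(n * tau**2) + (tau**2 * s**2) / (2.0 * (1.0 + n * tau**2))
    L = np.exp(log_L)
    p_always_valid = np.minimum(1.0, 1.0 / np.maximum.accumulate(L))
    return L, p_always_valid

# reject H0 at level alpha as soon as L_n > 1/alpha, i.e. the p-value drops below alpha
L, p = mixture_sprt_normal(np.random.default_rng(0).normal(0.3, 1.0, size=2000), tau=0.5)
print("first n with p <= 0.05:", int(np.argmax(p <= 0.05)) if (p <= 0.05).any() else None)
```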
Mixture SPRT, however, requires the knowledge of the functional form of the pdf up to the unknown parameter of interest. This is hard especially in a multidimensional setting and in the presence of nuisance parameter(s). We address this using the bootstrap, as described next.
The nonparametric sequential test {#sec:seq-test}
=================================
This section shows how mixture SPRT and the bootstrap could be used together to construct a nonparametric sequential test for complex statistics. We will also discuss how to apply the ideas developed here to A/B tests.
The bootstrap mixture SPRT {#sec:bs-seq-test}
--------------------------
Our main idea is as follows: (i) divide the data into blocks of size $N$; (ii) compute a studentized plug-in statistic for $\theta$ on each block of data; (iii) estimate the likelihood of the studentized plug-in statistic for each block using the bootstrap; (iv) use mixture SPRT by treating each block as a unit observation. We provide details below.
Let $\mb{x}_{(k)} \triangleq \mb{x}_{kN:(k+1)N-1}$ denote the $k^{th}$ block of data of size $N$ and $s(\mb{x}_{(k)}; \theta)$ be the studentized plug-in statistic computed from the $k^{th}$ data block, i.e., $$s(\mb{x}_{(k)}; \theta) = \frac{T(\wh{F}(\mb{x}_{(k)})) - \theta}{\sigma(T(\wh{F}(\mb{x}_{(k)})))},$$ where $\sigma(T(\wh{F}(\mb{x}_{(k)})))$ is a consistent estimate of the standard deviation of the plug-in statistic $T(\wh{F}(\mb{X}_{(k)}))$.[^5] For notational convenience, we use $\wh{\theta}_{(k)}$ and $s_{(k)}^{\theta}$ in lieu of $T(\wh{F}(\mb{x}_{(k)}))$ and $s(\mb{x}_{(k)};\theta)$ respectively, with the dependence on underlying $\mb{x}$’s and $N$ being implicit; and their uppercase versions to denote the corresponding random variables. The random variable $S_{(k)}^{\theta}$ is asymptotically distributed as $\mc{N}(0, 1)$ at rate $O(\nicefrac{1}{\sqrt{N}})$ assuming $\theta$ is the true parameter. Hence for large $N$, the distribution of $S_{(k)}^{\theta}$ is approximately pivotal (i.e., the distribution does not depend on the unknown $\theta$ and the effect of $\eta$ is negligible). This suggests using the normal density as the pdf for $S_{(k)}^{\theta}$ and carrying out mixture SPRT following the steps in Section \[sec:m-sprt\]. The density, however, can be approximated better using the bootstrap.
Algorithm \[algo:bs-sprt\] below describes the bootstrap mixture SPRT.
Given blocks of observations $\mb{x}_{(k)}$ for $k = 1, 2, \ldots$, and a prior $\pi(\theta)$:
1. Draw $\wt{\theta}_1, \wt{\theta}_2, \ldots \wt{\theta}_M$ i.i.d. from $\pi(\theta)$ for some large $M$.
2. For each $k$:
1. Compute $\wh{\theta}_{(k)}$ and its standard deviation $\sigma(\wh{\theta}_{(k)})$.
2. Draw a random sample of size $N$ with replacement from $\mb{x}_{(k)}$; denote it by $\mb{x}^*_{(k)}$. Repeat this $B$ times to obtain a sequence $\mb{x}_{(k)}^{*b}$, $1 \leq b \leq B$. For each $b$, compute the studentized plug-in statistic $s^{*b}_{(k)}$ as: $$s^{*b}_{(k)} = \frac{T(\wh{F}(\mb{x}_{(k)}^{*b})) - \wh{\theta}_{(k)}}{\sigma(T(\wh{F}(\mb{x}_{(k)}^{*b})))}.$$ Let $g^*_{(k)}(\cdot)$ be the pdf obtained using a Gaussian kernel density estimate from the sequence $s^{*b}_{(k)}$, $1 \leq b \leq B$.
3. Compute $L_n$ as follows: $$L_n = \frac{1}{M}\sum_{m=1}^M \left( \prod_{k=1}^{n} \frac{g^*_{(k)}\!\left(s_{(k)}^{\wt{\theta}_m}\right)}{g^*_{(k)}\!\left(s_{(k)}^{\theta_0}\right)} \right).$$
4. Reject the null hypothesis $\theta = \theta_0$ if $L_n > 1/\alpha$ for some $n > 1$.
**Correctness of Algorithm \[algo:bs-sprt\]**: We give a heuristic explanation of the correctness of Algorithm \[algo:bs-sprt\]; a rigorous proof of type $1$ error control or optimality of Algorithm \[algo:bs-sprt\] in some sense is extremely hard.[^6] $L_n$ in step $3$ is similar to the mixture likelihood ratio defined in Section \[sec:m-sprt\]. If $L_n$ is approximately a martingale for $\theta = \theta_0$, the arguments of Section \[sec:m-sprt\] would imply type $1$ error control at any time under continuous monitoring. Step $2$ of Algorithm \[algo:bs-sprt\] is approximating the true pdf of the random variable $S_{(k)}^{\theta}$, denoted by $g^{\theta}_{(k)}$, using the bootstrap based estimate $g^*_{(k)}$. We only need to ensure that $g^*_{(k)}$ is a good approximation of $g^{\theta_0}_{(k)}$, assuming $\theta = \theta_0$, for the martingale property to hold. A subtlety arises because the bootstrap distribution is discrete. This, however, is not a problem: [@hall-1987] shows that a smoothened version of the bootstrap density, obtained by adding a small independent noise $\mc{N}(0, \nicefrac{1}{N^c})$ with arbitrarily large $c$ to the bootstrap samples, converges uniformly to the true density at rate $O_p(\nicefrac{1}{N})$. Working with the studentized statistic is crucial in obtaining $O_p(\nicefrac{1}{N})$ convergence; see [@hall-1997] for further details. By contrast, approximating $g_{(k)}$ by the normal density is accurate to $O(\nicefrac{1}{\sqrt{N}})$. The bootstrap samples behave as if realized from a true density. A Gaussian kernel density can be used to estimate the bootstrap density from a finite number of bootstrap samples, as in step $2$ of Algorithm \[algo:bs-sprt\]. The independence of the asymptotic variance and $\theta$ by Assumption \[assume:normal\] allows us to compute the likelihood for different values of $\theta$. The computation of $L_n$ in step $3$ is a Monte Carlo evaluation of the Bayesian average over the prior that appears in the numerator of the mixture likelihood ratio.
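For concreteness, a condensed sketch of Algorithm \[algo:bs-sprt\] might look as follows (our own illustrative code, not prescribed by the paper: the functional $T$ is taken to be the mean, its standard error is the plug-in standard error of the mean, each element of `blocks` is assumed to be a one-dimensional NumPy array of per-observation values, and `gaussian_kde` plays the role of the kernel density estimate):

```python
import numpy as np
from scipy.stats import gaussian_kde

def block_statistic(block):
    """Plug-in estimate T(F_hat) and its standard error for one block
    (the mean, purely for illustration)."""
    return block.mean(), block.std(ddof=1) / np.sqrt(len(block))

def bootstrap_density(block, n_boot=1000, rng=None):
    """Bootstrap the studentized statistic of one block and return a KDE of it."""
    rng = np.random.default_rng() if rng is None else rng
    theta_hat, sd_hat = block_statistic(block)
    s_star = np.empty(n_boot)
    for b in range(n_boot):
        xb = block[rng.integers(0, len(block), size=len(block))]
        tb, sb = block_statistic(xb)
        s_star[b] = (tb - theta_hat) / sb
    return theta_hat, sd_hat, gaussian_kde(s_star)

def bootstrap_mixture_sprt(blocks, theta0=0.0, prior_sd=0.1, M=5000, alpha=0.05, seed=0):
    """Bootstrap mixture SPRT: reject H0 (theta = theta0) once L_n > 1/alpha."""
    rng = np.random.default_rng(seed)
    theta_tilde = rng.normal(theta0, prior_sd, size=M)   # draws from the prior
    log_prod = np.zeros(M)                               # per-draw running log-products
    L_n = 0.0
    for n, block in enumerate(blocks, start=1):
        theta_hat, sd_hat, kde = bootstrap_density(block, rng=rng)
        dens0 = max(kde((theta_hat - theta0) / sd_hat)[0], 1e-300)     # g*(s^theta0)
        dens = np.maximum(kde((theta_hat - theta_tilde) / sd_hat), 1e-300)
        log_prod += np.log(dens) - np.log(dens0)
        L_n = np.exp(log_prod).mean()                    # Monte Carlo mixture over the prior
        if L_n > 1.0 / alpha:
            return n, L_n                                # stop: null rejected at block n
    return None, L_n
```

In practice $T$ and its standard error would be replaced by the metric of interest and a delta-method or jackknife estimate, as discussed later in this section.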
Application to A/B tests {#sec:abtest}
------------------------
In A/B tests, the goal is to identify whether or not the success metric computed on each of the two groups of data has a nonzero difference (i.e., $\theta_0 = 0$). The bootstrap mixture SPRT easily extends to A/B tests. Assume for simplicity that the block size is the same for both groups. Let $\mb{x}_{(k)}$’s be data from group $A$ (control) and $\mb{y}_{(k)}$’s the data from group $B$ (variation). We modify the quantities $\wh{\theta}_{(k)}$, $\sigma(\theta_{(k)})$, and $s^{*b}_{(k)}$ of Algorithm \[algo:bs-sprt\] as follows: $$\begin{aligned}
\wh{\theta}_{(k)} &= T(\wh{F}(\mb{y}_{(k)})) - T(\wh{F}(\mb{x}_{(k)})), \\
\sigma(\wh{\theta}_{(k)}) &= \sqrt{\sigma^2(T(\wh{F}(\mb{y}_{(k)}))) + \sigma^2(T(\wh{F}(\mb{x}_{(k)})))}, \\
s^{*b}_{(k)} &= \frac{T(\wh{F}(\mb{y}_{(k)}^{*b})) - T(\wh{F}(\mb{x}_{(k)}^{*b})) - \wh{\theta}_{(k)}}{\sqrt{\sigma^2(T(\wh{F}(\mb{y}_{(k)}^{*b}))) + \sigma^2(T(\wh{F}(\mb{x}_{(k)}^{*b})))}}.\end{aligned}$$
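Continuing the illustrative sketch above (again our own code, reusing the hypothetical `block_statistic` helper defined there), the two-sample quantities amount to:

```python
import numpy as np

def ab_block_statistic(x_block, y_block):
    """Two-sample version of the block statistic: difference of the plug-in
    estimates for variation (y) and control (x), with the combined standard error."""
    tx, sx = block_statistic(x_block)        # control group A
    ty, sy = block_statistic(y_block)        # variation group B
    return ty - tx, np.sqrt(sx**2 + sy**2)
```

The bootstrap replicate $s^{*b}_{(k)}$ is obtained in the same way, resampling each group's block independently.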
We conclude with a discussion on how to determine the various parameters in Algorithm \[algo:bs-sprt\]. Type $1$ error control holds for any prior $\pi(\theta)$ with nonzero density on possible values of $\theta$. The choice of prior, however, can affect the number of observations needed to reject the null hypothesis. A simple choice is $\mc{N}(0, \tau^2)$; $\tau$ can be determined from the outcomes of past A/B tests or can be set to a few percent of the reference value of the metric of interest, computed from the data collected before the start of the A/B test. The standard deviation appearing in the bootstrap studentized statistic $s^{*b}_{(k)}$ can, for each $b$, be computed using the bootstrap again on $\mb{x}^{*b}_{(k)}$. This, however, is computationally prohibitive. In practice, the standard deviation is computed using the delta method or the jackknife; see, e.g., [@efron-1993]. A similar method can be used to compute the standard deviation $\sigma(\wh{\theta}_{(k)})$ of the block statistic. The parameters $B$ and $M$ are merely used for Monte Carlo integration; any sufficiently large value is good. Typical choices are $1000-5000$ for the number of bootstrap samples $B$ and $5000-10,000$ for the number of Monte Carlo samples $M$. Because we work with a block of data at a time, the computational overhead is small; the computation can be performed in an online manner as new blocks of data arrive or using a MapReduce approach on the historical data. Finally, a small block size is intuitively expected to enable fast inference. Doob’s martingale inequality is far from being tight for martingales stopped at some maximum length and becomes tighter as this maximum length is increased. However, a small block size may reduce the accuracy of the bootstrap likelihood estimate. Additionally, the data update frequency of a near real-time system puts a lower limit on how small a block of data can be; it is not uncommon to receive a few hundred or a few thousand data points per update. Section \[sec:eval\] describes how to estimate a reasonable block size using post A/A tests.
Empirical evaluation and discussion {#sec:eval}
===================================
**Datasets, metrics, and baselines**: We use data obtained from a major online e-commerce website on product search performance. Each dataset contains a tiny fraction of randomly chosen anonymized users; and for each user session, it contains the session timestamp, the number of distinct search queries, the number of successful search queries, and the total revenue. Datasets contain about 1.8 million user sessions each and span nonoverlapping 2-3 weeks timeframe. We show here the results for only one dataset; similar results were observed for the other datasets.
We work with two metrics: (i) *query success rate per user session*; (ii) *revenue per user session*. The former is the ratio of the sum of successful queries per user session to the sum of total queries per user session, where the sum is over user sessions. The latter is a simple average, however, with most values being zero and nonzero values showing a heavy tail. These two very different metrics are chosen to evaluate the flexibility of the bootstrap mixture SPRT. We have used $\mc{N}(0, \tau^2)$ distribution as prior $\pi(\theta)$, with $\tau$ being equal to a few percent (less than 5%) of the values of metrics computed on data before the start timestamp of the dataset under consideration. Data is partitioned into blocks based on the order induced by the user session timestamp.
Existing sequential tests require knowing a parameterized distribution generating the data. The data generation model for the two metrics we study in our experiments is hard to find. We are not aware of any existing sequential tests that can be applied to the two complex metrics we use in our experiments. Consequently, we use the z-test on the entire data as a baseline for comparison.[^7] The use of z-test (or the t-test) is pervasive in commercial A/B tests platforms, often without much scrutiny. Additionally, we compare the bootstrap mixture SPRT with MaxSPRT ([@kulldorff], [@kharitonov-2015]) on synthetic data generated from the Bernoulli distribution. MaxSPRT requires knowing a parametrized likelihood function and has been applied mostly when the data is generated from simple distributions such as the Poisson distribution and the Bernoulli distribution.
**Post A/A tests based methodology**: We make use of post A/A tests to estimate the type $1$ error. For each dataset, we randomly divide the users into two groups, perform the statistical test for the null hypothesis of zero difference in the metrics for the two groups, and compute the p-values. This process is repeated multiple times ($250$ in our experiments). Because there is no true difference between the two groups, the p-value should theoretically follow the uniform distribution between $0$ and $1$. Consequently, the Q-Q plot of the realized p-values from different random splits against the uniform distribution should be the line of slope $1$. To estimate the power of the test, for each random split of the dataset, we add some small positive offset to the realized difference in the metrics for the two groups. For each choice of the additive offset, we then check how many times the null hypothesis is rejected, using p-value $\leq 0.05$ as our rejection criterion.
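Schematically, this evaluation protocol can be written as follows (hypothetical helper names; `sequential_p_value` stands for whichever test is being evaluated and returns its final p-value for a given split and additive offset):

```python
import numpy as np

def post_aa_evaluation(user_sessions, sequential_p_value, n_splits=250, offset=0.0, seed=0):
    """Repeatedly split the users at random into two groups, optionally add a small
    positive offset to the realized metric difference, and record the p-values.
    With offset = 0 the p-values should be uniform (checked via a Q-Q plot);
    with offset > 0 the fraction of p-values <= 0.05 estimates the power."""
    rng = np.random.default_rng(seed)
    p_values = np.empty(n_splits)
    for i in range(n_splits):
        in_group_a = rng.random(len(user_sessions)) < 0.5      # random 50/50 user split
        p_values[i] = sequential_p_value(user_sessions[in_group_a],
                                         user_sessions[~in_group_a], offset=offset)
    return p_values, np.mean(p_values <= 0.05)                 # p-values and rejection rate
```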
**Type 1 error**: Figure \[fig:ztest-daily\] shows a huge inflation of type $1$ error under sequential monitoring in the case of the classical fixed sample size z-test. The Q-Q plot lies well below the line of slope $1$. Here, the p-values are computed daily using the data collected until that day. The Q-Q plot is for the minimum p-value observed from daily observations, modeling the scenario where one waits until the data shows significance. Waiting until the p-value is $\leq 0.05$ and stopping the test on that day resulted in a type $1$ error rate of at least $30\%$. By contrast, the Q-Q plot of the always valid p-value in the case of the bootstrap mixture SPRT with block size $5000$ is close to the line of slope $1$, as shown in Figure \[fig:aatest-5k\]. This implies that the bootstrap mixture SPRT controls type $1$ error at any time under continuous monitoring. The p-value is the final one at the end of all the blocks (recall that the p-value is nonincreasing in this case).
The block size of $5000$ is chosen by running post A/A tests with different block sizes, looking at the corresponding Q-Q plots, and making sure the plot stays above but close to the line of slope $1$. A smaller block size makes the bootstrap approximation inaccurate, while a larger block size makes the procedure more conservative when there is an upper limit on the number of samples available for the test. This is a consequence of fewer intermediate observations in a finite length dataset, which leaves Doob’s martingale inequality far from tight. Figures \[fig:aatest-10k\] and \[fig:aatest-20k\] show the Q-Q plots for block sizes of $10,000$ and $20,000$ respectively. The plots lie above the line of slope $1$, showing a more conservative type $1$ error control.
**Test power and average duration**: Next, we compare the power of the bootstrap mixture SPRT with the z-test on the entire data. We use the delta method to compute the standard deviation for the query success rate; see [@kempen-2000]. We use an additive offset to the realized difference, as described earlier in this section. Figure \[fig:atc-power\] shows the power of the test as a function of the true difference percentage for the case of query success rate. Here the true difference percentage is the additive offset normalized by the value of the metric on the entire dataset. Notice some loss in power compared to the z-test on the entire dataset. This loss in power is expected because of the sequential nature of the procedure on a finite data set; equivalently, the bootstrap mixture SPRT requires a larger number of samples to achieve the same power as the fixed sample size tests. This is the price paid for the flexibility of continuous monitoring. The loss in power quickly becomes negligible as the true difference becomes large. Focusing on the revenue metric, we notice something surprising: the z-test on the entire data has smaller power than the bootstrap mixture SPRT, as shown in Figure \[fig:revenue-power\]. In this case, the mean of the data does not conform to the normal distribution used to justify the z-test. This is illustrated by the z-test on the entire data after outlier removal, which has higher power than the bootstrap mixture SPRT, as expected from a test with a priori access to the entire data.[^8] The robustness of the bootstrap mixture SPRT to outliers comes from two factors: (i) the use of the bootstrap for learning the likelihood, which typically works better for heavy-tailed distributions; (ii) working with blocks of data, which confines the effect of extreme values in a block to that block only by making the likelihood ratio for that block close to one without affecting the other blocks. Thus, not only does the bootstrap mixture SPRT allow continuous monitoring, it is also more robust to misspecification in the distribution generating the data.
Figure \[fig:1689-duration\] shows the average duration of the test using the bootstrap SPRT. The test duration is the number of samples consumed in each split until either all the data is exhausted or the null hypothesis is rejected. We take the average over the $250$ random splits of the data set. Notice that the null hypothesis is rejected quickly for large changes, unlike the fixed sample size test where one has to wait until the precomputed number of samples has been collected. This avoids wasteful data collection and unnecessary waiting in case of misspecification in the minimum effect size used to determine the number of samples to be collected for fixed sample size tests.
In our experiments, we observed the power of the tests with a smaller block size to be comparable to or better than the power of the tests with a larger block size; this is in line with the observations made earlier about the advantage of a small block size. Similarly, the tests with a smaller block size had a smaller test duration.
**Comparison with MaxSPRT on synthetic data**: Finally, we compare the performance of the bootstrap mixture SPRT with MaxSPRT. We use synthetic data of size $100,000$ generated from the Bernoulli distribution with success probability equal to $0.05$. The rejection threshold for MaxSPRT is determined by multiple A/A tests on synthetic datasets such that the type $1$ error is close to $0.05$, as described in [@kharitonov-2015]. We also use A/A tests to determine the block size and the variance of the Gaussian prior for the bootstrap SPRT such that the type $1$ error is close to $0.05$. This resulted in a block size of 1000. Figure \[fig:maxsprt-power\] shows comparable test power for MaxSPRT and the bootstrap mixture SPRT, with MaxSPRT performing marginally better. The better performance of MaxSPRT is expected; here the data distribution is exactly known and is exploited by MaxSPRT. The bootstrap mixture SPRT, however, learns the distribution in a nonparametric way and yet performs comparably to MaxSPRT. A similar observation can be made for the average test duration: MaxSPRT has a slightly smaller test duration, as shown in Figure \[fig:maxsprt-duration\]. MaxSPRT updates its likelihood ratio score for each data point as it arrives. By contrast, the bootstrap mixture SPRT updates its likelihood ratio score in blocks of data, which contributes to a slightly longer average test duration. Block update, however, is a more realistic scenario in a practical system where data usually arrives in blocks of a few hundred to a few thousand points. The general-purpose nature of the bootstrap mixture SPRT, which performs comparably to MaxSPRT even though MaxSPRT requires exact knowledge of the data distribution, is appealing.
Conclusions {#sec:conc}
===========
We propose the bootstrap mixture SPRT for the hypothesis test of complex metrics in a sequential manner. We highlight the theoretical results justifying our approach and provide empirical evidence demonstrating its virtues using real data from an e-commerce website. The bootstrap mixture SPRT allows the flexibility of continuous monitoring while controlling the type $1$ error at any time, has good power, is robust to misspecification in the distribution generating the data, and allows quick inference in case the true change is larger than the estimated minimum effect size. The nonparametric nature of this test makes it suitable for the cases where the data generation distribution is unknown or hard to model, which typically is the case.
[^1]: A similar observation is made by [@crook-2009]. The authors suggest avoiding the use of standard formulas for standard deviation computation which assume independence of the samples and recommend the use of the bootstrap.
[^2]: The inflation of type $1$ error because of data collection to reach significance can be explained by the law of the iterated logarithm, as observed in [@pekelis-2015]. Section \[sec:eval\] provides some plots to demonstrate the ill effects of chasing significance.
[^3]: Under some regularity conditions and a suitable distance metric on the space of CDFs, distance between $H_n$ and $H_n^*$ goes to zero with probability $1$ as $n \rightarrow \infty$; see [@hall-1997].
[^4]: This follows from a general observation that if $Z_{1:n} \sim \rho_n(z_{1:n})$ and $\psi_n(z_{1:n})$ is any other probability density function, then $L_n = \nicefrac{\psi_n(z_{1:n})}{\rho_n(z_{1:n})}$ is a martingale.
[^5]: Formally, $\sqrt{n}\sigma(T(\wh{F}(\mb{X}_{1:n}))) \rightarrow \zeta$ with probability $1$ as $n \rightarrow \infty$, where $\zeta$ is defined in Assumption \[assume:normal\].
[^6]: The results on the optimality of mixture SPRT are known only for exponential families of distributions. We, however, take a nonparametric approach. The theory of the bootstrap is based on the Edgeworth expansion [@hall-1997]. Applying the Edgeworth expansion to analyze the likelihood ratio statistic $L_n$ defined in Algorithm \[algo:bs-sprt\] is quite challenging.
[^7]: Because of the large size of the dataset we use, there is no observable difference between the t-test and the z-test.
[^8]: Identifying outliers is a complicated problem by itself, often with no good agreement on the definition of outliers. Here, we simply clip the values to some upper limit affecting only a negligible portion of data to show the effect of extreme values in data on the z-test.
---
abstract: 'A search for pair-produced charged Higgs bosons is performed with the L3 detector at LEP using data collected at centre-of-mass energies between 189 and 209 GeV, corresponding to an integrated luminosity of 629.4 pb$^{-1}$. Decays into a charm and a strange quark or into a tau lepton and its neutrino are considered. No significant excess is observed and lower limits on the mass of the charged Higgs boson are derived at the 95% confidence level. They vary from 76.5 to 82.7 GeV, as a function of the $\Htn$ branching ratio.'
author:
- The L3 Collaboration
date: 'August 27, 2003'
title: Search for Charged Higgs Bosons at LEP
---
Introduction {#introduction .unnumbered}
============
In the Standard Model of the electroweak interactions [@standard_model] the masses of bosons and fermions are explained by the Higgs mechanism [@higgs_mech]. This implies the existence of one doublet of complex scalar fields which, in turn, leads to a single neutral scalar Higgs boson. To date, this Higgs boson has not been directly observed [@higgs; @lnq]. Some extensions to the Standard Model contain more than one Higgs doublet [@higgs_hunter], and predict Higgs bosons which can be lighter than the Standard Model one and accessible at LEP. In particular, models with two complex Higgs doublets predict two charged Higgs bosons, $\Hpm$, which can be pair-produced in $\epem$ collisions.
The charged Higgs boson is expected to decay through $\rm\Hp\ra
c\bar{s}$ or $\rm\Hp\ra\tau^+\nu_{\tau}$[^1], with a branching ratio which is a free parameter of the models. The process $\mathrm{\ee\ra\Hp\Hm}$ gives then rise to three different signatures: $\cscs$, $\cstn$ and $\tntn$. These experimental signatures have to be disentangled from the large background of the $\mathrm{\ee\ra W^+W^-}$ process, characterised by similar final states.
Data collected at centre-of-mass energies $\sqrt{s}=189-209\,\GeV{}$ are analysed here, superseding previous results [@chhiggs_189chhiggs_202]. Data from $\sqrt{s}=130-183\,\GeV{}$ [@chhiggs_130_183] are included to obtain the final results. Results from other LEP experiments are given in Reference .
The analyses do not depend on flavour tagging variables and are separately optimised for each of the three possible signatures.
Data and Monte Carlo Samples {#data-and-monte-carlo-samples .unnumbered}
============================
The search for pair-produced charged Higgs bosons is performed using 629.4 pb$^{-1}$ of data collected in the years from 1998 to 2000 with the L3 detector [@l3_det] at LEP, at several average centre-of-mass energies, detailed in Table \[table:lumi\].
The charged Higgs cross section is calculated using the HZHA Monte Carlo program [@hzha]. As an example, at $\sqrt{s}=206\,\GeV$ it varies from 0.28 pb for a Higgs mass, $\MHPM$, of 70 GeV to 0.17 pb for $\MHPM=80\,\GeV{}$. To optimise selections and calculate efficiencies, samples of $\mathrm{\ee\ra\Hp\Hm}$ events are generated with the PYTHIA Monte Carlo program [@jetset73] for $\MHPM$ between 50 and 100 GeV, in steps of 5 GeV, and between 75 and 80 GeV, in steps of 1 GeV. About 1000 events for each final state are generated at each Higgs mass. For background studies, the following Monte Carlo generators are used: KK2f [@KK2f] for $\ee\ra\qqbar(\gamma)$, $\ee\ra\mu^+\mu^-(\gamma)$ and $\ee\ra\tau^+\tau^-(\gamma)$, BHWIDE [@BHWIDE] for $\ee\ra\ee$, PYTHIA for $\ee\ra\Zo\Zo$ and $\ee\ra\Zo\ee$, YFSWW [@YFSWW] for $\ee\ra\Wp\Wm$ and PHOJET [@PHOJET] and DIAG36 [@DIAG36] for hadron and lepton production in two-photon interactions, respectively. The L3 detector response is simulated using the GEANT program [@my_geant] which takes into account the effects of energy loss, multiple scattering and showering in the detector. Time-dependent detector inefficiencies, as monitored during the data taking period, are included in the simulations.
Data Analysis {#data-analysis .unnumbered}
=============
The analyses for all three final states are updated since our previous publications at lower centre-of-mass energies [@chhiggs_189chhiggs_202; @chhiggs_130_183]: the searches in the $\HHcscs$ and $\cstn$ channels are based on a mass dependent likelihood interpretation of data samples selected [@sigmaW] for the studies of W pair-production, while a discriminant variable is introduced for the search in the $\tntn$ channel. These analyses are described below.
The $\HHcscs$ channel {#section .unnumbered}
----------------------
The search in the $\HHcscs$ channel proceeds from a selection of high multiplicity events with balanced transverse and longitudinal momenta and with a visible energy which is a large fraction of $\sqrt{s}$. These criteria reject events from low-multiplicity processes like lepton pair-production, events from two-photon interactions and pair-production of W bosons where at least one boson decays into leptons. The events are forced into four jets by means of the DURHAM algorithm [@durham] and a neural network [@sigmaW] discriminates between events which are compatible with a four-jet topology and those from the large cross section $\ee\ra\qqbar(\gamma)$ process in which four-jet events originate from hard gluon radiation. The neural network inputs are the event spherocity, the energies and widths of the most and least energetic jets, the difference between the energies of the second and third most energetic jets, the minimum multiplicity of calorimetric clusters and charged tracks for any jet, the value of the $y_{34}$ parameter of the DURHAM algorithm and the compatibility with energy-momentum conservation in $\epem$ collisions. After a cut on the output of the neural network, two constrained fits are performed. The first four-constraint fit enforces energy and momentum conservation, modifying the jet energies and directions. The second five-constraint (5C) fit imposes the additional constraint of the production of two equal mass particles. Among the three possible jet pairings, the one is retained which is most compatible with this equal mass hypothesis. Events with a low probability for the fit hypotheses are removed from the sample and a total of 5156 events are observed in data while 5112 are expected from Standard Model processes. The corresponding signal efficiencies are between 70% and 80%, for $\MHPM = 60-95\,\GeV$.
Likelihood variables [@mssm] are built to discriminate four-jet events compatible with charged Higgs production from the dominating background from W pair-production. A different likelihood is prepared for each simulated Monte Carlo sample corresponding to a different Higgs boson mass. Seven variables are included in the likelihoods:
- the minimum opening angle between paired jets;
- the difference between the largest and smallest jet energies;
- the difference between the di-jet masses;
- the output of the neural network for the selection of four-jet events;
- the absolute value of the cosine of the polar angle of the thrust vector;
- the cosine of the polar angle at which the positively charged[^2] boson is produced;
- the value of the quantity $2\ln|M|$, where $M$ is the matrix element for the $\epem\ra{\rm W^+W^-}\ra$ four-fermion process from the EXCALIBUR [@EXCALIBUR] Monte Carlo program, calculated using the four-momenta of the reconstructed jets.
Figures 1a, 1b and 1c show the distributions of the last three variables while Figure 1d presents the distribution of the likelihood variable for $\MHPM = 70\,\GeV$. A cut at 0.7 on this variable, which maximizes the signal sensitivity, is applied as a final selection criterion, for all mass hypotheses. The numbers of observed and expected events are given in Table \[events\] and the selection efficiencies in Table \[eff\]. The main contributions to the background come from hadronic W-pair decays (70%) and from the $\ee\ra\qqbar(\gamma)$ process (26%). Figure 2 shows the 5C mass of the pair-produced bosons before and after the cut on the final likelihoods. Peaks from pair-production of W as well as Z bosons are visible.
The $\HHcstn$ channel {#section-1 .unnumbered}
----------------------
The search in the $\HHcstn$ channel selects events with high multiplicity, two hadronic jets and a tau candidate. Tau candidates can be identified either as electrons or muons with momentum incompatible with that expected for leptons originating from direct semileptonic decay of W pairs, or as narrow, low multiplicity jets with at least one charged track, singled out from the hadronic background with a neural network [@sigmaW]. The tau energy is reconstructed by imposing four-momentum conservation and enforcing the hypothesis of the production of two equal mass particles. The events must have a transverse missing momentum of at least 20 GeV and the absolute value of the cosine of the polar angle of the missing momentum is required to be less than 0.9. Finally, the di-jet invariant mass is required to be less than 100 GeV and the mass recoiling against the di-jet system less than 130 GeV, thus selecting 1026 events in data while 979 are expected from Standard Model processes, mainly from W pair-production where one of the W bosons decays into leptons and the other into hadrons. The signal efficiency is about 50%.
To discriminate the signal from the background, mass dependent likelihoods [@mssm] are built which contain eight variables:
- the di-jet acoplanarity;
- the angle of the tau flight direction with respect to that of its parent boson in the rest frame of the latter;
- the di-jet mass;
- the quantity $2\ln|M|$ calculated using the four-momenta of the reconstructed jets and tau as well as the missing momentum and energy;
- the transverse momentum of the event, normalised to $\sqrt{s}$;
- the polar angle of the hadronic system, multiplied by the charge of the reconstructed tau;
- the sum $\Sigma_\theta$ of the angles between the tau candidate and the nearest jet and between the missing momentum and the nearest jet;
- the energy of the tau candidate, calculated in the rest frame of its parent boson and scaled by $\sqrt{s}$.
The distributions of the last three variables are shown in Figures 3a, 3b and 3c. Figure 3d presents an example of the distributions of the likelihood variable for $\MHPM=70\,\GeV$ for data, background and signal Monte Carlo. A cut at 0.6 is applied for all likelihoods. This cut corresponds to the largest sensitivity to a charged Higgs signal. Table \[events\] gives the numbers of observed and expected events, while the selection efficiencies are given in Table [\[eff\]]{}. Over 95% of the background is due to W pair-production. Figure 4 shows the reconstructed mass of the pair-produced bosons before and after the cut on the final likelihoods.
The $\HHtntn$ channel {#section-2 .unnumbered}
=====================
The signature for the leptonic decay channel is a pair of tau leptons. These are identified either via their decay into electrons or muons, or as narrow jets.
The selection criteria are similar to those used at lower $\sqrt{s}$ [@chhiggs_189chhiggs_202; @chhiggs_130_183]. Low multiplicity events with large missing energy and momentum are retained. To reduce the lepton-pair background, an upper cut is placed on the event collinearity angle, $\xi$, defined as the maximum angle between any pair of tracks. The distribution of this variable is shown in Figure 5a. The contribution from cosmic muons is reduced by making use of information from the time-of-flight system. Figure \[fig:lepton\]b presents the distribution of the scaled visible energy, $E_{vis}/\sqrt{s}$, for events that satisfy all other selection criteria.
The analysis is modified with respect to those previously published [@chhiggs_189chhiggs_202; @chhiggs_130_183] in that the normalised transverse missing momentum of the event, ${P_t/E_{vis}}$, whose distribution is shown in Figure \[fig:lepton\]c, is used as a linear discriminant variable on which no cut is applied.
The efficiency of the $\HHtntn$ selection for several Higgs masses is listed in Table \[eff\]. The numbers of observed and expected events are presented in Table \[events\]. The background is mainly formed by W-pair production (60%), two-photon interactions (26%) and lepton pair-production (9%).
Results {#results .unnumbered}
=======
The number of selected events in each decay channel is consistent with the number of events expected from Standard Model processes. A technique based on a log-likelihood ratio [@lnq] is used to calculate a confidence level (CL) that the observed events are consistent with background expectations. For the $\cscs$ and $\cstn$ channels, the reconstructed mass distributions, shown in Figures \[fig:cscs\_mass\]b and \[fig:cstn\_mass\]b, are used in the calculation, whereas for the $\tntn$ channel, the distribution of the normalised transverse missing momentum, shown in Figure \[fig:lepton\]c, is used.
The systematic uncertainties on the background level and the signal efficiencies are included in the confidence level calculation. These are due to finite Monte Carlo statistics and to the uncertainty on the background normalisation. The former uncertainty is 5% for the background and 2% for the signal Monte Carlo samples. The uncertainty on the background normalisation is 3% for the $\HHcscs$ channel and 2% for the $\cstn$ and $\tntn$ channels. The systematic uncertainty on the signal efficiency due to the selection procedure is estimated by varying the selection criteria and is found to be less than 1%. These systematic uncertainties decrease the $\MHPM$ sensitivity of the combined analysis by about 200 MeV.
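For illustration, a toy version of the background confidence level for a single channel with Poisson-distributed bin counts is sketched below; it follows the usual log-likelihood-ratio conventions but ignores systematic uncertainties and channel combination, and all numbers are invented rather than taken from the analysis.

```python
import numpy as np

def minus_two_lnq(n_obs, s, b):
    """-2 ln Q for Poisson bin counts, with Q = L(s+b) / L(b); smaller = more signal-like."""
    s, b, n_obs = np.asarray(s, float), np.asarray(b, float), np.asarray(n_obs, float)
    return 2.0 * np.sum(s - n_obs * np.log1p(s / b))

def one_minus_clb(n_obs, s, b, ntoys=20000, seed=0):
    """1 - CL_b: probability for background-only toys to be at least as signal-like as the data."""
    gen = np.random.default_rng(seed)
    q_data = minus_two_lnq(n_obs, s, b)
    q_toys = np.array([minus_two_lnq(gen.poisson(b), s, b) for _ in range(ntoys)])
    return float(np.mean(q_toys <= q_data))

# invented mass-bin contents for one channel
bkg = [4.0, 9.0, 6.0]   # expected background
sig = [0.5, 3.0, 1.0]   # expected signal for a given mass hypothesis
obs = [5, 14, 6]        # observed data
print(one_minus_clb(obs, sig, bkg))   # small values correspond to an excess over the background
```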
Figure \[fig:clb\] compares the resulting background confidence level, $1-{CL_b}$, for the data to the expectation in the absence of a signal, for three values of the $\Htn$ branching ratio: $\BRTN$ = 0, 0.5 and 1. The 68.3% and 95.4% probability bands expected in the absence of a signal are also displayed and denoted as $1\sigma$ and $2\sigma$, respectively. A slight excess of data appears around $\MHPM = 69\,\GeV$ for $\BRTN = 0$, as previously observed [@chhiggs_189chhiggs_202]. It is compatible with a $2.5\sigma$ upward fluctuation in the background. The excess is also compatible with a $2.9\sigma$ downward fluctuation of the signal[^3]. As observed in Figures 6b and 6c, no excess is present in the $\cstn$ and $\tntn$ channels around $\MHPM = 69\,\GeV$. Therefore, the $\cscs$ excess is interpreted as a statistical fluctuation in the background and lower limits at the 95% CL on $\MHPM$ are derived [@lnq] as a function of $\BRTN$. Data at $\sqrt{s}=130-183\,\GeV$ [@chhiggs_130_183] are included to obtain the limits. Figure \[exclusion\] shows the excluded $\MHPM$ regions for each of the final states and their combination, as a function of $\BRTN$. Table \[cl\] gives the observed and the median expected lower limits for several values of the branching ratio.
In conclusion, refined analyses and larger centre-of-mass energies improve the sensitivity of the search for charged Higgs bosons produced in $\epem$ collisions as compared to previous results [@chhiggs_189chhiggs_202; @chhiggs_130_183]. No significant excess is observed in data and a lower limit at 95% CL on the charged Higgs boson mass is obtained as $$\MHPM > 76.5\,\GeV{},$$ independent of its branching ratio.
[10]{}
S.L. Glashow, (1961) 579; S. Weinberg, (1967) 1264; A. Salam, [*Elementary Particle Theory*]{}, edited by N. Svartholm (Almqvist and Wiksell, Stockholm, 1968), p. 367 P.W. Higgs, (1964) 132, (1964) 508; (1966) 1156; F. Englert and R. Brout, (1964) 321; G.S. Guralnik, C.R. Hagen and T.W.B. Kibble, Phys. Rev. Lett. [**13**]{} (1964) 585 L3 Collab., P. Achard , Phys. Lett. [**B 517**]{} (2001) 319 ALEPH, DELPHI, L3 and OPAL Collab., The LEP Working Group for Higgs Boson Searches, (2003) 61 S. Dawson , [*The Physics of the Higgs Bosons: Higgs Hunter’s Guide*]{}, Addison Wesley, Menlo Park, 1989 L3 Collab., M. Acciarri , Phys. Lett. [**B 466**]{} (1999) 71; L3 Collab., M. Acciarri , Phys. Lett. [**B 496**]{} (2000) 34 L3 Collab., M. Acciarri , Phys. Lett. [**B 446**]{} (1999) 368 ALEPH Collab., A. Heister , Phys. Lett. [**B 543**]{} (2002) 1; DELPHI Collab., J. Abdallah , Phys. Lett. [**B 525**]{} (2002) 17; OPAL Collab., G. Abbiendi , Eur. Phys. J. [**C 7**]{} (1999) 407 L3 Collab., B. Adeva , Nucl. Instr. Meth. [**A 289**]{} (1990) 35; J.A. Bakken , Nucl. Instr. Meth. [**A 275**]{} (1989) 81; O. Adriani , Nucl. Instr. Meth. [**A 302**]{} (1991) 53; B. Adeva , Nucl. Instr. Meth. [**A 323**]{} (1992) 109; K. Deiters , Nucl. Instr. Meth. [**A 323**]{} (1992) 162; M. Chemarin , Nucl. Instr. Meth. [**A 349**]{} (1994) 345; M. Acciarri , Nucl. Instr. Meth. [**A 351**]{} (1994) 300; G. Basti , Nucl. Instr. Meth. [**A 374**]{} (1996) 293; A. Adam , Nucl. Instr. Meth. [**A 383**]{} (1996) 342; O. Adriani , Phys. Rep. [**236**]{} (1993) 1 HZHA version 2 is used; P. Janot, in [*Physics at LEP2*]{}, ed. [[G. Altarelli, T. Sjöstrand and F. Zwirner]{}]{}, (CERN 96-01, 1996), volume 2, p. 309 PYTHIA version 5.722 is used; T. Sj[ö]{}strand, preprint CERN-TH/7112/93 (1993), revised 1995; Comp. Phys. Comm. [**82**]{} (1994) 74 KK2f version 4.14 is used; S. Jadach, B.F.L. Ward and Z. Was, Comp. Phys. Comm [**130**]{} (2000) 260 BHWIDE version 1.03 is used; S. Jadach, W. Placzek and B.F.L. Ward, Phys. Lett. [**B 390**]{} (1997) 298 YFSWW version 1.14 is used; S. Jadach , Phys. Rev. [**D 54**]{} (1996) 5434; Phys. Lett. [**B 417**]{} (1998) 326; Phys. Rev. [**D 61**]{} (2000) 113010; Phys. Rev. [**D 65**]{} (2002) 093010 PHOJET version 1.05 is used; R. Engel, Z. Phys. [**C 66**]{} (1995) 203; R. Engel and J. Ranft, Phys. Rev. [**D 54**]{} (1996) 4244 DIAG 36 Monte Carlo; F.A. Berends, P.H. Daverfeldt and R. Kleiss, (1985) 441 GEANT version 3.15 is used;R. Brun , preprint CERN DD/EE/84-1 (1985), revised 1987. The GHEISHA program (H. Fesefeldt, RWTH Aachen Report PITHA 85/02, 1985) is used to simulate hadronic interactions L3 Collab., P. Achard , [*Measurement of the cross section of W pair-production at LEP*]{}, in preparation S. Bethke , Nucl. Phys. [**B 370**]{} (1992) 390 L3 Collab., P. Achard , Phys. Lett. [**B 545**]{} (2002) 30 L3 Collab., M. Acciarri , Phys. Lett. [**B 413**]{} (1997) 176 EXCALIBUR version 1.11 is used; F.A. Berends, R. Kleiss and R. Pittau, Comp. Phys. Comm. [**85**]{} (1995) 437 L3 Collab., O. Adriani , (1992) 457; L3 Collab., O. Adriani , (1993) 355
Author List {#author-list .unnumbered}
===========
------------------------ ------- ------- ------- ------- ------- ------- ------- -------
$\sqrt{s}$ (GeV)         188.6   191.6   195.5   199.5   201.7   204.9   206.4   208.0
Luminosity (pb$^{-1}$) 176.8 29.8 84.2 83.3 37.2 79.0 130.8 8.3
------------------------ ------- ------- ------- ------- ------- ------- ------- -------
: Average centre-of-mass energies and corresponding integrated luminosities.[]{data-label="table:lumi"}
------------ --------- --------- ---------
$\cscs$ $\cstn$ $\tntn$
Data 2296 442 141
Background 2228 464 141
Signal 100 76 50
------------ --------- --------- ---------
: \[events\] Number of observed data events and background expectations in the three analysis channels. The uncertainty on the background expectations is estimated to be 5%. The numbers of expected signal events for $\MHPM=70\,\GeV$ and $\BRTN=0$, 0.5 and 1 are also given for the $\cscs$, $\cstn$ and $\tntn$ channels, respectively.
--------- ---------- ---- ---- ---- ---- ---- -- --
$\MHPM$ (GeV)    60    70    80    90    95
$\cscs$ 62 62 50 58 64
$\cstn$ 38 51 43 43 39
$\tntn$ 26 30 33 34 36
--------- ---------- ---- ---- ---- ---- ---- -- --
: \[eff\] Selection efficiencies, in %, for various charged Higgs masses. The efficiencies are largely independent of the centre-of-mass energy. The uncertainty on each efficiency is estimated to be 2%.
--------------------------- ---------- ----------
$\BRTN$                     observed   expected
0.0 76.7 77.5
0.26 76.5 75.6
0.5 76.6 76.5
1.0 82.7 84.6
--------------------------- ---------- ----------
: Observed and expected lower limits on $\MHPM$, in GeV, at 95% CL for different values of the $\Htn$ branching ratio. The minimum observed limit is at $\BRTN = 0.26$.[]{data-label="cl"}
[^1]: The inclusion of the charge conjugate reactions is implied throughout this Letter.
[^2]: Charge assignment is based on jet-charge techniques [@TGC].
[^3]: As an example, for $\BRTN = 0.1$, these figures are $1.8\sigma$ and $2.7\sigma$, respectively.
|
---
abstract: |
Let $G=(V,E)$ be a finite undirected graph. An edge set $E' \subseteq E$ is a [*dominating induced matching*]{} ([*d.i.m.*]{}) in $G$ if every edge in $E$ is intersected by exactly one edge of $E'$. The *Dominating Induced Matching* (*DIM*) problem asks for the existence of a d.i.m. in $G$; this problem is also known as the *Efficient Edge Domination* problem; it is the Efficient Domination problem for line graphs.
The DIM problem is [$\mathbb{NP}$]{}-complete even for very restricted graph classes such as planar bipartite graphs with maximum degree 3 and is solvable in linear time for $P_7$-free graphs, and in polynomial time for $S_{1,2,4}$-free graphs as well as for $S_{2,2,2}$-free graphs. In this paper, combining two distinct approaches, we solve it in polynomial time for $S_{2,2,3}$-free graphs.
author:
- 'Andreas Brandstädt[^1]'
- 'Raffaele Mosca[^2]'
title: 'Finding Dominating Induced Matchings in $S_{2,2,3}$-Free Graphs in Polynomial Time'
---
[**Keywords**: dominating induced matching; efficient edge domination; $S_{2,2,3}$-free graphs; polynomial time algorithm; ]{}
Introduction {#sec:intro}
============
Let $G=(V,E)$ be a finite undirected graph. A vertex $v \in V$ [*dominates*]{} itself and its neighbors. A vertex subset $D \subseteq V$ is an [*efficient dominating set*]{} ([*e.d.s.*]{} for short) of $G$ if every vertex of $G$ is dominated by exactly one vertex in $D$. The notion of efficient domination was introduced by Biggs [@Biggs1973] under the name [*perfect code*]{}. The [Efficient Domination]{} (ED) problem asks for the existence of an e.d.s. in a given graph $G$ (note that not every graph has an e.d.s.)
A set $M$ of edges in a graph $G$ is an *efficient edge dominating set* (*e.e.d.s.* for short) of $G$ if and only if it is an e.d.s. in its line graph $L(G)$. The [Efficient Edge Domination]{} (EED) problem asks for the existence of an e.e.d.s. in a given graph $G$. Thus, the EED problem for a graph $G$ corresponds to the ED problem for its line graph $L(G)$. Note that not every graph has an e.e.d.s. An efficient edge dominating set is also called a *dominating induced matching* ([*d.i.m.*]{} for short); recall that an edge subset $E' \subseteq E$ is a dominating induced matching in $G$ if every edge in $E$ is intersected by exactly one edge of $E'$.
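For illustration, the defining condition can be checked directly; the following sketch (in Python; the graph is given as a dictionary of adjacency sets, and all names are ours) tests whether a given edge set is a d.i.m.

```python
def is_dim(adj, candidate):
    """Check whether 'candidate' (an iterable of 2-element vertex sets) is a
    dominating induced matching of the graph given by the adjacency dict 'adj'."""
    cand = [frozenset(e) for e in candidate]
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    for e in edges:
        hits = sum(1 for f in cand if e & f)   # candidate edges intersecting e
        if hits != 1:                          # "exactly one" is the d.i.m. condition
            return False
    return True

# a path on 4 vertices: its middle edge is a d.i.m., an end edge is not
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_dim(adj, [{2, 3}]))   # True
print(is_dim(adj, [{1, 2}]))   # False: the edge 3-4 is intersected by no candidate edge
```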
The EED problem is motivated by applications such as parallel resource allocation of parallel processing systems, encoding theory and network routing—see e.g. [@GriSlaSheHol1993; @HerLozRieZamdeW2015]. The EED problem is called the [Dominating Induced Matching]{} (DIM) problem in various papers (see e.g. [@BraHunNev2010; @BraMos2014; @CarKorLoz2011; @HerLozRieZamdeW2015; @KorLozPur2014]); subsequently, we will use this notation in our manuscript.
In [@GriSlaSheHol1993], it was shown that the DIM problem is [$\mathbb{NP}$]{}-complete; see e.g. [@BraHunNev2010; @CarKorLoz2011; @LuKoTan2002; @LuTan1998]. However, for various graph classes, DIM is solvable in polynomial time. For mentioning some examples, we need the following notions:
Let $P_k$ denote the path with $k$ vertices, say $a_1,\ldots,a_k$, and $k-1$ edges $a_ia_{i+1}$, $1 \le i \le k-1$. Such a path will also be denoted by $(a_1,\ldots,a_k)$. When speaking about a $P_k$ in a graph $G$, we will always assume that the path is chordless, or, equivalently, that it is an induced subgraph of $G$.
For indices $i,j,k \ge 0$, let $S_{i,j,k}$ denote the graph with vertices $u,x_1,\ldots,x_i$, $y_1,\ldots,y_j$, $z_1,\ldots,z_k$ such that the subgraph induced by $u,x_1,\ldots,x_i$ forms a $P_{i+1}$ $(u,x_1,\ldots,x_i)$, the subgraph induced by $u,y_1,\ldots,y_j$ forms a $P_{j+1}$ $(u,y_1,\ldots,y_j)$, the subgraph induced by $u,z_1,\ldots,z_k$ forms a $P_{k+1}$ $(u,z_1,\ldots,z_k)$, and there are no other edges in $S_{i,j,k}$. Vertex $u$ is called the [*center*]{} of this $S_{i,j,k}$. Thus, [*claw*]{} is $S_{1,1,1}$, and $P_k$ is isomorphic to $S_{0,0,k-1}$.
In [@HerLozRieZamdeW2015], it is conjectured that for every fixed $i,j,k$, DIM is solvable in polynomial time for $S_{i,j,k}$-free graphs (actually, an even stronger conjecture is mentioned in [@HerLozRieZamdeW2015]); this includes $P_k$-free graphs for $k \ge 8$. The following results are known:
\[DIMpolresults\] DIM is solvable in polynomial time for
- $S_{1,1,1}$-free graphs $\cite{CarKorLoz2011}$,
- $S_{1,2,3}$-free graphs $\cite{KorLozPur2014}$,
- $S_{2,2,2}$-free graphs $\cite{HerLozRieZamdeW2015}$,
- $S_{1,2,4}$-free graphs $\cite{BraMos2017/2}$,
- $P_7$-free graphs $\cite{BraMos2014}$ $($in this case even in linear time$)$, and
- $P_8$-free graphs $\cite{BraMos2017}$.
Based on the two distinct approaches described in [@BraMos2017] and in [@HerLozRieZamdeW2015; @KorLozPur2014] (and combining them as in [@BraMos2017/2]), we show in this paper that DIM can be solved in polynomial time for $S_{2,2,3}$-free graphs (generalizing the corresponding results for $S_{1,2,3}$-free and $S_{2,2,2}$-free graphs). Note that up to and including Section \[section:N4nonempty\], the approach and the results are similar to the ones in [@BraMos2017/2].
Definitions and basic properties {#sec:basicnotionsresults}
================================
Basic notions {#subsec:basicnotions}
-------------
Let $G$ be a finite undirected graph without loops and multiple edges. Let $V$ denote its vertex set and $E$ its edge set; let $|V|=n$ and $|E|=m$. For $v \in V$, let $N(v):=\{u \in V: uv \in E\}$ denote the [*open neighborhood of $v$*]{}, and let $N[v]:=N(v) \cup \{v\}$ denote the [*closed neighborhood of $v$*]{}. The [*degree*]{} of $v$ in $G$ is $d_G(v)=|N(v)|$. If $xy \in E$, we also say that $x$ and $y$ [*see each other*]{}, and if $xy \not\in E$, we say that $x$ and $y$ [*miss each other*]{}. A vertex set $S$ is [*independent*]{} in $G$ if for every pair of vertices $x,y \in S$, $xy \not\in E$. A vertex set $Q$ is a [*clique*]{} in $G$ if every two distinct vertices in $Q$ are adjacent.
For $uv \in E$ let $N(uv):= (N(u) \cup N(v)) \setminus \{u,v\}$ and $N[uv]:= N[u] \cup N[v]$.
For $U \subseteq V$, let $G[U]$ denote the subgraph of $G$ induced by vertex subset $U$. Clearly $xy \in E$ is an edge in $G[U]$ exactly when $x \in U$ and $y \in U$; thus, $G[U]$ will be often denoted simply by $U$ when that is clear in the context.
For $A \subseteq V$ and $B \subseteq V$, $A \cap B = \emptyset$, we say that $A {\text{\textcircled{{\footnotesize 0}}}}B$ ($A$ and $B$ [*miss each other*]{}) if there is no edge between $A$ and $B$, and $A$ and $B$ [*see each other*]{} if there is at least one edge between $A$ and $B$. If a vertex $u \notin B$ has a neighbor in $B$ then [*$u$ contacts $B$*]{}. If every vertex in $A$ sees every vertex in $B$, we denote it by $A {\text{\textcircled{{\footnotesize 1}}}}B$. For $A=\{a\}$, we simply denote $A {\text{\textcircled{{\footnotesize 1}}}}B$ by $a {\text{\textcircled{{\footnotesize 1}}}}B$, and correspondingly for $A {\text{\textcircled{{\footnotesize 0}}}}B$ by $a {\text{\textcircled{{\footnotesize 0}}}}B$. If for $A' \subseteq A$, $A' {\text{\textcircled{{\footnotesize 0}}}}(A \setminus A')$, we say that $A'$ is [*isolated*]{} in $G[A]$. For graphs $H_1$, $H_2$ with disjoint vertex sets, $H_1+H_2$ denotes the disjoint union of $H_1$, $H_2$, and for $k \ge 2$, $kH$ denotes the disjoint union of $k$ copies of $H$. For example, $2P_2$ is the disjoint union of two edges.
As already mentioned, a [*path*]{} $P_k$ ([*cycle*]{} $C_k$, respectively) has $k$ vertices, say $v_1,\ldots,v_k$, and edges $v_iv_{i+1}$, $1 \le i \le k-1$ (and $v_kv_1$, respectively). We say that such a path has length $k-1$ and such a cycle has length $k$. A $C_3$ is called a [*triangle*]{}. Let $K_i$, $i \ge 1$, denote the complete graph with $i$ vertices. Clearly, $K_3=C_3$. Let $K_4-e$ or [*diamond*]{} be the graph with four vertices, say $v_1,v_2,v_3,u$, such that $(v_1,v_2,v_3)$ forms a $P_3$ and $u {\text{\textcircled{{\footnotesize 1}}}}\{v_1,v_2,v_3\}$; its [*mid-edge*]{} is the edge $uv_2$. A [*gem*]{} has five vertices, say $v_1,v_2,v_3,v_4,u$, such that $(v_1,v_2,v_3,v_4)$ forms a $P_4$ and $u {\text{\textcircled{{\footnotesize 1}}}}\{v_1,v_2,v_3,v_4\}$. A [*butterfly*]{} has five vertices, say, $v_1,v_2,v_3,v_4,u$, such that $v_1,v_2,v_3,v_4$ induce a $2P_2$ with edges $v_1v_2$ and $v_3v_4$ (the [*peripheral edges*]{} of the butterfly), and $u {\text{\textcircled{{\footnotesize 1}}}}\{v_1,v_2,v_3,v_4\}$. Vertices $a,b,c,d$ induce a [*paw*]{} if $b,c,d$ induce a $C_3$ and $a$ has exactly one neighbor in $b,c,d$, say $b$ (then $ab$ is the [*leaf edge*]{} of the paw).
For a set ${\cal F}$ of graphs, a graph $G$ is called [*${\cal F}$-free*]{} if $G$ contains no induced subgraph from ${\cal F}$. If ${\cal F}=\{H\}$ then instead of $\{H\}$-free, $G$ is called $H$-free. If for instance, $G$ is diamond-free and butterfly-free, we say that $G$ is (diamond,butterfly)-free.
We often consider an edge $e = uv$ to be a set of two vertices; then it makes sense to say, for example, $u \in e$ and $e \cap e'$ for an edge $e'$.
For two vertices $x,y \in V$, let $dist_G(x,y)$ denote the [*distance between $x$ and $y$ in $G$*]{}, i.e., the length of a shortest path between $x$ and $y$ in $G$. The [*distance between a vertex $z$ and an edge $xy$*]{} is the length of a shortest path between $z$ and $x,y$, i.e., $dist_G(z,xy)= \min\{dist_G(z,v) \mid v \in \{x,y\}\}$. The [*distance between two edges*]{} $e,e' \in E$ is the length of a shortest path between $e$ and $e'$, i.e., $dist_G(e,e')= \min\{dist_G(u,v) \mid u \in e, v \in e'\}$. In particular, this means that $dist_G(e,e')=0$ if and only if $e \cap e' \neq \emptyset$. Obviously, if $M$ is a d.i.m. then for every pair $e,e' \in M$, $e \neq e'$, $dist_G(e,e') \ge 2$ holds (a set of edges whose elements have pairwise distance at least 2 is also called [*induced matching*]{}).
For an edge $xy$, let $N_i(xy)$, $i \ge 1$, denote the [*distance levels of $xy$*]{}: $$N_i(xy):=\{z \in V \mid dist_G(z,xy) = i\}.$$
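These distance levels can be computed by a breadth-first search started simultaneously from both endpoints of $xy$; a possible sketch (names are ours):

```python
from collections import deque

def distance_levels(adj, x, y):
    """Return [N_0, N_1, N_2, ...] with N_0 = {x, y} and N_i the vertices at
    distance i from the edge xy (for a connected graph given as an adjacency dict)."""
    dist = {x: 0, y: 0}
    queue = deque([x, y])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    levels = [set() for _ in range(max(dist.values()) + 1)]
    for v, d in dist.items():
        levels[d].add(v)
    return levels

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(distance_levels(adj, 2, 3))   # [{2, 3}, {1, 4}, {5}]
```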
Clearly, for a d.i.m. $M$ of $G$ and its vertex set $V(M)$, $I:=V \setminus V(M)$ is an independent set in $G$, i.e., $$\label{IV(M)partition}
V \mbox{ has the partition } V = I \cup V(M).$$
From now on, let us color all vertices in $I$ white and all vertices in $V(M)$ black. According to [@HerLozRieZamdeW2015], we also use the following notions: In the process of finding $I$ and $M$ in $G$, we will assign either color black or color white to the vertices of $G$, and the assignment of one of the two colors to each vertex of $G$ is called a [*complete coloring*]{} of $G$. If not all vertices of $G$ have been assigned a color, the coloring is said to be [*partial*]{}.
A partial black-white coloring of $V(G)$ is [*feasible*]{} if the set of white vertices is an independent set in $G$ and every black vertex has at most one black neighbor. A complete black-white coloring of $V(G)$ is [*feasible*]{} if the set of white vertices is an independent set in $G$ and every black vertex has exactly one black neighbor. Clearly, $M$ is a d.i.m. of $G$ if and only if the black vertices $V(M)$ and the white vertices $V \setminus V(M)$ form a complete feasible coloring of $V(G)$, and a black-white coloring is [*not feasible*]{} if there is a [*contradiction*]{}, e.g., an edge with two white vertices etc. (From now on, we do not always mention “feasible” when discussing black-white colorings.)
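The feasibility conditions can be tested directly from the definition; a small sketch (colors encoded as 'B' and 'W', uncolored vertices simply absent from the dictionary; names are ours):

```python
def is_feasible(adj, color, complete=False):
    """color: dict vertex -> 'B' or 'W'; uncolored vertices are absent.
    Feasible: the white vertices form an independent set and every black vertex
    has at most one (exactly one, if the coloring is complete) black neighbor."""
    for v, c in color.items():
        black_nb = sum(1 for w in adj[v] if color.get(w) == 'B')
        if c == 'W':
            if any(color.get(w) == 'W' for w in adj[v]):   # two adjacent white vertices
                return False
        else:
            if black_nb > 1 or (complete and black_nb != 1):
                return False
    if complete and len(color) != len(adj):
        return False
    return True

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_feasible(adj, {1: 'W', 2: 'B', 3: 'B', 4: 'W'}, complete=True))   # True: M = {23}
```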
Reduction steps, forbidden subgraphs, forced edges, and excluded edges {#exclforced}
----------------------------------------------------------------------
In [@BraMos2014], we used forced edges for reducing the graph $G$ to a smaller graph $G'$ such that $G$ has a d.i.m. if and only if $G'$ has a d.i.m. Here we combine the reduction approach with the one of [@HerLozRieZamdeW2015] as follows. A vertex $v$ is [*forced to be white*]{} if for every d.i.m. $M$ of $G$, $v \in V \setminus V(M)$. Analogously, a vertex $v$ is [*forced to be black*]{} if for every d.i.m. $M$ of $G$, $v \in V(M)$. Clearly, if $uv \in E$ and if $u$ and $v$ are forced to be black, then $uv$ is contained in every (possible) d.i.m. of $G$. For the correctness of the reduction steps, we have to argue that $G$ has a d.i.m. if and only if the reduced graph $G'$ has one (provided that no contradiction arises in the vertex coloring, i.e., it is feasible).
Then let us introduce three reduction steps which will be applied later.
[**Vertex Reduction.**]{} Let $u \in V(G)$. If $u$ is forced to be white, then
- color black all neighbors of $u$, and
- remove $u$ from $G$.
Let $G'$ be the reduced graph. Clearly, Vertex Reduction is correct, i.e., $G$ has a d.i.m. if and only if $G'$ has a d.i.m.
[**Edge Reduction.**]{} Let $uw \in E(G)$. If $u$ and $w$ are forced to be black, then
- color white all neighbors of $u$ and of $w$ (other than $u$ and $w$), and
- remove $u$ and $w$ from $G$.
Again, clearly, Edge Reduction is correct, i.e., $G$ has a d.i.m. if and only if $G'$ has a d.i.m.
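The bookkeeping behind the two reduction steps can be sketched as follows (graph as an adjacency dictionary of sets, coloring as above; this is only an illustration of the steps, not the complete algorithm, and it raises an exception when the forced colors contradict the already fixed ones):

```python
def vertex_reduction(adj, color, u):
    """u is forced to be white: color all neighbors of u black, then remove u."""
    if color.get(u) == 'B':
        raise ValueError("contradiction: vertex forced white is already black")
    for w in adj[u]:
        if color.get(w) == 'W':
            raise ValueError("contradiction: a neighbor of a white vertex is white")
        color[w] = 'B'
    for w in adj[u]:
        adj[w].discard(u)
    del adj[u]
    color.pop(u, None)

def edge_reduction(adj, color, u, w):
    """u and w are forced to be black (uw becomes an M-edge): color all other
    neighbors of u and w white, then remove u and w."""
    if color.get(u) == 'W' or color.get(w) == 'W':
        raise ValueError("contradiction: an M-edge endpoint is already white")
    for z in (adj[u] | adj[w]) - {u, w}:
        if color.get(z) == 'B':
            raise ValueError("contradiction: a neighbor of an M-edge is already black")
        color[z] = 'W'
    for z in (u, w):
        for nb in list(adj[z]):
            adj[nb].discard(z)
        del adj[z]
        color.pop(z, None)
```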
The third reduction step is based on vertex degrees. A triangle in $G$ with vertices, say $a,b,c$, is called a [*peripheral triangle*]{} if $d_G(b)=d_G(c)=2$ (i.e., $N(b)=\{a,c\}$, $N(c)=\{a,b\}$) and $d_G(a) = 3$ (i.e., $a$ has exactly one further neighbor apart from $b,c$).
[**Peripheral Triangle Reduction.**]{} Let $abc$ be a peripheral triangle of $G$. Then remove $a,b,c$ from $G$.
\[pertriredcorrect\] $G$ has a d.i.m. if and only if, by applying Peripheral Triangle Reduction, the reduced graph $G'$ has a d.i.m.
[**Proof.**]{} Let $a,b,c$ induce a peripheral triangle in $G$, and let $u$ be the third neighbor of $a$.
First assume that $M$ is a d.i.m. of $G$. Recall that the triangle $a,b,c$ has exactly one $M$-edge. If $ab \in M$ then clearly, $G'$ has a d.i.m. $M':=M \setminus \{ab\}$, and correspondingly if $ac \in M$. Moreover, if $bc \in M$ then clearly, $G'$ has a d.i.m. $M':=M \setminus \{bc\}$.
Now assume that $G'$ has a d.i.m. $M'$. If $u$ is white then in $G$, $a$ is black and thus, either $M=M' \cup \{ab\}$ or $M=M' \cup \{ac\}$ is a d.i.m. of $G$. If $u$ is black then, by Observation \[pawleafedge\], vertex $a$ is white and then $M=M' \cup \{bc\}$ is a d.i.m. of $G$.
The subsequent notions and observations lead to some possible reductions.
If an edge $e \in E$ is contained in [**every**]{} d.i.m. of $G$, we call it a [*forced*]{} edge of $G$. If an edge $e \in E$ is [**not**]{} contained in [**any**]{} d.i.m. of $G$, we call it an [*excluded*]{} edge of $G$ (we can denote this by a red edge-color).
Note that in a graph with d.i.m. $M$, the set $M' \subseteq M$ of forced edges is an induced matching. In our final algorithm solving the DIM problem on $S_{2,2,3}$-free graphs, the set $M'$ of forced edges will be computed and for example, it has to be checked whether $M'$ is really an induced matching.
\[dimC3C5C7C4\] Let $M$ be a dominating induced matching in $G$.
- $M$ contains at least one edge of every odd cycle $C_{2k+1}$ in $G$, $k \ge 1$, and exactly one edge of every odd cycle $C_3$, $C_5$, $C_7$ of $G$.
- No edge of any $C_4$ is in $M$.
(See e.g. Observation 2 in [@BraMos2014].)
By Observation \[dimC3C5C7C4\] $(i)$, every $C_3$ contains exactly one $M$-edge. Then, since the pairwise distance of edges in any d.i.m. $M$ is at least 2, we have:
\[cly:k4gemfree\] If a graph has a d.i.m. then it is $K_4$-free.
[Assumption 1.]{} By Corollary \[cly:k4gemfree\], assume that the input graph is $K_4$-free. For every subset of four vertices, one can check if they induce a $K_4$ (which can be done in polynomial time).
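Such a brute-force test, in the spirit of Assumption 1, might look as follows (an $O(n^4)$ sketch over all 4-element vertex subsets; names are ours):

```python
from itertools import combinations

def has_k4(adj):
    """Brute-force test whether the graph contains a K_4."""
    for quad in combinations(adj, 4):
        if all(v in adj[u] for u, v in combinations(quad, 2)):
            return True
    return False

adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(has_k4(adj))   # True: vertices 1, 2, 3, 4 form a K_4
```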
By Observation \[dimC3C5C7C4\] $(i)$ with respect to $C_3$ and the distance property, we have the following:
\[pawleafedge\] If $a,b,c,d$ induce a paw with $C_3$ $bcd$, and $a$ is adjacent to $b$ then its leaf edge $ab$ is excluded.
\[obs:diamondbutterfly\] The mid-edge of any diamond in $G$ and the two peripheral edges of any induced butterfly are forced edges of $G$.
[Assumption 2.]{} By Observation \[obs:diamondbutterfly\], assume that the input graph is (diamond,butterfly)-free: One can apply the Edge Reduction to each mid-edge of any induced diamond, and to each peripheral edge of any induced butterfly (which can be done in polynomial time).
Let $G_1$ denote the graph with $V(G_1)=\{x_1,\ldots,x_5,y_1,z_1,y_3,z_3\}$ such that $x_1,\ldots,x_5$ induce a $C_5$, $x_1,y_1,z_1$ induce a triangle, $x_3,y_3,z_3$ induce a triangle, and there are no other edges in $G_1$.
By Observation \[dimC3C5C7C4\] $(i)$, with respect to $C_3$ and $C_5$, we have:
\[obs:G1forcededges\] Every induced $G_1$ contains three forced edges of $G$, namely $y_1z_1$, $y_3z_3$, and $x_4x_5$.
[Assumption 3.]{} By Observation \[obs:G1forcededges\], assume that the input graph is $G_1$-free: One can apply the Edge Reduction to the corresponding three forced edges of any induced $G_1$ (which can be done in polynomial time).
Let $G_2$ denote the graph with $V(G_2)=\{x_1,\ldots,x_5,y_1,z_1,x^*\}$ such that $x_1,\ldots,x_5$ induce a $C_5$, $x_1,y_1,z_1$ induce a triangle, $x_2,x_3,x^*$ induce a triangle, and there are no other edges in $G_2$.
By Observation \[dimC3C5C7C4\] $(i)$, with respect to $C_3$ and $C_5$, and by Observation \[pawleafedge\], we have:
\[obs:G2forcededges\] Every induced $G_2$ contains three forced edges of $G$, namely $y_1z_1$, $x_2x^*$, and $x_4x_5$.
[Assumption 4.]{} By Observation \[obs:G2forcededges\], assume that the input graph is $G_2$-free: One can apply the Edge Reduction to the corresponding three forced edges of any induced $G_2$ (which can be done in polynomial time).
Let $G_3$ denote the graph with $V(G_3)=\{x_1,\ldots,x_5,y,x^*\}$ such that $x_1,\ldots,x_5$ induce a $C_5$, $x_1,x_2,x_3,y$ induce a $C_4$, $x_4,x_5,x^*$ induce a triangle, and there are no other edges in $G_3$.
By Observation \[dimC3C5C7C4\] $(i)$, with respect to $C_3$ and $C_5$, by Observation \[dimC3C5C7C4\] $(ii)$ with respect to $C_4$, and by Observation \[pawleafedge\], we have:
\[obs:G3forcededges\] Every induced $G_3$ contains a forced edge of $G$, namely $x_4x_5$.
[Assumption 5.]{} By Observation \[obs:G3forcededges\], assume that the input graph is $G_3$-free: One can apply the Edge Reduction to the corresponding forced edge of any induced $G_3$ (which can be done in polynomial time).
Recall that by Observation \[dimC3C5C7C4\] $(ii)$, every edge of an induced $C_4$ is excluded, and by Observation \[pawleafedge\], the leaf edge of an induced paw is excluded.
\[pawleafblackwhite\] Let $a,b,c,d$ induce a paw with $C_3$ $bcd$, and $ab \in E$. If $a$ is black then $b$ is white and $cd \in M$. In particular, if the edge $cd$ is excluded, then the triangle $bcd$ must contain the $M$-edge $bc$ or $bd$, hence $b$ is black and thus vertex $a$ is forced to be white.
[Assumption 6.]{} By Observation \[pawleafblackwhite\] and the fact that all $C_4$-edges are excluded, and the leaf edges of paws are excluded, assume that in the input graph, there is no induced paw with vertices $a,b,c,d$, $C_3$ $bcd$ and leaf edge $ab \in E$, such that $cd$ is a $C_4$-edge: One can apply the Vertex Reduction to vertex $a$ whenever $cd$ is a $C_4$-edge (which can be done in polynomial time).
Let us conclude by pointing out that further similar observations could be stated to possibly detect forced edges or vertices forced to be white; for example, vertices forced to be white can be found by small degrees:
\[triangleneighbdeg1\]
- If $a,b,c$ induce a $C_3$ and $ad \in E$ such that $d_G(d)=1$ then $d$ is forced to be white.
- If $a,b,c,d$ induce a $C_4$ and $d_G(d)=2$ then $d$ is forced to be white.
[**Proof.**]{} $(i)$: Recall Observation \[pawleafedge\].
$(ii)$: Recall Observation \[dimC3C5C7C4\] $(ii)$.
Subsequently, concerning Assumptions 1-2, we will not recall them in an explicit way, but just recall that the input graph $G$ is $(K_4$, diamond, butterfly)-free. Concerning Assumptions 3-6, we will recall them in an explicit way.
The distance levels of an $M$-edge $xy$ in a $P_3$ {#subsec:distlevels}
--------------------------------------------------
Since it is trivial to check whether $G$ has a d.i.m. $M$ with exactly one edge, we can assume from now on that $|M| \geq 2$; for an edge $xy \in E$, assume that $xy \in M$, and based on [@BraMos2017; @BraMos2017/2], we first describe some general structure properties for the distance levels of $xy \in M$. Since $G$ is $(K_4$, diamond, butterfly)-free, we have:
\[obse:neighborhood\] For every vertex $v$ of $G$, $N(v)$ is the disjoint union of isolated vertices and at most one edge. Moreover, for every edge $xy \in E$, there is at most one common neighbor of $x$ and $y$.
We have:
\[obse:xy-in-P3\] If $|M| \geq 2$ then there is an edge in $M$ which is contained in an induced $P_3$ of $G$.
[**Proof.**]{} Let $xy \in M$ and assume that $xy$ is not part of an induced $P_3$ of $G$. Since $G$ is connected and $|M| \geq 2$, $(N(x) \cup N(y)) \setminus \{x,y\} \neq \emptyset$, and since we assume that $xy$ is not part of an induced $P_3$ of $G$ and $G$ is $K_4$- and diamond-free, there is exactly one neighbor of $xy$, namely a common neighbor, say $z$ of $x$ and $y$. Again, since $|M| \geq 2$, $z$ has a neighbor $a \notin \{x,y\}$, and since $G$ is $K_4$- and diamond-free, $a,x,y,z$ induce a paw. Clearly, the edge $za$ is excluded and has to be dominated by a second $M$-edge, say $ab \in M$ but now, since $G$ is butterfly-free, $zb \notin E$. Thus, $z,a,b$ induce a $P_3$ in $G$, and Observation \[obse:xy-in-P3\] is shown.
Thus, let $xy \in M$ be an $M$-edge for which there is a vertex $r$ such that $\{r,x,y\}$ induce a $P_3$ with edge $rx \in E$. Then, by the assumption that $xy \in M$, $x$ and $y$ are black and can lead to a (partial) feasible [*$xy$-coloring*]{} (if no contradiction arises). We consider a partition of $V$ into the distance levels $N_i=N_i(xy)$, $i \ge 1$, (and $N_0:=\{x,y\}$) with respect to the edge $xy$ (under the assumption that $xy \in M$).
Recall that by (\[IV(M)partition\]), $V=I \cup V(M)$ is a partition of $V$ where $I$ is an independent set (of white vertices) while $V(M)$ is the set of black vertices. Since we assume that $xy \in M$, clearly, $N_1 \subseteq I$ and thus: $$\label{N1subI}
N_1 \mbox{ is an independent set of white vertices.}$$
Moreover, no edge between $N_1$ and $N_2$ is in $M$. Since $N_1 \subseteq I$ and all neighbors of vertices in $I$ are in $V(M)$, we have: $$\label{N2M2S2}
G[N_2] \mbox{ is the disjoint union of edges and isolated vertices.}$$
Let $M_2$ denote the set of edges $uv \in E$ with $u,v \in N_2$ and let $S_2 = \{u_1,\ldots,u_k\}$ denote the set of isolated vertices in $N_2$; $N_2=V(M_2) \cup S_2$ is a partition of $N_2$. Obviously: $$\label{M2subM}
M_2 \subseteq M \mbox{ and } S_2 \subseteq V(M).$$
If for $xy \in M$, an edge $e \in E$ is contained in [**every**]{} dominating induced matching $M$ of $G$ with $xy \in M$, we say that $e$ is an [*$xy$-forced*]{} $M$-edge. The Edge Reduction step can also be applied for $xy$-forced $M$-edges (then, in the unsuccessful case, $G$ has no d.i.m. containing $xy$), and correspondingly for white vertices resulting from the black color of $x$ and $y$ and peripheral triangles.
Obviously, by (\[M2subM\]), we have: $$\label{M2xymandatory}
\mbox{Every edge in } M_2 \mbox{ is an $xy$-forced $M$-edge}.$$
Thus, from now on, after applying the Edge Reduction for $M_2$-edges, we can assume that $M_2=\emptyset$, i.e., $N_2=S_2 = \{u_1,\ldots,u_k\}$. For every $i \in \{1,\ldots,k\}$, let $u'_i \in N_3$ denote the [*$M$-mate*]{} of $u_i$ (i.e., $u_iu'_i \in M$). Let $M_3=\{u_iu'_i: i \in \{1,\ldots,k\}\}$ denote the set of $M$-edges with one endpoint in $S_2$ (and the other endpoint in $N_3$). Obviously, by (\[M2subM\]) and the distance condition for a d.i.m. $M$, the following holds: $$\label{noMedgesN3N4}
\mbox{ No edge with both ends in } N_3 \mbox{ and no edge between } N_3 \mbox{ and } N_4 \mbox{ is in } M.$$
As a consequence of (\[noMedgesN3N4\]) and the fact that every triangle contains exactly one $M$-edge (see Observation \[dimC3C5C7C4\] $(i)$), we have: $$\label{triangleaN3bcN4}
\mbox{For every triangle $abc$} \mbox{ with } a \in N_3, \mbox{ and } b,c \in N_4, \mbox{ $bc \in M$ is an $xy$-forced $M$-edge}.$$
This means that for the edge $bc$, the Edge Reduction can be applied, and from now on, we can assume that there is no such triangle $abc$ with $a \in N_3$ and $b,c \in N_4$, i.e., for every edge $uv \in E$ in $N_4$: $$\label{edgeN4N3neighb}
N(u) \cap N(v) \cap N_3 = \emptyset.$$
According to $(\ref{M2subM})$ and the assumption that $M_2=\emptyset$ (recall $N_2 = \{u_1,\ldots,u_k\}$), let:
1. $T_{one} := \{t \in N_3: |N(t) \cap N_2| = 1\}$;
2. $T_i := T_{one} \cap N(u_i)$, $i \in \{1,\ldots,k\}$;
3. $S_3 := N_3 \setminus T_{one}$.
By definition, $T_i$ is the set of [*private*]{} neighbors of $u_i$ in $N_3$ (note that $u'_i \in T_i$), $T_1 \cup \ldots \cup T_k$ is a partition of $T_{one}$, and $T_{one} \cup S_3$ is a partition of $N_3$.
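Given the distance levels, the sets $S_2$, $T_{one}$, $T_i$ and $S_3$ can be read off directly; a possible sketch, reusing the output of the distance-level computation above (it assumes that the $M_2$-edges have already been removed by the Edge Reduction, so that $N_2 = S_2$):

```python
def partition_levels(adj, levels):
    """Split N_2 and N_3 as in the text; levels is the list returned by distance_levels."""
    n2 = levels[2] if len(levels) > 2 else set()
    n3 = levels[3] if len(levels) > 3 else set()
    s2 = sorted(n2)                       # S_2 = {u_1, ..., u_k}
    t = {u: set() for u in s2}            # t[u_i] will become T_i
    t_one, s3 = set(), set()
    for v in n3:
        nb2 = [u for u in s2 if u in adj[v]]
        if len(nb2) == 1:                 # exactly one neighbor in N_2: v belongs to T_one
            t[nb2[0]].add(v)
            t_one.add(v)
        else:                             # at least two neighbors in N_2: v belongs to S_3
            s3.add(v)
    return s2, t, s3
```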
\[lemm:structure2\] The following statements hold:
1. For all $i \in \{1,\ldots,k\}$, $T_i \cap V(M)=\{u_i'\}$.
2. For all $i \in \{1,\ldots,k\}$, $T_i$ is the disjoint union of isolated vertices and at most one edge.
3. $G[N_3]$ is bipartite.
4. $S_3 \subseteq I$, i.e., $S_3$ is an independent vertex set of white vertices.
5. If a vertex $t_i \in T_i$ sees two vertices in $T_j$, $i \neq j$, $i,j \in \{1,\ldots,k\}$, then $u_it_i \in M$ is an $xy$-forced $M$-edge.
[**Proof.**]{} $(i)$: Holds by definition of $T_i$ and by the distance condition of a d.i.m. $M$.
$(ii)$: Holds by Observation \[obse:neighborhood\].
$(iii)$: Follows by Observation \[dimC3C5C7C4\] $(i)$ since every odd cycle in $G$ must contain at least one $M$-edge, and by (\[noMedgesN3N4\]).
$(iv)$: If $v \in S_3$, i.e., $v$ sees at least two $M$-vertices, then clearly $v \in I$, and thus, $S_3 \subseteq I$ is an independent vertex set (recall that $I$ is an independent vertex set).
$(v)$: Suppose that $t_1 \in T_1$ sees $a$ and $b$ in $T_2$. Then, if $ab \in E$, $u_2,a,b,t_1$ induce a diamond in $G$. Thus, $ab \notin E$ and now, $u_2,a,b,t_1$ induce a $C_4$ in $G$; the only possible $M$-edge for dominating $t_1a,t_1b$ is $u_1t_1$, i.e., $t_1=u'_1$.
From now on, by Lemma \[lemm:structure2\] $(v)$, we can assume that for every $i,j \in \{1,\ldots,k\}$, $i \neq j$, any vertex $t_i \in T_i$ sees at most one vertex in $T_j$.
See [@BraMos2017/2] for the following fact:
\[P1\] If $v \in N_2$ then $v$ is an endpoint of an induced $P_4$, say with vertices $v,v_1,v_2,v_3$, $v_1 \in N_1$, and with edges $vv_1 \in E$, $v_1v_2 \in E$, $v_2v_3 \in E$, and if $v \in N_i$ for $i \ge 3$ then $v$ is an endpoint of an induced $P_5$, say with vertices $v,v_1,v_2,v_3,v_4$ such that $v_1,v_2,v_3,v_4 \in \{x,y\} \cup N_1 \cup \ldots \cup N_{i-1}$ and with edges $vv_1 \in E$, $v_1v_2 \in E$, $v_2v_3 \in E$, $v_3v_4 \in E$.
[**Proof.**]{} First assume that $v \in N_2$. Since $xy$ is part of a $P_3$ with vertices $x,y,r$ and edges $xy, xr$, we have the following cases: If $vr \in E$ then $(v,r,x,y)$ is a $P_4$. Now assume that $vr \notin E$, and let $v_1 \in N_1$ be a neighbor of $v$. If $v_1r \in E$ then, since $G$ is diamond-free, $v_1x \notin E$ or $v_1y \notin E$. If $v_1x \in E$ and $v_1y \notin E$ then $(v,v_1,x,y)$ is a $P_4$, and if $v_1x \notin E$ and $v_1y \in E$ then $(v,v_1,y,x)$ is a $P_4$. Finally, if $v_1r \notin E$ then if $v_1x \in E$, $(v,v_1,x,r)$ is a $P_4$, and if $v_1x \notin E$ then $v_1y \in E$ and now, $(v,v_1,y,x)$ is a $P_4$.
If $v \in N_i$ for $i \ge 3$ then by similar arguments as above, $v$ is endpoint of a $P_5$ as described above.
Let $X := \{x,y\} \cup N_1 \cup N_2 \cup N_3$ and $Y := V \setminus X$. Subsequently, for checking if $G$ has a d.i.m. $M$ with $xy \in M$, we first consider the possible colorings for $G[X]$.
Coloring $G[X]$ {#ColoringG[X]}
===============
As in [@BraMos2017/2], we have:
\[lemm:N4empty\] The following statements hold:
1. For every edge $vw \in E$, $v,w \in N_3$, with $vu_i \in E$ and $wu_j \in E$ $($possibly $i=j)$, we have $|\{v,w\} \cap \{u'_i,u'_j\}| = 1$.
2. For every edge $st \in E$ with $s \in S_3$ and $t \in T_i$, $t=u'_i$ holds, and thus, $u_it$ is an $xy$-forced $M$-edge.
[**Proof.**]{} $(i)$: By (\[noMedgesN3N4\]), $N_3$ does not contain any $M$-edge, and clearly, if $vw \in E$ then either $v$ or $w$ is black; without loss of generality, let $v$ be black but then $v=u'_i$ and $w$ is white, i.e., $w \neq u'_j$.
$(ii)$: By Lemma \[lemm:structure2\] $(iv)$, $S_3 \subseteq I$ and thus, by Lemma \[lemm:N4empty\] $(i)$, for the edge $st$ with $s \in S_3$, $s$ is white and thus, $t=u'_i$ holds.
From now on, after the Edge Reduction step, we can assume that $S_3$ is isolated in $G[N_3]$. This means that every edge between $N_2$ and $N_3$ containing a vertex of $S_3$ is dominated. If $N_4 \neq \emptyset$ and $t \in N_4$ has a neighbor $s \in S_3$ then $t$ is forced to be black, and thus, every neighbor of $t$ in $N_3$ is forced to be white.
Thus, for coloring $G[X]$, we can assume that $S_3 = \emptyset$, i.e., $N_3=T_1 \cup \ldots \cup T_k$.
Recall that:
- All neighbors in $T_1 \cup \ldots \cup T_k$ of a black vertex in $T_1 \cup \ldots \cup T_k$ must be colored white, and all neighbors of a white vertex in $T_1 \cup \ldots \cup T_k$ must be colored black.
- Every $T_i$, $i \in \{1,\ldots,k\}$, contains exactly one vertex which is black. Thus, if $t_i \in T_i$ is black then all the remaining vertices of $T_i$ must be colored white.
- If all but one vertices of $T_i$, $i \in \{1,\ldots,k\}$, are white and the final vertex $t$ is not yet colored, then $t$ must be colored black. In particular, if $|T_i|=1$, i.e., $T_i=\{t_i\}$, then $t_i$ is forced to be black, and $u_it_i$ is an $xy$-forced $M$-edge.
- All neighbors in $N_4$ of a black vertex in $T_1 \cup \ldots \cup T_p$ must be colored white, and all neighbors in $N_4$ of a white vertex in $T_1 \cup \ldots \cup T_p$ must be colored black.
- All neighbors in $T_1 \cup \ldots \cup T_p$ of a black vertex in $N_4$ must be colored white, and all neighbors in $T_1 \cup \ldots \cup T_p$ of a white vertex in $N_4$ must be colored black.
Thus, again after the Edge Reduction step, we can assume that for every $i \in \{1,\ldots,k\}$, $|T_i| \ge 2$. However, it can be shown also by $S_{2,2,3}$-free arguments: If $|T_i|=1$ and $T_i$ contacts $T_j$, $i \neq j$, such that $T_j$ contacts only $T_i$, then $T_j$ is [*trivial*]{}.
\[singleTiconncomponents\] Let $H$ be a connected component of $G[S_2 \cup T_{one}]$ containing a $T_i$ with $|T_i|=1$. If $T_i$ contacts two nontrivial parts of the connected component, say $H_1$ and $H_2$, then there is an edge between $H_1$ and $H_2$, i.e., $H \setminus T_i$ is still connected.
[**Proof.**]{} Without loss of generality, assume that $|T_3|=1$, i.e., $T_3=\{t_3\}$, and $T_1,T_2$ are part of $H_1$ such that $T_3$ contacts $T_2$, and $T_2$ contacts $T_1$, as well as $T_4,T_5$ are part of $H_2$ such that $T_3$ contacts $T_4$ and $T_4$ contacts $T_5$. Suppose to the contrary that $H_1 {\text{\textcircled{{\footnotesize 0}}}}H_2$. Let $w_3 \in N_1$ be a neighbor of $u_3$, and without loss of generality, let $w_3x \in E$. Let $t_2 \in T_2$ with $t_2t_3 \in E$, and let $t_4 \in T_4$ with $t_4t_3 \in E$. Since $t_3$ is black, $t_2$ and $t_4$ are white, and thus, we can assume that $|T_2| \ge 2$ and $|T_4| \ge 2$ since they must have a black vertex; let $t'_2 \in T_2$ and $t'_4 \in T_4$ with $t'_i \neq t_i$, $i=2,4$, be possible black candidates, and thus, $t'_it_3 \notin E$, $i=2,4$ (otherwise, they are still white).
Since $t_3,t_2,u_2,t_4,u_4,u_3,w_3,x$ (with center $t_3$) do not induce an $S_{2,2,3}$, we have $u_2w_3 \in E$ or $u_4w_3 \in E$.
Since $t_3,t_2,t'_2,t_4,t'_4,u_3,w_3,x$ (with center $t_3$) do not induce an $S_{2,2,3}$, we have $t_2t'_2 \notin E$ or $t_4t'_4 \notin E$ (recall that $t'_it_3 \notin E$, $i=2,4$).
Similarly, if $t_2$ contacts $T_1$, i.e., there is $t_1 \in T_1$ with $t_1t_2 \in E$, and $t_4$ contacts $T_5$, i.e., there is $t_5 \in T_5$ with $t_4t_5 \in E$, then $t_1$ and $t_5$ are black, and thus, $t_3t_1 \notin E$ and $t_3t_5 \notin E$. But then $t_3,t_2,t_1,t_4,t_5,u_3,w_3,x$ (with center $t_3$) induce an $S_{2,2,3}$, which is a contradiction. Thus, without loss of generality, $t_2 {\text{\textcircled{{\footnotesize 0}}}}T_1$; let $t'_2$ contact $T_1$.
Assume first that $t_2t'_2 \notin E$ but $t_4t'_4 \in E$. Since $t_3,t_2,u_2,t_4,t'_4,u_3,w_3,x$ (with center $t_3$) do not induce an $S_{2,2,3}$, we have $u_2w_3 \in E$. Recall that $t_3t'_2 \notin E$. Let $t_1t'_2 \in E$ for some $t_1 \in T_1$.
Since $t_3,t_1,t'_2,t_4,t'_4,u_3,w_3,x$ (with center $t_3$) do not induce an $S_{2,2,3}$, we have $t_3t_1 \notin E$ but now, $u_2,w_3,x,t'_2,t_1,t_2,t_3,t_4$ (with center $u_2$) induce an $S_{2,2,3}$, which is a contradiction. Thus, $t_2t'_2 \notin E$ and $t_4t'_4 \notin E$. Let $t_5t'_4 \in E$ for some $t_5 \in T_5$. Recall that $t_3t'_4 \notin E$.
Since $t_3,t_1,t'_2,t_5,t'_4,u_3,w_3,x$ (with center $t_3$) do not induce an $S_{2,2,3}$, we have $t_3t_1 \notin E$ or $t_3t_5 \notin E$; by symmetry, assume that $t_3t_1 \notin E$.
Since $u_2,w_3,x,t'_2,t_1,t_2,t_3,t_4$ (with center $u_2$) do not induce an $S_{2,2,3}$, we have $u_2w_3 \notin E$; recall that this implies $u_4w_3 \in E$.
Since $t_3,t_2,u_2,t_5,t'_4,u_3,w_3,x$ (with center $t_3$) do not induce an $S_{2,2,3}$, we have $t_3t_5 \notin E$.
But now, if $t_4t_5 \notin E$ then $u_4,w_3,x,t'_4,t_5,t_4,t_3,t_2$ (with center $u_4$) induce an $S_{2,2,3}$, and if $t_4t_5 \in E$ then $t_3,t_2,u_2,t_4,t_5,u_3,w_3,x$ (with center $t_3$) induce an $S_{2,2,3}$, which is a contradiction.
Thus, Lemma \[singleTiconncomponents\] is shown.
From now on, we assume that for every $i \in \{1,\ldots,k\}$, $|T_i| \ge 2$.
Recall that by Lemma \[lemm:structure2\] $(v)$ (and since $G$ is diamond-free), if $t_i \in T_i$ sees two vertices $t_j,t'_j \in T_j$, $i \neq j$, then $t_i$ is forced to be black and thus, all other vertices in $T_i$ are white, i.e., $T_i$ is completely colored. Moreover, $t_j,t'_j \in T_j$ are forced to be white. Thus, we can assume: $$\label{tiatmostoneneighbinIj}
\mbox{Every vertex of } T_i \mbox{ has at most one neighbor in } T_j, j \neq i.$$
Let $K$ be the connected component of $G[S_2 \cup T_{one}]$ containing $t_i$ (without loss of generality, say $i = 1$ and $K$ is the subgraph induced by $\{u_1,\ldots,u_p\}$ and by $T_1 \cup \ldots \cup T_p$, $2 \leq p \leq k$).
\[DIMxyN1N2N3pol\] Showing that $G$ has no d.i.m. $M$ with $t_1 \in V(M)$, or finding a d.i.m. $M(t_1)$ of $K$ with $u_1t_1 \in M$ can be done in polynomial time since there is only a polynomial number of possible black-white colorings of $K$.
[**Proof.**]{} First we show:
\[S223frconncompinN3\] If there are two edges, namely between $T_i$ and $T_j$ and between $T_j$ and $T_{\ell}$ $($possibly $i=\ell)$ or between $T_i$ and $T_j$ and between $T_j$ and $N_4$ then they do not induce a $2P_2$ in $G[N_3 \cup N_4]$.
[*Proof.*]{} Assume first without loss of generality that there are two edges, namely one between $T_1$ and $T_2$ and another between $T_2$ and $T_3$ or between $T_2$ and $N_4$.
Suppose to the contrary that $t_1t_2 \in E$ and $t'_2t_3 \in E$ induce a $2P_2$ in $G[N_3 \cup N_4]$ for $t_i \in T_i$, $i=1,2$, $t_3 \in T_3 \cup N_4$ and $t'_2 \in T_2$. Recall that $y,x,r$ induce a $P_3$. If $u_2r \in E$ then $u_2,t_2,t_1,t'_2,t_3,r,x,y$ (with center $u_2$) induce an $S_{2,2,3}$, which is a contradiction. Thus, $u_2r \notin E$; let $w \in N_1$ with $u_2w \in E$ and without loss of generality, let $wx \in E$. But then $u_2,t_2,t_1,t'_2,t_3,w,x,r$ (with center $u_2$) induce an $S_{2,2,3}$, which is again a contradiction. Analogously, the same can be shown for two edges between $T_1$ and $T_2$.
Thus, Claim \[S223frconncompinN3\] is shown. $\diamond$
The procedure starts with a subset $T_i$, say $i=1$, which is not yet completely colored, and in particular, none of its vertices is already black. Recall that by (\[tiatmostoneneighbinIj\]), for every $j \in \{2,\ldots,p\}$, any vertex $t_1 \in T_1$ sees at most one vertex in $T_j$. We have to check for every vertex $t_1 \in T_1$ (which is not yet colored) whether $t_1$ could be black.
We are going to show that, once $t_1 \in T_1$ is assumed to be black, it will lead to a complete black-white coloring of all other vertices in $K$ or to a contradiction.
For instance, if there are three edges between $T_1$ and $T_2$, say $t_1t_2 \in E$, $t'_1t'_2 \in E$, $t''_1t''_2 \in E$, $t_i,t'_i,t''_i \in T_i$, $i=1,2$, then $t_1$ is black if and only if $t_2$ is white, $t'_1$ is black if and only if $t'_2$ is white, and $t''_1$ is black if and only if $t''_2$ is white. Without loss of generality, assume that $t_1$ is black, and $t_2$ is white. Then $t'_1$ is white, and $t'_2$ is black, but now, $t''_1$ and $t''_2$ are both white, so the edge $t''_1t''_2$ has two white endpoints, which is a contradiction.
Let us say that a vertex $t \in T_i$ (for $i \in \{1,\ldots,k\}$) is an [*$N_3$-out-vertex*]{} of $T_i$ if it contacts some $T_j$ with $j \neq i$, $t$ is an [*$N_4$-out-vertex*]{} of $T_i$ if it contacts $N_4$, and $t$ is an [*in-vertex*]{} of $T_i$ otherwise. For every $T_i$, the set of in-vertices of $T_i$ can be reduced to at most one such vertex: $$\label{atmostoneinvertex}
\mbox{ For every } i \in \{1,\ldots,k\}, T_i \mbox{ has at most one in-vertex}.$$
In fact, if there is an in-vertex in $T_i$ then one can reduce the set of all in-vertices of $T_i$ to exactly one of them with minimum weight; that can be done in polynomial time.
Recall that we start with some $T_i$, say $i=1$, which is not yet completely colored. Then by (\[tiatmostoneneighbinIj\]), for $j \neq 1$, no vertex in $T_1$ has two neighbors in $T_j$. If the color of a vertex $t_1 \in T_1$ is black, then the color of all vertices of $T_1$ and the color of all vertices of any $T_j$ which is adjacent to $T_1$ is forced.
First assume that an $N_3$-out-vertex $t_1 \in T_1$, which is not yet colored, could be a black vertex, and recall that in any $T_j$, $t_1$ has at most one neighbor; without loss of generality, assume that $t_1$ has exactly one neighbor in $T_2$.
Then all vertices in $T_1 \setminus \{t_1\}$ are white. If a white vertex $t'_1$ of $T_1$ has a neighbor $t_2 \in T_2$ then $t_2$ is black and all other vertices in $T_2$ are white, i.e., $T_2$ is completely colored. Thus assume that no white vertex in $T_1$ has a neighbor in $T_2$: Let $t_1t_2 \in E$ with $t_2 \in T_2$. Then $t_2$ is white, and if $t_2$ has a neighbor $t'_2 \in T_2$ then $t'_2$ is black and thus, $T_2$ is completely colored. Thus, assume that $t_2$ has no neighbor in $T_2$.
If all other vertices in $T_2$ are in-vertices then by $(\ref{atmostoneinvertex})$, there is only one of them, say $t'_2$, and thus, $t'_2$ is forced to be black, and $T_2$ is completely colored.
Thus, first assume that there is a second $N_3$-out-vertex, say $t'_2 \in T_2$ which could be black but is not yet colored. Then clearly, $t'_2$ has no neighbor in $T_1 \cup \{t_2\}$. Let $t'_2t_3 \in E$ for $t_3 \in T_3$. Again by (\[tiatmostoneneighbinIj\]), $t_3t_2 \notin E$. Since by Claim \[S223frconncompinN3\], $t_1,t_2,t'_2,t_3$ do not induce a $2P_2$, the only possible edge between $t_1t_2$ and $t'_2t_3$ is $t_1t_3 \in E$, but now, $t_3$ is white which implies that $t'_2$ is black and thus, $T_2$ is completely colored.
If $t'_2t_3 \in E$ for $t_3 \in N_4$ then again, by Claim \[S223frconncompinN3\], $t_1,t_2,t'_2,t_3$ do not induce a $2P_2$, and thus, since $t'_2$ is not yet colored, it follows that $t_3t_1 \in E$ and thus, $t_3$ is white and now, $t'_2$ is black and thus, $T_2$ is again completely colored.
Finally, if $t_1$ is the in-vertex in $T_1$ and we assume that $t_1$ is black then every neighbor of the out-vertices of $T_1$ is forced to be black, and thus, every $T_j$ which contacts $T_1$ is completely colored.
In the same way as above, it can be done for every $T_i$ which is not yet completely colored but is adjacent to an already completely colored $T_j$.
Thus, Theorem \[DIMxyN1N2N3pol\] is shown.
Next we show:
\[lemm:G\[X\]coloring\] For $S_{2,2,3}$-free graphs $G$ with $N_4 \ne \emptyset$, the number of feasible $xy$-colorings of $G[X]$ $($with contact to $N_4)$ is at most polynomial. In particular, such $xy$-colorings can be detected in polynomial time.
For the proof of Theorem \[lemm:G\[X\]coloring\], we will collect some facts and propositions below.
\[rem1\] Recall that according to Theorem $\ref{DIMxyN1N2N3pol}$, once the color of a vertex in $T_i$, $1 \le i \le k$, is fixed to be black, then the color of all vertices of the connected component of $G[S_2 \cup T_{one}]$ containing $T_i$ is forced $($not necessarily in a feasible way$)$.
By Remark \[rem1\] and by Lemma \[lemm:structure2\], we have:
\[QconncompK\] Let $Q$ denote the family of connected components of $G[S_2 \cup T_{one}]$, and let $K$ be a member of $Q$.
- By Lemma $\ref{lemm:structure2}$ $(iv)$, any vertex in $V(K) \cap T_{one}$ contacting $S_3$ is black, and thus, the color of each vertex of $K$ is forced.
- If for some $i \in \{1,\ldots,k\}$, $K$ contains a subset $T_i$ such that $|T_i| = 1$, then by Lemma $\ref{lemm:structure2}$ $(i)$, the vertex in $T_i$ is black, and thus, the color of each vertex of $K$ is forced.
- If for some $i \in \{1,\ldots,k\}$, $K$ contains a subset $T_i$ such that $|T_i| \geq 2$ and there is a vertex $z \in N_4$ with $z {\text{\textcircled{{\footnotesize 1}}}}T_i$ then, by the $C_4$-property in Observation $\ref{dimC3C5C7C4}$ $(ii)$ and since $G$ is diamond-free, $G$ has no d.i.m. with $xy \in M$.
- If $K {\text{\textcircled{{\footnotesize 0}}}}N_4$ then clearly, $K$ can be treated independently of the other members of $Q$.
Thus, by Proposition \[QconncompK\], we can restrict $Q$ as follows: Let $Q^*$ be the family of connected components $K$ of $G[S_2 \cup T_{one}]$ such that:
- (R1) no vertex in $V(K) \cap T_{one}$ contacts $S_3$,

- (R2) $V(K)$ contains no subset $T_i$, $1 \le i \le k$, such that $|T_i|= 1$,

- (R3) for any $z \in N_4$, there is at least one non-neighbor of $z$ in $V(K) \cap N_3$, and

- (R4) some vertex of $V(K)$ contacts $N_4$.
\[P2\] $|Q^*| \le 3$.
[**Proof.**]{} Suppose to the contrary that $|Q^*| \ge 4$; let $L_1,\ldots,L_4 \in Q^*$. Let $N_{x} := \{w \in N_1: w$ is adjacent to $x$ and non-adjacent to $y\}$, $N_{y} := \{w \in N_1: w$ is adjacent to $y$ and non-adjacent to $x\}$, and $N_{xy} := \{w \in N_1: w$ is adjacent to $x$ and $y\}$. Then $N_{xy} \cup N_x \cup N_y$ is a partition of $N_1$. Since $G$ is (diamond,$K_4$)-free, $|N_{xy}| \leq 1$. Since $\{r,x,y\}$ induce a $P_3$ with edge $rx \in E$, we have $N_{x} \neq \emptyset$. We first show:
\[XQ\*claim1\] For every $w \in N_x$ $(w \in N_y$, respectively$)$, at most one component in $Q^*$ contacts $w$.
[*Proof.*]{} Suppose to the contrary that there is a vertex $w \in N_x$ and there are two components $L_1,L_2 \in Q^*$ such that for $u_i \in S_2 \cap V(L_i)$, $i=1,2$, $wu_1 \in E$ and $wu_2 \in E$. Recall that $T_i=N(u_i) \cap T_{one}$ and $t_i \in T_i$ (subsequently, we will also use this in the proofs of the following claims). Then by (R4), there are vertices $t_1 \in T_1$ and $z \in N_4$ with $t_1z \in E$, and by (R3), there is $t_2 \in T_2$ with $zt_2 \notin E$. But now, $w,x,y,u_2,t_2,u_1,t_1,z$ (with center $w$) induce an $S_{2,2,3}$, which is a contradiction. Thus, at most one component in $Q^*$ contacts $w \in N_x$. Analogously, it is true for $w \in N_y$. $\diamond$
\[XQ\*claim2\] At most two components in $Q^*$ contact $N_x$ $(N_y$, respectively$)$.
[*Proof.*]{} Suppose to the contrary that there are three such components $L_1,L_2,L_3 \in Q^*$ contacting $N_x$. By Claim \[XQ\*claim1\], $L_1,L_2,L_3$ do not have common neighbors in $N_x$. Thus, let $w_i$, $i=1,2,3$, be distinct neighbors of $L_i$ in $N_x$. But then $x,w_1,u_1,w_2,u_2,w_3,u_3,t_3$ (with center $x$) induce an $S_{2,2,3}$, which is a contradiction. Thus, at most two components in $Q^*$ contact $N_x$, and analogously, at most two components in $Q^*$ contact $N_y$. $\diamond$
However, we can show:
\[XQ\*claim3\] If two components, say $L_1,L_2$ in $Q^*$ contact $N_x$ then no other component $L_i$, $i \ge 3$, in $Q^*$ contacts $N_y$.
[*Proof.*]{} Assume that $L_1,L_2$ contact $N_x$, say $w_iu_i \in E$, $i=1,2$, for $w_i \in N_x$ (recall $w_1 \neq w_2$ and $w_1u_2 \notin E, w_2u_1 \notin E$ by Claim \[XQ\*claim1\]), and suppose to the contrary that there is a component $L_3 \in Q^*$ contacting $N_y$, say $wu_3 \in E$ for $w \in N_y$.
By Claim \[XQ\*claim1\] and since $L_3$ contacts $w$, $L_1,L_2$ do not contact $w$ but now, $x,y,w,w_1,u_1,w_2,u_2,t_2$ (with center $x$) induce an $S_{2,2,3}$, which is a contradiction. $\diamond$
\[XQ\*claim4\] At most two components in $Q^*$ contact $N_{xy}$.
[*Proof.*]{} Let $N_{xy}=\{w\}$ (recall that $|N_{xy}| \le 1$). Suppose to the contrary that there are three such components $L_1,L_2,L_3 \in Q^*$ contacting $N_{xy}$, say $u_1,u_2,u_3$, $u_i \in L_i \cap N_2$, contact $w$. Assume that $z \in N_4$ contacts $t_3 \in T_3$. Clearly, by (R3), there are vertices $t_1 \in T_1, t_2 \in T_2$ with $zt_1 \notin E$, and $zt_2 \notin E$. But then $w,u_1,t_1,u_2,t_2,u_3,t_3,z$ (with center $w$) induce an $S_{2,2,3}$, which is a contradiction. $\diamond$
Next we show:
\[XQ\*claim5\] If two components, say $L_1,L_2$ in $Q^*$ contact $N_{xy}$ then no other component $L_i$, $i \ge 3$, in $Q^*$ contacts $N_x$ or $N_y$.
[*Proof.*]{} Let $N_{xy}=\{w\}$ (recall that $|N_{xy}| \le 1$), and let $L_1,L_2$ in $Q^*$ contact $w$, say $u_i \in L_i \cap N_2$, $i=1,2$, contact $w$. Suppose to the contrary that there is an $L_3$ in $Q^*$ contacting $N_x$, say $u_3 \in L_3 \cap N_2$ and $w_x \in N_x$ with $u_3w_x \in E$. Recall $u_3w \notin E$ by Claim \[XQ\*claim4\]. If $u_1w_x \notin E$ and $u_2w_x \notin E$ then $w,u_1,t_1,u_2,t_2,x,w_x,u_3$ (with center $w$) would induce an $S_{2,2,3}$. By Claim \[XQ\*claim1\], at most one of $u_1,u_2$ contacts $w_x$, say without loss of generality $u_1w_x \in E$ and $u_2w_x \notin E$. Recall that by (R3), there is a vertex $z \in N_4$ with $zt_1 \in E$ and $zt_2 \notin E$ but now, $u_1,t_1,z,w_x,u_3,w,u_2,t_2$ (with center $u_1$) induce an $S_{2,2,3}$, which is a contradiction. $\diamond$
From now on, we can assume that at most one component in $Q^*$ contacts $N_{xy}$. If no component in $Q^*$ contacts $N_{xy}$ then by Claim \[XQ\*claim3\], $|Q^*| \le 3$. Thus assume that $L_1 \in Q^*$ contacts $N_{xy}$, say $u_1w \in E$ for $u_1 \in L_1 \cap N_2$ and $N_{xy}=\{w\}$. Suppose to the contrary that there are three further components $L_2,L_3,L_4 \in Q^*$ contacting $N_x$, $N_y$. By Claim \[XQ\*claim2\], at most two of them contact $N_x$, and at most two of them contact $N_y$. Without loss of generality, assume that $L_2,L_3$ contact $N_x$, say $u_2w_2 \in E$ and $u_3w_3 \in E$ for $w_2,w_3 \in N_x$, $w_2 \neq w_3$, and $u_i \in L_i \cap N_2$, $i=2,3$. By Claim \[XQ\*claim1\], $u_2w_3 \notin E$ and $u_3w_2 \notin E$.
By the assumption that $L_1 \in Q^*$ contacts $N_{xy}$, and by Claim \[XQ\*claim5\], we have $u_2w \notin E$ and $u_3w \notin E$. Let $z \in N_4$ be a neighbor of $t_1 \in L_1 \cap N_3$. Since $x,w_2,u_2,w_3,u_3,w,u_1,t_1$ (with center $x$) do not induce an $S_{2,2,3}$, we have $u_1w_2 \in E$ or $u_1w_3 \in E$. If $u_1w_2 \in E$ and $u_1w_3 \in E$ then $u_1,t_1,z,w_2,u_2,w_3,u_3,t_3$ (with center $u_1$) would induce an $S_{2,2,3}$ (recall that $t_3z \notin E$). Thus, without loss of generality, assume that $u_1w_2 \notin E$ and $u_1w_3 \in E$. But then $w_3,x,w_2,u_3,t_3,u_1,t_1,z$ (with center $w_3$) induce an $S_{2,2,3}$, which is a contradiction. This finally leads to $|Q^*| \le 3$.
Thus, Proposition \[P2\] is shown.
The proof of Theorem \[lemm:G\[X\]coloring\] follows by Remark 1 and by Proposition \[P2\].
Coloring $G[Y]$ {#ColoringG[Y]}
===============
Recall that $X := \{x,y\} \cup N_1 \cup N_2 \cup N_3$ and $Y := V \setminus X$. From now on, let $Y \neq \emptyset$. Subsequently, we apply the polynomial-time solution for $S_{2,2,2}$-free graphs [@HerLozRieZamdeW2015] (see Theorem \[DIMpolresults\] $(iii)$). In particular let us try to connect the “coloring approach” of [@HerLozRieZamdeW2015] with the above. Recall that for a d.i.m. $M$ of $G$, $V(G)=V(M) \cup I$ is a partition of $V(G)$, all vertices of $V(M)$ are black and all vertices of $I$ are white. Recall the following [*forcing rules*]{} (under the assumption that $xy \in M$):
- If a vertex $v$ is white then all of its neighbors must be black.
- If two adjacent vertices are black then all of their neighbors are white.
- If a vertex $u$ is black and all of its neighbors, except $v \in N(u)$, are white, then $v$ must be black.
Let us fix a feasible $xy$-coloring of $X$ if there is one (otherwise $xy \notin M$). Consequently, $G[Y]$ has a fixed partial $xy$-coloring of its vertices, due to the forcing rules. We try to extend it to a complete feasible $xy$-coloring.
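For illustration, the following Python fragment (a minimal sketch of our own, not part of the formal algorithm description) propagates a partial black-white coloring with the three forcing rules above until either a fixpoint or a contradiction is reached; the adjacency-dictionary representation of the graph is an assumption made only for this example.

```python
# Minimal sketch: propagate the forcing rules on a partial black/white coloring.
# "graph" maps each vertex to the set of its neighbors; "color" maps already
# colored vertices to "black" or "white". Returns the extended coloring, or
# None if a contradiction arises.
def propagate_forcing_rules(graph, color):
    changed = True
    while changed:
        changed = False
        for v in graph:
            c = color.get(v)
            if c == "white":
                # rule 1: every neighbor of a white vertex must be black
                for u in graph[v]:
                    if color.get(u) == "white":
                        return None
                    if u not in color:
                        color[u] = "black"
                        changed = True
            elif c == "black":
                mates = [u for u in graph[v] if color.get(u) == "black"]
                if len(mates) > 1:
                    return None          # a black vertex has at most one M-mate
                if mates:
                    # rule 2: v and its mate are black, every other neighbor is white
                    for u in graph[v]:
                        if u != mates[0] and u not in color:
                            color[u] = "white"
                            changed = True
                else:
                    unknown = [u for u in graph[v] if u not in color]
                    if not unknown:
                        return None      # black vertex without a possible M-mate
                    if len(unknown) == 1:
                        # rule 3: the only non-white neighbor must be black
                        color[unknown[0]] = "black"
                        changed = True
    return color
```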
\[obse:N4\] The fixed $xy$-coloring of $G[X]$ leads to a unique coloring of all vertices of $N_4$.
[**Proof.**]{} Let $v \in N_4$ and let $u \in N_3$ be a neighbor of $v$. If $u$ is white then $v$ is black, and if $u$ is black then, since by fact (\[noMedgesN3N4\]), the $M$-mate of $u$ is in $N_2$, $v$ is white.
First assume that $G[Y]$ is $S_{2,2,2}$-free. Then by Theorem \[DIMpolresults\] $(iii)$ (and the coloring approach of [@HerLozRieZamdeW2015]), one can check in polynomial time whether $G[Y]$ admits a feasible coloring of its vertices (consistent with the fixed $xy$-coloring of $X$), i.e., whether $G$ admits a complete $xy$-coloring of its vertices.
From now on, assume that $G[Y]$ is not $S_{2,2,2}$-free. Then let us show that, while $G[Y]$ contains an induced $S_{2,2,2}$, say $H$, one can remove some vertices of $H$, in order to obtain a reduced subgraph of $G[Y]$ which admits a feasible coloring of its vertices (consistent with the fixed $xy$-coloring of $X$) if and only if $G[Y]$ does so.
Let $H$ be an induced $S_{2,2,2}$ in $G[Y]$ with vertices $V(H)=\{d,a_1,a_2,b_1,b_2,c_1,c_2\}$ and edges $da_1$, $db_1$, $dc_1$, $a_1a_2$, $b_1b_2$, $c_1c_2$ (in particular, $d$ is the center of $H$). Let $p := \min \{i: h \in N_i \cap V(H)\}$, that is, $N_p$ is the $xy$-distance level with smallest distance to $xy$ to which a vertex of $H$ belongs (in particular $p \geq 4$ by construction).
Then there exists a vertex, say $z \in N_{p-1}$, $p-1 \ge 3$, contacting $H$. By Proposition \[P1\], we have:
- $z$ is the endpoint of an induced $P_5$ $(z,z_2,z_3,z_4,z_5)$, say $P(z)$, such that $z_2 \in N_{p-2}, z_3 \in N_{p-3}, z_4 \in N_{p-4}$\
(note that then no vertex in $H$ is adjacent to $z_2,z_3,z_4,z_5$, and no neighbor of $H$ is adjacent to $z_3,z_4,z_5$).
\[P3\] Vertex $z$ is nonadjacent to $d$, and in general, $N_{p-1} \cap N(d) = \emptyset$.
[**Proof.**]{} Suppose to the contrary that $z$ is adjacent to $d$. If $z {\text{\textcircled{{\footnotesize 0}}}}\{a_1,a_2\}$ and $z {\text{\textcircled{{\footnotesize 0}}}}\{b_1,b_2\}$ then $d,a_1,a_2,b_1,b_2,z,z_2,z_3$ (with center $d$) induce an $S_{2,2,3}$, and analogously for $z {\text{\textcircled{{\footnotesize 0}}}}\{a_1,a_2\}$ and $z {\text{\textcircled{{\footnotesize 0}}}}\{c_1,c_2\}$, as well as for $z {\text{\textcircled{{\footnotesize 0}}}}\{b_1,b_2\}$ and $z {\text{\textcircled{{\footnotesize 0}}}}\{c_1,c_2\}$. Thus, without loss of generality, we can assume that $z$ sees $a_1$ or $a_2$ and $z$ sees $b_1$ or $b_2$. Now, since $G$ is diamond-free, $z$ is adjacent to exactly one vertex in $\{a_1,a_2\}$ and to exactly one vertex in $\{b_1,b_2\}$, but then $z,a_1,a_2,b_1,b_2,z_2,z_3,z_4$ (with center $z$) induce an $S_{2,2,3}$, which is a contradiction.
\[P4\] Vertex $z$ is adjacent to some vertex in $\{a_1,b_1,c_1\}$.
[**Proof.**]{} Suppose to the contrary that $z {\text{\textcircled{{\footnotesize 0}}}}\{a_1,b_1,c_1\}$. Then, since by Proposition \[P3\], $zd \notin E$, $z$ is adjacent to some vertex in $\{a_2,b_2,c_2\}$.
If $z$ is adjacent to exactly one vertex in $\{a_2,b_2,c_2\}$, say by symmetry, $za_2 \in E$, $zb_2 \notin E$, $zc_2 \notin E$, then $d,a_1,a_2,z,b_1,b_2,c_1,c_2$ (with center $d$) induce an $S_{2,2,3}$.
If $z$ is adjacent to at least two vertices in $\{a_2,b_2,c_2\}$, say by symmetry, $za_2 \in E$, $zb_2 \in E$, then $z,a_1,a_2,b_1,b_2,z_2,z_3,z_4$ (with center $z$) induce an $S_{2,2,3}$, which is a contradiction.
\[P5\] Without loss of generality, we can assume that $z {\text{\textcircled{{\footnotesize 1}}}}\{a_1,a_2\}$, $z$ sees exactly one vertex in $\{b_1,b_2\}$, and $z {\text{\textcircled{{\footnotesize 0}}}}\{c_1,c_2\}$.
[**Proof.**]{} By Proposition \[P4\], assume without loss of generality that $za_1 \in E$. Since $d,b_1,b_2,c_1,c_2,a_1,z,z_2$ (with center $d$) do not induce an $S_{2,2,3}$, $z$ is adjacent either to some vertex in $\{b_1,b_2\}$ or to some vertex in $\{c_1,c_2\}$; without loss of generality, let $z$ be adjacent to some vertex in $\{b_1,b_2\}$, say $zb_1 \in E$. Then, since $z,a_1,a_2,b_1,b_2,z_2,z_3,z_4$ (with center $z$) do not induce an $S_{2,2,3}$, either $z {\text{\textcircled{{\footnotesize 1}}}}\{a_1,a_2\}$ or $z {\text{\textcircled{{\footnotesize 1}}}}\{b_1,b_2\}$; without loss of generality, let $z {\text{\textcircled{{\footnotesize 1}}}}\{a_1,a_2\}$.
Now, since $G$ is butterfly-free, neither $z {\text{\textcircled{{\footnotesize 1}}}}\{b_1,b_2\}$ nor $z {\text{\textcircled{{\footnotesize 1}}}}\{c_1,c_2\}$, and again, by the previous arguments, $z$ has a neighbor in $\{b_1,b_2\}$ or in $\{c_1,c_2\}$; by symmetry, let $z$ have exactly one neighbor in $\{b_1,b_2\}$, say $b_i$ where $i \in \{1,2\}$. Moreover, if $z$ has a neighbor in $\{c_1,c_2\}$, say $c_j$ where $j \in \{1,2\}$, then $z,b_i,b_{3-i},c_j,c_{3-j},z_2,z_3,z_4$ (with center $z$) induce an $S_{2,2,3}$. Thus, $z {\text{\textcircled{{\footnotesize 0}}}}\{c_1,c_2\}$, and Proposition \[P5\] is shown.
By Observation \[dimC3C5C7C4\] $(i)$, it follows:
\[P5one\] Exactly one of the edges $za_1,za_2,a_1a_2$ of the triangle $za_1a_2$ is in $M$.
\[lemm:N7\] If $N_7 = \emptyset$ then one can detect a white vertex of $H$ in polynomial time.
[**Proof.**]{} Assume $N_7 = \emptyset$. Then $p \le 6$, hence $z \in N_j$ with $3 \le j \le 5$. If $z \in N_5$ then $V(H) \subseteq N_6$ since $N_7 = \emptyset$, i.e., $d \in N_6$ but by Proposition \[P3\], $N_{p-1} \cap N(d) = \emptyset$, which is a contradiction. Thus, $z \in N_3 \cup N_4$. Then by the fixed $xy$-coloring of $G[X]$ (if $z \in N_3$) and by Proposition \[obse:N4\] (if $z \in N_4$), the color of $z$ is fixed. Now, since $z,a_1,a_2$ induce a $C_3$, we have:
- if $z$ is black and $zb_2 \in E$, then $b_2$ is white;
- if $z$ is black and $zb_1 \in E$, then $b_1$ is white;
- if $z$ is white, then $d$ is white.
Finally, vertex $z$ can be computed in polynomial time by definition.
\[theo:N7\] If $G$ is $S_{2,2,3}$-free and for $xy \in E$ which is part of a $P_3$ in $G$, $N_7 = \emptyset$, then one can check in polynomial time if $G$ has a d.i.m. $M$ with $xy \in M$.
[**Proof.**]{} The proof is given by the following procedure.
\[DIMwithxyN1N6\]
[**Input:**]{} A connected $(S_{2,2,3},K_4$,diamond,butterfly$)$-free graph $G = (V,E)$, which enjoys\
Assumptions 1-6 of Section $\ref{exclforced}$ and\
an edge $xy \in E$, which is part of a $P_3$ in $G$, with $N_7(xy)=\emptyset$.\
[**Task:**]{} Return a d.i.m. $M$ with $xy \in M$ $($STOP with success$)$ or\
a proof that $G$ has no d.i.m. $M$ with $xy \in M$ $($STOP with failure$)$.
- Set $M:= \{xy\}$. Determine the distance levels $N_i = N_i(xy)$, $1 \le i \le 6$, with respect to $xy$.
- Check whether $N_1$ is an independent set $($see fact $(\ref{N1subI}))$ and $N_2$ is the disjoint union of edges and isolated vertices $($see fact $(\ref{N2M2S2}))$. If not, then STOP with failure.
- For the set $M_2$ of edges in $N_2$, apply the Edge Reduction for every edge in $M_2$. Moreover, apply the Edge Reduction for each edge $bc$ according to fact $(\ref{triangleaN3bcN4})$ and then for each edge $u_it_i$ according to Lemma $\ref{lemm:structure2}$ $(v)$.
- [**if**]{} $N_4 = \emptyset$ [**then**]{} apply the approach described in Section $\ref{ColoringG[X]}$. Then either return that $G$ has no d.i.m. $M$ with $xy \in M$ or return $M$ as a d.i.m. with $xy \in M$.
- [**if**]{} $N_4 \neq \emptyset$ [**then**]{} for $X := \{x,y\} \cup N_1 \cup N_2 \cup N_3$ and $Y := V \setminus X$ $($according to the results of Section $\ref{ColoringG[X]})$ [**do**]{}
- Compute all black-white $xy$-colorings of $G[X]$. If no such $xy$-coloring without contradiction exists, then STOP with failure.
- [**for each**]{} $xy$-coloring of $G[X]$ [**do**]{}
1. Derive a partial coloring of $G[Y]$ by the forcing rules;
2. [**if**]{} a contradiction arises in vertex coloring [**then**]{} STOP with failure for this $xy$-coloring of $G[X]$ and proceed to the next $xy$-coloring of $G[X]$.
3. Set $G[Y]: = F$.
4. [**while**]{} $F$ contains an $S_{2,2,2}$, say $H$, [**do**]{}:
- Detect a white vertex $h \in V(H)$ by Lemma $\ref{lemm:N7}$, and
- apply the Vertex Reduction to $h$;
- [**if**]{} a contradiction arises in the vertex coloring [**then**]{} STOP with failure for this $xy$-coloring of $G[X]$ and proceed to the next $xy$-coloring of $G[X]$ [**else**]{} let $F'$ be the resulting subgraph of $F$; set $F:= F'$.
5. Apply the algorithm of Hertz et al. $($see Theorem $\ref{DIMpolresults}$ $(iii))$ to determine if $F$ $\{$which is $S_{2,2,2}$-free by the above$\}$ has a d.i.m.
6. [**if**]{} $F$ has a d.i.m. [**then**]{} STOP and return the $xy$-coloring of $G$ derived by the $xy$-coloring of $G[X]$ and by such a d.i.m. of $G[Y]$.
- STOP and return “$G[Y]$ has no d.i.m.”.
The correctness of Procedure \[DIMwithxyN1N6\] follows from the structural analysis of $S_{2,2,3}$-free graphs with a d.i.m. and by the results in the present section.
The polynomial time bound of Procedure \[DIMwithxyN1N6\] follows from the fact that Steps (a) and (b) can clearly be done in polynomial time, Step (c) can be done in polynomial time since the Edge Reduction can be done in polynomial time, Step (d) can be done in polynomial time by the results in Section $\ref{ColoringG[X]}$, Step (e) can be done in polynomial time since the Vertex Reductions can be executed in polynomial time, since the solution algorithm of Hertz et al. (see Theorem \[DIMpolresults\] $(iii)$) can be executed in polynomial time, and by the results in the present section.
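For illustration, Steps (a) and (b) of the procedure can be realized as in the following Python sketch (our own illustration, not the authors' implementation); it computes the distance levels $N_i(xy)$ by breadth-first search from the edge $xy$ and performs the two structural checks on $N_1$ and $N_2$, assuming the graph is given as a dictionary of neighbor sets.

```python
from collections import deque

# Sketch of Steps (a)-(b): BFS distance levels from the edge xy, followed by
# the checks that N_1 is independent and that N_2 induces a disjoint union of
# edges and isolated vertices.
def distance_levels(graph, x, y):
    dist = {x: 0, y: 0}
    queue = deque([x, y])
    while queue:
        v = queue.popleft()
        for u in graph[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    levels = {}
    for v, d in dist.items():
        levels.setdefault(d, set()).add(v)
    return levels  # levels[i] is N_i(xy); levels[0] = {x, y}

def check_N1_N2(graph, levels):
    N1, N2 = levels.get(1, set()), levels.get(2, set())
    # N_1 must be an independent set
    if any(u in graph[v] for v in N1 for u in N1 if u != v):
        return False
    # every vertex of N_2 has at most one neighbor inside N_2
    return all(len(graph[v] & N2) <= 1 for v in N2)
```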
Now we consider the general case when $N_7$ is nonempty.
The case $N_7 \neq \emptyset$ {#sec:N7nonempty}
=============================
By Proposition \[P5\], let us distinguish between $zb_2 \in E$ and $zb_1 \in E$. Then the goal is to detect a white vertex of $S_{2,2,2}$ $H$ or a peripheral triangle with a vertex of $H$. Recall $p := \min \{i: h \in N_i \cap V(H)\}$.
\[obse:p5\] If $p \leq 5$, then one can easily detect a white vertex of $H$.
[**Proof.**]{} In fact, if $p \leq 5$, then the color of $z$ (recall $z \in N_{p-1}$) is known by Proposition \[obse:N4\]. Then, as in the proof of Lemma \[lemm:N7\], one can easily detect a white vertex of $H$.
From now on, let us assume that $p \geq 6$. In particular, let $z_5 \in N_{p-5}$ be a neighbor of $z_4$, and let $z_6 \in N_{p-6}$ be a neighbor of $z_5$.
The case $zb_2 \in E$
---------------------
If $b_2$ has two neighbors $m_1,m_2 \notin V(H) \cup \{z\}$ such that $b_2,m_1,m_2$ induce a triangle then we call $b_2,m_1,m_2$ an [*external triangle*]{}.
\[b2triangle\] If $b_2$ is part of an external triangle $b_2,m_1,m_2$ then $b_2$ is white and $m_1m_2 \in M$ is forced.
[**Proof.**]{} Let $b_2,m_1,m_2$ be an external triangle. Recall that $z,b_2,b_1,d,a_1$ induce a $C_5$. By Observation \[pawleafedge\], the (leaf) edges $zb_2$, $a_1d$, and $b_2b_1$ are excluded. Then by Observation \[dimC3C5C7C4\] $(i)$, for the $C_5$, either $b_1d \in M$ or $a_1z \in M$ which implies that $b_2$ is white and consequently, $m_1m_2 \in M$ is forced. Thus, Proposition \[b2triangle\] is shown.
\[degb2>2dblackimpl\] If $d_G(b_2)>2$ and $b_2$ is not part of an external triangle then $d$ is white.
[**Proof.**]{} Since $d_G(b_2)>2$, let $m_1 \notin \{z,b_1\}$ be a third neighbor of $b_2$. Suppose to the contrary that $d$ is black. Then, since $a_1d$ is excluded, $a_1$ is white, which implies that $za_2 \in M$, $z_2$ is white, $z_3$ is black, and $b_2$ is white; thus, $m_1$ and $b_1$ are black, i.e., $b_1d \in M$. Moreover, $c_1$ is white and $c_2$ is black. Clearly, $m_1$ misses $b_1,d,z,a_2$, and recall that $m_1$, as a neighbor of $H$, misses $z_3,z_4,z_5$.
Let $m_2$ be an $M$-mate of $m_1$. Since by assumption, $b_2$ is not part of an external triangle, we have $m_2b_2 \notin E$. We first claim that $m_1c_2 \notin E$ and thus, $m_2 \neq c_2$:
If $m_2 = c_2$ then, since $b_2,m_1,c_2,b_1,d,z,z_2,z_3$ (with center $b_2$) do not induce an $S_{2,2,3}$, we have $m_1z_2 \in E$ but now, $z_2,m_1,c_2,z,a_2,z_3,z_4,z_5$ (with center $z_2$) induce an $S_{2,2,3}$, which is a contradiction. Thus, $m_2 \neq c_2$ is shown.
Recall again that $m_1$, as a neighbor of $H$, misses $z_3,z_4,z_5$; similarly, $m_2$ misses $z_4,z_5$; furthermore, $m_2$ misses $z_3$ as well, since $z_3$ is black.
Since $V(H) \cup \{m_1\}$ does not induce an $S_{2,2,3}$, we have $m_1a_1 \in E$ or $m_1c_1 \in E$.
Next we claim that $m_1z_2 \notin E$:
Suppose to the contrary that $m_1z_2 \in E$. If $m_1c_1 \in E$ then $m_1,c_1,c_2,b_2,b_1,z_2,z_3,z_4$ (with center $m_1$) would induce an $S_{2,2,3}$, and if $m_1a_1 \in E$ then $m_1,a_1,a_2,b_2,b_1,z_2,z_3,z_4$ (with center $m_1$) would induce an $S_{2,2,3}$, which is a contradiction. Thus $m_1z_2 \notin E$.
Since $b_2,m_1,m_2,b_1,d,z,z_2,z_3$ (with center $b_2$) do not induce an $S_{2,2,3}$, we have $z_2m_2 \in E$ (recall $z_3m_2 \not \in E$).
But then $z_2,m_2,m_1,z,a_2,z_3,z_4,z_5$ (with center $z_2$) induce an $S_{2,2,3}$, which is a contradiction.
Thus, Proposition \[degb2>2dblackimpl\] is shown.
\[b2deg2dwhite\] If $d_G(b_2)=2$ then $d$ is white or there is a peripheral triangle with $c_1$ and $c_2$.
[**Proof.**]{} Let $d_G(b_2)=2$, i.e., $N_G(b_2)=\{b_1,z\}$. Note that in this case, $b_1$ must be black since $zb_2 \notin M$ and the edge $b_1b_2$ can only be dominated by an $M$-edge containing vertex $b_1$.
Suppose to the contrary that $d$ is black, i.e., $b_1d \in M$, and there is no peripheral triangle with $c_1$ and $c_2$.
Then $a_1$ is white and thus, $za_2 \in M$ which implies that also $b_2,c_1,z_2$ are white and $c_2,z_3$ are black, and then there is an $M$-mate $c \notin V(H)$ such that $cc_2 \in M$. Clearly, since $d_G(b_2)=2$, $cb_2 \notin E$, and since $c$ is black, $c {\text{\textcircled{{\footnotesize 0}}}}\{a_2,z,b_1,d\}$. Moreover, since $c$ contacts $H$, $c$ misses $z_3,z_4,z_5$, and $cz_2 \notin E$ since $z_2,z,a_2,c,c_2,z_3,z_4,z_5$ (with center $z_2$) do not induce an $S_{2,2,3}$.
Since $V(H) \cup \{c\}$ does not induce an $S_{2,2,3}$ with center $d$, we have $ca_1 \in E$ or $cc_1 \in E$. Since $a_1,c,c_2,d,b_1,z,z_2,z_3$ (with center $a_1$) do not induce an $S_{2,2,3}$, we have $ca_1 \notin E$, which implies $cc_1 \in E$.
Next we claim: $$\label{d(c1)=3}
N(c_1)=\{d,c_2,c\} \mbox{, i.e., } d_G(c_1)=3.$$
[*Proof.*]{} Suppose to the contrary that there is a vertex $d_1 \notin V(H) \cup \{c\}$ with $c_1d_1 \in E$. Then since $c_1$ is white, $d_1$ is black and thus, there is an $M$-mate $d_2$ of $d_1$. Since $za_2, db_1, cc_2 \in M$, we have $d_2 \notin \{z,a_2,d,b_1,c,c_2\}$.
Since $c_1,c,c_2,d_1,d_2$ do not induce a butterfly (recall that $G$ is butterfly-free), $d_2c_1 \notin E$. Clearly, since $d,a_1,a_2,b_1,b_2,c_1,d_1,d_2$ (with center $d$) do not induce an $S_{2,2,3}$, and since $d_G(b_2)=2$ and $b_1,d,a_2$ are black, we have $d_1a_1 \in E$ or $d_2a_1 \in E$.
Since $z,a_2,a_1,d_1,d_2$ do not induce a butterfly, we have $d_1a_1 \notin E$ or $d_2a_1 \notin E$. If $d_2a_1 \in E$ and $d_1a_1 \not \in E$, then $d,c_1,d_1,d_2,a_1$ and $z,a_2,c_2,c$ induce a $G_1$, which is not possible by Assumption 3. Thus $d_2a_1 \notin E$ and $d_1a_1 \in E$.
Recall that $z$ is black, $z_2$ is white and $z_3$ is black, and $d_1$ (as a neighbor of $H$) misses $z_3,z_4,z_5$. Since $a_1,d_1,d_2,d,b_1,z,z_2,z_3$ (with center $a_1$) do not induce an $S_{2,2,3}$, we have $d_1z_2 \in E$ or $d_2z_2 \in E$. If $d_1z_2 \in E$ then $d_1,c_1,c_2,a_1,a_2,z_2,z_3,z_4$ (with center $d_1$) would induce an $S_{2,2,3}$. Thus, $d_1z_2 \notin E$ which implies $d_2z_2 \in E$ but now, $d_1,c_1,c_2,a_1,a_2,d_2,z_2,z_3$ (with center $d_1$) induce an $S_{2,2,3}$, which is a contradiction. Thus, (\[d(c1)=3\]) is shown. $\diamond$
Since we supposed that there is no peripheral triangle with $c_1$ and $c_2$, by (\[d(c1)=3\]), we have a further neighbor of $c$ or $c_2$, without loss of generality, say $q \notin V(H) \cup \{c\}$ with $cq \in E$.
Clearly, $q$ is white. Thus, $q$ misses $a_1,z_2,b_2,c_1$, and since $G$ is diamond-free, $q$ misses $c_2$.
Since $d,a_1,a_2,b_1,b_2,c_1,c,q$ (with center $d$) do not induce an $S_{2,2,3}$, vertex $q$ is adjacent to $b_1$ or $d$ or $a_2$. It follows that $q$ contacts $H$. Thus, $q$ misses $z_3,z_4,z_5$ by definition of $z$.
If $qa_2 \in E$ then $qb_1 \in E$ or $qd \in E$ since $d,b_1,b_2,c_1,c_2,a_1,a_2,q$ (with center $d$) do not induce an $S_{2,2,3}$.
If $qb_1 \in E$ and $qd \notin E$ then $qz \in E$ since $d,b_1,q,c_1,c_2,a_1,z,z_2$ (with center $d$) do not induce an $S_{2,2,3}$. But now, $z,z_2,z_3,a_1,d,q,c,c_2$ (with center $z$) induce an $S_{2,2,3}$, which is a contradiction.
If $qb_1 \notin E$ and $qd \in E$ then $qa_2 \in E$ since $d,b_1,b_2,a_1,a_2,q,c,c_2$ (with center $d$) do not induce an $S_{2,2,3}$. If $qa_2 \in E$ then, since $G$ is diamond-free, $qz \notin E$. But now, $q,c,c_2,d,b_1,a_2,z,z_2$ (with center $q$) induce an $S_{2,2,3}$, which is a contradiction.
Thus, we have $qb_1 \in E$ and $qd \in E$, which leads to a $C_4$ induced by $q,c,c_1,d$ and a triangle induced by $q,d,b_1$, and then to a paw induced by $q,d,b_1,b_2$ with the $C_4$-edge $qd$ but this is not possible by Assumption 6.
Thus, the assumption that $d$ is black and there is no peripheral triangle with $c_1$ and $c_2$ leads to a contradiction, and Proposition \[b2deg2dwhite\] is shown.
\[lemm:zb2\] If $zb_2 \in E$, then we can detect a white vertex of $H$ or a peripheral triangle with a vertex in $H$ in polynomial time.
[**Proof.**]{} Checking if there is a peripheral triangle with a vertex in $H$ can be done in polynomial time. Then let us assume that there is no such peripheral triangle—in particular, there is no peripheral triangle with $c_1$ and $c_2$.
If $b_2$ is part of an external triangle, then by Proposition \[b2triangle\], $b_2$ is white. If $b_2$ is not part of an external triangle, then we have: If $d_G(b_2)>2$, then by Proposition \[degb2>2dblackimpl\], $d$ is white; if $d_G(b_2)=2$, then by Proposition \[b2deg2dwhite\] and since there is no peripheral triangle with $c_1$ and $c_2$, $d$ is white. Obviously, we can check all of these steps in polynomial time.
The case $zb_1 \in E$ {#section5.2}
---------------------
Recall that by Proposition \[obse:p5\], we can assume that for $z \in N_{p-1}$, $p \geq 6$. Since $zb_1 \in E$, vertices $z,a_1,d,b_1$ induce a $C_4$; thus, by Observation \[dimC3C5C7C4\] $(ii)$, $za_1$ is excluded, and by Observation \[dimC3C5C7C4\] $(i)$, either $za_2 \in M$ or $a_1a_2 \in M$, i.e., $a_2$ is black.
If $a_2$ has a third neighbor, say $x \notin \{z,a_1\}$ then, by Observation \[pawleafblackwhite\], the paw induced by $z,a_1,a_2,x$ would imply that vertex $x$ is white (which leads to Vertex Reduction). Thus, from now on, we can assume that $d_G(a_2)=2$.
Let $P(H) := \{t \in V \setminus V(H): t$ contacts $H$, and $t$ is the endpoint of an induced $P_4$ of $G[V \setminus V(H)]$ of vertices $t,t_2,t_3,t_4$ such that $t_2,t_3,t_4$ do not contact $H \}$.
Clearly, $z \in P(H)$.
\[prop:adjacencyH\] Let $t \in P(H)$ and let $W := \{(a_1,a_2),(b_1,b_2),(c_1,c_2)\}$. Then $t$ is nonadjacent to $d$, is adjacent to both vertices of one pair in $W$, is adjacent to exactly one vertex of one pair in $W$, and is nonadjacent to both vertices of one pair in $W$.
[**Proof.**]{} Proposition \[prop:adjacencyH\] follows by Propositions \[P3\], \[P4\], and \[P5\], since they hold for $z$ and (by a similar argument) for any vertex of $P(H)$ as well.
Since $G$ is (diamond,$K_4$)-free, Proposition \[prop:adjacencyH\] implies $|P(H)| \le 3$. Note that $P(H)$ can be computed in polynomial time.
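Since the subsequent arguments repeatedly use the fact that $P(H)$ is computable in polynomial time, the following brute-force Python sketch (our own illustration; the neighbor-set dictionary is an assumed representation) makes this explicit: for every vertex contacting $H$ it searches an induced $P_4$ in $G[V \setminus V(H)]$ whose three remaining vertices do not contact $H$.

```python
# Sketch: compute P(H) by brute force; overall O(n^4) time, hence polynomial.
# "graph" maps vertices to neighbor sets; H is the vertex set of the S_{2,2,2}.
def compute_P_H(graph, H):
    def contacts_H(v):
        return v not in H and bool(graph[v] & H)

    # vertices outside H that do not contact H (candidates for t_2, t_3, t_4)
    outside = {v for v in graph if v not in H and not contacts_H(v)}

    def is_P4_endpoint(t):
        # search an induced path t - t2 - t3 - t4 in G[V \ V(H)]
        for t2 in graph[t] & outside:
            for t3 in graph[t2] & outside:
                if t3 in graph[t]:
                    continue
                for t4 in (graph[t3] & outside) - {t2}:
                    if t4 not in graph[t] and t4 not in graph[t2]:
                        return True
        return False

    return {t for t in graph if contacts_H(t) and is_P4_endpoint(t)}
```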
\[prop:P(H)second\] If $P(H) = \{z\}$ then:
- $z$ is a cut-vertex for $G$; in particular let $K$ denote the connected component of $G[V \setminus \{z\}]$ containing $H$, and let $K_{xy}$ denote the connected component of $G[V \setminus \{z\}]$ containing $xy$;
- in $G[V(K) \cup \{z\}]$ we have $dist(za_2,v) \leq 6$ and $dist(a_1a_2,v) \leq 6$ for any $v \in K$;
- one can check in polynomial time whether $G[V(K) \cup \{z\}]$ admits a d.i.m. containing $za_2$ and a d.i.m. containing $a_1a_2$.
- every connected component of $G[V \setminus \{z\}]$ except $K$ and $K_{xy}$ contains only one vertex, and if there is such a component, say $\{z'\}$, then $z'$ is white.
[**Proof.**]{} $(i)$: Recall $X := \{x,y\} \cup N_1 \cup N_2 \cup N_3$. By definition of $P(H)$, since $P(H) = \{z\}$ and since $p \geq 6$, each path from $H$ to $X$ must involve $z$. Then $G[V \setminus \{z\}]$ is disconnected: in particular, $G[X]$ and $H$ belong to two distinct connected components of it.
$(ii)$: We first claim that in $G[V(K) \cup \{z\}]$ we have $dist(za_2,v) \leq 6$ for any $v \in K$. Suppose to the contrary that there is a vertex $v \in K$ such that $dist(za_2,v) \geq 7$. Then clearly $v \not \in H$. Let $P$ be a shortest path in $K$ from $v$ to $H$. Then let $t$ be the vertex of $P$ contacting $H$. Then, since $P(H) \setminus \{z\} = \emptyset$, path $P$ has at most three vertices not in $H$, i.e. say $t,t_1,v$. Then, since $dist(za_2,v) \geq 7$, $t$ is nonadjacent to any vertex of $H$ except $c_2$. But then $d,a_1,a_2,b_1,b_2,c_1,c_2,t$ (with center $d$) induce an $S_{2,2,3}$, which is a contradiction.
The assertion concerning $dist(a_1a_2,v) \leq 6$ can be proved similarly.
$(iii)$: This follows by statement $(ii)$ and by Theorem \[theo:N7\] (with $G[V(K) \cup \{z\}]$ instead of $G$ and with $za_2$ and $a_1a_2$ instead of $xy$).
$(iv)$: If $K'$ is a connected component of $G[V \setminus \{z\}]$, $K' \neq K$, $K' \neq K_{xy}$, and $K'$ contains an edge then, since $z$ is adjacent to the $S_{2,2,2}$ $H$, this leads to an $S_{2,2,3}$ with center $z$. If $K'$ contains only one vertex, say $z'$, then, since $z,z',a_1,a_2$ induce a paw and $d_G(z')=1$, it follows by Observation \[triangleneighbdeg1\] that $z'$ is white.
\[P(H)nonemptyimplCase5\] If $P(H) \setminus \{z\} \neq \emptyset$ then $d$ is white and $a_1a_2 \in M$ is forced.
[**Proof.**]{} Recall that, by Proposition \[P5\], $z$ is adjacent to $a_1,a_2$ and in this case, to $b_1$. Since $P(H) \setminus \{z\} \neq \emptyset$, let $t \in P(H) \setminus \{z\}$. Then by definition of $P(H)$, let $t,t_2,t_3,t_4$ induce a $P_4$ with edges $tt_2,t_2t_3,t_3t_4$ such that $t_2,t_3,t_4$ do not contact $H$. Note that, by definition of $z$, vertex $t$ is nonadjacent to $z_3,z_4,z_5$, and since $d_G(a_2)=2$, $ta_2 \notin E$. Thus, only $t {\text{\textcircled{{\footnotesize 1}}}}\{b_1,b_2\}$ or $t {\text{\textcircled{{\footnotesize 1}}}}\{c_1,c_2\}$ is possible. For proving Lemma \[P(H)nonemptyimplCase5\], let us consider the following two cases which are exhaustive by Proposition \[prop:adjacencyH\].
[**Case 1.**]{} $t {\text{\textcircled{{\footnotesize 1}}}}\{b_1,b_2\}$.
We first claim: $$\label{b1b2tnonadjzz2}
tz \notin E \mbox{ and } tz_2 \notin E.$$
[*Proof.*]{} Since $G$ is diamond-free and thus, $b_1,b_2,t,z$ do not induce a diamond, we have $tz \notin E$. Recall that $t {\text{\textcircled{{\footnotesize 0}}}}\{a_2,z_3,z_4,z_5\}$. Since $z_2,z,a_2,t,b_2,z_3,z_4,z_5$ (with center $z_2$) do not induce an $S_{2,2,3}$, we have $tz_2 \notin E$. $\diamond$
Next we claim: $$\label{b1b2tnonadja1}
ta_1 \notin E.$$
[*Proof.*]{} Clearly, if $ta_1 \in E$ then, since $t {\text{\textcircled{{\footnotesize 1}}}}\{b_1,b_2\}$, $tc_1 \notin E$ and $tc_2 \notin E$. By (\[b1b2tnonadjzz2\]), $tz_2 \notin E$ but then $a_1,t,b_2,z,z_2,d,c_1,c_2$ (with center $a_1$) induce an $S_{2,2,3}$, which is a contradiction. Thus $ta_1 \notin E$. $\diamond$
Next we claim: $$\label{b1b2tnonadjc1}
tc_1 \notin E.$$
[*Proof.*]{} Suppose to the contrary that $tc_1 \in E$. Recall that by (\[b1b2tnonadjzz2\]), $tz_2 \notin E$. Thus clearly $t_2 \neq z_2$. First we claim that $zt_2 \notin E$: Otherwise, if $zt_2 \in E$ and since $G$ is butterfly-free, we have $z_2t_2 \notin E$ but then $z_3t_2 \notin E$ since otherwise, $t_2,z,a_1,z_3,z_4,t,c_1,c_2$ (with center $t_2$) would induce an $S_{2,2,3}$ (vertex $t_2$ is nonadjacent to $z_4$ by definition of $z$). Then $z,a_1,d,t_2,t,z_2,z_3,z_4$ (with center $z$) induce an $S_{2,2,3}$, which is a contradiction. Thus, $zt_2 \notin E$.
Next we claim that $zt_3 \notin E$: If $zt_3 \in E$ then $z,t_3,t_2,t,b_1$ and $a_1,a_2,b_2$ induce a $G_2$, which is impossible by Assumption 4.
Thus $zt_3 \notin E$ but now, $t,t_2,t_3,c_1,c_2,b_1,z,a_1$ (with center $t$) induce an $S_{2,2,3}$, which is a contradiction. Thus, (\[b1b2tnonadjc1\]) is shown. $\diamond$
Since $t$ is nonadjacent to $a_1,a_2,c_1$, we have $tc_2 \in E$. Then $t,b_1,d,c_1,c_2$ induce a $C_5$. Finally we claim: $$\label{b1b2tadjc2}
a_1a_2 \in M \mbox{ is forced and } d \mbox{ is white}.$$
[*Proof.*]{} As before, by (\[b1b2tnonadjzz2\]), $tz_2 \notin E$ and thus, $t_2 \neq z_2$. Suppose that $a_1a_2 \notin M$, i.e., $za_2 \in M$. Then $a_1,z_2$ and $b_1$ are white, $d$ and $b_2$ are black and thus, $b_2t \in M$, and by the $C_5$ property, $dc_1 \in M$. Moreover, $t_2$ is white and thus, $t_3$ is black which implies that $z t_3 \notin E$.
Since $b_1,z,a_2,d,c_1,t,t_2,t_3$ (with center $b_1$) do not induce an $S_{2,2,3}$, we have $zt_2 \in E$. Since $t_2,z_2$ are white, we have $t_2z_2 \notin E$. Since $z,t_2,t,z_2,z_3,a_1,d,c_1$ (with center $z$) do not induce an $S_{2,2,3}$, we have $t_2z_3 \in E$.
Note that $t_2z_4 \notin E$ by definition of $z$. But now, $t_2,z,a_2,z_3,z_4,t,c_2,c_1$ (with center $t_2$) induce an $S_{2,2,3}$, which is a contradiction.
Thus, $a_1a_2 \in M$ is forced, $d$ is white and (\[b1b2tadjc2\]) is shown. $\diamond$
[**Case 2.**]{} $t {\text{\textcircled{{\footnotesize 1}}}}\{c_1,c_2\}$.
Then clearly, since $G$ is diamond-free, $td \notin E$, since $d_G(a_2)=2$, $ta_2 \notin E$, and $t$ is nonadjacent to $b_1$ or $b_2$.
First we claim: $$\label{c1c2tnonadjz}
tz \notin E.$$
[*Proof.*]{} If $tz \in E$ then, since $G$ is diamond-free, $ta_1 \notin E$. Then, since $G$ is butterfly-free, $tb_1 \notin E$. But now, $t,c_1,d,b_1,z$ and $a_1,c_2$ induce a $G_3$, which is impossible by Assumption 5. Thus, (\[c1c2tnonadjz\]) is shown. $\diamond$
Note that $tz_2 \notin E$ since $z_2,z,a_2,t,c_1,z_3,z_4,z_5$ (with center $z_2$) do not induce an $S_{2,2,3}$.
We claim: $$\label{c1c2tnonadja1a2}
ta_1 \notin E.$$
[*Proof.*]{} If $ta_1 \in E$ then $tb_1 \notin E$ and $tb_2 \notin E$. Recall that $tz \notin E$ and $tz_2 \notin E$. But then $z,z_2,z_3,b_1,b_2,a_1,t,c_2$ (with center $z$) induce an $S_{2,2,3}$, which is a contradiction. Thus $ta_1 \notin E$. $\diamond$
Since $ta_1 \notin E$ and $ta_2 \notin E$, it follows that $tb_1 \in E$ or $tb_2 \in E$. Recall that $za_1 \notin M$, i.e., only $za_2 \in M$ or $a_1a_2 \in M$ is possible. Finally we show: $$\label{c1c2tadjb1orb2}
a_1a_2 \in M \mbox{ is forced and } d \mbox{ is white}.$$
[*Proof.*]{} Suppose that $a_1a_2 \notin M$, i.e., $za_2 \in M$. Then $a_1$ and $b_1$ are white, which implies that $d$ and $b_2$ are black.
First let $tb_2 \in E$. Since $d$ is black, by the $C_3$ property with respect to triangle $c_1c_2t$, we have $c_2t \in M$, which is a contradiction since $b_2$ is black.
Now let $tb_1 \in E$. Then, since $b_1$ is white, $t$ is black. Then, since $d$ is black, $c_1$ is white and thus, $tc_2 \in M$.
Recall that $tz \notin E$ and $tz_2 \notin E$. Let $d'$ be an $M$-mate of $d$ such that $dd' \in M$, and let $b'_2$ be an $M$-mate of $b_2$ such that $b_2b'_2 \in M$. Clearly, $d'z \notin E$ and $b'_2z \notin E$.
We claim that $d'z_2 \notin E$ and $b'_2z_2 \notin E$: If $b'_2z_2 \in E$ then $z_2,b'_2,b_2,z,a_2,z_3,z_4,z_5$ (with center $z_2$) would induce an $S_{2,2,3}$, and analogously, if $d'z_2 \in E$ then $z_2,d',d,z,a_2,z_3,z_4,z_5$ (with center $z_2$) would induce an $S_{2,2,3}$. Thus, $d'z_2 \notin E$ and $b'_2z_2 \notin E$.
Furthermore, since $G$ is butterfly-free, we have $b_1d' \notin E$ or $b_1b'_2 \notin E$. If $b_1d' \notin E$ then $b_1,t,c_2,d,d',z,z_2,z_3$ (with center $b_1$) would induce an $S_{2,2,3}$ (note that $z_3d' \notin E$), and if $b_1b'_2 \notin E$ then $b_1,t,c_2,b_2,b'_2,z,z_2,z_3$ (with center $b_1$) would induce an $S_{2,2,3}$ (note that $z_3b'_2 \notin E$), which is a contradiction.
Thus, (\[c1c2tadjb1orb2\]) is shown. $\diamond$
This completes the proof of Lemma \[P(H)nonemptyimplCase5\].
Then let us summarize the results for the case $zb_1 \in E$ as follows. Recall that $P(H)$ can be computed in polynomial time.
\[lemm:zb1\] Assume that $zb_1 \in E$.
- If $P(H) \setminus \{z\} \neq \emptyset$ then by Lemma $\ref{P(H)nonemptyimplCase5}$, $a_1a_2 \in M$ and $d$ is a white vertex, that is, one can easily detect a white vertex of $H$.
- If $P(H) \setminus \{z\} = \emptyset$, then $z$ is a cut-vertex of $G$ and in particular, if $K$ denotes the connected component of $G[V \setminus \{z\}]$ containing $H$, one can check in polynomial time whether $G[V(K) \cup \{z\}]$ admits a d.i.m. containing $za_2$ and a d.i.m. containing $a_1a_2$.
Deleting $S_{2,2,2}$’s in $G[Y]$
--------------------------------
Let $H$ be an induced $S_{2,2,2}$ in $G[Y]$ with vertices $V(H)=\{d,a_1,a_2,b_1,b_2,c_1,c_2\}$ and edges $da_1$, $db_1$, $dc_1$, $a_1a_2$, $b_1b_2$, $c_1c_2$ (as above). Let $p := \min \{i: h \in N_i \cap V(H)\}$, that is, $N_p$ is the $xy$-distance level with smallest distance to $xy$ to which a vertex of $H$ belongs (in particular $p \geq 4$ by construction).
We say that $H$ is [*critical*]{} if
- there is a contacting vertex for $H$, say $z$, with $z \in N_{p-1}$ and $p \geq 6$, such that $za_1,za_2,zb_1 \in E$, and
- $P(H) = \{z\}$ (cf. Section \[section5.2\]).
Otherwise, $H$ is [*non-critical*]{}.
Recall that if $H$ is a critical $S_{2,2,2}$ then either $za_2 \in M$ or $a_1a_2 \in M$, so that $G[Y]$ has a d.i.m. $M$ only if $za_2 \in M$ or $a_1a_2 \in M$, and by Proposition \[prop:P(H)second\], we can assume that $G-z$ has only two connected components.
Then let us consider the following procedure deleting a critical $S_{2,2,2}$, which is correct and can be executed in polynomial time by Lemma \[lemm:zb1\] $(ii)$ and by the above (and in particular, by the distance properties of $K$ - see Proposition \[prop:P(H)second\] $(ii)$).
\[criticalS222\]
[**Input:**]{} Subgraph $G[Y]$ and a critical $S_{2,2,2}$, say $H$.\
[**Task:**]{} Return either a proof that $G[Y]$ has no d.i.m., or a subgraph of $G[Y]$, say $G'$\
such that\
$(i)$ $G'$ does not contain some vertices of $H$, and\
$(ii)$ $G'$ has a d.i.m. if and only if $G[Y]$ has a d.i.m.
- Compute the connected component, say $K$, of $G[V \setminus \{z\}]$ containing $H$. Then check whether $G[V(K) \cup \{z\}]$ admits a d.i.m. containing $za_2$ and a d.i.m. containing $a_1a_2$.
- Consider the following exhaustive occurrences:
- [**if**]{} $G[V(K) \cup \{z\}]$ admits no d.i.m. containing $za_2$ and no d.i.m. containing $a_1a_2$ [**then**]{} return “$G[Y]$ has no d.i.m.”
- [**if**]{} $G[V(K) \cup \{z\}]$ admits a d.i.m. containing $za_2$ and no d.i.m. containing $a_1a_2$ [**then**]{} in $G[Y]$: $(i)$ delete all vertices of $V(K) \setminus \{a_1,a_2\}$; $(ii)$ color $z,a_2$ black and color $a_1$ white. Then let $G'$ be the resulting graph.
- [**if**]{} $G[V(K) \cup \{z\}]$ admits no d.i.m. containing $za_2$ and a d.i.m. containing $a_1a_2$ [**then**]{} in $G[Y]$: $(i)$ delete all vertices of $V(K) \setminus \{a_1,a_2\}$; $(ii)$ color $a_1,a_2$ black and color $z$ white. Then let $G'$ be the resulting graph.
- [**if**]{} $G[V(K) \cup \{z\}]$ admits a d.i.m. containing $za_2$ and a d.i.m. containing $a_1a_2$ [**then**]{} in $G[Y]$, delete all vertices of $V(K) \setminus \{a_1,a_2\}$. Then let $G'$ be the resulting graph.
- [**if**]{} a contradiction arises in the vertex coloring [**then**]{} STOP and return “$G[Y]$ has no d.i.m.” [**else**]{} STOP and return $G'$.
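To illustrate the case analysis of Step (b), the following Python fragment sketches the four branches; the helper `has_dim(vertices, edge)` stands for the polynomial-time check of Lemma \[lemm:zb1\] $(ii)$, and all names and data structures are our own assumptions for exposition only.

```python
# Sketch of Step (b): reduce a critical S_{2,2,2} H with contact vertex z.
# GY_vertices: vertex set of G[Y]; K_vertices: vertex set of the component K;
# color: partial black/white coloring; has_dim(vertices, edge): hypothetical
# polynomial-time test whether the induced subgraph has a d.i.m. containing "edge".
def reduce_critical_S222(GY_vertices, K_vertices, z, a1, a2, color, has_dim):
    with_za2 = has_dim(K_vertices | {z}, (z, a2))
    with_a1a2 = has_dim(K_vertices | {z}, (a1, a2))
    if not with_za2 and not with_a1a2:
        return None                                   # G[Y] has no d.i.m.
    kept = GY_vertices - (K_vertices - {a1, a2})      # delete V(K) \ {a1, a2}
    if with_za2 and not with_a1a2:
        color[z], color[a2], color[a1] = "black", "black", "white"
    elif with_a1a2 and not with_za2:
        color[a1], color[a2], color[z] = "black", "black", "white"
    # if both d.i.m.'s exist, no color is forced by this reduction
    return kept
```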
\[lemm:final\] For any non-critical $S_{2,2,2}$ $H$, one can detect a white vertex of $H$ or a peripheral triangle with a vertex in $H$ in polynomial time.
[**Proof.**]{} Recall that we assume $N_7 \neq \emptyset$. Lemma \[lemm:final\] follows by Proposition \[obse:p5\] and by Lemmas \[lemm:zb2\] and \[lemm:zb1\].\
Then let us consider the following procedure deleting $S_{2,2,2}$’s in $G[Y]$, which is correct and can be executed in polynomial time by Lemma \[lemm:final\] and since the solution algorithm of Hertz et al. (see Theorem \[DIMpolresults\] $(iii)$) can be executed in polynomial time.
\[delS222\]
[**Input:**]{} Graph $G[Y]$ with a partial coloring of its vertices.\
[**Output:**]{} A d.i.m. of $G[Y]$ $($consistent with the partial coloring$)$ or\
a proof that $G[Y]$ has no d.i.m. $($consistent with the partial coloring$)$.

$(a)$ Set $F:=G[Y]$;\
$(b)$ [**while**]{} $F$ contains a critical $S_{2,2,2}$, say $H$, [**do**]{}\
[**begin**]{}\
Apply Procedure $\ref{criticalS222}$ to $H$;\
[**if**]{} it returns that $G[Y]$ has no d.i.m. [**then**]{} return “$G[Y]$ has no d.i.m.”\
[**else**]{} let $F'$ be the resulting subgraph of $F$: then set $F:= F'$;\
[**end**]{};\
$(c)$ [**while**]{} $F$ contains a non-critical $S_{2,2,2}$, say $H$, [**do**]{}\
[**begin**]{}\
either detect a white vertex $h \in V(H)$ and apply the Vertex Reduction to $h$\
or detect a peripheral triangle of $G$, say $abc$ involving some vertex of $H$,\
and apply the Peripheral Triangle Reduction to $abc$;\
[**if**]{} a contradiction arises in the black-white vertex coloring\
[**then**]{} STOP with failure\
[**else**]{} let $F'$ be the resulting subgraph of $F$: then set $F:= F'$;\
[**end**]{};\
$(d)$ Apply the solution algorithm of Hertz et al. $($see Theorem $\ref{DIMpolresults}$ $(iii))$\
to determine if $F$ $\{$which is $S_{2,2,2}$-free by the above$\}$ has a d.i.m.:\
[**if**]{} $F$ has a d.i.m. [**then**]{} STOP and return such a d.i.m.;\
[**else**]{} STOP and return “$G[Y]$ has no d.i.m.”;\
[**end**]{}.
A polynomial algorithm for DIM on $S_{2,2,3}$-free graphs
=========================================================
The following procedure is part of the algorithm:
\[DIMwithxy\]
[**Input:**]{} A connected $S_{2,2,3}$-free graph $G = (V,E)$, which enjoys\
Assumptions $1-6$ of Section $\ref{exclforced}$ and\
an edge $xy \in E$ which is part of a $P_3$ in $G$.\
[**Task:**]{} Return a d.i.m. $M$ with $xy \in M$ $($STOP with success$)$ or\
a proof that $G$ has no d.i.m. $M$ with $xy \in M$ $($STOP with failure$)$.
- Set $M:= \{xy\}$. Determine the distance levels $N_i = N_i(xy)$ with respect to $xy$.
- Check whether $N_1$ is an independent set $($see fact $(\ref{N1subI}))$ and $N_2$ is the disjoint union of edges and isolated vertices $($see fact $(\ref{N2M2S2}))$. If not, then STOP with failure.
- For the set $M_2$ of edges in $N_2$, apply the Edge Reduction for every edge in $M_2$ correspondingly. Moreover, apply the Edge Reduction for each edge $bc$ according to fact $(\ref{triangleaN3bcN4}$) and then for each edge $u_it_i$ according to Lemma $\ref{lemm:structure2}$ $(v)$.
- [**if**]{} $N_4 = \emptyset$ [**then**]{} apply the approach described in Section $\ref{ColoringG[X]}$. Then either return that $G$ has no d.i.m. $M$ with $xy \in M$ or return $M$ as a d.i.m. with $xy \in M$.
- [**if**]{} $N_4 \neq \emptyset$ [**then**]{} for $X := \{x,y\} \cup N_1 \cup N_2 \cup N_3$ and $Y := V \setminus X$ $($according to the results of Sections $\ref{ColoringG[Y]}$ and $\ref{sec:N7nonempty})$ [**do**]{}
- Compute all black-white $xy$-colorings of $G[X]$. If no such feasible $xy$-coloring exists then STOP with failure.
- [**for each**]{} $xy$-coloring of $G[X]$ [**do**]{}
1. Derive a partial coloring of $G[Y]$ by the forcing rules;
2. [**if**]{} a coloring contradiction arises [**then**]{} STOP with failure for this $xy$-coloring of $G[X]$ and proceed to the next $xy$-coloring of $G[X]$.
3. Apply Procedure $\ref{delS222}$; [**if**]{} it returns a d.i.m. of $G[Y]$ [**then**]{} STOP with success and return the $xy$-coloring of $G$ derived by the $xy$-coloring of $G[X]$ and by such a d.i.m. of $G[Y]$.
- STOP with failure.
\[theo:procedureDIMxy\] Procedure $\ref{DIMwithxy}$ is correct and runs in polynomial time.
[**Proof.**]{} The correctness of the procedure follows from the structural analysis of $S_{2,2,3}$-free graphs with a d.i.m.
The polynomial time bound follows from the fact that Steps (a) and (b) can clearly be done in polynomial time, Step (c) can be done in polynomial time since the Edge Reduction can be done in polynomial time, and Steps (d) and (e) can be done in polynomial time by the results in Sections $\ref{ColoringG[X]}$, $\ref{ColoringG[Y]}$, and \[sec:N7nonempty\].
\[DIMS223fr\]
[**Input:**]{} A connected $S_{2,2,3}$-free graph $G = (V,E)$, which enjoys\
Assumptions $1-6$ of Section $\ref{exclforced}$.\
[**Task:**]{} Determine a d.i.m. of $G$ if there is one, or find out that $G$ has no d.i.m.
- Check whether $G$ has a single edge $uv \in E$ which is a d.i.m. of $G$. If yes then select such an edge as output and STOP - this is a d.i.m. of $G$. $\{$Otherwise, every d.i.m. of $G$ would have at least two edges.$\}$
- [**for each**]{} edge $xy \in E$ in a $P_3$ of $G$, carry out Procedure $\ref{DIMwithxy}$;\
[**if**]{} it returns “STOP with failure” for all edges $xy$ in a $P_3$ of $G$ [**then**]{} STOP – $G$ has no d.i.m. [**else**]{} STOP and return a d.i.m. of $G$.
\[theo:procedureDIM\] Algorithm $\ref{DIMS223fr}$ is correct and runs in polynomial time. Thus, DIM can be solved in polynomial time for $S_{2,2,3}$-free graphs.
[**Proof.**]{} The correctness of the procedure follows from the structural analysis of $S_{2,2,3}$-free graphs with a d.i.m. In particular, Step (A) is obviously correct, and for Step (B), recall Observation \[obse:xy-in-P3\].
The time bound follows from the fact that Step (A) can be done in polynomial time, and Step (B) can be done in polynomial time by Theorem \[theo:procedureDIMxy\].
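As a compact illustration of the overall control flow of Algorithm $\ref{DIMS223fr}$, the following Python-style sketch may be helpful; `is_dim`, `edges_in_a_P3`, and `procedure_DIM_with_xy` are placeholders for the checks and for Procedure $\ref{DIMwithxy}$ described above, and a networkx-style graph interface is assumed only for exposition.

```python
# Sketch of Algorithm DIMS223fr: DIM on a connected S_{2,2,3}-free graph G
# satisfying Assumptions 1-6; returns a d.i.m. as a set of edges, or None.
def dim_S223_free(G, is_dim, edges_in_a_P3, procedure_DIM_with_xy):
    # Step (A): check whether a single edge already forms a d.i.m.
    for uv in G.edges():
        if is_dim(G, {uv}):
            return {uv}
    # Step (B): every remaining d.i.m. contains an edge xy lying on a P_3
    for xy in edges_in_a_P3(G):
        M = procedure_DIM_with_xy(G, xy)  # a d.i.m. with xy in M, or None
        if M is not None:
            return M
    return None  # G has no d.i.m.
```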
[**Acknowledgment.**]{} We gratefully thank the anonymous reviewers for their comments and corrections. The second author would like to witness that he just tries to pray a lot and is not able to do anything without that - ad laudem Domini.
[99]{}
N. Biggs, Perfect codes in graphs, [*Journal of Combinatorial Theory, Series B*]{} 15 (1973), 289-296.
A. Brandstädt, C. Hundt, and R. Nevries, Efficient Edge Domination on Hole-Free graphs in Polynomial Time, Conference Proceedings LATIN 2010, [*Lecture Notes in Computer Science*]{} 6034 (2010), 650-661.
A. Brandstädt and R. Mosca, Dominating Induced Matchings for $P_7$-Free Graphs in Linear Time, [*Algorithmica*]{} 68 (2014), 998-1018.
A. Brandstädt and R. Mosca, Finding Dominating Induced Matchings in $P_8$-Free Graphs in Polynomial Time, [*Algorithmica*]{} 77 (2017), 1283-1302.
A. Brandstädt and R. Mosca, Dominating Induced Matchings in $S_{1,2,4}$-Free Graphs, CoRR arXiv:1706.09301, 2017. Accepted for [*Discrete Applied Math.*]{}
D.M. Cardoso, N. Korpelainen, and V.V. Lozin, On the complexity of the dominating induced matching problem in hereditary classes of graphs, [*Discrete Applied Math.*]{} 159 (2011), 521-531.
D.L. Grinstead, P.L. Slater, N.A. Sherwani, and N.D. Holmes, Efficient edge domination problems in graphs, [*Information Processing Letters*]{} 48 (1993), 221-228.
A. Hertz, V.V. Lozin, B. Ries, V. Zamaraev, and D. de Werra, Dominating induced matchings in graphs containing no long claw, [*Journal of Graph Theory*]{} 88 (2018), no. 1, 18-39.
N. Korpelainen, V.V. Lozin, and C. Purcell, Dominating induced matchings in graphs without a skew star, [*J. Discrete Algorithms*]{} 26 (2014), 45-55.
C.L. Lu, M.-T. Ko, and C.Y. Tang, Perfect edge domination and efficient edge domination in graphs, [*Discrete Applied Math.*]{} Vol. 119 (3) (2002), 227-250.
C.L. Lu and C.Y. Tang, Solving the weighted efficient edge domination problem on bipartite permutation graphs, [*Discrete Applied Math.*]{} 87 (1998), 203-211.
[^1]: Institut für Informatik, Universität Rostock, A.-Einstein-Str. 22, D-18051 Rostock, Germany, [`a`[email protected]]{}
[^2]: Dipartimento di Economia, Universitá degli Studi “G. D’Annunzio” Pescara 65121, Italy. [`r`[email protected]]{}
|
---
abstract: |
When interacting with their software systems, users may have to deal with problems like crashes, failures, and program instability. Faulty software running in the field is not only the consequence of ineffective in-house verification and validation techniques, but it is also due to the complexity and diversity of the interactions between an application and its environment. Many of these interactions can hardly be predicted at testing time, and even when they could be predicted, often there are so many cases to be tested that they cannot all be feasibly addressed before the software is released.
This Ph.D. thesis investigates the idea of addressing the faults that cannot be effectively addressed in house directly in the field, exploiting the field itself as a testbed for running the test cases. An enormous number of diverse environments would then be available for testing, making it possible to run many test cases in many different situations and to promptly reveal the many failures that would be hard to detect otherwise.
author:
-
bibliography:
- 'bibliography.bib'
title: Field Testing of Software Applications
---
field testing, field failures, isolation
Testing Software in the Field
=============================
Achieving high quality is mandatory in modern software applications, but, despite intensive in-house testing sessions, organizations still struggle with releasing dependable software. Faulty applications running in the field are the source of several problems, including higher maintenance costs, reduced customer satisfaction, and ultimately loss of reputation and profits.
Studying the reasons why several faults are not detected with in-house testing is of crucial importance to mitigate their occurrence in the field. Although the faults present in the field could be the result of a poor in-house testing process, there are many faults that are objectively hard to detect in house, even using state-of-the-art methodologies and techniques.
This intuition has been confirmed by an analysis that we performed on multiple real software failures experienced in the field by the end-users. Our analysis showed that a significant proportion of the faults that cause field failures have specific characteristics that make them extremely hard to detect in house. In particular, out of all the failures that we analysed, only approximately 30% of them could be attributed to bad testing practices, while the remaining 70% should be attributed to an interaction between the software under test and the field that is objectively difficult or even impossible to test in house. The impossibility of thoroughly testing software systems in house has already been recognized in mature companies. For instance, Netflix engineers inject faults in the field to analyse the impact of faults that cannot be studied in house in a simulated environment [@Basiri:Netflix:ISSRE:2016].
The vision of this Ph.D. thesis is that the end-user environment could be exploited as a testbed for running the many test cases that cannot be executed in house, because of both the limited resources available for testing and the challenge of recreating in house the same environment that is available in the field. There are several important benefits related to the capability of exploiting the field as part of the testing process. If the application to be tested is popular enough to be installed on many devices and computers (e.g., hundreds or thousands), these many instances would offer unique opportunities in terms of the range of situations and configurations that could be tested in parallel. Moreover, regardless of the number of installations of the same application that are available, testing an application in the field gives the unique opportunity of testing the software when used in its real environment, with real data. Finally, test cases could be executed perpetually, not only to test an application that has just been installed, but also to test the software while the environment, the users, and the data evolve.
Successfully deploying the capability of executing test cases while the software is running in the field, namely the capability of doing field testing, also makes it possible to identify potential failures before they are experienced by the end-users.
Key challenges in Field Testing {#sec:challenges}
===============================
Field testing is inherently different from traditional in-house testing and poses a new set of challenges: those concerning the *test strategy*, that is, when and how to test the software in the field, and those concerning the *non-intrusiveness* of the testing process, that is, how to run the test cases without affecting user data and user processes, and with negligible impact on the user experience.
Test strategy
-------------
We identified four key elements that should be part of a well-defined test strategy.
*Test obligation*. The test obligation defines *which* behaviors should be tested in the field. Since field testing is executed after in-house testing, field testing should focus on complementary aspects, that is, on those cases that have not been tested in house. Moreover, field testing should target those functionalities that cannot be effectively and efficiently tested in house, such as functionalities that depend on field entities or functionalities whose execution space can combinatorially explode. An example of the former functionality is a routine that uses an external DBMS, which requires interacting with a specific driver and a specific database configuration. An example of the latter functionality is an application for writing documents, which may display some text incorrectly only when using specific characters of a specific size and of a specific font type.
*Test opportunity*. The test opportunity defines *when* the software application running in the field should be tested. When testing is performed in house, the testing procedures are usually started in accordance with the development activities (e.g., a change in the source code). The activation of testing procedures for software running in the field should be based on different aspects. In particular we identified two key elements. The first one is the software state, which could be recognized as relevant for testing. For example, the test cases for a spreadsheet application might be executed when the user opens a spreadsheet with thousands of formulas and millions of data values, assuming that a real spreadsheet of this size is a relevant subject for the testing procedure. The second element is the available resources, which should be sufficient to run the test cases without impacting the user experience. For instance, if the system is already performing a CPU-intensive computation, it might be a bad idea to activate an automatic testing procedure.
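As a concrete but purely illustrative example of such a resource-based check, the snippet below uses the psutil library to postpone the activation of field tests when the machine is busy; the thresholds and the choice of library are our own assumptions.

```python
import psutil

# Illustrative sketch: activate field testing only when the machine is idle
# enough; the thresholds are arbitrary and would need tuning per deployment.
def test_opportunity_available(max_cpu_percent=30.0, min_free_mem_mb=512):
    cpu = psutil.cpu_percent(interval=1.0)  # sampled over one second
    free_mb = psutil.virtual_memory().available / (1024 * 1024)
    return cpu < max_cpu_percent and free_mb > min_free_mem_mb
```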
*Test generation strategy*. The test generation strategy defines *how* to obtain the test cases that should be executed in the field. We foresee two different types of strategies: static and dynamic. Static test generation strategies consist of implementing in house the test suites specifically designed to cover potentially interesting situations in the field. In this case the test suite would be static, and the test cases that must be executed would be selected dynamically based on the state of the execution. Dynamic test generation strategies should instead generate or update test cases directly in the field. This can be done by developers, based on problems reported by end users, or automatically, for example by mutating the available test cases or even producing new tests from scratch.
*Test oracle*. The test oracle defines *what* the expected behavior of the software under test is. This information is necessary to detect failures. There are at least two main approaches to obtain a test oracle that can detect failures, beyond program crashes. An approach consists of defining a generic oracle for the application, covering at least the functionalities that should be tested in the field. An alternative approach is associating oracles with test cases. The former approach is more generic and can serve many purposes, including the detection of failures revealed by automatically generated test cases, but it is more expensive. The latter approach is cheaper to specify because it covers just the specific cases encoded in the test cases, but it is harder to generalize to executions different from the ones represented in the test cases, and thus it might be difficult to reuse for the automatically generated test cases.
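The difference between the two approaches can be illustrated with a toy example: the first function below embeds the oracle in a test case whose expected value is known in advance, while the second one encodes a generic property that must hold for any execution and could therefore also judge automatically generated inputs. The code is a hypothetical illustration, not an artifact of this thesis.

```python
def sum_column(sheet, column):
    # functionality under test: sum the numeric values of one spreadsheet column
    return sum(float(row[column]) for row in sheet)

# Oracle embedded in a test case: the expected value is known only for this input.
def test_sum_of_first_column():
    sheet = [["1", "2"], ["3", "4"]]
    assert sum_column(sheet, 0) == 4.0

# Generic oracle: a property that must hold for any input, hence usable also
# for automatically generated test cases executed in the field.
def generic_oracle(sheet, column, result):
    cells = [float(row[column]) for row in sheet]
    return len(cells) * min(cells) <= result <= len(cells) * max(cells)
```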
Non-intrusiveness
-----------------
We identified two key elements that should be considered about non-intrusiveness.
*Isolation*. A software application running in the field interacts with the elements that are available within its environment (e.g., the resources). Isolating field testing requires the guarantee that the application under test does not produce any disruptive effect while tested, that is, it cannot cause any loss of data and cannot impact the other processes running in the field. This is a fundamental property to make the field testing technology acceptable to end users.
*Overhead*. Running test cases in the field requires the consumption of additional resources, certainly CPU cycles and memory, but also I/O and access to the network. The field testing procedure must guarantee that any additional resource consumption is rarely, and only barely, noticeable by the users of the application.
Research Methodology
====================
The research methodology adopted in this Ph.D. thesis consists of three main phases. The first phase, namely *analysis of field failures*, concerns the analysis of the failures reported from the field by users of software systems. The second phase, namely *identification of test strategies*, concerns the definition of a test strategy aimed at revealing the faults that can hardly be revealed in house, based on the outcome of the first phase. The third phase, namely *definition of the test infrastructure*, concerns the design and development of a prototype infrastructure that supports the execution of the test strategies defined in the second phase, guaranteeing the non-intrusiveness of the testing process.
The first phase of the research has been completed, and we are now facing the second and third phases that will be developed in parallel. In the rest of this section, we discuss each phase, and the results that have been obtained, when available. We conclude by sketching the validation plan.
Analysis of Field Failures
--------------------------
To understand the characteristics of field failures, we focused on two main research questions:
1. What are the reasons why software faults are not detected at testing time?
2. What are the field elements typically involved in a field failure?
To answer these questions we manually inspected 119 bug reports[^1] from three different open source systems: three Eclipse plugins [@Subversive; @EGit; @EclipseLink], OpenOffice [@OpenOffice] and Nuxeo [@nuxeo]. We discovered that only $30\%$ of the faults can be attributed to weak testing, while $70\%$ of the faults would be hard to reveal in house for one of the following four reasons, that we defined to answer the first research question:
*Impossible to test (ItT)*: it is impossible to replicate in house the environment that produces the fault.

*Lack of information about the application (LoIA)*: the application fails for an undocumented specific case, thus testers have too little information to design a test case that covers the fault in house.
*Lack of information about the environment (LoIE)*: the application fails for a specific configuration of the environment that in principle should not have any effect on the application.
*Combinatorial explosion (CE)*: the application fails for a specific case out of a huge set of possible cases that cannot be feasibly tested in house.
![RQ1: reasons why faults could not be revealed at testing time[]{data-label="fig:rq1"}](figure1.pdf){width="7.5cm"}
Figure \[fig:rq1\] shows the relative distribution of these four causes for the bug reports that we analyzed, plus the bad testing (BT) cases. Faults related to combinatorial explosion are the major cause of field faults. We observed only two faults that were impossible to test in house. We expect this result to be influenced by the set of systems considered in our analysis. The set of cases impossible to test in house would probably be larger when considering multi-user distributed systems.
To answer the second research question we checked how often a specific field element, potentially in a specific state, is necessary to observe a failure. We observed that 79% of field failures depend on field elements. The most frequent ones are resources (e.g., files), which appear in 54% of the cases, and operating systems, which appear in 21% of the cases.
These two results jointly show that (1) there are many faults that can be hardly revealed in house for reasons different than a weak testing process, and (2) the testing procedure must exploit elements present in the environment to successfully reveal these faults.
Identification of the Test Strategy
-----------------------------------
In this phase, we exploit the knowledge gained with the initial study to distill a set of *test strategies* for specific classes of field failures. Each test strategy consists of executing a predefined test activity when a given test opportunity is observed. According to our observations, many of the failures can be clustered into similar failure scenarios, which consist of observing specific operations when certain field elements are in a given state. Failure scenarios let us specify the failure opportunities that should be considered when performing field testing.
So far we have identified several opportunities; an example is the *modified resource location*. Many applications must access resources available in the field when executed, such as files and databases. Sometimes these resources are moved to a new location by an independent software component or by the user. If the application running in the field is not robust against resource relocation, changing the location of resources might threaten the correctness of the application.
The test strategy for the modified resource location opportunity consists of generating test suites that cover several usage scenarios for all the resources used by an application. At runtime we periodically check if resources are moved, and, in case a relocation is detected, the test suite exercising the uses of that resource is automatically executed to check if the application can tolerate the new configuration.
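A minimal sketch of this strategy is given below; the watched paths, the polling interval, and the use of pytest as test runner are illustrative assumptions. The monitor periodically checks whether each watched resource is still at its expected location and, when a relocation is detected, runs the test suite that exercises the uses of that resource.

```python
import os
import subprocess
import time

# Hypothetical sketch: watch resources used by the application and trigger the
# associated field test suite when a resource is no longer at its expected path.
WATCHED = {
    "/var/app/config.ini": "tests/field/test_config_usage.py",
    "/var/app/data/catalog.db": "tests/field/test_catalog_usage.py",
}

def monitor_resources(poll_seconds=60):
    while True:
        for path, suite in WATCHED.items():
            if not os.path.exists(path):
                # resource relocated or removed: exercise its usage scenarios
                subprocess.run(["pytest", "-q", suite], check=False)
        time.sleep(poll_seconds)
```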
We are currently working on a more structured definition of test opportunities and corresponding test strategies, with the aim of covering a representative set of failures.
Definition of the Test Infrastructure
-------------------------------------
In order to be able to deploy field testing, we need a suitable infrastructure providing facilities for maintaining the test suites available in the field and for collecting the results produced by the execution of the test cases.
Since test strategies require the ability to detect when some test cases should be executed, the infrastructure must include lightweight monitoring capabilities for detecting the test opportunity, and should also be able to select or generate the relevant test cases.
A relevant part of the infrastructure should be devoted to guaranteeing the non-intrusiveness of the field testing process. To achieve isolation, we intend to exploit virtualization technologies and containers, such as Docker [@docker], to efficiently wrap a software application in a complete environment.
A technical solution alternative to virtualization might be based on platforms for N-version executions, which consist of platforms that can run multiple instances of a program in parallel, keeping them synchronised through the use of monitors. Assuming to have enough resources to run multiple instances of a software application, this solution enables efficient field testing activities. An architecture supporting this form of N-version execution has been proposed by Hosek and Cadar [@hosek2015varan].
Validation
----------
In order to evaluate this research, we will apply the defined field testing infrastructure to software applications from different domains: this will tell us if and how the application domain influences the success of field testing. Our measures of effectiveness will be based on the number of faults that can be discovered with the deployed infrastructure, and on the number of tests that are generated and executed in the field. In addition, to assess the non-intrusiveness, we will measure the performance overhead (and more in general the resource consumption caused by the infrastructure), and we will evaluate the impact of the overhead on the user experience by designing studies with human subjects.
Related work
============
The approaches related to this Ph.D. work can be organized into three groups: studies about the characteristics of software failures, approaches to run test cases in the field, and testing and analysis techniques that exploit field data. In the following, we briefly discuss these classes of approaches.
Existing *studies of real software faults* draw conclusions on characteristics such as the distribution of faults types [@Hamill-TrendsInFaults-TSE-2009; @Fan-NuclearFailures-SF-2013], the locality of faults [@Hamill-TrendsInFaults-TSE-2009], the distribution of faults across source files [@Ostrand-FaultDistribution-ISSTA-2002], the root cause, that is, the development phase in which a fault has been introduced, and human error that caused the fault [@Leszak-ClassificationOfDefects-JSS-2002], the relation between fault types, failure detection, and failure severity [@Hamill-FaultTypesDetectionSeverity-SQJ-2014], and the evolution of faults during bug fixing [@Meulen-FaultsFailureBehaviour-ISSE-2004]. However, none of these studies has focused on the causes of field failures and the reasons why failures have not been discovered at testing time, as well as the common characteristics of field failures, which is the starting point of our research.
The problem of running test cases in the field has been preliminarily studied by other researchers. The approach to *field testing* that is closest to the work developed in this thesis is in-vivo testing [@murphy2009quality]. In-vivo testing consists of deploying unit test cases that are executed with a user-defined probability while the application is running to detect failures. However, in-vivo testing does not take into account the specific elements (e.g., resources in the environment) that should be exploited in a field testing strategy to detect the faults that are hard to reveal in house, and does not address the problem of the non-intrusiveness of the testing process, which is assumed to be guaranteed by construction by the tests. Another attempt to move the testing process in the field is the work by Memon et al. [@memon2004skoll]. However, they focus on moving acceptance testing to the field, which is a different task compared to designing a technique for revealing the faults that are hard to reveal in house.
Finally, there are analysis techniques *exploiting field data*. For instance, residual test coverage monitoring [@pavlopoulou1999residual] collects coverage data from the field to complement the results of structural testing obtained in-house. Recently, Ohmann et al. [@ohmann2016optimizing] proposed a novel approach to limit the amount of instrumentation necessary to collect coverage data, increasing the applicability of residual test coverage in the field.
Statistical program debugging has been also deployed in the field [@jiang2007context; @chilimbi2009holmes]. In these approaches, software applications are lightly instrumented so that they can send execution profiles to a central database. Statistical debugging is then performed on the profiles gathered from the field to identify the likely locations of the faults that have been activated in the field. Although debugging is a different task than testing, the distributed infrastructure built to monitor and collect data from applications running in the field can be useful also to maintain test cases and gather results about the testing process.
Conclusions
===========
Field testing can be a powerful solution for timely discovering the faults that are seldom detected with traditional in-house testing. Although this concept of testing is still in its infancy, the technical elements necessary to achieve it (e.g., testing techniques, virtualization environments, and distributed infrastructures) are mature enough to support this research. This Ph.D. thesis aims at exploring this novel concept of testing, introducing the concepts of test strategies and test opportunities, in addition to defining and developing an appropriate infrastructure for field testing.
*Acknowledgments* This work has been partially supported by the H2020 Learn project, which has been funded under the ERC Consolidator Grant 2014 program (ERC Grant Agreement n. 646867).
[^1]: We actually inspected more than 400 bug reports, but we have been able to extract useful information about the fault for only 119 bug reports.
---
abstract: 'The current miniaturization of electronic devices raises many questions about the properties of various materials at nanometre-scales. Recent molecular dynamics computer simulations have shown that small finite nanowires of gold exist as multishelled structures of lasting stability. These classical simulations are based on a well-tested embedded atom potential. Molecular dynamics simulation studies of metallic nanowires should help in developing methods for their fabrication, such as electron-beam lithography and scanning tunneling microscopy.'
address: 'Department of Physics, University of Rijeka, Omladinska 14, 51 000 Rijeka, Croatia'
author:
- 'G. Bilalbegović'
date: to be published in Molecular Simulation
title: Multishelled Gold Nanowires
---
Introduction
============
One-dimensional and quasi one-dimensional metallic structures are often used in various electronic devices. Wires of nanometre diameters and micrometre lengths have been produced and studied for some time. Recent advances in experimental techniques, such as Scanning Tunneling Microscopy (STM) [@Ohnishi; @Yanson] and electron-beam lithography [@Hegger], are enabling the fabrication of wires with nanometre lengths. Many important results were recently obtained for nanowires of different materials. For example, multishelled nanostructures were found in experiments for carbon [@Iijima], $WS_2$ [@Tenne], $MoS_2$ [@Margulis], and $NiCl_2$ [@Hacohen]. In a jellium model calculation, multishelled structures were obtained for sodium nanowires [@Yannouleas].
The results of Molecular Dynamics (MD) simulation [@Goranka] have shown that a gold wire with length $4$ nm and radius of $0.9$ nm, at $T=300$ K, consists of three coaxial cylindrical shells and a thin core. Here we present an analysis of two additional multiwalled gold nanowires. We also propose that the unusual strands of gold atoms recently formed in STM and observed by a transmission electron microscope (TEM) [@Ohnishi] are the image of the cylindrical walls of multishelled gold nanowires. An explanation of the thinning process for the STM-supported multishelled gold nanowires is given.
Method
======
To simulate metals by the classical MD method one should use many-body potentials. Several implementations of these potentials are available, for example ones developed within the embedded-atom and effective-medium theories [@Daw]. Gold nanowires were simulated using the glue realization of the embedded-atom potentials [@Furio]. This potential is well tested and produces good agreement with a diversity of experimental results for bulk, surfaces, and clusters. In contrast to most other potentials, it reproduces different reconstructions on all low-index gold surfaces [@Furio]. Therefore, it is expected that simulated gold nanowires of more than $\sim 50$ atoms realistically model natural structures. A time step of $7.14 \times 10^{-15}$ s was employed in the simulation. The temperature was controlled by rescaling particle velocities.
We started from ideal face centered cubic nanowires with the (111) oriented cross-section at $T=0$ K, and included in the cylindrical MD boxes all particles whose distance from the nanowire axis was smaller than $1.2$ nm for the first nanowire, and $0.9$ nm for the second one. The initial lengths of nanowires were $6$ and $12$ layers, whereas the number of atoms were $689$ and $784$. The samples were first relaxed, then annealed and quenched. To prevent melting and collapse into a drop, instead of usual heating to $1000$ K used in MD simulation of gold nanostructures, our finite nanowires were heated only to $600$ K. Such a procedure gives the atoms a possibility to find local minima and models a constrained dynamical evolution present in fabricated nanowires. The structures were analyzed after a long MD run at $T=300$ K.
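As a minimal sketch of how such an initial configuration can be generated (illustrative parameters only; this is not the original simulation code), one can cut a cylinder around the [111] axis out of an ideal fcc lattice:

```python
import numpy as np

a = 4.08  # approximate Au fcc lattice constant (angstrom)
basis = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.0],
                  [0.5, 0.0, 0.5], [0.0, 0.5, 0.5]])

# replicate the conventional cubic cell on a block large enough to contain the wire
cells = np.array([[i, j, k] for i in range(-8, 9)
                             for j in range(-8, 9)
                             for k in range(-8, 9)], dtype=float)
pos = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a

# keep atoms inside a cylinder around the [111] axis (radius R, length L)
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
z = pos @ axis                                       # coordinate along the wire axis
r = np.linalg.norm(pos - np.outer(z, axis), axis=1)  # radial distance from the axis
R, L = 12.0, 6 * a / np.sqrt(3.0)                    # ~1.2 nm radius, about six (111) layers
wire = pos[(r < R) & (np.abs(z) < L / 2.0)]
print(len(wire), "atoms in the initial nanowire")
```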
Results and Discussion
======================
Figure 1 shows the shape of the MD box for a nanowire of $784$ atoms after $7.1$ ns of simulation at $T=300K$.
Top views of the particle trajectories in the whole MD boxes for two nanowires are shown in Fig. 2 and Fig. 3. While the presence of a multishelled structure is obvious, after $10^6$ time steps of simulation the walls are still not completely homogeneous. Several atoms remain about the walls. Three cylindrical shells exist for the nanowire shown in Fig. 2.
The nanowire presented in Fig. 3 consists of two nearby coaxial walls and a large filled core. The filled core is well ordered and its parallel vertical planes are spaced by $0.18$ nm. This double-walled structure suggests an application of similar gold nanowires as cylindrical capacitors. Therefore, we calculated the capacitance of finite nanometre-scale cylindrical capacitors and found values of the order of $0.5$ aF for the sizes at which multishelled nanowires appear in simulations [@Josip].
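For an order-of-magnitude check of that estimate one can use the ideal infinite-cylinder formula $C = 2\pi\varepsilon_0 L/\ln(b/a)$; the inner and outer radii below are assumed values of the order of the shell radii seen in the simulations, and finite-size effects are neglected.

```python
import numpy as np

eps0 = 8.854e-12              # vacuum permittivity (F/m)
L = 4e-9                      # wire length, ~4 nm
r_in, r_out = 0.5e-9, 1.2e-9  # assumed radii of the inner and outer shells (m)

C = 2 * np.pi * eps0 * L / np.log(r_out / r_in)
print(C * 1e18, "aF")         # ~0.25 aF, i.e. of the order of 0.5 aF
```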
As always in computer simulations of real materials, it is important to compare results with experiments. Gold nanostructures were the subject of recent STM studies [@Ohnishi; @Yanson]. Unusual strands of gold atoms (down to one row) were simultaneously observed by an electron microscope [@Ohnishi]. The structure of the strands and understanding of the thinning process for these tip-supported nanostructures were left for future studies. The MD trajectory plots of atoms in the vertical slice of the box shown in Fig. 4 resemble strands of gold atoms in Fig. 2 of Ref. [@Ohnishi]. Therefore, we propose that strands formed in STM are the image of the cylindrical walls of multishelled gold nanowires [@Japan]. Figure 4 shows that defects exist on some rows. Gold rows with a defect were sometimes observed by electron microscope (see Fig. 2 (a) in Ref. [@Ohnishi]).
The thinning process for a multishelled nanowire should start from its central part, either the filled thin core (Fig. 1 from Ref. [@Goranka]), the central part of a large filled core (Fig. 3 here), or an empty interior cylinder which first shrinks into a thin filled core (Fig. 2 here). When this central part is removed by diffusion of atoms to the tip, the next interior cylindrical wall shrinks into a new core, and then the process repeats. Therefore, the number of rows decreases by one as observed in the experiment [@Ohnishi]. At a final stage of the shrinkage process an empty cylinder shrinks to one row of atoms. For multishelled nanowires the shrinkage of an internal cylinder is followed by the decrease of the radii of the external cylinders and the whole nanostructure thins down with time. A mechanism of plastic deformation cycles of filled nanowires was suggested to explain the shape of necks formed in STM [@Untiedt]. In this model plastic deformation starts from the central cylindrical slab of a filled nanowire which acts as the weakest spot. For multishelled nanowires such a central cylindrical weak spot most often forms naturally. In STM/TEM experiments [@Ohnishi] it was noted that the gap between the dark lines of their Fig. 2 (d) is greatly enlarged. This should be related to the special situation where the multishelled structure is lost and only two rows, i.e., an empty cylinder, is present. After that, at a final stage of the shrinkage process, one atomic chain remains.
Conclusions
===========
MD computer simulation based on the well-established embedded-atom potential shows that finite gold wires of nanometre dimensions are often multishelled. Recently, similar gold nanostructures were formed in STM and observed by TEM [@Ohnishi]. The model of multishelled gold nanowires should be considered in explanation of the conductance measured in STM [@Ohnishi; @Yanson]. Results of computer simulations enable fabrication of similar metallic nanowires which will be used in nanoelectronic and nanomechanical devices.
H. Ohnishi, Y. Kondo, and K. Takayanagi, “Quantized conductance through individual rows of suspended gold atoms”, Nature [**395**]{}, 780 (1998).
A. I. Yanson, G. Rubio Bollinger, H. E. van den Brom, N. Agrait, and J. M. van Ruitenbeek, “Formation and manipulation of a metallic wire of single gold atoms”, Nature [**395**]{}, 783 (1998).
H. Hegger, K. Hecker, G. Reckziegel, A. Freimuth, B. Huckestein, M. Janssen, and R. Tuzinski, “Fractal conductance fluctuations in gold nanowires”, Phys. Rev. Lett. [**77**]{}, 3885 (1996).
S. Iijima, “Helical microtubules of graphitic carbon”, Nature [**354**]{}, 56 (1991).
R. Tenne, L. Margulis, M. Genut, and G. Hodes, “Polyhedral and cylindrical structures of tungsten disulphide”, Nature [**360**]{}, 444 (1992).
L. Margulis, G. Salitra, R. Tenne, and M. Tallenker, “Nested fullerene-like structures”, Nature [**365**]{}, 113 (1993).
Y. R. Hacohen, E. Grunbaum, R. Tenne, J. Sloan, and J. L. Hutchison, “Cage structures and nanotubes of $NiCl_2$”, Nature [**395**]{}, 336 (1998).
C. Yannouleas and U. Landman, “On mesoscopic forces and quantized conductance in model metallic nanowires”, J. Phys. Chem. B [**101**]{}, 5780 (1997).
G. Bilalbegovic, “Structure and stability of finite gold nanowires”, Phys. Rev. B [**58**]{}, 15142 (1998).
M. S. Daw, S. M. Foiles, and M. I. Baskes, “The embedded atom method: a review of theory and applications”, Mater. Sci. Rep. [**9**]{}, 251 (1993).
F. Ercolessi, M. Parrinello, and E. Tosatti, “Simulation of gold in the glue model”, Philos. Mag. A [**58**]{}, 213 (1988).
J. Stepanic and G. Bilalbegovic, unpublished.
H. Ohnishi, private communications.
C. Untiedt, G. Rubio, S. Vieira, and N. Agrait, “Fabrication and characterization of metallic nanowires”, Phys. Rev. B [**56**]{}, 2154 (1997).
---
abstract: 'Infrared features of gluon propagator, ghost propagator, QCD running coupling and the Kugo-Ojima parameter in lattice Landau gauge QCD are presented. The framework of PMS analysis suggests that there appear infrared, intermediate and ultraviolet regions specified by $\Lambda_{\overline{MS}}$, $\beta_0$ and $\beta_1$. The propagators and the running coupling of $q>1GeV$ are fitted by the $\widetilde{MOM}$ scheme using the factorization scale as the scale parameter. The running coupling data of $q<14GeV$ are fitted by the contour improved perturbation method. The Gribov copy problem is studied in $SU(2)$, $\beta=2.2$ samples by comparing the data gauge fixed by the parallel tempering method and the data gauge fixed by the straightforward gauge fixing. The Gribov noise effect turned out to be about 6%.'
author:
- Sadataka Furui
- Hideo Nakajima
title: |
Infrared Features of Lattice Landau Gauge QCD\
and the Gribov Copy Problem
---
[ address=[School of Science and Engineering, Teikyo University, Utsunomiya 320-8551,Japan]{} ]{}
[ address=[Department of Information science, Utsunomiya University, Utsunomiya 320-8585,Japan ]{} ]{}
Introduction
============
In Landau gauge QCD, it is necessary to restrict the gauge configuration expressed by the link variable $U_{x,\mu}$ to $\Omega_L=\{U|-\partial { D(U)}\ge 0\ ,\ \partial A=0\}$, which is called the Gribov region (local minima) [@Gr], where $D$ is the covariant derivative. Zwanziger argued that the uniqueness of the gauge field is guaranteed in its subset $\Lambda_L=\{U|\ A=A(U), F_{U}(1)={\rm Min}_gF_{U}(g)\}$, and called $\Lambda_L$ the fundamental modular (FM) region [@Zw]. Here the optimizing function $F_{U}(g)$ is defined in the case of $U-$linear ($A_{x,\mu}=\displaystyle{1\over 2}(U_{x,\mu}-U_{x,\mu}^{\dag})|_{trless\ p.}$) and $\log U$ ($U_{x,\mu}=e^{A_{x,\mu}})$ as $F_U(g)=\sum_{x,\mu}\left (1- {1\over 3}{\rm Re}\ {\rm tr}U^g_{x,\mu}\right)$ and $F_U(g)=||A^g||^2=\sum_{x,\mu}{\rm tr}\left({{A^g}_{x,\mu}}^{\dag}A^g_{x,\mu}\right)$, respectively [@NF].
In Landau gauge, the QCD running coupling can be extracted from the three-gluon coupling and/or the ghost-gluon coupling. In terms of the gluon dressing function $Z_A(q^2)$ and the ghost dressing function $G(q^2)$, which are measurable in lattice Landau gauge, the renormalization group invariant quantity [@SHA] $\alpha_s(q^2)=g^2 G(q^2)^2 Z_A(q^2)/4\pi$ can be measured.
Colour confinement in infrared QCD is characterized by the Kugo and Ojima parameter $u(0)$ which is defined as $$\displaystyle{1\over V}
\sum_{x,y} e^{-ip(x-y)}\langle {\rm tr}\left({\lambda^a}^{\dag}
D_\mu \displaystyle{1\over -\partial D}[A_\nu,\lambda^b] \right)_{xy}\rangle=
(\delta_{\mu\nu}-{p_\mu p_\nu\over p^2})u^{ab}(p^2),$$ where $u^{ab}(p^2)=\delta^{ab}u(p^2), \ u(0)=-c.$ The sufficient condition for colour confinement is $u(0)=-1$. We measure the running coupling and the Kugo-Ojima parameter on the lattice and study the ambiguity due to Gribov copies.
Numerical Results of Lattice Landau gauge QCD
==============================================
The gluon propagator and the ghost propagator
---------------------------------------------
The gluon propagator of colour $SU(n)$ gauge is defined as $$\begin{aligned}
D_{\mu\nu}(q)&=&{1\over n^2-1}\sum_{x={\bf x},t}e^{-iqx}Tr\langle A_\mu(x)A_\nu(0)^\dagger \rangle \nonumber\\
&=&(\delta_{\mu\nu}-{q_\mu q_\nu\over q^2})D_A(q^2)\end{aligned}$$ and the gluon dressing function as $Z_A(q^2)=q^2 D_A(q^2)$. Our result is shown in Fig.\[gh243248\](a) and is consistent with [@adelade; @orsay1].
The ghost propagator, which is the Fourier transform of an expectation value of the inverse Faddeev-Popov operator ${\cal M}=-\partial D=-\partial^2(1-M)$, $$D_G^{ab}(x,y)=\langle {\rm tr} \langle \lambda^a x|({\cal M}[U])^{-1}|\lambda^b y\rangle \rangle,$$ where the outermost $\langle\rangle$ denotes the average over samples $U$, is evaluated as follows. We take a plane wave for the source ${\bf b}^{[1]}={\bm\lambda}^b e^{iqx}$ and get the solution of the Poisson equation ${\bm \phi}^{[1]}=(-\Delta )^{-1}{\bf b}^{[1]}$. We calculate iteratively $\phi^{[i+1]}=M{\bm \phi}^{[i]}({\bf x})$ ($i=1,\cdots,k-1$). The iteration was continued until $Max_{\bf x}|{\bf \phi}^{[k]}({\bf x})|/Max_{\bf x}|\sum_{i=1}^{k-1}{\bm \phi}^{[i]}({\bf x})|<0.001\sim 0.01$. The number of iterations $k$ is of the order of 60 in $SU(2)$, $16^4$ lattice, and of the order of 100 in $SU(3)$. We define ${\bm\Phi}^b({\bf x})=\sum_{i=1}^k{\bm \phi}^{[i]}({\bf x})$ and evaluate $\langle \lambda^a e^{iqx},\Phi^b({\bf x})\rangle$ as the ghost propagator from colour $b$ to colour $a$. We also used the conjugate gradient (CG) method and observed that the data agree except at the lowest momentum point of $48^4$. Results of the CG method are shown at this momentum point.
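A schematic numpy version of this Neumann-series inversion is sketched below; dense toy matrices stand in for the lattice Laplacian and for $M$, and colour indices are omitted, so it only illustrates the iteration and the stopping criterion, not the actual lattice code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                              # stand-in for the lattice degrees of freedom
minus_laplacian = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # toy gauge-dependent part, spectral radius < 1

b = np.exp(1j * 2 * np.pi * 3 * np.arange(n) / n)    # plane-wave source
phi = np.linalg.solve(minus_laplacian, b)            # phi^[1] = (-Delta)^{-1} b
Phi = phi.copy()
for k in range(2, 200):
    phi = M @ phi                                    # phi^[i+1] = M phi^[i]
    if np.max(np.abs(phi)) / np.max(np.abs(Phi)) < 0.001:
        break                                        # stopping criterion as in the text
    Phi += phi

ghost = np.vdot(b, Phi) / n   # projection onto the plane wave ~ ghost propagator at this momentum
```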
![(a)The gluon dressing function as the function of the momentum $q(GeV)$. $\beta=6.0$, $24^4$(triangle), $32^4$(diamond) and $\beta=6.4$, $48^4$(star) in $\log U$ version. The fitted line is that of the $\widetilde{MOM}$ scheme. (b)The ghost propagator as the function of the momentum $q(GeV)$. $\beta=6.0$, $24^4, 32^4$ and $\beta=6.4$, $48^4$ in $\log U$ version. The fitted line is that of the $\widetilde{MOM}$ scheme. []{data-label="gh243248"}](dgl243248.eps)
![(a)The gluon dressing function as the function of the momentum $q(GeV)$. $\beta=6.0$, $24^4$(triangle), $32^4$(diamond) and $\beta=6.4$, $48^4$(star) in $\log U$ version. The fitted line is that of the $\widetilde{MOM}$ scheme. (b)The ghost propagator as the function of the momentum $q(GeV)$. $\beta=6.0$, $24^4, 32^4$ and $\beta=6.4$, $48^4$ in $\log U$ version. The fitted line is that of the $\widetilde{MOM}$ scheme. []{data-label="gh243248"}](gh243248n.eps)
The gluon and the ghost dressing functions are fitted by using the framework of the principle of minimal sensitivity (PMS) [@PMS] or the effective charge method [@Gru], in which the running coupling $h(q)$ is parametrized as
$$\begin{aligned}
h(q)&=&y_{\overline{MS}}(q)\{1+y_{\overline{MS}}(q)^2(\bar\beta_2/\beta_0-(\beta_1/\beta_0)^2)\nonumber\\
&+&y_{\overline{MS}}(q)^3\frac{1}{2}(\bar \beta_3/\beta_0-(\beta_1/\beta_0)^3)+\cdots\}.\end{aligned}$$
The parameter $y_{\overline{MS}}(q)$ can be expressed by the parameter $y$ defined as a solution of $$\label{scale}
\beta_0\log\frac{\mu^2}{\Lambda^2}=\frac{1}{y}+\frac{\beta_1}{\beta_0}\log(\beta_0 y)$$ where $\Lambda$ characterizes the scale of the system, and the function $$k(q^2,y)=\frac{1}{y}+\frac{\beta_1}{\beta_0}\log(\beta_0 y)-\beta_0\log(q^2/\Lambda_{\overline{MS}}^2).$$
In PMS, $y$ is treated as a function of $q^2$. However, in this work we fix the scale by the factorization scale $\mu=1.97GeV$, at which a renormalizable quantity can be approximately factorized into scheme-independent and scheme-dependent parts [@Gru; @orsay1].
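For illustration, Eq. (\[scale\]) can be solved numerically for $y$ at the factorization scale; the value of $\Lambda$ below is only indicative and the resulting $y$ depends on the convention adopted in the text.

```python
import numpy as np
from scipy.optimize import brentq

beta0, beta1 = 11.0, 51.0 / 2.0      # quenched values consistent with b = beta0/2, c = beta1/beta0
mu, Lam = 1.97, 0.24                 # factorization scale and an indicative Lambda (GeV)

def residual(y):
    # Eq. (scale): beta0 log(mu^2/Lambda^2) - 1/y - (beta1/beta0) log(beta0 y)
    return beta0 * np.log(mu**2 / Lam**2) - 1.0 / y - (beta1 / beta0) * np.log(beta0 * y)

y = brentq(residual, 1e-4, 0.4)      # bracket chosen on the perturbative branch
print(y)
```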
In order to be consistent with the $\overline{MS}$ scheme, we define the variable $z$ as $$\begin{aligned}
z&=&-e^{(-1-b t/2c)}=-\frac{1}{e}(\frac{q}{\tilde\Lambda_{\overline{MS}}})^{-b/c}e^{iK\pi}\nonumber\\
&=&-Z(q^2)e^{iK\pi}\end{aligned}$$ where $t=\log (q^2/\tilde\Lambda_{\overline{MS}}^2)$, $c=\beta_1/\beta_0=51/22, b=\beta_0/2=11/2$, $\tilde\Lambda_{\overline{MS}}=(2c/b)^{-c/b}\Lambda_{\overline{MS}}$, $K=-b/2c$ [@PMS; @HoMa]. When we fix $y$ by the solution of eq.(\[scale\]) with $\mu=1.97GeV$ ($\widetilde{MOM}$ scheme), we find that in the gluon dressing function the Landau pole at $z=1/e$ remains, and another pole appears at $z\sim0.17$. When $y$ is chosen as $q^2$ dependent, the three regions $0<z<0.17$, $0.17<z<1/e$ and $1/e<z$ can be continuously connected [@vAc], but there is a subtle problem with PMS at low energy [@BroLu], and we leave the problem of bridging the three regions to future work.
The gluon dressing function $q^2D_A(q^2)$ and the ghost propagator $D_G(q^2)$ in $\widetilde{MOM}$ scheme are plotted in Fig.\[gh243248\](a) and (b), respectively. They are singular at $q=\tilde\Lambda_{\overline{MS}}\simeq 0.25GeV$ which should be washed away by the non-perturbative effects.
The QCD running coupling and the Kugo-Ojima parameter
-----------------------------------------------------
The QCD running coupling $\alpha_s(q)$ turned out to have a peak of the order of 1 at $q\sim 0.5GeV$ and to decrease to a finite value at $q=0$. We fitted the data by the contour improved perturbation series, which is a way to resum the series in the coupling constant.
The effective running coupling in the $\overline{MS}$ scheme is expressed by the series of the coupling constant $h^{(n)}$ as [@vAc] $${\cal R}^n=h^{(n)}(1+A_1h^{(n)}+A_2h^{(n)2}+\cdots+A_nh^{(n)n})\label{req}$$
The result of the $\widetilde{MOM}$ scheme using $y=0.01594$ is shown by the solid line in Fig. \[alppert1\](a). The lattice data of $24^4, 32^4$ and $48^4$ and the $\widetilde{MOM}$ scheme agree in $0.5GeV<q<2GeV$, but the $\widetilde{MOM}$ scheme slightly overestimates the data in $q>2GeV$.
In the contour improved perturbation method, physical quantities ${\cal R}$ are expressed in a series $$\label{pert}
{\cal R}(q^2)={\cal B}_1(q^2)+\sum_{n=1}^\infty A_n{\cal B}_{n+1}(q^2)$$ $${\cal B}_n=\frac{1}{2\pi}\int_{-\pi}^\pi \left(\frac{-1}{c(1+W(Z(q^2)e^{iK\theta}))}\right)^n d\theta$$ Terms in the series (\[pert\]) have alternating sign and $A_3$ is not known. By choosing half of $A_2$ in the series, we can obtain a good fit of the data (Fig.\[alppert1\](b)).
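A schematic numerical evaluation of the ${\cal B}_n$ integrals is sketched below. The minus sign in the argument of $W$ follows the definition of $z$ above, and the Lambert-$W$ branch choice (here $k=-1$) as well as the reading of $K$ are assumptions that should be checked against [@vAc; @HoMa]; the sketch only illustrates the numerical procedure.

```python
import numpy as np
from scipy.special import lambertw

b, c = 11.0 / 2.0, 51.0 / 22.0                 # b = beta0/2, c = beta1/beta0
Lam_tilde = (2 * c / b) ** (-c / b) * 0.237    # tilde-Lambda from Lambda_MSbar = 237 MeV
K = -b / (2 * c)                               # illustrative reading of K = -b/2c

def B(n, q, ntheta=4000):
    """Contour integral B_n(q^2), evaluated with the trapezoidal rule."""
    theta = np.linspace(-np.pi, np.pi, ntheta)
    Z = (1.0 / np.e) * (q / Lam_tilde) ** (-b / c)
    w = lambertw(-Z * np.exp(1j * K * theta), k=-1)   # z(theta) = -Z e^{iK theta}, branch -1 assumed
    integrand = (-1.0 / (c * (1.0 + w))) ** n
    return np.trapz(integrand, theta).real / (2.0 * np.pi)

print(B(1, 3.0), B(2, 3.0))   # leading terms of R(q^2) at q = 3 GeV (illustrative)
```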
![(a) The running coupling $\alpha_s(q)$ of $\beta=6.0$, $24^4$(box), $32^4$(triangle), $\beta=6.4$, $32^4$(diamond) and $48^4$(star) as a function of momentum $q(GeV)$ and the result of the $\widetilde{MOM}$ scheme. (b) The running coupling $\alpha_s(q)$ as a function of momentum $q(GeV)$ of $\beta=6.4, 48^4$ lattice. The solid line is the result of ${\cal R}^2$ using $\Lambda_{\overline{MS}}=237MeV$. Dotted line is the result of $e^{70/6\beta_0}\Lambda_{\overline{MS}}$ and including half of $A_2$. Dashed line is the result of $\widetilde{MOM}$ scheme. []{data-label="alppert1"}](alp243248n.eps)
![(a) The running coupling $\alpha_s(q)$ of $\beta=6.0$, $24^4$(box), $32^4$(triangle), $\beta=6.4$, $32^4$(diamond) and $48^4$(star) as a function of momentum $q(GeV)$ and the result of the $\widetilde{MOM}$ scheme. (b) The running coupling $\alpha_s(q)$ as a function of momentum $q(GeV)$ of $\beta=6.4, 48^4$ lattice. The solid line is the result of ${\cal R}^2$ using $\Lambda_{\overline{MS}}=237MeV$. Dotted line is the result of $e^{70/6\beta_0}\Lambda_{\overline{MS}}$ and including half of $A_2$. Dashed line is the result of $\widetilde{MOM}$ scheme. []{data-label="alppert1"}](alp48fitn.eps)
The Kugo-Ojima parameter depends slightly on the definition, $U$-linear or $\log U$, and $c$ becomes larger as the lattice size increases. However, the value saturates at about 0.8.
| $\beta$ | $L$ | $c_1$     | $e_1/d$  | $h_1$ | $c_2$     | $e_2/d$  | $h_2$ |
|---------|-----|-----------|----------|-------|-----------|----------|-------|
| 6.0     | 16  | 0.576(79) | 0.860(1) | -0.28 | 0.628(94) | 0.943(1) | -0.32 |
| 6.0     | 24  | 0.695(63) | 0.861(1) | -0.17 | 0.774(76) | 0.944(1) | -0.17 |
| 6.0     | 32  | 0.706(39) | 0.862(1) | -0.15 | 0.777(46) | 0.944(1) | -0.16 |
| 6.4     | 32  | 0.650(39) | 0.883(1) | -0.23 | 0.700(42) | 0.953(1) | -0.25 |
| 6.4     | 48  |           |          |       | 0.793(61) | 0.982(1) | -0.19 |
: The Kugo-Ojima parameter $c$, trace $e/d$ and horizon function deviation $h$ in $U-$linear and $\log U$ version. $\beta=6.0$ and $6.4$.[]{data-label="kugotab"}
The Gribov copy problem
=======================
We applied the parallel tempering (PT) method, which is used for finding the global minimum in spin glass systems, to perform the FM gauge fixing of $SU(2)$, $\beta=2.2$, $16^4$ lattice configurations, and compared the result with the ensemble of 1st copies (Gribov copies obtained by the straightforward gauge fixing) [@lat03]. The Kugo-Ojima parameter $c$ of PT becomes smaller than that of the 1st copy by about $4\%$. The Gribov noise in the ghost propagator is similar to Cucchieri’s noise [@cuc] at $\beta=1.6$, i.e., the infrared singularity of PT is weaker than that of the 1st copy by about $6\%$. The $SU(2)$ gluon propagator, ghost propagator and running coupling in the $q\geq 1GeV$ region are consistent with the results of [@BCLM]. The running coupling in our simulation has a peak at around $q=1GeV$ and is suppressed in the infrared.
We aimed to detect, in the lattice dynamics, the signal of the Kugo-Ojima confinement criterion derived in the continuum theory, which is formulated using the Faddeev-Popov Lagrangian and BRST symmetry. We also noted that Zwanziger’s horizon condition, based on the lattice formulation, coincides with the Kugo-Ojima criterion [@Zw; @NF]. However, our present data are not sufficient to prove or disprove the confinement criterion. The colour off-diagonal antisymmetric part of the ghost propagator [@dudal; @KoShi] does not appear in Landau gauge, and the off-diagonal symmetric part has a vanishing statistical average but fluctuations proportional to $(qa)^{-4}$.
We are grateful to Daniel Zwanziger for enlightening discussion. S.F. thanks Kei-Ichi Kondo, Kurt Langfeld, Karel van Acoleyen and David Dudal for valuable information. This work is supported by the KEK supercomputing project No.03-94.
[99]{} T. Kugo and I. Ojima, [Prog. Theor. Phys. Supp.]{} [**66**]{}, 1 (1979).
V.N. Gribov, [**139**]{}, 1 (1978).
D. Zwanziger, [**364**]{}, 127 (1991); idem B [**412**]{}, 657 (1994).
H. Nakajima and S. Furui, (Proc. Suppl.) [**63A-C**]{}, 635, 865 (1999); (Proc. Suppl.) [**83-84**]{}, 521 (2000); [**119**]{}, 730 (2003); [**680**]{}, 151c (2000); hep-lat/0303024, hep-lat/0305010.
L. von Smekal, A. Hauck, and R. Alkofer, [Ann. Phys.]{} [**267**]{}, 1 (1998), hep-ph/9707327; H. Nakajima and S. Furui, Lattice 2003 proceedings.
D.B. Leinweber, J.I. Skullerud, A.G. Williams, and C. Parrinello, [**60**]{}, 094507 (1999); ibid. [**61**]{}, 079901 (2000).
D. Becirevic et al., [**61**]{}, 114508 (2000).
A. Cucchieri, [**508**]{}, 353 (1997), hep-lat/9705005.
P.M. Stevenson, [**23**]{}, 2916 (1981).
G. Grunberg, [**29**]{}, 2315 (1984).
K. Van Acoleyen and H. Verschelde, [**66**]{}, 125012 (2002), hep-ph/0203211.
D.M. Howe and C.J. Maxwell, hep-ph/0204036 v2.
S.J. Brodsky and H.J. Lu, [**51**]{}, 3652 (1995).
J.R.C. Bloch, A. Cucchieri, K. Langfeld, and T. Mendes, hep-lat/0209040 v2.
D. Dudal et al., JHEP06(2003)003.
K-I. Kondo and T. Shinohara, [**491**]{}, 263 (2000).
---
author:
- 'George Panagopoulos, Fragkiskos D. Malliaros, and Michalis Vazirgiannis [^1]'
bibliography:
- 'bib.bib'
title: 'Multi-task Learning for Influence Estimation and Maximization'
---
Social networks have come to play a substantial role in numerous economic and political events, exhibiting their effect on shaping public opinion. The core component of their potential at the micro level is how a user influences another user online, which is depicted in the users’ connections and activity. Formally, social influence is defined as a directed measure between two users and represents how likely it is for the target user to adopt the behavior or copy the action of the source user. In viral marketing, influence is used to simulate how information flows through the network and find the optimal set of nodes to start a campaign from, the well-known influence maximization (IM) problem. In IM, the users are represented as nodes and the edges depict a relationship, like following, friendship, coauthorship, etc. A stochastic diffusion model, such as independent cascade and linear threshold [@kempe2003maximizing], governs how an epidemic traverses the users based on their connections. It is used to compute the number of users activated during a diffusion simulation, which is called the *influence spread*. The aim is to find the optimal set of $k$ users that would maximize the influence spread of a diffusion cascade starting from them [@kempe2003maximizing].
One of the main problems in the IM literature is that diffusion models utilize random or uniform influence parameters. Experiments have shown that such an approach produces a less realistic influence spread compared to models with empirical parameters [@aral2018social]. Even when the parameters are learned in an empirical manner from historical logs of diffusion cascades, the independent cascade (IC) model is utilized [@continest; @bourigault2016representation; @saito2009learning], which is problematic for two reasons. Apart from severe overfitting due to the massive number of parameters, this approach assumes influence independence throughout related nodes or edges of the same node. This assumption overlooks the network’s assortativity, meaning that an influential node is more prone to affect a susceptible node than a less influential node, even when the edge of the latter to the susceptible node is stronger than the edge of the former. This effect comes in contrast with the actual mechanics of influence [@aral2012identifying]. In addition, diffusion models themselves suffer from oversimplifying assumptions that overlook several characteristics of real cascades [@complex2018], which can provide inaccurate estimations compared to actual cascades [@maria2016], while they are highly sensitive to their parameters [@du2014influence].

To address these issues, we propose a method that learns influencer and susceptible embeddings from cascades, and uses them to perform IM without the use of a diffusion model. The first part of our method is INFECTOR (INfluencer Vectors), a multi-task neural network that learns influencer vectors for nodes that initiate cascades and susceptible vectors for those that participate in them. Our focus on the cascades’ initiators is justified by empirical evidence on the predominantly initiating tendency of influencers and it is considerably faster compared to previous influence learning (IL) models [@feng2018inf2vec]. INFECTOR additionally embeds the aptitude of an influencer to create sizable cascades in the norm of her embedding. From this, we derive an estimate of influencers’ spread and use it to diminish the number of candidate seeds for IM. Following suit from recent model-independent algorithms [@lagree2018algorithms; @vaswani2017model], we overlook the diffusion model and connect each candidate seed (influencer) and every susceptible node with a diffusion probability using the dot product of their respective embeddings, forming a bipartite network. Apart from removing the time-consuming simulations, diffusion probabilities have the advantage of capturing higher-order correlations that diffusion models fail to capture due to their Markovian nature. Finally, we propose <span style="font-variant:small-caps;">IMINFECTOR</span>, a scalable greedy algorithm that uses a submodular influence spread to compute a seed set, retaining the theoretical guarantee of $1-1/e$.
To evaluate the performance of the algorithm, the quality of the produced seed set is determined by a set of unseen cascades from future time steps, similar to a train and test split in machine learning. We deem this evaluation strategy more reliable than traditional evaluations based on simulations because it relies on actual traces of influence. <span style="font-variant:small-caps;">IMINFECTOR</span> outperforms previous IM methods with learned edge weights, either in quality or speed in three sizable networks with cascades. This is the first time, to the best of our knowledge, that representation learning is used for IM with promising results. Our contributions can be summarized as follows:
- <span style="font-variant:small-caps;">INFECTOR</span>: a multi-task learning neural network that captures simultaneously the influence between nodes and the aptitude of a node to create massive cascades.
- <span style="font-variant:small-caps;">IMINFECTOR</span>: a model-independent algorithm that uses the combinations and the norms of the learnt representations to produce a seed set for IM.
- A novel, thorough experimental framework with new large scale datasets, seed set size adapted to the network scale and evaluation based on ground truth cascades instead of simulations.
**Reproducibility**: The implementation and links to the datasets can be found online on github[^2].
The basic solution to the problem of IM is a greedy algorithm that builds a seed set by choosing the node that provides the best marginal gain at each iteration, i.e., the maximum increase of the set’s influence spread [@kempe2003maximizing]. The algorithm estimates the influence spread using the live-edge model, a Monte Carlo sampling of edges that estimates the result of simulating a diffusion model. The algorithm is guaranteed to reach a near-optimal solution because of the submodularity of the marginal gain. Having said that, the 1000 samplings needed for the sufficient estimation of the influence spread for each candidate seed in each iteration render the approach infeasible for real-world networks. Several methods have been developed that achieve notable acceleration and retain the theoretical guarantees using sketches [@sketch] or reverse reachable sets [@tim; @imm]. Moreover, numerous heuristics have been proposed that, though lacking guarantees, exhibit remarkable success in practice [@simpath; @chen2010scalable]. The main problem of such approaches, as mentioned above, is that the established influence spread may lead to seed sets of poor quality. This can be caused either by the assignment of simplistic influence weights or by the diffusion model’s innate assumptions.

To address the issue with the random influence weights, some novel influence-maximization approaches assume multiple rounds of influence maximization can take place over the graph, hence multiple simulations can be used to compute the influence probabilities while balancing between the number of nodes influenced in each round and learning influence for unexamined parts of the network. Since this is an inherently exploration-exploitation problem, these models are based on multi-armed bandits to use the feedback and update the parameters [@wen2017online; @sun2018multi; @wu2019factorization]. Though useful, these algorithms are built on diffusion models as well, hence they share their aforementioned deficiencies. Recently, more model-independent online learning approaches [@lagree2018algorithms] that utilize regret functions without the use of diffusion models have been proposed [@vaswani2017model], but they suffer from important scalability issues and would not be able to scale to real-world social networks, such as the datasets we examine in this work.
Overall, there have been very few attempts to address influence maximization with influence parameters learned from real past cascades, which serve as benchmark comparisons for our proposed approach [@credit; @goyal2010learning]. These models suffer from overfitting due to the number of parameters, which is proportional to the number of edges. To reduce the number of parameters, a more practical approach is to express the influence probabilities as a combination of the nodes’ influence and susceptibility embeddings and learn them from cascades [@bourigault2016representation]. This method, however, relies on IC, and consequently suffers from the aforementioned influence independence assumption. More recent IL methods are devoid of diffusion models and learn the embeddings based on co-occurrence in cascades [@feng2018inf2vec], similarly to node representation learning. This type of IL has not been used yet for IM. Although more accurate in diffusion prediction, the input of this model consists of node-context pairs derived from the propagation network of the cascade, a realization of the underlying network (e.g., follow edges) based on time-precedence in the observed cascade (e.g., retweets). This requires looping through each node in the cascade and iterating over the subsequent nodes to search for a directed edge in the network, which has a complexity of $\mathcal{O}(c\bar{n}(\bar{n}-1)/2)$, where $c$ is the number of cascades and $\bar{n}$ is the average cascade size. This search is too time-consuming, as we analyze further in the experimental section.

It should be noted that apart from pure IL, multiple methods have been developed to predict several aspects of a cascade, such as <span style="font-variant:small-caps;">Topo-LSTM</span> and <span style="font-variant:small-caps;">HiDAN</span> to predict the next infected node [@topo_rnn; @wang2019hierarchical], RMTPP, <span style="font-variant:small-caps;">DeepDiffus</span> and CYANRNN for next node and time of infection [@rmtpp; @deepdiffuse; @cyanrnn], <span style="font-variant:small-caps;">DeepCas</span> to predict cascade size [@deepcas], and FOREST to predict next node and size together [@forest]. These methods have advanced end-to-end neural architectures, each focused on their respective tasks. However, their hidden representations cannot be utilized in the context of IM, because they are derived from a sequence of nodes, hence we cannot form individual influence relationships between two distinct nodes. In other words, an IM algorithm requires a node-to-node influence relationship, which is not clearly provided by the aforementioned references. Moreover, methods that do learn node-to-node influence but capitalize on other information such as the sentiment of the content [@liu2019ct] cannot be utilized either, since our datasets lack content, but this is a promising future direction.
![Influencer’s initiating versus participating in diffusion cascades. **DNI** stands for distinct nodes influenced, **size** stands for cascade size, and **no** stands for number of cascades started.[]{data-label="fig:initiator_barplot"}](Figures/initiator_bar.pdf){width="50.00000%"}
Our goal in the first part of our methodology is to create a model that learns representations suitable for scalable model-independent IM. Due to the lack of a diffusion model, the network’s structure becomes secondary and online methods tend to rely only on the activity of individual nodes [@lagree2018algorithms; @vaswani2017model]. We follow suit and propose an IL method that focuses mainly on influencers.

Symbol Meaning Type
-------------------------------------------------- ------------------------------------------------------ ----------
$N$ number of nodes scalar
$t_u$ time of node’ $u$ post scalar
$C_u$ cascade initiated by $u$ set
$\mathbf{x}, \mathbf{y}_t$ one-hot embedding of a node vector
$y_c$ cascade length scalar
$X^t$ $\mathbf{x},\mathbf{y}$ pairs extracted from cascade list
$\mathbf{O}$ origin embeddings of INFECTOR matrix
$\mathbf{T}$ target embeddings of INFECTOR matrix
$C$ constant embeddings of INFECTOR vector
$p_{u,v}$ probability of $u$ influencing $v$ scalar
$\mathbf{z}$ hidden layer output vector
$f_t$ softmax function
$f_c$ sigmoid function
$\boldsymbol{\varphi}_t, \boldsymbol{\varphi}_c$ output after non-linear activation vector
$\mathbf{\Phi}$ jacobian of INFECTOR’s output matrix
$L$ loss of INFECTOR vector
$D$ INFECTOR classification probabilities matrix
$\lambda_u$ number of nodes to be influenced by $u$ scalar
$\hat{D_s}$ $D$ sorted for the row of node $s$ matrix
$\sigma'(s)$ influence spread of node set $s$ scalar
$S$ Set of seed nodes set
: Table of symbols.
We start from the context creation process that produces the input to the network. In previous node-to-node IL models, initiating a cascade is considered equally important as participating in a cascade created by someone else [@feng2018inf2vec], thus the context of a node is derived from the nodes occurring after it in a cascade. This process requires the creation of the propagation network, meaning going through every node in the cascade and iterating over the subsequent nodes to search for a directed edge in the network. This has a complexity of $\mathcal{O}(c\bar{n}(\bar{n}-1)/2)$, where $c$ is the number of cascades and $\bar{n}$ is the average cascade size. Given that the average size of a cascade can surpass 60 nodes, it is very time-consuming for a scalable IM algorithm. To overcome this, we focus on the nature of IM, meaning the final chosen seed users will have to exert influence over other nodes, thus we may accelerate the process by capitalizing on the differences between influencers and simple users. Intuitively and empirically, we expect the influencers to exhibit different characteristics in sharing content than the simple/susceptible nodes [@newman2002assortative].

We argue that an influencer’s strength lies more in the cascades she initiates, and evaluate this hypothesis through an exploratory analysis. We utilize the cascades of the *Sina Weibo* dataset, a large scale social network accompanied by retweet cascades, to validate our hypothesis. The dataset is split into train and test cascades based on their time of occurrence, and each cascade represents a tweet and its set of retweets. We keep the 18,652 diffusion cascades from the last month of recording as a test set and the 97,034 from the previous 11 months as a train set. We rank all users that initiated a cascade in the test set based on three measures of success: the number of test cascades they spawn, their cumulative size, and the number of Distinct Nodes Influenced (DNI) [@continest; @complex2018; @panagopoulos2018diffugreedy], which is the number of distinct nodes that participated in these test cascades. We bin the users into three categories based on their success in each metric, and for each category we compute the total cascades the users start in the training set as opposed to those they simply participate in. Intuitively, what we want to examine here is whether users that are popular in the present and the future are more prone to participate in cascades or to start their own. As is visible in Figure \[fig:initiator\_barplot\], users that belong to the top category of the test set in all metrics are far more likely to start a train cascade rather than participate in it. In contrast, moderate or low level users create significantly less novel content, compared to their reshares.
Thus, for our purpose, the context is extracted exclusively for the initiator of the cascade and will be comprised of all nodes that participate in it. Moreover, we will take into account the copying time between the initiator and the reposter, based on empirical observations of the effect of time on influence [@goyal2010learning; @panagopoulos2019influence]. To be specific, all cascade initiators are called influencers, and an influencer $u$’s context will be created by sampling over all nodes $v$ in a given cascade $c$ that $u$ started, with probability inversely proportional to their copying time. In this way, the faster $v$’s retweet is, the more probable it will appear in the context of $u$, following this formula: $P(v|\mathcal{C}_u ) \sim\frac{(t_u-t_v)^{-1}}{\sum_{v' \in \mathcal{C}_u }{(t_u-t_v')^{-1}}}$. It is a general principle to utilize temporal dynamics in information propagation when the network structure is not used [@netrate; @cosine; @continest]. In our case we do an oversampling of 120% to emphasize the importance of fast copying. This node-context creation has a complexity of $\mathcal{O}(cn)$, which is linear in the cascade’s size and does not require searching in the underlying network. Even more importantly, this type of context sets up the model to compute diffusion probabilities, i.e., influence between nodes with more than one hop distance in the network. The diffusion probabilities (DPs) are the basis for our model-independent approach, and exhibit important practical advantages over influence probabilities, as we analyze further below.

We use a multi-task neural network [@caruana1997multitask] to learn simultaneously the aptitude of an influencer to create long cascades as well as the diffusion probabilities between her and the reposters. We chose to extend the typical IL architecture in a multi-task learning setting because ($i$) the problem could naturally be broken into two tasks, and ($ii$) theoretical and applied literature suggests that training linked tasks together improves overall learning [@evgeniou2004regularized; @panagopoulos2017multi]. In our case, given an input node $u$, the first task is to classify the nodes it will influence and the second to predict the size of the cascade it will create. An overview of our proposed INFECTOR model can be seen in Figure \[fig:mtl\_schema\]. There are two types of inputs. The first is the training set comprised of the node-context pairs. As defined in the previous section, given a cascade $t$ with length $m$, we get a set $\mathbf{X}^t=\{(\mathbf{x}^1,\mathbf{y}^1_t),(\mathbf{x}^1,\mathbf{y}^2_t),...,(\mathbf{x}^m,\mathbf{y}^m_t)\}$, where $\mathbf{x}\in \mathbb{R}^I$ and $\mathbf{y}_t \in \mathbb{R}^N$ are one-hot encoded nodes, with $I$ the number of influencers in the train set and $N$ the number of nodes in the network. The second is a similar set $X^c$, where instead of a vector, e.g. $\mathbf{y}^1_t$, there is a number $\mathbf{y}^1_c$ denoting the length of that cascade, initiated by $\mathbf{x}^1$. To perform joint learning of both tasks we mix the inputs following the natural order of the data; given a cascade, we first input the influencer-context pairs extracted from it and then the influencer-cascade length pair, as shown in Figure \[fig:mtl\_schema\].
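A minimal sketch of this node-context creation step follows; the data layout (each cascade as a list of (node, timestamp) pairs with the initiator first) is a hypothetical simplification, and the oversampling ratio matches the 120% mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def influencer_context(cascade, oversample=1.2):
    """Return (influencer, context-node) pairs sampled with probability
    inversely proportional to the copying time, cf. P(v | C_u)."""
    (u, t_u), reshares = cascade[0], cascade[1:]
    nodes = np.array([v for v, _ in reshares])
    weights = np.array([1.0 / (t_v - t_u) for _, t_v in reshares])  # faster reshares weigh more
    probs = weights / weights.sum()
    n = int(np.ceil(oversample * len(nodes)))
    context = rng.choice(nodes, size=n, replace=True, p=probs)
    return [(u, v) for v in context]

# toy cascade: initiator u0 at t=0, reshares at 5 s, 60 s and 600 s
pairs = influencer_context([("u0", 0.0), ("v1", 5.0), ("v2", 60.0), ("v3", 600.0)])
```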
$\mathbf{O} \in \mathbb{R}^{I \times E}$, with $E$ being the embeddings size, represents the source embeddings, $\mathbf{O}_u \in \mathbb{R}^{1 \times E}$ the embeddings of cascade initiator $u$, $\mathbf{T} \in \mathbb{R}^{E \times N}$ the target embeddings and $C \in \mathbb{R}^{E \times 1} $ is a constant vector initialized to 1. Note that $\mathbf{O}_u$ is retrieved by the multiplication of the one-hot vector of $u$ with the embedding matrix $\mathbf{O}$. The first output of the model represents the diffusion probability $p_{u,v}$ of the source node $u$ for a node $v$ in the network. It is created through a softmax function and its loss function is the cross-entropy. The second output aims to regress the cascade length, which has undergone min-max normalization relative to the rest of the cascades in this set, and hence a sigmoid function is used to bound the output at $(0,1)$. The respective equations can be seen in Table \[tab:neurons\]. Here $y_t$ is a one-hot representation of the target node and $y_c$ is the normalized cascade length. We employ a non-linearity instead of a simple regression because without it, the updates induced to the hidden layer from the second output would heavily overshadow the ones from the first. Furthermore, we empirically observed that the update of a simple regression would cause the gradient to explode eventually. Moreover, variable $C$ is a constant vector of ones that is untrainable, in order for the update of the regression loss to change only the hidden layer. This change was motivated by experimental evidence.
*Classify Influenced Node* *Regress Cascade Size*
------------ -------------------------------------------------------------------------------------------------- ---------------------------------------------------------------
**Hidden** $\mathbf{z}_{t,u} = \mathbf{O}_u\mathbf{T}+b_t$ $\mathbf{z}_{c,u} = \mathbf{O}_uC+b_c$
**Output** $\boldsymbol{\varphi}_t = \frac{e^{(\mathbf{z}_{t,u})}}{\sum_{u' \in G}{e^{\mathbf{z}_{t,u'}}}}$ $\boldsymbol{\varphi}_c= \frac{1}{1+e^{-(\mathbf{z}_{c,u})}}$
  **Loss**     $L_t = -\mathbf{y}_t \log(\boldsymbol{\varphi}_t)$                                                   $L_c = ({y_c-\boldsymbol{\varphi}_c})^2$
: The layers of INFECTOR. \[tab:neurons\]
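To make the two heads concrete, a minimal numpy sketch of the forward pass in Table \[tab:neurons\] is given below; the dimensions and initialization are illustrative and not the values used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
N, I, E = 1000, 50, 64                      # nodes, influencers, embedding size (illustrative)
O = 0.01 * rng.standard_normal((I, E))      # source (influencer) embeddings
T = 0.01 * rng.standard_normal((E, N))      # target embeddings
C = np.ones(E)                              # constant, untrainable vector
b_t, b_c = np.zeros(N), 0.0                 # biases

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(u):
    """Both INFECTOR heads for input influencer u."""
    phi_t = softmax(O[u] @ T + b_t)                    # diffusion probabilities p_{u,v}
    phi_c = 1.0 / (1.0 + np.exp(-(O[u] @ C + b_c)))    # normalized cascade-size estimate
    return phi_t, phi_c
```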
The training happens in an alternating manner, meaning when one output is activated the other is idle, thus only one of them can change the shared hidden layer at a training step. For example, given a node $u$ that starts a cascade of length 4 like in Figure \[fig:mtl\_schema\], $\mathbf{O}_u$ will be updated 4 times based on the error $\frac{\partial L_t}{\partial \mathbf{O}}$, and once based on $\frac{\partial L_c}{\partial \mathbf{O}}$, using the same learning rate $e$ for the training step $\tau$: $$\mathbf{O}_{u,\tau+1} =\mathbf{O}_{u,\tau} - e\frac{\partial L}{\partial \mathbf{O}_u}$$
The derivatives from the chain rule are defined as: $$\frac{\partial L}{\partial \mathbf{O}} = \frac{\partial L}{\partial \varphi}\frac{\partial \varphi}{\partial z}\frac{\partial z}{\partial \mathbf{O}}.
\label{eq:dl_ds}$$
The derivatives in Eq. (\[eq:dl\_ds\]) differ between regression and classification. For the loss function of the classification task, $L_t$, we have that $$\frac{\partial \mathbf{z}_t}{\partial \mathbf{O}_u}=\mathbf{T}^\top
\label{eq:dzc}$$
$$\begin{split}
\frac{\partial \boldsymbol{\varphi}_t}{\partial \mathbf{z}_t}=
\begin{cases}
\boldsymbol{\varphi}_t^{u} (1-\boldsymbol{\varphi}_t^v), \text{~~~~~$u=v$} \\
-\boldsymbol{\varphi}_t^{u} \boldsymbol{\varphi}_t^v, \text{~~~~~~~~~~~$u\neq v$}
\end{cases}
\end{split}$$
where $u$ is the input node and $v$ each of all the other nodes, represented in different dimensions of the vectors $\boldsymbol{\varphi}_t$. If we create a matrix consisting of $N$ replicates of $\boldsymbol{\varphi}_t$, we can express the Jacobian as this dot product: $$\frac{\partial \boldsymbol{\varphi}_t}{\partial \mathbf{z}_t} = {\mathbf{\Phi}_t}\cdot(I-{\mathbf{\Phi}_t})^\top, ~~~ \mathbf{\Phi}_t = \begin{bmatrix}\boldsymbol{\varphi}_t\\ \vdots\\ \boldsymbol{\varphi}_t\end{bmatrix} \in \mathbb{R}^{N \times N}
\label{eq:dft}$$
$$\frac{\partial L_t}{\partial \boldsymbol{\varphi}_t} = \mathbf{y_t} \Bigg(-\frac{1 }{\mathbf{\boldsymbol{\varphi}_t}}\Bigg) %\in \mathbb{R}^{1 \times N}
\label{eq:dlt}$$
The derivatives for the regression task are: $$\frac{\partial \mathbf{z}_c}{\partial \mathbf{O}_u}=C^\top
\label{eq:dzt}$$
$$\begin{aligned}
\begin{split}
\frac{\partial \boldsymbol{\varphi}_c }{\partial \mathbf{z}_c} =
\frac{1}{1+e^{-\mathbf{z}_{c,u}}}\Bigg( 1-\frac{1}{1+e^{-\mathbf{z}_{c,u}} }\Bigg)=
\boldsymbol{\varphi}_c(1-\boldsymbol{\varphi}_c)
\end{split}
\label{eq:dfc}\end{aligned}$$
$$\frac{\partial L_c}{\partial \boldsymbol{\varphi}_c} = 2(y_c-\boldsymbol{\varphi}_c) \frac{\partial (y_c-\boldsymbol{\varphi}_c)}{\partial \boldsymbol{\varphi}_c}=-2(y_c-\boldsymbol{\varphi}_c)
\label{eq:dlc}$$
From Eqs. (\[eq:dl\_ds\]), (\[eq:dzt\]), (\[eq:dzc\]), (\[eq:dfc\]), (\[eq:dft\]), (\[eq:dlc\]), (\[eq:dlt\]) we have:
$$\frac{\partial L}{\partial \mathbf{O}_u} =
\begin{cases}
\frac{\partial L_t}{\partial \mathbf{O}_u}= -\frac{ y_t }{\boldsymbol{\varphi}_t } ({\mathbf{\Phi}_t }\cdot(I-{\mathbf{\Phi}_t})^\top) \mathbf{T}^\top\\
\frac{\partial L_c}{\partial \mathbf{O}_u}=-2(y_c-\boldsymbol{\varphi}_c)(\boldsymbol{\varphi}_c(1-\boldsymbol{\varphi}_c))\mathbf{C}^\top
\end{cases}
\label{eq:gradients}$$
As mentioned above, the aim here is for the embedding of an influencer (hidden layer $\mathbf{O}$) to not only capture who she influences but also her overall aptitude to create lengthy cascades. The embeddings undergo:
- Updates of certain dimensions using the upper formula from Eq. (\[eq:gradients\]), to form the diffusion probabilities with the output layer $T$.
- Scalar updates using the lower formula from Eq. (\[eq:gradients\]), which increase or decrease the overall norm of the embedding analogously to the size of the cascade the initiator creates (a schematic update step covering both cases is sketched after this list).
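Continuing the numpy sketch above (it assumes the arrays `O`, `T`, `C`, `b_t`, `b_c` and the `forward` helper defined there), a schematic alternating update of $\mathbf{O}_u$ implementing Eq. (\[eq:gradients\]) follows; the softmax gradient is collapsed to its standard form $\boldsymbol{\varphi}_t-\mathbf{y}_t$, and the updates of $\mathbf{T}$ and the biases are omitted for brevity.

```python
lr = 0.05   # learning rate (illustrative)

def sgd_step(u, target):
    """One alternating SGD step on O_u: `target` is either a node index
    (classification step) or a normalized cascade length (regression step)."""
    phi_t, phi_c = forward(u)
    if isinstance(target, (int, np.integer)):      # classify influenced node
        y_t = np.zeros(N)
        y_t[target] = 1.0
        O[u] -= lr * (phi_t - y_t) @ T.T           # upper case of Eq. (gradients)
    else:                                          # regress cascade size
        y_c = float(target)
        grad = -2.0 * (y_c - phi_c) * phi_c * (1.0 - phi_c)
        O[u] -= lr * grad * C                      # lower case: scalar update along the constant vector
```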
The main difference between INFECTOR and similar node-to-node IL methods is that it computes diffusion probabilities and does not require the underlying social network, in contrast to influence probabilities which are assigned to edges of the network [@bourigault2016representation; @feng2018inf2vec; @saito2009learning]. Intuitively, DP is the probability of the susceptible node appearing in a diffusion started by the influencer, independently of the two nodes’ distance in the network. This means that the underlying influence paths from the seed to the infected node are included *implicitly*, which drastically changes the computation of the influence spread. For example, in a setting with diffusion models and influence probabilities, a node $u$ might be able to influence another node $z$ by influencing node $v$ between them, as shown in Figure \[fig:toy\_influence\]. However, in our case, if $u$ could indeed induce the infection of $z$ in a direct or indirect manner, it would be depicted by the diffusion probability $p_{u,z}$.
![Example of higher order influence showing the difference between influence and diffusion probabilities. \[fig:toy\_influence\]](Figures/high_influence-crop.pdf){width=".21\textwidth"}
This approach captures the case when $v$ appears in the cascades of $u$ and $z$ appears in the cascades of $v$ but not in $u$’s. This is a realistic scenario that occurs when $v$ reshares various types of content, in which case content from $u$ might diffuse in different directions than $z$. Hence, it would be wrong to assume that $u$’s infection would be able to eventually cause $z$’s. Typical IM algorithms fail to capture such higher order correlations because diffusion models act in a Markovian manner and can spread the infection from $u$ to $z$. Apart from capturing higher order correlations, DPs will allow us to overcome the aforementioned problems induced in IM by the diffusion models as well as surpass the computational bottleneck of the repeated simulations. Please note that our method’s final purpose is influence maximization, which is why we focus so much on the activity of influencers and overlook the rest of the nodes. If we aimed for a purely prediction task, such as cascade size or next infected node prediction, it is not clear that our learning mechanism would be effective.
In the second part of the methodology, we aim to perform fast and accurate IM using the representations learnt by <span style="font-variant:small-caps;">INFECTOR</span> and their properties. Initially, we will use the combination of the representations which forms the *diffusion probability* of every directed pair of nodes. The DPs derived from the network can define a *matrix*: $$\mathbf{D} = \begin{bmatrix}f_t(\mathbf{O}_1 T)\\\vdots\\ f_t(\mathbf{O}_I T)\end{bmatrix}
\label{eq:d_matrix}$$ which consists of the nodes that initiate train cascades (influencers) in one dimension and all susceptible nodes in the other. Even though influencers are fewer than the total nodes, $\mathbf{D}$ can still be too memory-demanding for real-world datasets. To overcome this, we can keep only the top $P\%$ of influencers based on the norm of their influencer embedding $|\mathbf{O}_u|$ to reduce space, depending on the device. Recall that the embeddings are trained such that their norm captures the influencers' potential to create lengthy cascades, because $\mathbf{O}_u C=\sum_{i}^E\mathbf{O}_u(i) =|\mathbf{O}_u|$, since $C$ is constant.
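As an illustration of how $\mathbf{D}$ and the norm-based pruning could be materialized, consider the following sketch; the use of a row-wise softmax for $f_t$, the dense array shapes and the function name are assumptions made for the example (a real implementation would process $\mathbf{D}$ in batches or sparsely).

```python
import numpy as np

def build_D(O, T, keep_percent=10):
    """Sketch: form the DP matrix and keep only the top-P% influencers by norm.

    O : (I, d) influencer embeddings (one row per cascade initiator)
    T : (d, n) output layer mapping embeddings to scores over all nodes
    """
    scores = O @ T                                   # (I, n)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    D = np.exp(scores)
    D /= D.sum(axis=1, keepdims=True)                # row-wise softmax as f_t

    norms = np.linalg.norm(O, axis=1)                # proxy for cascade-creation potential
    k = max(1, int(len(norms) * keep_percent / 100))
    keep = np.argsort(-norms)[:k]                    # indices of retained influencers
    return D[keep], keep
```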
Subsequently, $\mathbf{D}$ can be interpreted as a bipartite network where the left side nodes are the candidate seeds for IM, and each of them can influence every node in the right side, where the rest of the network resides. Since all edges are directed from left to right, no paths with length more than 1 exist. This means that the probability of an edge can only define the infection of one node, and it is independent of the infection of the rest; hence, we do not require a diffusion model to estimate the spread. In this setting, one can either model the influence spread based on the total probability of each node getting influenced, which results in a non-convex function, or approach the problem as weighted bipartite matching. Both these cases are analyzed further in Appendix \[sec:appendix\]. To define an effective new influence spread, we will again use the norm of a candidate seed $u$'s embedding to compute the fraction of all $N$ nodes it is expected to influence: $$\lambda_u =\Bigg\lceil N \frac{||\mathbf{O}_u||_2}{\sum_{u' \in \mathcal{I}} ||\mathbf{O}_{u'}||_2} \Bigg \rceil
\label{eq:exp_spread}$$ where $\mathcal{I}$ is the set of candidate seeds. The term scales the size of the network by the norm of $u$ relative to the rest of the influencers; it essentially estimates the portion of the network that $u$ is expected to influence. Since we know a seed $s$ can influence a certain number of nodes, we can use the diffusion probabilities to identify the top $\lambda_s$ nodes it connects to. Moreover, we also have to take into account the values of the DPs, to refrain from selecting seeds with a big $\lambda_s$ but an overall small probability of influencing nodes. Consequently, the influence spread is defined by the sum of the top $\lambda_s$ DPs: $$\sigma'(s) = \sum_j^{\lambda_s}{\mathbf{\hat{D}}_{s,j}}
\label{eq:new_sigma}$$ where $\mathbf{\hat{D}}_s$ is the DPs of seed $s$ sorted in descending order. Once added to the seed set, the seed’s influence set is considered infected and is removed from $\mathbf{D}$. Thus a seed’s spread will never get bigger in two subsequent rounds, and we can employ the CELF trick [@celf] to accelerate our solution. To retain the theoretical guarantees of the greedy algorithm $(1-1/e)$, we need to prove the monotonicity and submodularity of the influence spread that we optimize in each iteration. To do this, we need to separate the influence spread of the seed set $S$ throughout the different steps of the algorithm. Let $i(s)$ be the step before node $s$ was inserted in $S$, $S^i$ be seed set at step $i$, $\mathbf{D^{i}}$ the DP matrix at step $i$ and $u$ the current candidate seed.
The influence spread is monotonically increasing.
$$\begin{aligned}
\begin{split}
\sigma'(S^{i(u)}\cup \{u\}) = \sum_{s\in S^{i(u)}\cup \{u\}}\sum_j^{\lambda_s}{\mathbf{\hat{D}}^{i(s)}_{s,j}} = \\
\sum_{s\in S^{i(u)}}\sum_j^{\lambda_s}{\mathbf{\hat{D}}^{i(s)}}_{s,j}+
\sum_j^{\lambda_u}{\mathbf{\hat{D}}^{i(u)}_{u,j}} =\\
\sigma'(S^{i(u)}) + \sum_j^{\lambda_u}{\mathbf{\hat{D}}^{i(u)}_{u,j}} \geq \sigma'(S^{i(u)})
\end{split}\end{aligned}$$
The influence spread is submodular.
$$\begin{aligned}
\begin{split}
\sigma'(S^{i(u)-1}\cup \{u\}) - \sigma'(S^{i(u)-1}) = \\
\sum_{s\in S^{i(u)-1} \cup \{u\}}\sum_j^{\lambda_s}{\mathbf{\hat{D}}^{i(s)}_{s,j}} - \sum_{s\in S^{i(u)-1}}\sum_j^{\lambda_s}{\mathbf{\hat{D}}^{i(s)}_{s,j}} = \sum_j^{\lambda_u}{\mathbf{\hat{D}}^{i(u)-1}_{u,j}}
\end{split}
\label{eq:sub1}\end{aligned}$$
$$\sigma'(S^{i(u)} \cup \{u\}) - \sigma'(S^{i(u)}) =\sum_j^{\lambda_u}{\mathbf{\hat{D}}^{i(u)}_{u,j}}
\label{eq:sub2}$$
Note here that $\mathbf{\hat{D}}^{i(u)}_{u,j}, \forall j \in \lambda_u$ are the top DPs of $u$ for nodes that are left uninfected by the seeds in $S^{i(u)}$. So, the marginal influence spread of $u$ is inversely related to the influence spread of the seed set up to that step: $$\begin{aligned}
\begin{split}
S^{i(u)-1}\subseteq S^{i(u)} \Leftrightarrow \sum_{s\in S^{i(u)-1}}\sum_j^{\lambda_s}{\mathbf{\hat{D}}^{i(s)}_{s,j}} \leq \sum_{s\in S^{i(u)}}\sum_j^{\lambda_s}{\mathbf{\hat{D}}^{i(s)}_{s,j}} \Leftrightarrow\\
\sum_j^{\lambda_u}{\mathbf{\hat{D}}^{i(u)-1}_{u,j}} \geq \sum_j^{\lambda_u}{\mathbf{\hat{D}}^{i(u)}_{u,j}}
\end{split}
\label{eq:sub}\end{aligned}$$ Combining Eqs. (\[eq:sub1\]), (\[eq:sub2\]) and (\[eq:sub\]) gives $\sigma'(S^{i(u)-1}\cup \{u\}) - \sigma'(S^{i(u)-1}) \geq \sigma'(S^{i(u)}\cup \{u\}) - \sigma'(S^{i(u)})$, i.e., the influence spread $\sigma'$ is submodular.
The algorithm is given in Algorithm \[alg:iminfector\] and is an adaptation of CELF using the proposed influence spread and updates on $\mathbf{D}$. We keep a queue with the candidate seeds and their attributes (line 2) and the set of uninfected nodes (line 3). The attributes of a candidate seed are its influence set (line 5) and its influence spread (line 6), as defined by Eq. (\[eq:new\_sigma\]). We sort based on the influence spread (line 8) and proceed to include in the seed set the candidate seed with the maximum marginal gain, which is $\Omega$ after the removal of the infected nodes. Once a seed is chosen, the nodes it influences are marked as infected (line 14). The DPs of the candidate seeds are reordered after the removal of the infected nodes (line 17) and the new marginal gain is computed (line 18).
An alternative to Eq. (\[eq:exp\_spread\]) would be to keep the actual percentage of nodes influenced by each seed, i.e., $||\mathbf{O}_u||_2/N$. This approach, however, suffers from a practical problem. In real social networks, influencers can create cascades with tens or even hundreds of thousands of nodes. In this case, $\lambda_u$ can be huge, so when our algorithm is asked to come up with a seed set of, e.g., 100 nodes, matrix $\mathbf{D}$ will be emptied in the first few iterations, which will include seeds covering the whole network. This would constrain our algorithm to small seed set sizes, in which case simple ranking metrics tend to be of equal strength with IM, as we argue further in the evaluation methodology. Hence, we use a normalization that moderately alleviates the large differences between influence spreads and allows for bigger seed set sizes.
**Input:** Probability diffusion matrix $D$, expected influence spread $\lambda$, seed set size $\ell$\
**Output:** Seed set $\mathcal{S}$
set $q \gets [~],\mathcal{S} \gets \emptyset $ $F = 0:\text{dim}(D)[1]$ $R \gets \text{argsort}(D[s,F])[0:\lambda[s]]$ $\Omega \gets \text{sum}(D[s,R])$ $q.\text{append}([s,\Omega,R,0])$ $q \gets \text{sort}(q,1)$ $u \gets q[0]$ $\mathcal{S}.\text{add}(u[0])$ $R \gets u[2]$ $F \gets F- R$ $q.\text{delete}(u)$ $R \gets\text{argsort}(D[u[0],F])[0:\lambda[u[0]]]$ $\Omega \gets \text{sum}(D[u[0],R])$ $q[0] \gets [u[0],\Omega,R,|\mathcal{U}|]$ $q \gets \text{sort}(q,1)$ **return** $\mathcal{S}$
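The listing above is a flattened rendering of the original pseudocode. As a complement, here is a minimal Python sketch of the same greedy selection loop; it omits the lazy (CELF-style) re-evaluation, which accelerates the search but does not change the selected seeds, and all names are illustrative assumptions.

```python
import numpy as np

def iminfector_greedy(D, lam, budget):
    """Hedged sketch of the seed selection: greedily pick the seed with the largest
    sigma' over the still-uninfected nodes, then mark its influence set as infected.

    D      : (I, n) diffusion-probability matrix (rows = candidate seeds)
    lam    : (I,)   expected spread sizes lambda_u
    budget : number of seeds to select
    """
    uninfected = np.ones(D.shape[1], dtype=bool)
    seeds = []
    for _ in range(min(budget, D.shape[0])):
        best, best_gain, best_set = None, -np.inf, None
        for s in range(D.shape[0]):
            if s in seeds:
                continue
            cols = np.flatnonzero(uninfected)
            top = cols[np.argsort(-D[s, cols])][:lam[s]]   # seed's current influence set
            gain = D[s, top].sum()                         # sigma'(s) on remaining nodes
            if gain > best_gain:
                best, best_gain, best_set = s, gain, top
        seeds.append(best)
        uninfected[best_set] = False                       # remove infected nodes from D
    return seeds
```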
To give a concrete example of how the algorithm works, let's imagine a simple network with 8 nodes, as depicted in Figure \[fig:spread1\], with the respective weights from each candidate seed $S$ to each susceptible node $N$. In the first step, $\lambda$ of a seed $S$ defines how many of $S$'s top susceptible nodes should be taken into account in the computation of $\sigma'$. In this case, we keep the top 3 for $S3$ and the top 2 for $S2$ and $S1$. The top 3 for $S3$ correspond to $\{N1,N3,N4\}$, as indicated by the green edges in the bipartite network, whose sum gives $\sigma'=0.9$. For $S2$, we could either take $\{N1,N2\}$ or $\{N1,N3\}$, both giving $\sigma'=0.6$, which is lower than $S3$, and the same applies for $S1$. Thus $S3$ is chosen as the seed for step 1 and the influenced nodes are removed.
In the next step, matrix $\mathbf{D}$ is updated to remove the influenced nodes and $\sigma'$ is recomputed. In this case, there are only two nodes left for each seed, so their influence spread is computed by the sum of all their edges, since their $\lambda=2$. An important point here is that while in the first step $S2$ had a stronger $\sigma'$ than $S1$, $S1$ is stronger now, due to the removal of $N1$, which was strongly susceptible to the influence of $S2$, but infected in the previous round by $S3$. Thus, the chosen seed in step 2 is $S1$.
[**Running time complexity.**]{} The first step of the algorithm is to compute the influence spread for all $I$ candidate seeds in $\mathbf{D}$, which takes $\mathcal{O}(I \cdot N\log N)$, because we sort the DPs of each candidate seed to take the top $\lambda$. The sorting of $q$, which takes $\mathcal{O}(I \log I)$ before the algorithm's iterations start, adds only a constant overhead to the total complexity. With that in mind, the worst case complexity is $\mathcal{O}(\ell \cdot I(I \log I) \cdot (N \log N) )$, similar to CELF, but with sorting on $N$ in every evaluation of the influence spread. In practice, in every iteration $N$ is diminished by an average of $\lambda$ because of the removed influence set, so the final logarithmic term is much smaller. Finally, due to the nature of CELF, much fewer influence spread evaluations than $I$ take place in reality (line 16 in the algorithm), which is why our algorithm is so fast in practice.
Datasets
--------
It is obvious that we cannot use networks from the IM literature, because we require the existence of ground-truth diffusion cascades. Although such open datasets are still quite rare, we managed to assemble three, two social and one bibliographical, to provide an estimate of performance for different sizes and information types. Table \[tab:fields\] gives an overview of the datasets.
                         *Digg*        *MAG*          *Sina Weibo*
  ---------------------- ------------- -------------- ---------------
  **Nodes**              279,631       1,436,158      1,170,689
  **Edges**              2,251,166     15,928,078     225,877,808
  **Cascades**           3,553         181,020        115,686
  **Avg Cascade size**   847           29             148

  : Summary of the datasets used.\[tab:fields\]
- *Digg*: A directed social media network where users follow each other and a vote on a post allows followers to see the post; a vote is thus treated as a retweet [@digg].
- *MAG Computer Science*: We follow suit from [@deepinf] and define a network of authors with undirected co-authorship edges, where a cascade happens when an author writes a paper and other authors cite it. In other words, a co-authorship is perceived as a friendship, a paper as a post, and a citation as a repost. In this case, we employ the Microsoft Academic Graph [@sinha2015overview] and filter it to keep only papers that belong to computer science. We remove cascades with length less than 10.
- *Sina Weibo*: A directed follower network where a cascade is defined by the first tweet and the list of retweets [@weibo]. We remove from the network nodes that do not appear in the cascades, i.e., nodes that are practically inactive, in order to make the comparison between structural and diffusion-based methods fairer, since the evaluation relies on unseen diffusions.
Baseline Methods
----------------
Most traditional IM algorithms, such as greedy [@kempe2003maximizing] and CELF [@celf], do not scale to the networks we have used for evaluation. Thus, we employ only the most scalable approaches:
- <span style="font-variant:small-caps;">K-Cores</span>: The top nodes in terms of their core number, as defined by the undirected $k$-core decomposition. This metric is extensively used for influencer identification and it is considered the most effective structural metric for this task [@malliaros2016locating; @core-vldb20].
- <span style="font-variant:small-caps;">AVG Cascade Size</span>: The top nodes based on the average size of their cascades in the train set. This is a straightforward ranking of the nodes that have proven effective in the past [@bakshy2011everyone].
- <span style="font-variant:small-caps;">SimPath</span>: A heuristic that capitalizes on the locality of influence pathways to reduce the cost of simulations in the influence spread [@simpath]. (The *threshold* is set to 0.01).
- <span style="font-variant:small-caps;">IMM</span>: An algorithm that derives reverse reachable sets to accelerate the computation of influence spread while retaining theoretical guarantees [@imm]. It is considered state-of-the-art in efficiency and effectiveness. (Parameter $\epsilon$ is set to 0.1).
- <span style="font-variant:small-caps;">Credit Distribution</span>: This model uses cascade logs *and* the edges of the network to assign influence credits and derive a seed set [@credit]. (Parameter $\lambda$ is set to 0.001).
- <span style="font-variant:small-caps;">IMINFECTOR</span>: Our proposed model based solely on cascades, with embedding size equal to 50, trained for 5 epochs with a learning rate of 0.1. The reduction percentage $P$ is set to 10 for *Weibo* and *MAG*, and 40 for *Digg*.
Comparing network-based methods such as <span style="font-variant:small-caps;">SimPath</span> and <span style="font-variant:small-caps;">IMM</span> with methods that use both the network and cascades is not fair, as the latter capitalize on more information. To make the comparison equitable, each network-based method is coupled with two influence learning (IL) methods that provide the IM methods with influence weights on the edges, using the diffusion cascades:
- <span style="font-variant:small-caps;">Data-based (DB)</span>: Given that the follow edge $u\rightarrow v$ exists in the network, the edge probability is set to $A_{u \rightarrow v}/{A_u}$, i.e., the number of times $v$ has copied (e.g., retweeted) $u$, relative to the total activity (e.g., number of posts) of $u$ [@goyal2010learning]. Any edge with zero probability is removed, and the edges are normalized such that the sum of weighted out-degree for each node is 1.
- <span style="font-variant:small-caps;">Inf2vec</span>: A shallow neural network performing IL based on the co-occurence of nodes in diffusion cascades and the underlying network [@feng2018inf2vec]. It was proven more effective than similar IL methods [@bourigault2016representation; @saito2009learning] in the tasks of next node prediction and diffusion simulation.
Evaluation Methodology
----------------------
We split each dataset into train and test cascades based on their time of occurrence. The methods utilize the train cascades and/or the underlying network to define a seed set. The train cascades amount to the first 80% of the whole set and the rest is left for testing. The evaluation is twofold: computational time and seed set quality, similar to previous literature in influence maximization. We evaluate the quality of the predicted seed set using the number of *Distinct Nodes Influenced* (DNI) [@complex2018; @continest; @panagopoulos2018diffugreedy], which is the combined set of nodes that appear in the test cascades initiated by each one of the chosen seeds. To be specific, each predicted seed in the seed set has started some cascades in the test set. We compute the set of all infected nodes (DNI) by simply adding every node appearing in these test cascades to a unified set. The size of this set is a measure of how successful that seed set was on the test set. To understand the choice of DNI, we need to take into consideration that each seed can create several test cascades. For example, another metric we could use is the sum of the average cascade length of each seed's test cascades [@dynadiffuse]. With this approach, though, we fail to account for the overlap between different seeds' spreads, and hence we can end up with influential seeds whose influence spreads overlap heavily, resulting in an eventually smaller number of infected nodes. As mentioned above, DNI maintains a set of all distinct nodes participating in cascades started by each of the seeds, hence tackling the potential overlap issue. Overall, the most significant advantage over other standard IM evaluations is that it relies on actual spreading data instead of simulations. Although not devoid of assumptions, it is the most objective measure of a seed set's quality, given the existence of empirical evidence.
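A minimal sketch of the DNI computation, assuming each test cascade is stored as a pair (initiator, list of infected nodes); this representation is an assumption made for the example.

```python
def distinct_nodes_influenced(seed_set, test_cascades):
    """DNI: size of the union of nodes appearing in test cascades started by the seeds.

    seed_set      : set of predicted seed nodes
    test_cascades : iterable of (initiator, infected_nodes) pairs
    """
    infected = set()
    for initiator, nodes in test_cascades:
        if initiator in seed_set:
            infected.update(nodes)
    return len(infected)
```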
{width="\textwidth"}
{width="100.00000%"}
Since our datasets differ significantly in terms of size, we have to use a different seed set size for each one. For *MAG*, which has 205,839 initiators in the train set, we use a seed set of 10,000; *Weibo*, with 26,158 initiators, is tested on 1,000; and *Digg*, with 537, has a seed set size of 50. This modification is crucial to the objective evaluation of the methods because small seed sets favor simpler methods. For example, the top 100 authors based on <span style="font-variant:small-caps;">K-cores</span> in *MAG* would unavoidably work well because they are immensely successful. However, increasing the seed set size allows the effect of influence overlap to take place, and eventually, simplistic methods fall short. The experiments were run on a machine with an Intel(R) Xeon(R) W-2145 CPU @ 3.70GHz, 252GB RAM, and an Nvidia Titan V. Any method that required more memory or more than seven days to run is deemed unable to scale.
Results
-------
Figure \[fig:times\] shows the computational time of the examined methods, separated based on different parts of the model, and Figure \[fig:spreading\] shows the estimated quality of each seed set. In terms of quality, we can see that <span style="font-variant:small-caps;">IMINFECTOR</span> surpasses the benchmarks in two of the three datasets. In *Digg*, <span style="font-variant:small-caps;">Credit Distribution</span> performs better but is almost 10 times slower, because of the average cascade size in the dataset. All methods could scale in *MAG* except for <span style="font-variant:small-caps;">SimPath</span> with <span style="font-variant:small-caps;">Inf2vec</span> weights, because in contrast to DB, <span style="font-variant:small-caps;">Inf2vec</span> retains all the edges of the original *MAG* network, for which <span style="font-variant:small-caps;">SimPath</span> takes more than one week to run. Moreover, <span style="font-variant:small-caps;">SimPath</span> with both types of weights is not able to scale in the *Weibo* dataset, due to the number of edges. <span style="font-variant:small-caps;">Credit Distribution</span> also fails to scale in *Weibo* due to the network's and cascades' size. <span style="font-variant:small-caps;">IMM</span> with <span style="font-variant:small-caps;">Inf2vec</span> cannot scale due to the high demand in memory. It can still scale with DB weights (close to 1M nodes), but it performs poorly. <span style="font-variant:small-caps;">IMM</span>'s low performance comes in contrast with its known accuracy. This disparity highlights the immense difference between data-driven means of evaluation and the traditional simulations of diffusion models, which have been used in the literature due to the absence of empirical data. <span style="font-variant:small-caps;">IMM</span> optimizes diffusion simulations as part of its solution, so it is reasonable for it to outperform other methods in this type of evaluation. However, it performs poorly on a metric based on real influence traces. This can be attributed largely to the aforementioned problems of diffusion models and their miscalculation of the influence spread.
In terms of IL methods, DB outperformed <span style="font-variant:small-caps;">Inf2vec</span> because it removes edges with no presence in the cascades, diminishing significantly the size of the networks, some even close to 90%. However, it also has a severe shortcoming, most notable in the social networks. Due to the general lack of consistent reposters and the huge amount of posts, the total activity of a node (the denominator in the edge weight) was much larger than the number of reposts by a follower (the numerator). Hence, the edge weights were too small, with insignificant differences between them, hampering the computation of the simulated influence spread. This effect is not so strong in the bibliographical network, because authors tend to cite the same popular authors more consistently. A side observation is the heavy computational burden of <span style="font-variant:small-caps;">Inf2vec</span>. It accounts for most of the computational time (blue color), which justifies the context creation mechanism of INFECTOR, while the accuracy validates our hypothesis that influencers' success lies more in the cascades they initiate than in their reposts.
In general, we see that <span style="font-variant:small-caps;">IMINFECTOR</span> provides a fair balance between computational efficiency and accuracy. Most importantly, it exhibits this performance using *only the cascades*, while the rest of the baselines use both the network and the cascades. Being unaffected by the network size, <span style="font-variant:small-caps;">IMINFECTOR</span> scales with the average cascade size and the number of cascades, which makes it suitable for real-world applications where the networks are too big and scalability is of utmost importance.
In this paper, we have proposed <span style="font-variant:small-caps;">IMINFECTOR</span>, a model-independent method to perform influence maximization using representations learned from diffusion cascades. The machine learning part learns pairs of representations from diffusion cascades, while the algorithmic part uses the representations' norms and combinations to extract a seed set. The algorithm outperformed several methods under a data-driven evaluation on three large-scale datasets.
A potential future direction is to examine deeper and more complex architectures for INFECTOR to capture more intricate relationships. From the algorithmic part, we can experiment with bipartite matching algorithms, given sufficient resources. Overall, the main purpose of this work is primarily to examine the application of representation learning in the problem of influence maximization and secondarily to highlight the importance of data-driven evaluation (e.g., DNI). In addition, we want to underline the strength of model-independent methods and evaluations, given the recent findings on the severe shortcomings of diffusion models. We hope this will pave the way for more studies to tackle influence maximization with machine learning means.
Acknowledgment {#acknowledgment .unnumbered}
==============
[^1]: Manuscript received XXX; revised XXX.
[^2]: https://github.com/GiorgosPanagopoulos/IMINFECTOR
|
---
abstract: |
[**Abstract.**]{} The aim of this work is to enumerate alternating sign matrices (ASM) that are quasi-invariant under a quarter-turn. The enumeration formula (conjectured by Duchon) involves, as a product of three terms, the number of unrestricted ASM’s and the number of half-turn symmetric ASM’s.
[**Résumé.**]{} L’objet de ce travail est d’énumérer les matrices à signes alternants (ASM) quasi-invariantes par rotation d’un quart-de-tour. La formule d’énumération, conjecturée par Duchon, fait apparaître trois facteurs, comprenant le nombre d’ASM quelconques et le nombre d’ASM invariantes par demi-tour.
address: |
LaBRI, Université Bordeaux 1, CNRS\
351 cours de la Libération, 33405 Talence cedex, FRANCE
author:
- 'Jean-Christophe Aval, Philippe Duchon[^1]'
title: 'Enumeration of alternating sign matrices of even size (quasi)-invariant under a quarter-turn rotation'
---
Introduction {#sec:intro}
============
An [*alternating sign matrix*]{} is a square matrix with entries in $\{-1,0,1\}$ and such that in any row and column: the non-zero entries alternate in sign, and their sum is equal to $1$. Their enumeration formula was conjectured by Mills, Robbins and Rumsey [@MRR], and proved by Zeilberger [@zeil], and almost simultaneously by Kuperberg [@kup1]. Kuperberg used a bijection between the ASM’s and the states of a statistical square-ice model, for which he studied and computed the partition function. He also used this method in [@kup] to obtain many enumeration or equinumeration results for various symmetry classes of ASM’s, most of them having been conjectured by Robbins [@robbins]. Among these results can be found the following remarkable one.
\[theo:kup\] [*(Kuperberg).*]{} The number $A_{\textsc{QT}}(4N)$ of ASM’s of size $4N$ invariant under a quarter-turn (QTASM’s) is related to the number $A(N)$ of (unrestricted) ASM’s of size $N$ and to the number $A_{\textsc{HT}}(2N)$ of ASM’s of size $2N$ invariant under a half-turn (HTASM’s) by the formula: $$\label{eq:kup}
A_{\textsc{QT}}(4N)=A_{\textsc{HT}}(2N) A(N)^2.$$
More recently, Razumov and Stroganov [@RS] applied Kuperberg's strategy to settle the following result relative to QTASM's of odd size, also conjectured by Robbins [@robbins].
\[theo:RS\] [*(Razumov, Stroganov).*]{} The numbers of QTASM’s of odd size are given by the following formulas, where $A_{\textsc{HT}}(2N+1)$ is the number of HTASM’s of size $2N+1$: $$\begin{aligned}
A_{\textsc{QT}}(4N-1) & = & A_{\textsc{HT}}(2N-1) A(N)^2\label{eq:QT_m1}\\
A_{\textsc{QT}}(4N+1) & = & A_{\textsc{HT}}(2N+1) A(N)^2\label{eq:QT_p1}.
\end{aligned}$$
It is easy to observe (and will be proved in Section \[sec:qQTASM\]) that the set of QTASM's of size $4N+2$ is empty. But, by slightly relaxing the symmetry condition at the center of the matrix, Duchon introduced in [@duchon] the notion of ASM's quasi-invariant under a quarter-turn (the definition will be given in Section \[sec:qQTASM\]), a class which is non-empty in size $4N+2$. Moreover, he conjectured for these qQTASM's an enumeration formula that perfectly completes the three previous enumeration results on QTASM's. It is the aim of this paper to establish this formula.
\[theo:form\] The number $A_{\textsc{QT}}(4N+2)$ of qQTASM of size $4N+2$ is given by: $$\label{eq:form}
A_{\textsc{QT}}(4N+2)=A_{\textsc{HT}}(2N+1) A(N)A(N+1).$$
This paper is organized as follows: in Section \[sec:qQTASM\], we define qQTASM's; in Section \[sec:icemodel\], we recall the definitions of square ice models, specify the parameters and the partition functions that we shall study, and give the formula corresponding to equation [(\[eq:form\])]{} at the level of partition functions; Section \[sec:proofs\] is devoted to the proofs.
ASM’s quasi-invariant under a quarter-turn {#sec:qQTASM}
==========================================
The class of ASM’s invariant under a rotation by a quarter-turn (QTASM) is non-empty in size $4N-1$, $4N$, and $4N+1$. But this is not the case in size $4N+2$.
There is no QTASM of size $4N+2$.
Let us suppose that $M$ is a QTASM of even size $2L$. Now we use the fact that the size of an ASM is given by the sum of its entries, and the symmetry of $M$ to write: $$2L=\sum_{1\le i,j\le2L}M_{i,j}=4\times\sum_{1\le i,j\le L}M_{i,j}$$ which implies that the size of $M$ has to be a multiple of $4$.
Duchon introduced in [@duchon] a notion of ASM's quasi-invariant under a quarter-turn, by slightly relaxing the symmetry condition at the center of the matrix. The definition is simpler when considering the height matrix associated to the ASM, but can also be given directly.
An ASM $M$ of size $4N+2$ is said to be [*quasi-invariant under a quarter-turn*]{} (qQTASM) if its entries satisfy the quarter-turn symmetry $$M_{j,4N+2-i+1}=M_{i,j}$$ except for the four central entries $(M_{2N+1,2N+1},M_{2N+1,2N+2},M_{2N+2,2N+1},M_{2N+2,2N+2})$ that have to be either $(0,-1,-1,0)$ or $(1,0,0,1)$.
We give below two examples of qQTASM’s of size $6$, with the two possible patterns at the center. $$\left(
\begin{array}{cccccc}
0&0&0&1&0&0\cr
0&0&1&0&0&0\cr
1&0&0&-1&1&0\cr
0&1&-1&0&0&1\cr
0&0&0&1&0&0\cr
0&0&1&0&0&0\cr
\end{array}
\right)
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\left(
\begin{array}{cccccc}
0&0&1&0&0&0\cr
0&1&-1&0&1&0\cr
0&0&1&0&-1&1\cr
1&-1&0&1&0&0\cr
0&1&0&-1&1&0\cr
0&0&0&1&0&0\cr
\end{array}
\right)$$
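To make the definition concrete, the following Python sketch checks the quarter-turn relation outside the central $2\times 2$ block and the two admissible central patterns (0-based indexing is used); running it on the first example above should succeed. The function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def is_qqtasm(M):
    """Check the qQTASM condition for a matrix of size 4N+2 (0-based indexing)."""
    n = M.shape[0]              # n = 4N + 2
    c = n // 2                  # central block occupies rows/cols c-1 and c
    # quarter-turn relation M[i, j] == M[j, n-1-i] away from the central block
    for i in range(n):
        for j in range(n):
            if i in (c - 1, c) and j in (c - 1, c):
                continue
            if M[i, j] != M[j, n - 1 - i]:
                return False
    center = (M[c-1, c-1], M[c-1, c], M[c, c-1], M[c, c])
    return center in [(0, -1, -1, 0), (1, 0, 0, 1)]

example = np.array([[0, 0, 0, 1, 0, 0],
                    [0, 0, 1, 0, 0, 0],
                    [1, 0, 0,-1, 1, 0],
                    [0, 1,-1, 0, 0, 1],
                    [0, 0, 0, 1, 0, 0],
                    [0, 0, 1, 0, 0, 0]])
assert is_qqtasm(example)
```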
In the next section, we associate square ice models to ASM’s with various types of symmetry.
Square ice models and partition functions {#sec:icemodel}
=========================================
Notations
---------
Using Kuperberg’s method we introduce square ice models associated to ASM’s, HTASM’s and (q)QTASM’s. We recall here the main definitions and refer to [@kup] for details and many examples.
Let $a\in\mathbb{C}$ be a global parameter. For any complex number $x$ different from zero, we denote $\overline{x}=1/x$, and we define: $$\sigma(x) = x - \overline{x}.$$
If $G$ is a tetravalent graph, an [*ice state*]{} of $G$ is an orientation of the edges such that every tetravalent vertex has exactly two incoming and two outgoing edges.
A parameter $x\neq 0$ is assigned to each tetravalent vertex of the graph $G$. This vertex then gets a weight, which depends on its orientation, as shown in Figure \[fig:poids\_6V\].
(Figure \[fig:poids\_6V\]: the six admissible orientations of a tetravalent vertex with parameter $x$, with weights $\sigma(a^2)$, $\sigma(a^2)$, $\sigma(ax)$, $\sigma(ax)$, $\sigma(a\overline{x})$, $\sigma(a\overline{x})$ and corresponding ASM entries $1$, $-1$, $0$, $0$, $0$, $0$.)
It is sometimes easier to assign parameters, not to each vertex of the graph, but to the lines that compose the graph. In this case, the weight of a vertex is defined as:
$$\psset{unit=.5cm}
\begin{pspicture}[.45](2,2)
\psline(1,0)(1,2)\psline(0,1)(2,1)
\rput[r](-.2,1){$x$}\rput[t](1,-.2){$y$}
\end{pspicture}\ =\
\begin{pspicture}[.45](2,2)
\psline(0,1)(2,1)\psline(1,0)(1,2)
\rput(.25,.25){$x\overline{y}$}
\end{pspicture}$$
When this convention is used, a parameter explicitly written at a vertex replaces the quotient of the parameters of the lines.
We will put a dotted line to mean that the parameter of a line is different on the two sides of the dotted line. We will also use divalent vertices, and in this case the two edges have to be both in, or both out, and the corresponding weight is $1$: $$\psset{unit=.5cm}
\begin{pspicture}[.5](2,1)
\psline(0,.5)(2,.5)\rput(1,.5){\psdot[dotsize=.3]}
\end{pspicture} =
\begin{pspicture}[.5](5,1)
\rput(0,.5){\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}\rput(2,.5){\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}\rput[t](1,0){$1$}
\rput(4,.5){\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}\rput(4,.5){\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}\rput[t](4,0){$1$}
\rput(1,.5){\psdot[dotsize=.2]}
\rput(4,.5){\psdot[dotsize=.2]}
\end{pspicture}$$
The partition function of a given ice model is then defined as the sum over all its states of the product of the weights of the vertices.
To simplify notations, we will denote by $X_{N}$ the vector of variables $(x_1,\dots, x_N)$. We use the notation $X\backslash x$ to denote the vector $X$ without the variable $x$.
Partition functions for classes of ASM’s
----------------------------------------
We give in Figures \[fig:Z\], \[fig:ZHT\], and \[fig:ZQT\] the ice models corresponding to the classes of ASM’s that we shall study, and their partition functions. The bijection between (unrestricted) ASM’s and states of the square ice model with “domain wall boundary” is now well-known ([@kup]), and the bijections for the other symmetry classes may be easily checked in the same way. The correspondence between orientations of the ice model and entries of ASM’s is given in Figure \[fig:poids\_6V\].
$$Z(N;x_1,\dots, x_{N}, x_{N+1},\dots, x_{2N}) =
\psset{unit=.5cm}
\begin{pspicture}[.5](7,7)
\rput(1,0){{\rput(1,1){{\setcounter{ISL}{6}\addtocounter{ISL}{-1}\setcounter{ISH}{6}\addtocounter{ISH}{-1}\multido{\i=0+1}{6}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{6}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{6}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{6}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(6,1){{\multido{\i=0+1}{6}{\rput(1,\i){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}}}}\rput(1,6){{\multido{\i=0+1}{6}{\rput(\i,0){{\psline{->}(0,0)(0,.75)\psline(0,0)(0,1)}}}}}}}
\rput[r](1,1){$x_{1}$}\rput[r](1,2){$x_{2}$}\rput[r](1,6){$x_{N}$}
\rput[t](2,-.1){$x_{N+1}$}\rput[t](3,-.1){}\rput[t](7,-.1){$x_{2N}$}
\end{pspicture}$$
$$\begin{aligned}
Z_{\textsc{HT}}(2N; x_1,\dots,x_{N-1},x_{N},\dots,x_{2N-1},x,y)& =&
\psset{unit=.4cm}
\begin{pspicture}[.5](9,9)
\rput(1,0){{\setcounter{ISdoublesize}{4}\addtocounter{ISdoublesize}{4}\rput(1,1){{\setcounter{ISL}{4}\addtocounter{ISL}{-1}\setcounter{ISH}{\theISdoublesize}\addtocounter{ISH}{-1}\multido{\i=0+1}{4}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theISdoublesize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{\theISdoublesize}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(1,\theISdoublesize){{\multido{\i=0+1}{4}{\rput(\i,0){{\psline{->}(0,0)(0,.75)\psline(0,0)(0,1)}}}}}\rput(4,4){\rput(0,.5){{\degrees[360]\multido{\n=.5+1.0}{4}{\rput(0,0){\psarc(0,0){\n}{-90}{90}}}}}}\rput(4,4){\psline[linestyle=dotted](.25,.5)(.75,.5)}}}
\rput[r](1,1){$x_{1}$}\rput[r](1,3){$x_{N-1}$}\rput[r](1,5){$y$}
\rput[r](1,4){$x$}
\rput[t](2,-.1){$x_{N}$}\rput[t](3,-.1){}\rput[t](5,-.1){$x_{2N-1}$}
\end{pspicture}\\
\psset{unit=.4cm}
\begin{pspicture}[.5](10,10)
\rput(1,0){{\setcounter{ISdoublesize}{4}\addtocounter{ISdoublesize}{4}\addtocounter{ISdoublesize}{1}\rput(1,1){{\setcounter{ISL}{4}\addtocounter{ISL}{-1}\setcounter{ISH}{\theISdoublesize}\addtocounter{ISH}{-1}\multido{\i=0+1}{4}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theISdoublesize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{\theISdoublesize}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(1,\theISdoublesize){{\multido{\i=0+1}{4}{\rput(\i,0){{\psline{->}(0,0)(0,.75)\psline(0,0)(0,1)}}}}}\rput(4,0){\psline(1,1)(1,4)\rput(1,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}\psarc(4,4){1}{0}{90}\rput(4,4){{\degrees[360]\multido{\n=1+1}{4}{\rput(1,1){\psarc(0,0){\n}{-90}{90}}}}}\rput(4,1){{\multido{\i=0+1}{4}{\rput(0,\i){\psline(0,0)(1,0)}}}}\rput(4,4){\rput(0,2){{\multido{\i=0+1}{4}{\rput(0,\i){\psline(0,0)(1,0)}}}}}\rput(4,4){\psline[linestyle=dotted](.25,.25)(1.25,1.25)}}}
\rput[r](1,1){$x_{1}$}\rput[r](1,2){$x_{2}$}\rput[r](1,4){$x_{N}$}
\rput[r](1,5){$x$}
\rput[t](2,-.1){$x_{N+1}$}\rput[t](3,-.1){}\rput[t](5,-.1){$x_{2N}$}
\rput[t](6,-.1){$y$}
\end{pspicture} & = &
Z_{\textsc{HT}}(2N+1;x_1,\dots,x_{N},x_{N+1},\dots,x_{2N},x,y)
\end{aligned}$$
$$\begin{aligned}
Z_{\textsc{QT}}(4N;x_1,\dots,x_{2N-1},x,y) & =\ \ \ \ &
\psset{unit=.4cm}
\begin{pspicture}[.4](11,11)
\rput(1,0){{\setcounter{QTsize}{6}\addtocounter{QTsize}{-1}\rput(1,1){{\setcounter{ISL}{\theQTsize}\addtocounter{ISL}{-1}\setcounter{ISH}{\theQTsize}\addtocounter{ISH}{-1}\multido{\i=0+1}{\theQTsize}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theQTsize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{6}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{6}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(\theQTsize,1){{\multido{\i=0+1}{\theQTsize}{\rput(0,\i){\psline(0,0)(1,0)}}}}\rput(1,\theQTsize){{\multido{\i=0+1}{\theQTsize}{\rput(\i,0){\psline(0,0)(0,1)}}}}\rput(\theQTsize,\theQTsize){\psarc(0,0){1}{0}{90}}\psline(1,6)(\theQTsize,6)\psline(6,1)(6,\theQTsize)\rput(6,6){{\degrees[360]\multido{\n=1+1}{\theQTsize}{\rput(0,0){\psarc(0,0){\n}{-90}{180}}}}}\rput(6,6){\SpecialCoor\multido{\i=1+1}{\theQTsize}{\rput(\i;45){\psdots[dotstyle=*](0,0)}}}\rput(\theQTsize,\theQTsize){\SpecialCoor\psdots[dotstyle=*](1;45)\psline[linestyle=dotted](.5;45)(1.5;45)}}}
\rput[r](1,1){$x_1$}\rput[r](1,2){$x_2$}\rput[r](1,5){$x_{2N-1}$}
\rput[r](1,6){$x$}\rput[t](7,-.1){$y$}
\end{pspicture}\\
\psset{unit=.4cm}
\begin{pspicture}[.4](12,10)
\rput(0,0){{\setcounter{QTsize}{7}\addtocounter{QTsize}{-1}\rput(1,1){{\setcounter{ISL}{\theQTsize}\addtocounter{ISL}{-1}\setcounter{ISH}{\theQTsize}\addtocounter{ISH}{-1}\multido{\i=0+1}{\theQTsize}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theQTsize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{7}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{7}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(\theQTsize,1){{\multido{\i=0+1}{\theQTsize}{\rput(0,\i){\psline(0,0)(1,0)}}}}\rput(1,\theQTsize){{\multido{\i=0+1}{\theQTsize}{\rput(\i,0){\psline(0,0)(0,1)}}}}\rput(\theQTsize,\theQTsize){\psarc(0,0){1}{0}{90}}\psline(1,7)(\theQTsize,7)\psline(7,1)(7,\theQTsize)\rput(7,7){{\degrees[360]\multido{\n=1+1}{\theQTsize}{\rput(0,0){\psarc(0,0){\n}{-90}{180}}}}}\rput(7,7){\SpecialCoor\multido{\i=1+1}{\theQTsize}{\rput(\i;45){\psdots[dotstyle=*](0,0)}}}\rput(\theQTsize,\theQTsize){\SpecialCoor\psline[linestyle=dotted](.5;45)(1.5;45)}}}
\rput[r](0,1){$x_1$}\rput[r](0,2){$x_2$}\rput[r](0,6){$x_{2N}$}
\rput[r](0,7){$x$}\rput[t](7,-.1){$y$}
\end{pspicture}
& = & Z_{\textsc{QT}}(4N+2; x_1,\dots, x_{2N},x,y)\end{aligned}$$
With these notations, Theorem \[theo:form\] will be a consequence of the following one, which deals with the corresponding partition functions.
\[theo:main\] When $a=\omega_6=\exp(i\pi/3)$, one has for $N\ge 1$: $$\label{eq:main1}
Z_{\textsc{QT}}(4N;X_{2N-1},x,y)=\ss Z_{\textsc{HT}}(2N;X_{2N-1},x,y)Z(N;X_{2N-1},x)Z(N;X_{2N-1},y)$$ and $$\label{eq:main2}
Z_{\textsc{QT}}(4N+2;X_{2N},x,y)=\ss Z_{\textsc{HT}}(2N+1;X_{2N},x,y)Z(N;X_{2N})Z(N+1;X_{2N},x,y).$$
Equation [(\[eq:main2\])]{} is new; equation [(\[eq:main1\])]{} is due to Kuperberg [@kup] for the case $x=y$. To see that Theorem \[theo:main\] implies Theorem \[theo:form\] (and Theorem \[theo:kup\]), we just have to observe that when $a=\omega_6$ and all the variables are set to $1$, then the weight at each vertex is $\sigma(a)=\sigma(a^2)$, thus the partition function reduces (up to multiplication by $\sigma(a)^{\rm number\ of\ vertices}$) to the number of states.
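For completeness, this equality of weights at the symmetric point can be checked directly: $$\sigma(\omega_6)=\omega_6-\overline{\omega_6}=2i\sin(\pi/3)=i\sqrt{3}
\qquad\text{and}\qquad
\sigma(\omega_6^2)=\omega_6^2-\overline{\omega_6^2}=2i\sin(2\pi/3)=i\sqrt{3},$$ so that, at $a=\omega_6$ and $x_i=x=y=1$, the weights $\sigma(a^2)$, $\sigma(ax)$ and $\sigma(a\overline{x})$ all coincide.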
Proofs {#sec:proofs}
======
In this extended abstract, we shall only give the main ideas of the proofs. Most of them are greatly inspired by [@kup]. To prove Theorem \[theo:main\], the method is to identify both sides of equations [(\[eq:main1\])]{} and [(\[eq:main2\])]{} as Laurent polynomials, and to produce as many specializations of the variables verifying the equalities as are needed to imply these equations in full generality.
Laurent polynomials
-------------------
Since the weight of any vertex is a Laurent polynomial in the variables $x_i$, $x$ and $y$, the partition functions are Laurent polynomials in these variables. Moreover, they are centered Laurent polynomials: their lowest degree is the negative of their highest degree (called the half-width of the polynomial). In order to halve the number of non-zero coefficients (hence the number of required specializations) in $x$, we shall deal with Laurent polynomials of given parity in this variable. To do so, we group together the states with a given orientation (indicated as subscripts in the following notations) at the edge where the parameters $x$ and $y$ meet.
So let us consider the partition functions ${Z_{\textsc{QT}}^{{\begin{pspicture}(0.2,0.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(0,.2)(.2,.2)
\psline{->}(.2,0)(.2,.2)
\end{pspicture}}}}(4N;X_{2N-1},x,y)$ and ${Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(.2,.2)(.2,0)
\psline{->}(.2,.2)(0,.2)
\end{pspicture}}}}(4N;X_{2N-1},x,y)$, respectively odd and even parts of $Z_{\textsc{QT}}(4N;X_{2N-1},x,y)$ in $x$; ${Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}}}(4N+2;X_{2N},x,y)$ and ${Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}}(4N+2;X_{2N},x,y)$, respectively odd and even parts of $Z_{\textsc{QT}}(4N+2;X_{2N},x,y)$in $x$; ${Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,0)(.1,0)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,.2)(0,.2)\end{pspicture}}}}(2N;X_{2N-1},x,y)$ and ${Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,.2)(.1,.2)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,0)(0,0)\end{pspicture}}}}(2N;X_{2N-1},x,y)$, respectively parts with the parity of $N$ and of $N-1$ of $Z_{\textsc{HT}}(2N;X_{2N-1},x,y)$ in $x$; and ${Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}}}(2N+1;X_{2N},x,y)$ and ${Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}}(2N+1;X_{2N},x,y)$, respectively parts with the parity of $N-1$ and of $N$ of $Z_{\textsc{HT}}(2N+1;X_{2N},x,y)$ in $x$.
With these notations, the equations [(\[eq:main1\])]{} and [(\[eq:main2\])]{} are equivalent to the following: $$\begin{aligned}
\!\!\!\!\!\! \sigma(a){Z_{\textsc{QT}}^{{\begin{pspicture}(0.2,0.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(0,.2)(.2,.2)
\psline{->}(.2,0)(.2,.2)
\end{pspicture}}}}(4N;X_{2N-1},x,y) &\!\! =\!\! &
{Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,0)(.1,0)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,.2)(0,.2)\end{pspicture}}}}(2N;X_{2N-1},x,y) Z(N;X_{2N-1},x)
Z(N;X_{2N-1},y), \label{eq:main1.1}\\
\!\!\!\!\!\! \sigma(a){Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(.2,.2)(.2,0)
\psline{->}(.2,.2)(0,.2)
\end{pspicture}}}}(4N;X_{2N-1},x,y) &\!\! =\!\! &
{Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,.2)(.1,.2)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,0)(0,0)\end{pspicture}}}}(2N;X_{2N-1},x,y) Z(N;X_{2N-1},x)
Z(N;X_{2N-1},y), \label{eq:main1.2}\\
\!\!\!\!\!\! \sigma(a){Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}}(4N+2;X_{2N},x,y) &\!\! =\!\! &
{Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}}(2N+1;X_{2N},x,y)
Z(N+1;X_{2N},x,y) Z(N;X_{2N}), \label{eq:main2.1}\\
\!\!\!\!\!\! \sigma(a){Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}}}(4N+2;X_{2N},x,y) &\!\! =\!\! &
{Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}}}(2N+1;X_{2N},x,y)
Z(N+1;X_{2N},x,y) Z(N;X_{2N}). \label{eq:main2.2}\end{aligned}$$
\[lemm:Laurent\] Both the left-hand side and the right-hand side of equations (\[eq:main1.1\]-\[eq:main2.2\]) are centered Laurent polynomials in the variable $x$, odd or even, of respective half-widths $2N-1$, $2N-2$, $2N$, and $2N-1$. Thus, to prove each of these identities, we have to exhibit specializations of $x$ for which the equality is true, in number strictly exceeding the half-width.
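For completeness, here is the standard counting behind the last sentence. If $P(x)=\sum_{k=-w}^{w}c_k x^k$ is a centered Laurent polynomial of half-width $w$ with $P(-x)=\pm P(x)$, then $$x^{w}P(x)=x^{\epsilon}\,Q(x^{2}),\qquad \epsilon\in\{0,1\},\quad \deg Q\le w,$$ so a non-zero such $P$ vanishes at no more than $w$ non-zero values of $x$ with pairwise distinct squares. Applying this to the difference of the two sides of each identity yields the claim.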
To compute the half-width of these partition functions, just count the number of vertices in the ice models, and take note that non-zero entries of the ASM (the first two orientations of Figure \[fig:poids\_6V\]) give constant weight $\sigma(a^2)$. Also, a line whose orientation changes between endpoints must have an odd (hence non-zero) number of these $\pm 1$ entries.
Symmetries
----------
To produce many specializations from one, we shall use symmetry properties of the partition functions. The crucial tool to prove them is the Yang-Baxter equation, which we recall below.
\[lemm:YB\][*\[Yang-Baxter equation\]*]{} If $xyz=\overline{a}$, then $$\begin{pspicture}[.5](2,2)
\SpecialCoor
\rput(1,1){
\psarc(-1.732,0){2}{330}{30}
\psarc(.866,1.5){2}{210}{270}
\psarc(.866,-1.5){2}{90}{150}
\rput(1;60){$x$}
\rput(1;180){$y$}
\rput(1;300){$z$}
}
\end{pspicture}
=
\begin{pspicture}[.5](2,2)
\rput(1,1){
\SpecialCoor
\psarc(1.732,0){2}{150}{210}
\psarc(-.866,1.5){2}{270}{330}
\psarc(-.866,-1.5){2}{30}{90}
\rput(1;240){$x$}
\rput(1;0){$y$}
\rput(1;120){$z$}
}
\end{pspicture}.
\label{eq:YB}$$
The following lemma gives a (now classical) example of use of the Yang-Baxter equation.
\[lemm:echange\_lignes\] $$\psset{unit=.5cm}
\begin{pspicture}[.5](5.5,2)
\rput[r](1,.5){$x$}\rput[r](1,1.5){$y$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(3.75,1){$\dots$}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture} =
\begin{pspicture}[.5](5.5,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(3.75,1){$\dots$}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture}.
\label{eq:symetrie_Z}$$
We multiply the left-hand side by $\sigma(a\overline{z})$, with $z=\overline{a}x\overline{y}$. We get $$\begin{aligned}
\psset{unit=.5cm}
\sigma(a\overline{z})
\begin{pspicture}[.5](5.5,2)
\rput[r](1,.5){$x$}\rput[r](1,1.5){$y$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(3.75,1){$\dots$}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture} & = & \psset{unit=.5cm}
\begin{pspicture}[.5](6.5,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(2,.5){{\psbezier(0,0)(.4,0)(.6,1)(1,1)\psbezier(0,1)(.4,1)(.6,0)(1,0)}}\rput[r](2.4,1){$z$}
\rput(1,0){
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(3.75,1){$\dots$}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}}
\end{pspicture} \\
& = &
\psset{unit=.5cm}
\begin{pspicture}[.5](5.5,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)
\rput(2,.5){{\psbezier(0,0)(.4,0)(.6,1)(1,1)\psbezier(0,1)(.4,1)(.6,0)(1,0)}}\rput[r](2.4,1){$z$}
\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(3,.5)(3.5,.5)\psline(3,1.5)(3.5,1.5)
\rput(3.75,1){$\dots$}
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture} \\
& = &
\psset{unit=.5cm}
\begin{pspicture}[.5](6.5,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)
\psline(3,0)(3,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\rput(3.75,1){$\dots$}
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(4.5,.5){{\psbezier(0,0)(.4,0)(.6,1)(1,1)\psbezier(0,1)(.4,1)(.6,0)(1,0)}}\rput[r](4.9,1){$z$}
\psline(5.5,0)(5.5,2)
\rput(6.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(6.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture} \\
& = &
\psset{unit=.5cm}
\begin{pspicture}[.5](6.5,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(3.75,1){$\dots$}
\rput(4.5,.5){{\psbezier(0,0)(.4,0)(.6,1)(1,1)\psbezier(0,1)(.4,1)(.6,0)(1,0)}}\rput[l](5.1,1){$z$}
\rput(6.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(6.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture} \\
& = &
\psset{unit=.5cm}
\begin{pspicture}[.5](5.5,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(3.75,1){$\dots$}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture} \sigma(a\overline{z})
\end{aligned}$$
The same method, together with the easy transformation $$\psset{unit=.5cm}
\begin{pspicture}[.45](2.5,1)
{\psbezier(0,0)(.4,0)(.6,1)(1,1)\psbezier(0,1)(.4,1)(.6,0)(1,0)}\psarc(1,.5){.5}{270}{90}
\rput[r](.3,.5){$z$}
\end{pspicture} = \left(\sigma(az)+\sigma(a^2)\right)
\left(
\begin{pspicture}[.45](1,1)
\rput(0,1){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,0){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture}\ +\
\begin{pspicture}[.45](1,1)
\rput(0,0){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\end{pspicture}\right)
\label{eq:boucle}$$ gives the following lemma.
\[lemm:echange\_boucle\] $$\begin{aligned}
\psset{unit=.5cm}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$x$}\rput[r](1,1.5){$y$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\psarc(4.5,1){.5}{270}{90}
\psline[linestyle=dotted](4.7,1)(5.7,1)
\rput(3.75,1){$\dots$}
\end{pspicture} &=&
\psset{unit=.5cm}
\frac{\sigma(a^2)+\sigma(x\overline{y})}{\sigma(a^2y\overline{x})}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\psarc(4.5,1){.5}{270}{90}
\psline[linestyle=dotted](4.7,1)(5.7,1)
\rput(3.75,1){$\dots$}
\end{pspicture} \label{eq:echange_boucle}\\
\psset{unit=.5cm}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$x$}\rput[r](1,1.5){$y$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(4.5,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(3.75,1){$\dots$}
\end{pspicture} &=&
\psset{unit=.5cm}
\frac{\sigma(x\overline{y})}{\sigma(a^2y\overline{x})}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(4.5,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(3.75,1){$\dots$}
\end{pspicture} +
\frac{\sigma(a^2)}{\sigma(a^2y\overline{x})}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(4.5,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(3.75,1){$\dots$}
\end{pspicture} \label{eq:echange_boucle_a}\\
\psset{unit=.5cm}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$x$}\rput[r](1,1.5){$y$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(4.5,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(3.75,1){$\dots$}
\end{pspicture} &=&
\psset{unit=.5cm}
\frac{\sigma(x\overline{y})}{\sigma(a^2y\overline{x})}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(5.5,.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(4.5,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(3.75,1){$\dots$}
\end{pspicture} +
\frac{\sigma(a^2)}{\sigma(a^2y\overline{x})}
\begin{pspicture}[.5](5.7,2)
\rput[r](1,.5){$y$}\rput[r](1,1.5){$x$}
\rput(1,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}\rput(1,1.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\psline(2,0)(2,2)\psline(3,0)(3,2)\psline(4.5,0)(4.5,2)
\psline(2,.5)(3.5,.5)\psline(2,1.5)(3.5,1.5)
\psline(4,.5)(4.5,.5)\psline(4,1.5)(4.5,1.5)
\rput(4.5,.5){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}
\rput(5.5,1.5){{\psline{->}(0,0)(-.75,0)\psline(0,0)(-1,0)}}
\rput(3.75,1){$\dots$}
\end{pspicture} \label{eq:echange_boucle_b}
\end{aligned}$$
We use Lemmas \[lemm:echange\_lignes\] and \[lemm:echange\_boucle\] to obtain symmetry properties of the partition functions, which we summarize below, where $m$ denotes either $2N$ or $2N+1$.
\[lemm:sym\] The functions $Z(N;X_{2N})$ and $Z_{\textsc{HT}}(2N+1;X_{2N},x,y)$ are symmetric separately in the two sets of variables $\{x_i,\ i\le N\}$ and $\{x_i,\ i\ge N+1\}$, the function $Z_{\textsc{HT}}(2N;X_{2N-1},x,y)$ is symmetric separately in the two sets of variables $\{x_i,\ i\le N-1\}$ and $\{x_i,\ i\ge N\}$, and the functions $Z_{\textsc{QT}}(2m;X_{m-1},x,y)$ are symmetric in their variables $x_i$.
Moreover, $Z_{\textsc{QT}}(4N+2;\dots)$ is symmetric in its variables $x$ and $y$, and we have a pseudo-symmetry for $Z_{\textsc{QT}}(4N;\dots)$ and $Z_{\textsc{HT}}(2N;\dots)$: $$\begin{aligned}
Z_{\textsc{QT}}(4N;X_{2N-1},x,y) & = &
\frac{\sigma(a^2)+\sigma(x\overline{y})}{\sigma(a^2y\overline{x})}
Z_{\textsc{QT}}(4N;X_{2N-1},y,x),
\label{eq:sym_QT4N_xy}\\
Z_{\textsc{HT}}(2N;X_{2N-1},x,y) & = &
\frac{\sigma(a^2)+\sigma(x\overline{y})}{\sigma(a^2y\overline{x})}
Z_{\textsc{HT}}(2N;X_{2N-1},y,x).
\label{eq:sym_HT2N_xy}
\end{aligned}$$
For $Z(N;\dots)$ and $Z_{\textsc{HT}}(m;\dots)$, the symmetry in two “consecutive” variables $x_i$ and $x_{i+1}$ is a direct consequence of Lemma \[lemm:echange\_lignes\]. For $Z_{\textsc{QT}}(2m;\dots)$, we again apply Lemma \[lemm:echange\_lignes\] together with the easy observations: $$\label{eq:2dots}
\psset{unit=5mm}
\begin{pspicture}[.5](2,2)
\psline(1,0)(1,2)\psline(0,1)(2,1)
\end{pspicture}
\ =\
\begin{pspicture}[.5](2,2)
\psline(1,0)(1,2)\psline(0,1)(2,1)
\psdot[dotsize=.2](.5,1)
\psdot[dotsize=.2](1,1.5)
\psdot[dotsize=.2](1,.5)
\psdot[dotsize=.2](1.5,1)
\end{pspicture}
\ \ \ \ {\rm and}\ \ \ \
\begin{pspicture}[.5](2,2)
\psline(0,1)(2,1)
\end{pspicture}
\ =\
\begin{pspicture}[.5](2,2)
\psline(0,1)(2,1)
\psdot[dotsize=.2](.5,1)
\psdot[dotsize=.2](1.5,1)
\end{pspicture}$$ which allow us to bring the Yang-Baxter triangle through the divalent vertices of Figure \[fig:ZQT\].
For the (pseudo-)symmetries in $(x,y)$, let us deal with $Z_{\textsc{QT}}(4N;\dots)$, the other cases being similar or simpler. We use equation [(\[eq:2dots\])]{} to put together the lines of parameter $x$ and $y$:
$$\begin{aligned}
Z_{\textsc{QT}}(4N;X_{2N-1},x,y) & = &
\psset{unit=.3cm}\begin{pspicture}[.4](10,13)
\rput(0,0){{\setcounter{QTsize}{6}\addtocounter{QTsize}{-1}\rput(1,1){{\setcounter{ISL}{\theQTsize}\addtocounter{ISL}{-1}\setcounter{ISH}{\theQTsize}\addtocounter{ISH}{-1}\multido{\i=0+1}{\theQTsize}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theQTsize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{6}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{6}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(\theQTsize,1){{\multido{\i=0+1}{\theQTsize}{\rput(0,\i){\psline(0,0)(1,0)}}}}\rput(1,\theQTsize){{\multido{\i=0+1}{\theQTsize}{\rput(\i,0){\psline(0,0)(0,1)}}}}\rput(\theQTsize,\theQTsize){\psarc(0,0){1}{0}{90}}\psline(1,6)(\theQTsize,6)\psline(6,1)(6,\theQTsize)\rput(6,6){{\degrees[360]\multido{\n=1+1}{\theQTsize}{\rput(0,0){\psarc(0,0){\n}{-90}{180}}}}}\rput(6,6){\SpecialCoor\multido{\i=1+1}{\theQTsize}{\rput(\i;45){\psdots[dotstyle=*](0,0)}}}\rput(\theQTsize,\theQTsize){\SpecialCoor\psdots[dotstyle=*](1;45)\psline[linestyle=dotted](.5;45)(1.5;45)}}}
\rput[r](0,6){$x$}\rput[t](6,-.1){$y$}
\end{pspicture}\\
& = &
\psset{unit=.3cm}
\begin{pspicture}[.4](12,12)
\rput(1,1){{\setcounter{ISL}{7}\addtocounter{ISL}{-1}\setcounter{ISH}{5}\addtocounter{ISH}{-1}\multido{\i=0+1}{7}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{5}{\psline(0,\i)(\theISL,\i)}}}
\rput(0,1){{\multido{\i=0+1}{5}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}
\rput(1,0){{\multido{\i=0+1}{7}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}
\rput[t](6,-.1){$y$}
\rput[t](7,-.1){$x$}
\psarc(6.5,5){.5}{0}{180}
\psline[linestyle=dotted](6.5,5.2)(6.5,6.2)
\multido{\i=1+1}{5}{
\psline(\i,5)(\i,6)
\psarc(6,6){\i}{90}{180}
\psarc(7,6){\i}{270}{90}
}
\multido{\i=7+1}{5}{\psline(6,\i)(7,\i)}
\rput(7,6){\SpecialCoor\multido{\i=1+1}{5}{\psdots[dotstyle=*](\i;45)}}
\end{pspicture}\end{aligned}$$
and then apply Lemma \[lemm:echange\_boucle\].
It should be clear that we have analogous properties for the even and odd parts of the partition functions. The next (and last) symmetry property, proved by Stroganov [@IKdet], appears when the parameter $a$ equals the special value $\omega_6=\exp(i\pi/3)$. An elementary proof of this result has recently been given in [@aval].
\[lemm:symZomega6\] When $a=\omega_6$, the partition function $Z(N;X_{2N})$ is symmetric in [**all**]{} its variables.
Specializations, recurrences
----------------------------
The aim of this section is to give the value of the partition functions in some specializations of the variable $x$ or $y$. The first result is due to Kuperberg, the others are very similar.
[*\[specialization of $Z$; Kuperberg\]*]{} \[lemm:specZ\] If we denote $$\begin{aligned}
A(x_{N+1},X_{2N}\backslash \{x_1,x_{N+1}\}) & = &
\prod_{2\leq k\leq N} \sigma(a x_k \overline{x}_{N+1})
\prod_{N+1\leq k\leq 2N} \sigma(a^2 x_{N+1} \overline{x}_k),\\
\overline{A}(x_{N+1},X_{2N}\backslash \{x_1,x_{N+1}\}) & = &
\prod_{2\leq k\leq N} \sigma(a x_{N+1} \overline{x}_k)
\prod_{N+1\leq k\leq 2N} \sigma(a^2 x_k \overline{x}_{N+1}),\end{aligned}$$ then we have: $$\begin{aligned}
Z(N;{\bf \overline{a}x_{N+1}},X_{2N}\backslash x_1) & = &
\overline{A}(x_{N+1},X_{2N}\backslash \{x_1,x_{N+1}\}) Z(N-1;X_{2N}\backslash \{x_1,x_{N+1}\}), \label{eq:Zbax}\\
Z(N;{\bf ax_{N+1}},X_{2N}\backslash x_1) & = &
A(x_{N+1},X_{2N}\backslash \{x_1,x_{N+1}\})
Z(N-1; X_{2N}\backslash \{x_1,x_{N+1}\}). \label{eq:Zax}
\end{aligned}$$
We recall the method to prove equation [(\[eq:Zbax\])]{}. We observe that when $x_1=\bar a x_{N+1}$, the parameter of the vertex at the crossing of the two lines of parameter $x_1$ and $x_{N+1}$ is $\bar a$. Thus the weight of this vertex is $\si(a\ba)=\si(1)=0$ unless the orientation of this vertex is the second one on Figure \[fig:poids\_6V\]. But this orientation forces the orientation of all vertices in the row $x_1$ and in the column $x_{N+1}$, as shown in Figure \[fig:recZ\]. The non-fixed part gives the partition function $Z$ in size $N-1$, without the parameters $x_1$ and $x_{N+1}$, and the weights of the fixed part give the factor $\overline{A}(\dots)$.
[Figure \[fig:recZ\] (description of the original picture): two grids with rows labelled $x_1,x_2,\dots,x_N$ and columns labelled $x_{N+1},\dots,x_{2N}$, showing the orientations fixed by the specializations $x_1=\overline{a}x_{N+1}$ (left) and $x_1=a x_{N+1}$ (right).]
The case of [(\[eq:Zax\])]{} is similar, after using Lemma \[lemm:sym\] to put the line $x_{N+1}$ at the top of the grid.
We will need the following application of the Yang-Baxter equation, which allows, under a certain condition, a line with a change of parameter to pass through a grid.
\[lemm:tour\_grille\] $$\psset{unit=.3cm}
\begin{pspicture}[.5](9,8)
\multido{\i=2+1}{6}{\psline(\i,0)(\i,8)}
\multido{\i=2+1}{5}{\psline(0,\i)(9,\i)}
\psline(1,7)(7,7)\psline(8,1)(8,6)
\psarc(7,6){1}{0}{90}
\psline[linestyle=dotted](7.2,6.2)(8.5,7.5)
\rput[r](1,7){$x$}\rput[t](8,.9){$\overline{a}x$}
\end{pspicture}
=
\begin{pspicture}[.5](9,8)
\multido{\i=2+1}{6}{\psline(\i,0)(\i,8)}
\multido{\i=2+1}{5}{\psline(0,\i)(9,\i)}
\psline(1,7)(1,2)\psline(2,1)(8,1)
\psarc(2,2){1}{180}{270}
\psline[linestyle=dotted](1.8,1.8)(.5,.5)
\rput[l](8,1){$x$}\rput[b](1,7){$\overline{a}x$}
\end{pspicture}
\label{eq:tour_grille}$$
We iteratively apply Lemma \[lemm:YB\] on the rows, and row by row: $$\begin{aligned}
\psset{unit=.3cm}
\begin{pspicture}[.5](10,4)
\psline(0,2)(10,2)
\multido{\i=2+2}{4}{\psline(\i,0)(\i,4)}
\psline(1,3)(8,3)\psline(9,2)(9,1)\psarc(8,2){1}{0}{90}
\psline[linestyle=dotted](8.2,2.2)(9.5,3.5)
\rput[r](1,3){$x$}\rput[t](9,.9){$\overline{a}x$}
\end{pspicture}
& = &
\psset{unit=.3cm}
\begin{pspicture}[.5](10,4)
\psline(0,2)(10,2)
\multido{\i=2+2}{4}{\psline(\i,0)(\i,4)}
\psline(1,3)(6,3)\psline(8,1)(9,1)\psarc(6,2){1}{0}{90}
\psarc(8,2){1}{180}{270}
\psline[linestyle=dotted](6.2,2.2)(7.5,3.5)
\psline[linestyle=dotted](7.8,1.8)(6.5,.5)
\rput[r](1,3){$x$}
\rput[l](9,1){$x$}
\rput[r](6.8,1.5){$\overline{a}x$}
\end{pspicture}\\
& = &
\psset{unit=.3cm}
\begin{pspicture}[.5](10,4)
\psline(0,2)(10,2)
\multido{\i=2+2}{4}{\psline(\i,0)(\i,4)}
\psline(1,3)(4,3)\psline(6,1)(9,1)\psarc(4,2){1}{0}{90}
\psarc(6,2){1}{180}{270}
\psline[linestyle=dotted](4.2,2.2)(5.5,3.5)
\psline[linestyle=dotted](5.8,1.8)(4.5,.5)
\rput[r](1,3){$x$}
\rput[l](9,1){$x$}
\rput[r](4.8,1.5){$\overline{a}x$}
\end{pspicture}\\
& = &
\psset{unit=.3cm}
\begin{pspicture}[.5](10,4)
\psline(0,2)(10,2)
\multido{\i=2+2}{4}{\psline(\i,0)(\i,4)}
\psline(1,3)(4,3)\psline(6,1)(9,1)\psarc(4,2){1}{0}{90}
\psarc(6,2){1}{180}{270}
\psline[linestyle=dotted](4.2,2.2)(5.5,3.5)
\psline[linestyle=dotted](5.8,1.8)(4.5,.5)
\rput[r](1,3){$x$}
\rput[l](9,1){$x$}
\rput[r](4.8,1.5){$\overline{a}x$}
\end{pspicture}\\
& = &
\psset{unit=.3cm}
\begin{pspicture}[.5](10,4)
\psline(0,2)(10,2)
\multido{\i=2+2}{4}{\psline(\i,0)(\i,4)}
\psline(1,3)(1,2)\psline(2,1)(9,1)\psarc(2,2){1}{180}{270}
\psline[linestyle=dotted](1.8,1.8)(.5,.5)
\rput[b](1,3){$\overline{a}x$}\rput[l](9,1){$x$}
\end{pspicture}.
\end{aligned}$$
[*\[specialization of $Z_{\textsc{HT}}$\]*]{} \[lemm:specZHT\] If we denote $$\begin{aligned}
A_{H}^{1}(x_1,X_{2N}\backslash x_1) & = &
\prod_{1\leq k\leq N} \sigma(a^2 x_1 \overline{x}_k)
\prod_{N+1\leq k\leq 2N} \sigma(a x_k\overline{x}_1),\\
\overline{A}_{H}^{1}(x_1,X_{2N}\backslash x_1) & = &
\prod_{1\leq k\leq N} \sigma(a^2x_k\overline{x}_1)
\prod_{N+1\leq k\leq 2N} \sigma(ax_1\overline{x}_k),\\
A_{H}^{0}(x_{N},X_{2N-1}\backslash x_{N}) & = &
\prod_{1\leq k\leq N-1}\sigma(a x_k \overline{x}_N)
\prod_{N\leq k\leq 2N-1} \sigma(a^2 x_N \overline{x}_k),\\
\overline{A}_{H}^{0}(x_{N},X_{2N-1}\backslash x_{N}) & = &
\prod_{1\leq k\leq N-1} \sigma(ax_N\overline{x}_k)
\prod_{N\leq k\leq 2N-1} \sigma(a^2x_k\overline{x}_N),
\end{aligned}$$ then for $\star={\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,0)(.1,0)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,.2)(0,.2)\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,.2)(.1,.2)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,0)(0,0)\end{pspicture}}$ and $\square={\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,.2)(.1,.2)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,0)(0,0)\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,0)(.1,0)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,.2)(0,.2)\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}$ respectively, we have $$\begin{aligned}
\!\!\!\!\!\!\!\!\!\!\!\!Z_{\textsc{HT}}^{\star}(2N+1;X_{2N},x,\mathbf{ax_1}) &\!\!\!\!=\!\!\!\!&
A_{H}^{1}(x_1,X_{2N}\backslash x_{1})
Z_{\textsc{HT}}^{\square}(2N;X_{2N}\backslash x_1, x_1,x),
\label{equa:Zht_impair_ax}\\
\!\!\!\!\!\!\!\!\!\!\!\!Z_{\textsc{HT}}^{\square}(2N+1;X_{2N},x,{\bf \overline{a}x_1}) &\!\!\!\!=\!\!\!\!&
\overline{A}_{H}^{1}(x_1,X_{2N}\backslash x_1)
Z_{\textsc{HT}}^{\star}(2N;X_{2N}\backslash x_1,x,x_1),
\label{equa:Zht_impair_bax}\\
\!\!\!\!\!\!\!\!\!\!\!\!Z_{\textsc{HT}}^{\star}(2N;X_{2N-1},x,{\bf ax_N}) &\!\!\!\!=\!\!\!\!&
\sigma(ax\overline{x}_{N}) A_{H}^{0}(x_{N},X_{2N-1}\backslash x_{N})
Z_{\textsc{HT}}^{\square}(2N-1;X_{2N-1}\backslash x_N,x,x_N),
\label{equa:Zht_pair_ax}\\
\!\!\!\!\!\!\!\!\!\!\!\!Z_{\textsc{HT}}^{\square}(2N;X_{2N-1},{\bf \overline{a}x_N},y) &\!\!\!\!=\!\!\!\!&
\sigma(ax_N\overline{y})
\overline{A}_{H}^{0}(x_{N},X_{2N-1}\backslash x_{N})
Z_{\textsc{HT}}^{\star}(2N-1;X_{2N-1}\backslash x_N,y,x_N).
\label{equa:Zht_pair_bax}
\end{aligned}$$
The proof is similar to the previous one, with the difference that before looking at fixed edges, we need to multiply the partition function by a given factor; we interpret this operation as a modification of the graph of the ice model, and apply Lemma \[lemm:tour\_grille\]. It turns out that in each case, the additional factors are exactly cancelled by the weights of the fixed vertices.
To prove (\[equa:Zht\_impair\_ax\]), we multiply the left-hand side by $$\prod_{N+1\leq k\leq 2N} \sigma(a^2 x_k \overline{y}),$$ which is equivalent to adding to the line of parameter $y$ a new line $\overline{a}y$ just below the grid; Lemma \[lemm:tour\_grille\] transforms the graph of Figure \[fig:Zht\_impair\_ax\](a) into the graph of Figure \[fig:Zht\_impair\_ax\](b). When we put $y=ax_1$, we get the indicated fixed edges, which gives as partition function $$\prod_{N+1\leq k\leq 2N} \sigma^2(ax_k\overline{x}_1)
\prod_{1\leq k\leq N} \sigma(a^2x_1\overline{x}_k)
Z_{\textsc{HT}}(2N;X_{2N}\backslash x_1, x_1,x).$$
[Figure \[fig:Zht\_impair\_ax\] (description of the original pictures): (a) the grid with rows labelled $x_1,\dots,x_N,x$ and columns labelled $x_{N+1},\dots,x_{2N},y$, with the auxiliary line $\overline{a}y$ added below the grid; (b) the same graph after applying Lemma \[lemm:tour\_grille\], where the specialization $\overline{a}y=x_1$ fixes the indicated edges.]
Since $a^2x_k\overline{y}=ax_k\overline{x}_1$, the equation simplifies. To conclude, we observe that if we start with an edge going out from the crossing $x/x_{2N}$ (function $Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}$) we get at the end the same orientation (function $Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}\psline(0,.2)(.1,.2)\psarc(.1,.1){.1}{270}{90}\psline{->}(.1,0)(0,0)\end{pspicture}}}$).
[*\[specialization of $Z_{\textsc{QT}}$\]*]{} \[lemm:specZQT\] If we denote $$\begin{aligned}
\overline{A}_{Q}(x_1,X_{m-1}\backslash x_1) & = &
\prod_{1\leq k\leq m-1} \sigma(a^2 x_k\overline{x}_1)
\sigma(ax_1 \overline{x}_k),\\
A_{Q}(x_1;X_{m-1}\backslash x_1) & = &
\prod_{1\leq k\leq m-1} \sigma(a^2 x_1 \overline{x}_k)
\sigma(a x_k \overline{x}_1),\end{aligned}$$ then for $\star = {\begin{pspicture}(0.2,0.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(0,.2)(.2,.2)
\psline{->}(.2,0)(.2,.2)
\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(.2,.2)(.2,0)
\psline{->}(.2,.2)(0,.2)
\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}$ and $\square={\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
},{\begin{pspicture}(0.2,0.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(0,.2)(.2,.2)
\psline{->}(.2,0)(.2,.2)
\end{pspicture}},{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline{->}(.2,.2)(.2,0)
\psline{->}(.2,.2)(0,.2)
\end{pspicture}}$ respectively, we have: $$\begin{aligned}
\!\!\!\!\!\! Z_{\textsc{QT}}^{\star}(2m;X_{m-1},\mathbf{\overline{a}x_1},y) &=&
\sigma(ax_1\overline{y}) \overline{A}_{Q}(x_1,X_{m-1})
Z_{\textsc{QT}}^{\square}(2m-2;X_{m-1}\backslash x_1, y,x_1),
\label{eq:Zqt_spe_bax}\\
\!\!\!\!\!\! Z_{\textsc{QT}}^{\square}(2m;X_{m-1},x,\mathbf{ax_1}) & = &
\sigma(ax\overline{x}_1) A_{Q}(x_1;X_{m-1}\backslash x_1)
Z_{\textsc{QT}}^{\star}(2m-2;X_{m-1}\backslash x_1, x_1,x).
\label{eq:Zqt_spe_ax}\end{aligned}$$
Similar to the proof of Lemma \[lemm:specZHT\].
\[rema:sym\] By using the (pseudo-)symmetry in $(x,y)$, we may transform any specialization of the variable $y$ into a specialization of the variable $x$. Moreover, by using Lemma \[lemm:sym\] and (when $a=\omega_6$) Lemma \[lemm:symZomega6\], we obtain for $Z$, $Z_{\textsc{HT}}$ and $Z_{\textsc{QT}}$, $2N$ specializations. Now we have to compare them.
Special value of the parameter $a$; conclusion
----------------------------------------------
When $a=\omega_6=\exp(i\pi/3)$, two new ingredients may be used. The first one is Lemma \[lemm:symZomega6\], as mentioned in Remark \[rema:sym\]. The second one is that with this special value of $a$: $$\label{spec}
\si(a)=\si(a^2), \qquad \si(a^2x)=-\si(\ba x)=\si(a\bx),$$ which implies that the products appearing in Lemmas \[lemm:specZ\], \[lemm:specZHT\] and \[lemm:specZQT\] may be written in a more compact way: $$\begin{aligned}
A(x_{N+1},X_{2N}\backslash\{x_1,x_{N+1}\}) & = & \sigma(a)
\prod_{k\neq 1,N+1} \sigma(a x_k \overline{x}_{N+1}),\\
\overline{A}(x_{N+1},X_{2N}\backslash \{x_1,x_{N+1}\}) & = &
\sigma(a) \prod_{k\neq 1,N+1} \sigma(a x_{N+1} \overline{x}_k),\\
A_{H}^{1}(x_1,X_{2N}\backslash x_1) & = &
\prod_{1\leq k\leq 2N} \sigma(a x_{k} \overline{x}_1),\\
\overline{A}_{H}^{1}(x_1,X_{2N}\backslash x_1) & = &
\prod_{1\leq k\leq 2N} \sigma(a x_1 \overline{x}_k),\\
A_{H}^{0}(x_N,X_{2N-1}\backslash x_N) & = &
\prod_{1\leq k\leq 2N-1} \sigma(a x_{k} \overline{x}_{N}),\\
\overline{A}_{H}^{0}(x_N,X_{2N-1}\backslash x_{N}) &=&
\prod_{1\leq k\leq 2N-1} \sigma(a x_{N} \overline{x}_k),\\
\overline{A}_{Q}(x_1,X_{m-1}\backslash x_1) & = &
\prod_{1\leq k\leq m-1} \sigma^2(a x_1 \overline{x}_k),\\
A_{Q}(x_1,X_{m-1}\backslash x_1) & = &
\prod_{1\leq k\leq m-1} \sigma^2(a x_k \overline{x}_1).\end{aligned}$$
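Before comparing these products, we note that the relations in (\[spec\]) are easy to check numerically. The small Python sketch below does so, assuming the usual convention $\si(x)=x-\bx=x-x^{-1}$ (so that $\si(1)=0$), which is the convention suggested by the computation $\si(a\ba)=\si(1)=0$ earlier; it is only a sanity check, not part of the proof.

```python
import cmath

def sigma(x):
    # assumed convention: sigma(x) = x - 1/x, so that sigma(1) = 0
    return x - 1 / x

a = cmath.exp(1j * cmath.pi / 3)                    # a = omega_6
for x in [cmath.exp(0.37j), 2.0 + 0.5j, 0.8j]:      # a few arbitrary nonzero test points
    assert abs(sigma(a) - sigma(a**2)) < 1e-12          # sigma(a)      = sigma(a^2)
    assert abs(sigma(a**2 * x) + sigma(x / a)) < 1e-12  # sigma(a^2 x)  = -sigma(a^{-1} x)
    assert abs(sigma(a**2 * x) - sigma(a / x)) < 1e-12  # sigma(a^2 x)  =  sigma(a x^{-1})
print("the identities in (spec) hold at the sampled points")
```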
Thus we get by comparing: $$\begin{aligned}
A(x_i, X_{2N}\backslash x_i, x)
A_{H}^{1}(x_i,X_{2N}\backslash x_i)& = &\sigma(a x
\overline{x}_i) A_{Q}(x_i,X_{2N}\backslash x_i)\\
\overline{A}(x_i,X_{2N}\backslash x_i,x)
\overline{A}_{H}^{1}(x_i,X_{2N}\backslash x_i) & = &
\sigma(a x_i \overline{x})
\overline{A}_{Q}(x_i,X_{2N}\backslash x_i),\end{aligned}$$ whence (\[eq:main1.1\]) and (\[eq:main1.2\]) imply that (\[eq:main2.1\]) and (\[eq:main2.2\]) are true (in size $4N+2$) for the $2N$ specializations $x=a^{\pm 1}x_i$ ($1\leq i\leq
N$). It is enough to prove (\[eq:main2.2\]) (Laurent polynomials of half-width $2N-1$), but we still need one specialization to get (\[eq:main2.1\]) (half-width $2N$).
For (\[eq:main1.1\]) and (\[eq:main1.2\]), we observe the same kind of simplification $$A(x_i, X_{2N-1}\backslash x_i) \sigma(a x \overline{x}_i)
A_{H}^{0}(x_i,X_{2N-1}\backslash x_i) = \sigma(a x \overline{x}_i)
A_{Q}(x_i,X_{2N-1}\backslash x_i),$$ whence (\[eq:main2.2\]) and (\[eq:main2.1\]) for the size $4N-2$ imply that (\[eq:main1.1\]) and (\[eq:main1.2\]) are true for the $N$ specializations $x=ax_i$, $N\leq i\leq 2N-1$. We obtain in the same way the coincidence for the $N$ specializations $x=\overline{a}x_i$, $N\leq i\leq
2N-1$. Thus we have $2N$ specializations of $x$: this is enough both for (\[eq:main1.1\]) (half-width $2N-1$) and for (\[eq:main1.2\]) (half-width $2N-2$).
At this point, we have *almost* proved
((\[eq:main1.1\]) and (\[eq:main1.2\]), in size $4N$) $\Longrightarrow$ ((\[eq:main2.1\]) and (\[eq:main2.2\]), in size $4N+2$) $\Longrightarrow$ ((\[eq:main1.1\]) and (\[eq:main1.2\]), in size $4N+4$);
*almost*, because we still need [*one*]{} specialization for (\[eq:main2.1\]).
We get this missing specialization, not directly for ${Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}}$, ${Z_{\textsc{QT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}}}$, ${Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(0,.2)(.15,.2)\psarc(.15,.15){.05}{0}{90}\psline{->}(.2,.15)(.2,0)\end{pspicture}
}}}$ and ${Z_{\textsc{HT}}^{{\begin{pspicture}(.2,.2)\psset{linewidth=.4pt,arrowsize=2.5pt}
\psline(.2,0)(.2,.15)\psarc(.15,.15){.05}{0}{90}\psline{->}(.15,.2)(0,.2)\end{pspicture}}}}$, but for the original series $Z_{\textsc{QT}}(4N+2;X_{2N},x,y)$ and $Z_{\textsc{HT}}(2N+1;X_{2N},x,y)$: indeed if we set $x=ay$ we may apply Lemma \[lemm:tour\_grille\].
$$\psset{unit=5mm}
\begin{pspicture}[.5](-1,-1)(9,9)
\rput(0,0){{\setcounter{QTsize}{5}\addtocounter{QTsize}{-1}\rput(1,1){{\setcounter{ISL}{\theQTsize}\addtocounter{ISL}{-1}\setcounter{ISH}{\theQTsize}\addtocounter{ISH}{-1}\multido{\i=0+1}{\theQTsize}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theQTsize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{5}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{5}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(\theQTsize,1){{\multido{\i=0+1}{\theQTsize}{\rput(0,\i){\psline(0,0)(1,0)}}}}\rput(1,\theQTsize){{\multido{\i=0+1}{\theQTsize}{\rput(\i,0){\psline(0,0)(0,1)}}}}\rput(\theQTsize,\theQTsize){\psarc(0,0){1}{0}{90}}\psline(1,5)(\theQTsize,5)\psline(5,1)(5,\theQTsize)\rput(5,5){{\degrees[360]\multido{\n=1+1}{\theQTsize}{\rput(0,0){\psarc(0,0){\n}{-90}{180}}}}}\rput(5,5){\SpecialCoor\multido{\i=1+1}{\theQTsize}{\rput(\i;45){\psdots[dotstyle=*](0,0)}}}\rput(\theQTsize,\theQTsize){\SpecialCoor\psline[linestyle=dotted](.5;45)(1.5;45)}}}
\rput[r](0,1){$x_1$}
\rput[r](0,4){$x_{2N}$}
\rput[r](0,5){$ay$}
\rput[t](5,-.1){$y$}
\end{pspicture}
=
\psset{unit=5mm}
\begin{pspicture}[.5](-1,-1)(9,9)
\rput(2,2){{\setcounter{ISL}{4}\addtocounter{ISL}{-1}\setcounter{ISH}{4}\addtocounter{ISH}{-1}\multido{\i=0+1}{4}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{4}{\psline(0,\i)(\theISL,\i)}}}
\rput(2,1){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}
\rput(2,0){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}
\rput(1,2){{\multido{\i=0+1}{4}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}
\rput(0,2){{\multido{\i=0+1}{4}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}
\multido{\i=2+1}{4}{\rput(\i,1){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}
\multido{\i=3+1}{4}{\rput(1,\i){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}
\multido{\i=2+1}{4}{\psline(\i,5)(\i,5.5)\psline(5,\i)(5.5,\i)}
\multido{\n=.5+1.0}{4}{\psarc(5.5,5.5){\n}{270}{180}}
\SpecialCoor
\rput(2,2){\psline[linestyle=dotted](.5;225)(1.5;225)}
\rput(5.5,5.5){\psdots[dotsize=1mm](.5;45)(1.5;45)(2.5;45)(3.5;45)}
\psarc(2,2){1}{180}{270}
\rput[b](1,6){$y$}
\rput[r](0,2){$x_1$}
\rput[r](0,5){$x_{2N}$}
\rput[l](6,1){$ay$}
\end{pspicture}$$ $$Z_{\textsc{QT}}(4N+2;X_{2N},{\bf ay},y) =
\sigma(a) \prod_{1\leq k\leq 2N} \sigma(a x_k \overline{y}) \sigma(a^2 y \overline{x}_k) Z_{\textsc{QT}}(4N;X_{2N}\backslash x_{2N}, x_{2N},x_{2N})$$ $$\psset{unit=5mm}
\begin{pspicture}[.5](-1,-1)(9,9)
\rput(0,0){{\setcounter{ISdoublesize}{4}\addtocounter{ISdoublesize}{4}\addtocounter{ISdoublesize}{1}\rput(1,1){{\setcounter{ISL}{4}\addtocounter{ISL}{-1}\setcounter{ISH}{\theISdoublesize}\addtocounter{ISH}{-1}\multido{\i=0+1}{4}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theISdoublesize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{\theISdoublesize}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(1,\theISdoublesize){{\multido{\i=0+1}{4}{\rput(\i,0){{\psline{->}(0,0)(0,.75)\psline(0,0)(0,1)}}}}}\rput(4,0){\psline(1,1)(1,4)\rput(1,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}\psarc(4,4){1}{0}{90}\rput(4,4){{\degrees[360]\multido{\n=1+1}{4}{\rput(1,1){\psarc(0,0){\n}{-90}{90}}}}}\rput(4,1){{\multido{\i=0+1}{4}{\rput(0,\i){\psline(0,0)(1,0)}}}}\rput(4,4){\rput(0,2){{\multido{\i=0+1}{4}{\rput(0,\i){\psline(0,0)(1,0)}}}}}\rput(4,4){\psline[linestyle=dotted](.25,.25)(1.25,1.25)}}}
\rput[r](0,1){$x_1$}
\rput[r](0,4){$x_N$}
\rput[r](0,5){$ ay$}
\rput[t](1,-.1){$x_{N+1}$}
\rput[t](4,-.1){$x_{2N}$}
\rput[t](5,-.1){${y}$}
\end{pspicture}
=
\psset{unit=5mm}
\begin{pspicture}[.5](-.5,-1)(9,10)
\rput(2,1){{\setcounter{ISdoublesize}{4}\addtocounter{ISdoublesize}{4}\rput(1,1){{\setcounter{ISL}{4}\addtocounter{ISL}{-1}\setcounter{ISH}{\theISdoublesize}\addtocounter{ISH}{-1}\multido{\i=0+1}{4}{\psline(\i,0)(\i,\theISH)}\multido{\i=0+1}{\theISdoublesize}{\psline(0,\i)(\theISL,\i)}}}\rput(0,1){{\multido{\i=0+1}{\theISdoublesize}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}\rput(1,0){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}\rput(1,\theISdoublesize){{\multido{\i=0+1}{4}{\rput(\i,0){{\psline{->}(0,0)(0,.75)\psline(0,0)(0,1)}}}}}\rput(4,4){\rput(0,.5){{\degrees[360]\multido{\n=.5+1.0}{4}{\rput(0,0){\psarc(0,0){\n}{-90}{90}}}}}}\rput(4,4){\psline[linestyle=dotted](.25,.5)(.75,.5)}}}
\rput(3,0){{\multido{\i=0+1}{4}{\rput(\i,1){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}}}
\rput(.5,2){{\multido{\i=0+1}{4}{\rput(0,\i){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}}}
\multido{\i=2+1}{4}{\psline(1.5,\i)(2,\i)}
\psline(2.5,1)(3,1)
\multido{\i=3+1}{4}{\rput(\i,1){{\psline{->}(0,0)(.75,0)\psline(0,0)(1,0)}}}
\multido{\i=3+1}{4}{\rput(1.5,\i){{\psline{->}(0,0)(0,-.75)\psline(0,0)(0,-1)}}}
\SpecialCoor
\psarc(2.5,2){1}{180}{270}
\rput(2.5,2){\psline[linestyle=dotted](.5;225)(1.5;225)}
\rput[r](.5,2){$x_1$}
\rput[r](.5,5){$x_{N}$}
\rput[b](1.5,6){$y$}
\rput[t](3,-.1){$x_{N+1}$}
\rput[t](6,-.1){$x_{2N}$}
\rput[l](7,1){$ay$}
\end{pspicture}$$ $$Z_{\textsc{HT}}(2N+1;X_{2N},{\bf ay},y) =
\left(\prod_{1\leq k\leq N} \sigma(a x_k \overline{y})
\prod_{N+1\leq k\leq 2N} \sigma(a^2 y \overline{x}_k)\right)
Z_{\textsc{HT}}(2N; X_{2N}\backslash x_{N}, x_N,x_N)$$
In this way, we get another point where (\[eq:main2\]) is true; since we already have (\[eq:main2.2\]), by difference we obtain that (\[eq:main2.1\]) holds for $y=\overline{a}x$.
This completes the proof of Theorem \[theo:main\].
[10]{}
, [*On the symmetry of the partition function of some square ice models*]{}, to appear in Theor. Math. Phys, [arXiv:0903.0777]{}.
, [*On the link pattern distribution of quarter-turn symmetric FPL configurations*]{}, Proceedings of FPSAC 2008, 12p.
, [*Another proof of the alternating sign matrices conjecture*]{}, Internat. Math. Research Not. ,[**1996**]{} (1996), 139–150.
, [*Symmetry classes of alternating sign matrices under one roof*]{}, Ann. Math. [**156**]{} (2002) 835–866.
, [*Alternating sign matrices and descending plane partitions*]{}, J. Combin. Th. Ser. A [**34**]{} (1983), 340–359.
, [*Enumeration of quarter-turn symmetric alternating sign matrices of odd order*]{}, Theoret. Math. Phys. [**149**]{} (2006) 1639–1650.
, [*Symmetry classes of alternating sign matrices*]{}, [arXiv:math.CO/0008045]{}.
, [*A new way to deal with Izergin-Korepin determinant at root of unity*]{}, [arXiv:math-ph/0204042]{}.
, [*Proof of the alternating sign matrix conjecture*]{}, Electronic J. Combinatorics [**3**]{}, No. 2 (1996) , R13, 1–84.
[^1]: Both authors are supported by the ANR project MARS (BLAN06-2$\_$0193)
---
abstract: 'Function computation in directed acyclic networks is considered, where a sink node wants to compute a target function with the inputs generated at multiple source nodes. The network links are error-free but capacity-limited, and the intermediate network nodes perform network coding. The target function is required to be computed with zero error. The computing rate of a network code is measured by the average number of times that the target function can be computed for one use of the network. We propose a cut-set bound on the computing rate using an equivalence relation associated with the inputs of the target function. Our bound holds for general target functions and network topologies. We also show that our bound is tight for some special cases where the computing capacity can be characterized.'
author:
- '[^1]'
title: |
Upper Bound on Function Computation in\
Directed Acyclic Networks
---
Introduction
============
We consider function computation in a directed acyclic network, where a *target function* $f$ is intended to be calculated at a sink node, and the input symbols of the target function are generated at multiple source nodes. As a special case, network communication is just the computation of the *identity function*.[^2] Network function computation naturally arises in sensor networks [@Giridhar05] and Internet of Things, and may find applications in big data processing.
Various models and special cases of this problem have been studied in the literature (see the summaries in [@Appuswamy11; @Kowshik12; @Ramam13]). We are interested in the following *network coding model* for function computation. Specifically, we assume that the network links have limited (unit) capacity and are error-free. Each source node generates multiple input symbols, and the network codes perform vector network coding by using the network multiple times.[^3] An intermediate network node can transmit the output of a certain fixed function of the symbols it receives. Here all the intermediate nodes are assumed to have unbounded computing ability. The target function is required to be computed correctly for all possible inputs. We are interested in the *computing rate* of a network code that computes the target function, i.e., the average number of times that the target function can be computed for one use of the network. The maximum achievable computing rate is called the *computing capacity*.
When computing the identity function, the problem becomes the extensively studied network coding problem [@flow; @linear], and it is known that in general linear network codes are sufficient to achieve the multicast capacity [@linear; @alg]. For linear target functions over a finite field, a complete characterization of the computing capacity is not yet available for networks with one sink node. Certain necessary and sufficient conditions have been obtained under which linear network codes are sufficient to compute a linear target function [@Ramam13; @Appuswamy14]. But in general, linear network codes are not sufficient to achieve the computing capacity of linear target functions [@Rai12].
In this paper we discuss networks with a single sink node, where both the target function and the network code can be non-linear. In this scenario, the computing capacity is known when the network is a multi-edge tree [@Appuswamy11] or when the target function is the identity function. For the general case, various cut-set bounds on the computing capacity have been studied [@Appuswamy11; @Kowshik12]. We find, however, that the upper bounds claimed in [@Appuswamy11; @Kowshik12] are not valid. Specifically, the proof of [@Appuswamy11 Theorem II.1] has an error[^4]: the condition given at the beginning of its second paragraph is not always necessary, as illustrated by an example in this paper. We show that the computing capacity of our example is strictly larger than the two upper bounds claimed in [@Appuswamy11; @Kowshik12].
Towards a general upper bound, we define an equivalence relation associated with the inputs of the target function (which does not depend on the network topology) and propose a cut-set bound on the computing capacity using this equivalence relation. Our bound holds for general target functions and general network topologies in the network coding model. We also show that our bound is tight when the network is a multi-edge tree or when the target function is the identity function.
In the remainder of this paper, Section \[sec:main-results\] formally introduces the network computing model. The upper bound on the computing rate is given in Theorem \[theo1\] and proved in Section \[sec:proof-main-theorem\]. Section \[sec:disc-main-result\] compares our bound with previous results and discusses its tightness.
Main Results {#sec:main-results}
============
In this section, we will first introduce the network computing model. Then we will define cut sets and discuss some special cases of the function computation problem. Finally, we state the main theorem, which gives the cut-set bound for function computation.
Function-Computing Network Codes {#sec:funct-comp-netw}
--------------------------------
Let $G=(\mathcal{V},\mathcal{E})$ be a directed acyclic graph (DAG) with a finite vertex set $\mathcal{V}$ and an edge set $\mathcal{E}$, where multiple edges between a pair of nodes are allowed. A [*network*]{} over $G$ is denoted as $\mathcal{N}=(G,S,\rho)$, where $S\subset \mathcal{V}$ is the set of [*source nodes*]{} and $\rho\in \mathcal{V}\backslash S$ is the [*sink node*]{}. Let $s=|S|$, and without loss of generality (WLOG), let $S=\left\{ 1,2,\dots,s \right\}$. For an edge $e=(u,v)$, we call $u$ the tail of $e$ (denoted by ${\mathrm{tail}}(e)$) and $v$ the head of $e$ (denoted by ${\mathrm{head}}(e)$). Moreover, for each node $u\in\mathcal{V}$, let ${\mathcal{E}_{\mathrm{i}}}(u) = \{e\in \mathcal{E}: {\mathrm{head}}(e)=u\}$ and ${\mathcal{E}_{\mathrm{o}}}(u) = \{e\in\mathcal{E}:{\mathrm{tail}}(e)=u\}$ be the set of incoming edges and the set of outgoing edges of $u$, respectively. Fix an order of the vertex set $\mathcal{V}$ that is consistent with the partial order induced by the directed graph $G$. This order naturally induces an order of the edge set $\mathcal{E}$, where $e>e'$ if either ${\mathrm{tail}}(e)>{\mathrm{tail}}(e')$, or ${\mathrm{tail}}(e)={\mathrm{tail}}(e')$ and ${\mathrm{head}}(e)>{\mathrm{head}}(e')$. WLOG, we assume that ${\mathcal{E}_{\mathrm{i}}}(j)=\emptyset$ for all source nodes $j\in S$, and ${\mathcal{E}_{\mathrm{o}}}(\rho)=\emptyset$. We will illustrate in Section \[sec:tightness-results\] how to apply our results to a network with ${\mathcal{E}_{\mathrm{i}}}(j)\neq \emptyset$ for certain $j\in S$.
The network defined above is used to compute a function, where multiple inputs are generated at the source nodes and the output of the function is demanded by the sink node. The computation units with unbounded computing ability are allocated at all the network nodes. However, the computing capability of the network will be bounded by the network transmission capability. Denote by $\mathcal{B}$ a finite alphabet. We assume that each edge can transmit a symbol in $\mathcal{B}$ reliably for each use.
Denote by $\mathcal{A}$ and $\mathcal{O}$ two finite alphabets. Let $f:\mathcal{A}^s\to \mathcal{O}$ be the [*target function*]{}, which is the function to be computed via the network and whose $i$th input is generated at the $i$th source node. We may use the network to compute the function multiple times. Suppose that the $j$th source node consecutively generates $k$ symbols in $\mathcal{A}$ denoted by $x_{1j},x_{2j},\dots,x_{kj}$, and the symbols generated by all the source nodes can be given as a matrix $x=(x_{ij})_{k\times s}$. We denote by $x_j$ the $j$th column of $x$, and denote by $x^i$ the $i$th row of $x$. In other words, $x_j$ is the vector of the symbols generated at the $j$th source node, and $x^i$ is the input vector of the $i$th computation of the function $f$. Define for $x\in
\mathcal{A}^{k\times s}$ $$f^{(k)}(x) = \left(f(x^1),f(x^2),\ldots,f(x^k)\right)^{\top}.$$ For convenience, we denote by $x_{J}$ the submatrix of $x$ formed by the columns indexed by $J\subset S$, and denote by $x^I$ the submatrix of $x$ formed by the rows indexed by $I\subset\left\{ 1,2,\dots,k \right\}$. We equate $\mathcal{A}^{1\times s}$ with $\mathcal{A}^s$ in this paper.
For two positive integers $n$ and $k$, an $(n,k)$ (function-computing) network code over network $\mathcal{N}$ with target function $f$ is defined as follows. Let $x\in \mathcal{A}^{k\times s}$ be the matrix formed by the symbols generated at the source nodes. The purpose of the code is to compute $f^{(k)}(x)$ by transmitting at most $n$ symbols in $\mathcal{B}$ on each edge in $\mathcal{E}$. Denote the symbols transmitted on edge $e$ by $g^{e}(x) \in \mathcal{B}^n$. For a set of edges $E\subset\mathcal{E}$ we define $$g^{E}(x)=\left( g^{e}(x)|e\in E \right)$$ where $g^{e_1}(x)$ comes before $g^{e_2}(x)$ whenever $e_1 < e_2$. The $(n,k)$ network code consists of an encoding function for each edge $e$: $$h^{e}: \left\{
\begin{array}{ll}
\mathcal{A}^k \rightarrow \mathcal{B}^n, & \mathrm{if }\ {\mathrm{tail}}(e)\in S; \\
\prod_{e'\in {\mathcal{E}_{\mathrm{i}}}(\mathrm{tail}(e))} \mathcal{B}^n \rightarrow
\mathcal{B}^n, & \mathrm{otherwise.}
\end{array}
\right.$$ Functions $h^e, e\in \mathcal{E}$ determine the symbols transmitted on the edges. Specifically, if $e$ is an outgoing edge of the $i$th source node, then $$g^{e}(x) = h^{e}(x_i);$$ if $e$ is an outgoing edge of $u\in \mathcal{V}\setminus
(S\cup\{\rho\})$, then $$g^{e}(x) = h^{e}\left(g^{{\mathcal{E}_{\mathrm{i}}}(u)}(x)\right).$$
The $(n,k)$ network code also contains a decoding function $$\varphi: \prod_{e'\in {\mathcal{E}_{\mathrm{i}}}(\rho)} \mathcal{B}^n \rightarrow
\mathcal{O}^k.$$ Define $$\psi(x) = \varphi\left(g^{{\mathcal{E}_{\mathrm{i}}}(\rho)}(x)\right).$$ If the network code [*computes*]{} $f$, i.e., $\psi(x)=f^{(k)}(x)$ for all $x\in\mathcal{A}^{k\times s}$, we then call $\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|$ an [*achievable*]{} computing rate, where we multiply $\frac{k}{n}$ by $\log_{|\mathcal{B}|}|\mathcal{A}|$ in order to normalize the computing rate for target functions with different input alphabets. The [*computing capacity*]{} of network $\mathcal{N}$ with respect to a target function $f$ is defined as $$\mathcal{C}(\mathcal{N},f)=\sup\left\{
\frac{k}{n} \log_{|\mathcal{B}|}|\mathcal{A}|
\Big|\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|\textrm{ is achievable} \right\}.$$
Cut Sets and Special Cases
--------------------------
For two nodes $u$ and $v$ in $\mathcal{V}$, denote the relation $u\rightarrow v$ if there exists a directed path from $u$ to $v$ in $G$. If there is no directed path from $u$ to $v$, we say $u$ is *separated* from $v$. Given a set of edges $C\subseteq
\mathcal{E}$, $I_C$ is defined to be the set of source nodes which are separated from the sink node $\rho$ if $C$ is deleted from $\mathcal{E}$. Set $C$ is called a [ *cut set*]{} if $I_C\neq \emptyset$, and the family of all cut sets in network $\mathcal{N}$ is denoted as $\Lambda(\mathcal{N})$. Additionally, we define the set $K_C$ as $$K_C=\left\{ i\in S| \exists v,t\in \mathcal{V}, i\rightarrow v, (v,t)\in C \right\}.$$ It is reasonable to assume that $u\rightarrow \rho$ for all $u\in\mathcal{V}$. Then one can easily see that $K_C$ is the set of source nodes from which there exists a path to the sink node through $C$. Define $J_C=K_C\backslash I_C$.
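For a concrete network these sets are easy to compute. The sketch below (in Python, with edges given as named triples) is only an illustration of the definitions and not part of the formal development; the toy network at the end is hypothetical.

```python
def reachable(edges, start):
    """Nodes reachable from `start` (including `start` itself) along directed edges."""
    reach, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for (_, tail, head) in edges:
            if tail == u and head not in reach:
                reach.add(head)
                stack.append(head)
    return reach

def cut_quantities(edges, sources, sink, C):
    """Edges are (name, tail, head) triples; C is a set of edge names.
    Returns the sets I_C, K_C and J_C defined above."""
    remaining = [e for e in edges if e[0] not in C]
    I_C = {i for i in sources if sink not in reachable(remaining, i)}
    K_C = set()
    for i in sources:
        reach_i = reachable(edges, i)
        if any(tail in reach_i for (name, tail, _) in edges if name in C):
            K_C.add(i)
    J_C = K_C - I_C
    return I_C, K_C, J_C

# hypothetical toy network: sources 1 and 2, intermediate node "v", sink "rho"
toy = [("a", 1, "v"), ("b", 2, "v"), ("c", "v", "rho"), ("d", 2, "rho")]
print(cut_quantities(toy, sources={1, 2}, sink="rho", C={"c"}))  # ({1}, {1, 2}, {2})
```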
The problem also becomes simple when $s=1$.
\[prop:1\] For a network $\mathcal{N}$ with a single source node and any target function $f:\mathcal{A}\rightarrow \mathcal{O}$, $$\mathcal{C}(\mathcal{N},f) = \min_{C\in \Lambda(\mathcal{N})} \frac{|C|}{\log_{|\mathcal{A}|}|f[\mathcal{A}]|},$$ where $f[\mathcal{A}]$ is the image of $\mathcal{A}$ under $f$ in $\mathcal{O}$.
We first show that the right hand side of the above equation can be achieved. Let $$M=\min_{C\in \Lambda(\mathcal{N})}|C|.$$ Fix integers $k$ and $n$ such that $k/n \log_{|\mathcal{B}|}|\mathcal{A}| \leq M/\log_{|\mathcal{A}|}|f[\mathcal{A}]|$, which implies $$\label{eq:1}
|f[\mathcal{A}]|^k \leq |\mathcal{B}|^{nM}.$$ By the max-flow min-cut theorem of directed acyclic graphs, there must exist $M$ edge-disjoint paths from the source node to the sink node, and each of them can transmit one symbol in $\mathcal{B}$ per use. We can apply the following network computing code to compute $f^{(k)}$: The source node first computes $f^{(k)}(x)$, and then encodes $f^{(k)}(x)$ into a unique sequence $y$ in $\mathcal{B}^{nM}$ using a one-to-one mapping $m$, whose existence is guaranteed by (\[eq:1\]). The network then forwards $y$ from the source node to the sink node by $n$ uses of the $M$ paths. The sink node decodes by $m^{-1}(y)$.
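The one-to-one mapping $m$ can be realized, for example, by indexing the $|f[\mathcal{A}]|^k$ possible value sequences and writing the index in base $|\mathcal{B}|$ with $nM$ digits. The following minimal Python sketch of such an encoding is an illustration under the assumption $\mathcal{B}=\{0,\dots,|\mathcal{B}|-1\}$; the function names are ours.

```python
def encode(values, image, n, M, B):
    """Map a k-tuple of function values (each in `image`) to n*M symbols of an
    alphabet of size B, assuming len(image)**k <= B**(n*M) as in (eq:1)."""
    index = 0
    for v in values:                 # interpret the tuple as a number in base |image|
        index = index * len(image) + image.index(v)
    digits = []
    for _ in range(n * M):           # write the index in base B with n*M digits
        digits.append(index % B)
        index //= B
    return digits[::-1]

def decode(digits, image, k, B):
    """Inverse of `encode`: recover the k-tuple of function values."""
    index = 0
    for d in digits:
        index = index * B + d
    values = []
    for _ in range(k):
        values.append(image[index % len(image)])
        index //= len(image)
    return values[::-1]

img = [0, 1, 2]                              # e.g. f[A] has 3 elements
y = encode([2, 0, 1], img, n=3, M=1, B=4)    # 3^3 = 27 <= 4^3 = 64
assert decode(y, img, k=3, B=4) == [2, 0, 1]
```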
We then prove the converse. Suppose that we have an $(n,k)$ code with $k/n \log_{|\mathcal{B}|}|\mathcal{A}| > M/\log_{|\mathcal{A}|}|f[\mathcal{A}]|$, which implies $$\label{eq:2}
|f[\mathcal{A}]|^k > |\mathcal{B}|^{nM}.$$ Fix a cut set $C$ with $|C|=M$. It can be shown that $$\psi(x) = \psi^C(g^C(x))$$ for a certain function $\psi^C$ (see Lemma \[lem2\] in Section \[sec:proof-main-theorem\]). By (\[eq:2\]) and the pigeonhole principle, there must exist $x,x'\in\mathcal{A}^{k\times s}$ such that
$$\begin{aligned}
f^{(k)}(x) & \neq & f^{(k)}(x'),\\
g^{C}(x) & = & g^{C}(x'),\end{aligned}$$ where the second equality implies $\psi(x)=\psi(x')$. Thus this network code cannot compute both $f^{(k)}(x)$ and $f^{(k)}(x')$ correctly. The proof is completed.
Upper Bounds
------------
In this paper, we are interested in general upper bounds on $\mathcal{C}(\mathcal{N},f)$. The first upper bound is a consequence of Proposition \[prop:1\].
For a network $\mathcal{N}$ with target function $f$, $$\mathcal{C}(\mathcal{N},f) \leq \min_{C\in
\Lambda(\mathcal{N}): I_C=S}
\frac{|C|}{\log_{|\mathcal{A}|}|f[\mathcal{A}^s]|}.$$ \[prop:2\]
Build a network $\mathcal{N}'$ by joining all the source nodes of $\mathcal{N}$ into a single “super” source node. Since a code for network $\mathcal{N}$ can be naturally converted to a code for network $\mathcal{N}'$ (where the super source node performs the operations of all the source nodes in $\mathcal{N}$), we have $$\mathcal{C}(\mathcal{N},f) \leq
\mathcal{C}(\mathcal{N}',f).$$ The proof is completed by applying Proposition \[prop:1\] to $\mathcal{N}'$ and noting that $\Lambda(\mathcal{N}') = \{C\in \Lambda(\mathcal{N}): I_C=S\}$.
The above upper bound only uses the image of function $f$. We propose an enhanced upper bound by investigating an equivalence relation on the input vectors of $f$. We will compare this equivalence relation with similar definitions proposed in [@Appuswamy11; @Kowshik12] in the next section.
\[def:ec\] For any function $f:\mathcal{A}^s\to \mathcal{O}$, any two disjoint index sets $I,J\subseteq S$, and any $a,b\in \mathcal{A}^{|I|},
c\in\mathcal{A}^{|J|}$, we say $a\mathop\equiv\limits^{(c)}
b|_{I,J}$ if for every $x,y\in \mathcal{A}^{1\times s}$, we have $f(x)=f(y)$ whenever $x_I=a$, $y_I=b, x_J=y_J=c$ and $x_{S\backslash
(I\cup J)}=y_{S\backslash(I\cup J)}$. Two vectors $a$ and $b$ satisfying $a\mathop\equiv\limits^{(c)}b|_{I,J}$ are said to be [*$(I,J,c)$-equivalent*]{}. When $J=\emptyset$ in the above definition, we use the convention that $c$ is an empty matrix.
Note that the equivalence as defined above does not depend on the structure of the network. However, it will soon be clear that with a network, the division of equivalence classes naturally leads to an upper bound of the network function-computing capacity based on cut sets.
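To make Definition \[def:ec\] concrete, the equivalence classes can be enumerated by brute force when the alphabet is small. The following Python sketch (with the source set indexed from $0$ for convenience, and the target function given as a callable on $s$-tuples) groups the vectors of $\mathcal{A}^{|I|}$ by the list of values $f$ takes over all admissible completions; it is an illustration only, not part of the formal development.

```python
from itertools import product

def classes(f, s, A, I, J, c):
    """Group the vectors of A^|I| into the classes of the relation  a =^(c) b |_{I,J}.

    I and J are disjoint tuples of 0-indexed source positions, c is a tuple in A^|J|.
    Two vectors are equivalent exactly when f agrees on every pair of inputs that
    coincide outside I, equal c on J, and equal the two vectors on I, so we compare
    the 'profiles' of f over all such completions."""
    rest = [i for i in range(s) if i not in I and i not in J]

    def profile(a):
        values = []
        for r in product(A, repeat=len(rest)):
            x = [None] * s
            for pos, val in zip(I, a):
                x[pos] = val
            for pos, val in zip(J, c):
                x[pos] = val
            for pos, val in zip(rest, r):
                x[pos] = val
            values.append(f(tuple(x)))
        return tuple(values)

    groups = {}
    for a in product(A, repeat=len(I)):
        groups.setdefault(profile(a), []).append(a)
    return list(groups.values())
```

The number of groups returned by this enumeration is exactly the quantity $W^{(c)}_{I,J,f}$ introduced next.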
For every $f$, $I$, $J$ and $c\in\mathcal{A}^{|J|}$, let $W^{(c)}_{I,J,f}$ denote the total number of equivalence classes induced by $\mathop\equiv\limits^{(c)}|_{I,J}$. Given a network $\mathcal{N}$ and a cut set $C$, let $W_{C,f}=\max_{c\in \mathcal{A}^{|J_C|}}W_{I_C,J_C,f}^{(c)}$. Our main result is stated as follows; the proof of the theorem is presented in Section \[sec:proof-main-theorem\].
\[theo1\] If $\mathcal{N}$ is a network and $f$ is a target function, then $$\mathcal{C}(\mathcal{N},f)\le
\min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}W_{C,f}}
:= {\textrm{min-cut}}(\mathcal{N},f).$$
Discussion of Upper Bound {#sec:disc-main-result}
=========================
In this section, we first give an example to illustrate the upper bound. We compare our result with the existing ones, and proceed by a discussion about the tightness of the bound.
An Illustration of the Bound
----------------------------
First we give an example to illustrate our result. Consider the network $\mathcal{N}_1$ in Fig. \[fig:1\] with the target function $f(x_1,x_2,x_3)=x_1x_2+x_3$ over the binary field, where $\mathcal{A}=\mathcal{B} = \mathcal{O}=\{0,1\}$.
[Figure \[fig:1\]: the network $\mathcal{N}_1$, with source nodes $1,2,3$, an intermediate node $v$ and the sink node $\rho$; its edges are $e_1=(1,v)$, $e_2=(2,v)$, $e_3=(3,v)$, $e_4=(1,\rho)$, $e_5=(2,\rho)$, $e_6=(3,\rho)$ and $e_7=(v,\rho)$.]
Let us first compare the upper bounds in Theorem \[theo1\] and Proposition \[prop:2\]. Let $C_0=\left\{ e_6,e_7 \right\}$. Here we have
- $|C_0|=2$, $I_{C_0}=\left\{3 \right\}$, $J_{C_0}=\left\{ 1, 2
\right\};$ and
- For any given inputs of nodes $1$ and $2$, different inputs from node $3$ generate different outputs of $f$. Therefore $W_{I_{C_0},J_{C_0},f}^{(c)}=2$ for any $c\in \mathcal{A}^{2}$ and hence $W_{C_0,f}=2$.
By Theorem \[theo1\], we have $$\mathcal{C}(\mathcal{N}_1,f)\leq {\textrm{min-cut}}(\mathcal{N}_1,f)\leq\dfrac{|C_0|}{\log_{|\mathcal{A}|}W_{C_0,f}}=2.$$ On the other hand, Proposition \[prop:2\] gives
$$\begin{aligned}
\mathcal{C}( \mathcal{N}_1,f ) & \leq & \min_{C\in\Lambda(\mathcal{N}_1):I_C=S}\dfrac{|C|}{\log_{|\mathcal{A}|}|f[\mathcal{A}^s]|}\\
& = & \min_{C\in\Lambda(\mathcal{N}_1):I_C=S} |C|\\
& = & 4,\end{aligned}$$
where the first equality follows from $f[\mathcal{A}^s]=\left\{ 0,1
\right\}$, and the second equality follows from $$\min_{C\in\Lambda(\mathcal{N}_1):I_C=S}|C|=|\left\{ e_4,e_5,e_6,e_7 \right\}|=4.$$ Therefore, Theorem \[theo1\] gives a strictly better upper bound than Proposition \[prop:2\].
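The class counts used above are small enough to verify exhaustively; the following quick Python check is a brute-force sketch (independent of the network topology, using only the target function).

```python
from itertools import product

def f(x1, x2, x3):
    return (x1 * x2 + x3) % 2                  # the target function over GF(2)

# W^{(c)}_{I,J,f} for I = {3}, J = {1,2}: classify x3 by the value f(c1, c2, x3)
for c in product([0, 1], repeat=2):
    profiles = {x3: f(c[0], c[1], x3) for x3 in [0, 1]}
    n_classes = len(set(profiles.values()))    # distinct profiles = distinct classes
    assert n_classes == 2                      # so W_{C_0,f} = 2
print("W_{C_0,f} = 2 confirmed for every c")
```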
The upper bound in Theorem \[theo1\] is actually tight in this case. We claim that there exists a $(1,2)$ network code that computes $f$ in $\mathcal{N}_1$. Consider an input matrix $x = (x_{ij})_{2\times 3}$. For $i=1,2,3$, node $i$ sends $x_{1i}$ to node $v$ and $x_{2i}$ to node $\rho$, i.e., $$g^{e_i} = x_{1i}, \quad g^{e_{i+3}} = x_{2i}.$$ Node $v$ then computes $f(x^1)=x_{11}x_{12}+x_{13}$ and sends it to node $\rho$ via edge $e_7$. Node $\rho$ receives $f(x^1)$ from $e_7$ and computes $f(x^2)=x_{21}x_{22}+x_{23}$ using the symbols received from edges $e_4$, $e_5$ and $e_6$, so it can output $f^{(2)}(x)$.
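The correctness of this $(1,2)$ code can be confirmed by enumerating all $2^6$ input matrices; the following Python sketch simply mirrors the description above (the edge variables are named after $e_1,\dots,e_7$).

```python
from itertools import product

def f(x1, x2, x3):
    return (x1 * x2 + x3) % 2                      # target function over GF(2)

for x in product([0, 1], repeat=6):                # x = (x_11, x_12, x_13, x_21, x_22, x_23)
    x11, x12, x13, x21, x22, x23 = x
    # one use of the network: edges e_1..e_3 carry the first row, e_4..e_6 the second
    e1, e2, e3 = x11, x12, x13                     # received at node v
    e4, e5, e6 = x21, x22, x23                     # received at node rho
    e7 = f(e1, e2, e3)                             # node v computes f(x^1) and forwards it
    decoded = (e7, f(e4, e5, e6))                  # node rho outputs (f(x^1), f(x^2))
    assert decoded == (f(x11, x12, x13), f(x21, x22, x23))
print("the (1,2) code computes f on all 64 inputs")
```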
Comparison with Previous Works
------------------------------
Upper bounds on the computing capacity have been studied in [@Appuswamy11; @Kowshik12] based on a special case of the equivalence class defined in Definition \[def:ec\]. However, we will demonstrate that the bounds therein do not hold for the example we studied in the last subsection.
In Definition \[def:ec\], when $J=\emptyset$, we will say $a\equiv
b|_{I}$, or $a$ and $b$ are $I$-equivalent. That is, $a\equiv b|_{I}$ if for every $x,y\in \mathcal{A}^{1\times s}$ with $x_I=a$, $y_I=b$ and $x_{S\backslash I}=y_{S\backslash I}$, we have $f(x)=f(y)$. For a target function $f$ and $I\subset S$, denote by $R_{I,f}$ the total number of equivalence classes induced by $\equiv|_{I}$. For a cut set $C\in \Lambda(\mathcal{N})$, let $R_{C,f} = R_{I_C,f}$.
For $J\subseteq S$, let ${\mathbf{i}}_J:J\rightarrow
\{1,\ldots,|J|\}$ be the one-to-one mapping preserving the order on $J$, i.e., ${\mathbf{i}}_J(i)<{\mathbf{i}}_J(j)$ if and only if $i<j$.
Then we have the following lemma:
Let $I,J$ be disjoint subsets of $S$ and $J'\subseteq J$. Then for $a,b\in\mathcal{A}^{|I|}$ and $c\in\mathcal{A}^{|J|}$, we have that $a\mathop\equiv\limits^{(c')}b|_{I,J'}$ implies $a\mathop\equiv\limits^{(c)}b|_{I,J}$ where $c'=c_{{\mathbf{i}}_{J}(J')}$. In particular, $a\equiv b|_I$ implies $a\mathop\equiv\limits^{(c)}b|_{I,J}$ for all $J\subseteq S\setminus I$ and $c\in\mathcal{A}^{|J|}$. \[lem:red\]
Assume that $a\mathop\equiv\limits^{(c')}b|_{I,J'}$ where $I\subseteq S, J\subseteq S\setminus I, J'\subseteq J$, $a,b\in\mathcal{A}^{|I|}$, $c\in\mathcal{A}^{|J|}$ and $c'=c_{{\mathbf{i}}_{J}(J')}$. We want to prove that $a\mathop\equiv\limits^{(c)}b|_{I,J}$.
It suffices to show that $f(x)=f(y)$ for all $x,y\in\mathcal{A}^s$ satisfying $x_I=a,y_I=b,x_{J}=y_{J}=c,x_{S\setminus(I\cup J)}=y_{S\setminus(I\cup J)}$. We know that $x_{J'}=(x_{J})_{{\mathbf{i}}_{J}(J')}=c_{{\mathbf{i}}_{J}(J')}=c'$ by definition of the function ${\mathbf{i}}_J$. Therefore $x_I=a,y_I=b,x_{J'}=y_{J'}=c'$ and $x_{S\setminus(I\cup J')}=y_{S\setminus(I\cup J')}$, which implies $f(x)=f(y)$. The proof is finished.
\[lem:red2\] Let $I \subset I' \subset S$ and $J=I'\setminus I$. For all $c\in\mathcal{A}^{|J|}$, $a\equiv b|_I$ implies $a'
\equiv b'|_{I'}$, where $a'_{{\mathbf{i}}_{I'}(I)} = a$,
b'_{{\mathbf{i}}_{I'}(J)}
= c$.
We have $a\equiv b|_I$ implies $a\mathop\equiv\limits^{(c)}b|_{I,J}$ by Lemma \[lem:red\], and $a'\equiv b'|_{I'}$ is equivalent to $a\mathop\equiv\limits^{(c)}b|_{I,J}$ by definition.
\[col:red\] Fix network $\mathcal{N}$ and function $f$. Then, i) for any $C\in
\Lambda(\mathcal{N})$, we have $R_{C,f}\geq W_{C,f}$; ii) for any $C,C'\in
\Lambda(\mathcal{N})$ with $C'\subset C$ and $I_{C'}=I_C$, we have $W_{C',f}\geq W_{C,f}$.
Let $I=I_C$, $J=J_C$ and $J'=J_{C'}$. Apparently, $J_{C'}\subseteq J_C$. By Lemma \[lem:red\], for $c\in\mathcal{A}^{|J_C|}$, we have $$W_{I,J',f}^{(c')}\geq W_{I,J,f}^{(c)},$$ where $c'=c_{{\mathbf{i}}_{J}(J')}$. In particular, $$R_{I,f}\geq W_{I,J,f}^{(c)}.$$ Then $R_{I,f}\geq \max_{c\in \mathcal{A}^{|J|}} W_{I,J,f}^{(c)} =
W_{C,f}$.
Fix $c^*\in\mathcal{A}^{|J_C|}$ such that $W_{I,J,f}^{(c^*)}=
W_{C,f}$. Then, $W_{C',f} \geq W_{I,J',f}^{(c'')}\geq
W_{I,J,f}^{(c^*)} = W_{C,f}$ where $c''=c^*_{{\mathbf{i}}_J(J')}$.
Define $${\textrm{min-cut}}_{\text{A}}(\mathcal{N},f)=\min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}R_{C,f}}.$$ By Lemma \[col:red\], we have ${\textrm{min-cut}}(\mathcal{N},f)\geq {\textrm{min-cut}}_{\text{A}}(\mathcal{N},f)$. It is claimed in [@Appuswamy11 Theorem II.1] that ${\textrm{min-cut}}_{\text{A}}(\mathcal{N},f)$ is an upper bound on $\mathcal{C}(\mathcal{N},f)$. We find, however, that ${\textrm{min-cut}}_{\text{A}}(\mathcal{N},f)$ is not universally an upper bound on the computing capacity. Consider the example in Fig. \[fig:1\]. For the cut set $C_1=\{e_4,e_6,e_7\}$, we have $I_{C_1}=\{ 1,
3\}$. On the other hand, it can be proved that $R_{{C_1},f}=4$ since i) $f$ is an affine function of $x_2$ given that $x_1$ and $x_3$ are fixed, and ii) it takes $2$ bits to represent this affine function over the binary field. Hence $${\textrm{min-cut}}_{\text{A}}(\mathcal{N}_1,f) \leq
\dfrac{|C_1|}{\log_{|\mathcal{A}|}R_{C_1,f}}=\dfrac{3}{2} < 2 = \mathcal{C}(\mathcal{N}_1,f).$$
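The value $R_{C_1,f}=4$ can be checked by the same kind of brute-force enumeration: two pairs $(x_1,x_3)$ are $\{1,3\}$-equivalent exactly when they induce the same function of $x_2$. A short Python sketch:

```python
from itertools import product

def f(x1, x2, x3):
    return (x1 * x2 + x3) % 2                           # target function over GF(2)

# profile of (x1, x3): the map x2 -> f(x1, x2, x3)
profiles = {(x1, x3): tuple(f(x1, x2, x3) for x2 in [0, 1])
            for x1, x3 in product([0, 1], repeat=2)}
print(len(set(profiles.values())))                      # prints 4, i.e. R_{C_1,f} = 4
```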
For a network $\mathcal{N}$ as defined in Section \[sec:funct-comp-netw\], we say a subset of nodes $U\subset
\mathcal{V}$ is a *cut* if $|U\cap S|>0$ and $\rho \notin U$. For a cut $U$, denote by $\mathcal{E}(U)$ the cut set determined by $U$, i.e., $$\mathcal{E}(U) = \{e\in \mathcal{E}: {\mathrm{tail}}(e)\in U, {\mathrm{head}}(e)\in
\mathcal{V}\setminus U\}.$$ Let $$\Lambda^*(\mathcal{N}) = \{\mathcal{E}(U): U \text{ is a cut in }
\mathcal{N}\}.$$ Define $${\textrm{min-cut}}_{\text{K}}(\mathcal{N},f)=\min_{C\in\Lambda^*(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}R_{C,f}}.$$ Since $\Lambda^*(\mathcal{N}) \subset
\Lambda(\mathcal{N})$, ${\textrm{min-cut}}_{\text{K}}(\mathcal{N},f) \geq
{\textrm{min-cut}}_{\text{A}}(\mathcal{N},f)$. It is implied by [@Kowshik12 Lemma 3] that ${\textrm{min-cut}}_{\text{K}}(\mathcal{N},f)$ is an upper bound on $\mathcal{C}(\mathcal{N},f)$. However, ${\textrm{min-cut}}_{\text{K}}(\mathcal{N},f)$ is also not universally an upper bound for the computing capacity. Consider the example in Fig. \[fig:1\]. For the cut $U_1 = \{1,3,v\}$, the corresponding cut set $\mathcal{E}(U_1)= C_1=\{e_4,e_6,e_7\}$. Hence, $${\textrm{min-cut}}_{\text{K}}(\mathcal{N}_1,f) \leq
\dfrac{|C_1|}{\log_{|\mathcal{A}|}R_{C_1,f}}=\dfrac{3}{2} < 2 = \mathcal{C}(\mathcal{N}_1,f).$$
Though in general the upper bounds in [@Appuswamy11; @Kowshik12] are not valid, for various special cases discussed in [@Appuswamy11], e.g., multi-edge tree networks, these bounds still hold.
Tightness {#sec:tightness-results}
---------
The upper bound in Theorem \[theo1\] is tight when the network is a multi-edge tree. We may alternatively prove the same result using [@Appuswamy11 Theorem III.3], together with the facts that $${\textrm{min-cut}}(\mathcal{N},f)=\min_{C\in\Lambda^*(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}W_{C,f}}$$ and that for a multi-edge tree network, $W_{C,f}=R_{C,f}$ for $C\in \Lambda^*(\mathcal{N})$.
\[theo:2\] If $G$ is a multi-edge tree, for network $\mathcal{N}=(G,S,\rho)$ and any target function $f$, $$\mathcal{C}(\mathcal{N},f)={\textrm{min-cut}}(\mathcal{N},f).$$
Fix a pair $(n,k)$ such that $\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|\leq{\textrm{min-cut}}(\mathcal{N},f)$. It suffices to show that there exists an $(n,k)$ code computing $f$ on $\mathcal{N}$. We have $$W_{C,f}^{k}\leq |\mathcal{B}|^{n|C|}\label{tr:1}$$ for all $C\in\Lambda(\mathcal{N})$. For node $u$, define
$$\begin{aligned}
P(u) & = & \left\{ v\in S \mid v\rightarrow u \right\},\\
\mathrm{prec}(u) & = & \left\{ v\in\mathcal{V} \mid (v,u)\in\mathcal{E} \right\}.\end{aligned}$$
For every node $u\in\mathcal{V}\setminus\{\rho\}$, ${\mathcal{E}_{\mathrm{o}}}(u)$ is a cut set, and
$$\begin{aligned}
I_{{\mathcal{E}_{\mathrm{o}}}(u)} & = & P(u), \qquad J_{{\mathcal{E}_{\mathrm{o}}}(u)}=\emptyset,\\
R_{{\mathcal{E}_{\mathrm{o}}}(u),f}^{k} & \leq & |\mathcal{B}|^{n|{\mathcal{E}_{\mathrm{o}}}(u)|}.\label{tr:2}\end{aligned}$$
We assign each $u$ a function $$\gamma_u:\mathcal{A}^{|P(u)|}\rightarrow \left\{
1,2,\dots,R_{P(u),f} \right\}$$ such that $$\gamma_u(x)=\gamma_u(y)\Leftrightarrow x\equiv y|_{P(u)}.$$ For any $s\in\mathcal{A}^{|P(u)|}$, the value $\gamma_u(s)$ determines the equivalence class $s$ lies in. For $x \in \mathcal{A}^{k\times |P(u)|}$, let $$\gamma_u^k(x) = (\gamma_u(x^1), \gamma_u(x^2), \ldots, \gamma_u(x^k))^{\top}.$$
Then consider the following $(n,k)$ code, for which we claim that $\gamma_u^k(x_{P(u)})$ can be computed by every node $u$, given the initial input $x\in\mathcal{A}^{k\times s}$. This claim is proved inductively along with the description of the code.
Each source node $i\in S$ computes $\gamma_i^k(x_i)$. Note that there are only $R_{ \left\{ i \right\},f}^k\leq
|\mathcal{B}|^{n|{\mathcal{E}_{\mathrm{o}}}(i)|}$ possible outputs of $\gamma_i^k$. So we can encode each output into a distinct string of $\mathcal{B}^{n|{\mathcal{E}_{\mathrm{o}}}(i)|}$, and send the $n|{\mathcal{E}_{\mathrm{o}}}(i)|$ entries of the string in $n$ uses of the edges in ${\mathcal{E}_{\mathrm{o}}}(i)$. Thus the claim holds for all source nodes $i$.
For an intermediate node $u$, let $m=|\mathrm{prec}(u)|$ and denote $\mathrm{prec}(u)=\left\{ v_1,\dots,v_m \right\}$. Assume that the claim holds for all $v\in\mathrm{prec}(u)$. Node $u$ first recovers $\gamma_v^k(x_{P(v)})$ for all $v\in\mathrm{prec}(u)$ using the symbols received from ${\mathcal{E}_{\mathrm{i}}}(u)$. This is possible by the induction hypothesis and ${\mathcal{E}_{\mathrm{o}}}(v)\subset {\mathcal{E}_{\mathrm{i}}}(u)$ for all $v\in\mathrm{prec}(u)$. Then node $u$ fixes $s\in\mathcal{A}^{k\times|P(u)|}$ such that $\gamma_{v_j}^k(s_{{\mathbf{i}}_{P(u)}(P(v_j))})=\gamma_{v_j}^k(x_{P(v_j)})$ holds for all $1\leq j\leq m$, i.e. $$\label{tr:3}s^i_{{\mathbf{i}}_{P(u)}(P(v_j) )}\equiv x^i_{P(v_j)}|_{P(v_j)},\forall 1\leq j\leq m,1\leq i\leq k.$$ Such an $s$ can be found by enumerating the matrices in $\mathcal{A}^{k\times|P(u)|}$.
By (\[tr:2\]), we can encode each output of $\gamma_u^k$ into a distinct string of $\mathcal{B}^{n|{\mathcal{E}_{\mathrm{o}}}(u)|}$. In this $(n,k)$ code, node $u$ encodes $\gamma_u^k(s)$ and sends the $n|{\mathcal{E}_{\mathrm{o}}}(u)|$ entries of the string in $n$ uses of the edges in ${\mathcal{E}_{\mathrm{o}}}(u)$.
We claim that $\gamma_u^k(s) = \gamma_u^k(x_{P(u)})$. As $G$ is a tree, we have $$P(u)=\cup_{1\leq j\leq m}P(v_j)$$ where $P(v)\cap P(v')=\emptyset$ for distinct $v,v'\in\mathrm{prec}(u)$. Then, for each $1\leq i\leq k$, define a sequence of strings $\left\{ a_j \right\}_{j=0}^m$, $a_j\in\mathcal{A}^{1\times |P(u)|}$, as follows: $$a_0=s^i,\quad a_m=x^i_{P(u)},$$ $$(a_j)_{{\mathbf{i}}_{P(u)}(P(v_l))}=\begin{cases}s^i_{{\mathbf{i}}_{P(u)}(P(v_l))}, & l>j;\\x^i_{P(v_l)}, & l\leq j.\end{cases}$$
Consider strings $a_{j-1}$ and $a_{j}$, for all $1\leq j\leq m$. In Lemma \[lem:red2\], let
$$\left\{
\begin{array}[]{l}
a'=a_{j-1},\\
b'=a_j,\\
a=(a')_{{\mathbf{i}}_{P(u)}(P(v_j))}=(s^i)_{{\mathbf{i}}_{P(u)}(P(v_j))},\\ b=(b')_{{\mathbf{i}}_{P(u)}(P(v_j))}=(x^i)_{P(v_j)},\\
c=(a')_{{\mathbf{i}}_{P(u)}(P(u)\setminus P(v_j))}=(b')_{{\mathbf{i}}_{P(u)}(P(u)\setminus P(v_j))},
\end{array}
\right.$$
we have $a\equiv b|_{P(v_j)}$ by (\[tr:3\]), and hence $a'\equiv b'|_{P(u)}$, i.e., $a_{j-1}\equiv a_j|_{P(u)}$, for all $1\leq j\leq m$. By transitivity, $$a_0=s^i\equiv a_m=x^i_{P(u)}|_{P(u)}$$ for all $1\leq i\leq k$. Therefore $s^i\equiv x^i_{P(u)}|_{P(u)}$ always holds, and we have $$\gamma_u^k(s)=\gamma_u^k(x_{P(u)}),$$ finishing the induction.
Finally, the sink node can compute $\gamma_\rho^k(x)$, which determines $f^{(k)}(x)$.
The upper bound in Theorem \[theo1\] is not tight in certain cases. Consider the network $\mathcal{N}_2$ in Fig. \[fig:5\](a) provided in [@Appuswamy11]. Note that in $\mathcal{N}_2$, source nodes $1$ and $2$ have incoming edges. To match our model described in Section \[sec:funct-comp-netw\], we can modify $\mathcal{N}_2$ to $\mathcal{N}_2'$ shown in Fig. \[fig:5\](b), where the number of edges from node $i$ to node $i'$ is infinite, $i=1,2$. Every network code in $\mathcal{N}_2'$ naturally induces a network code in $\mathcal{N}_2$ and vice versa. Hence, we have
$$\mathcal{C}(\mathcal{N}_2,f) = \mathcal{C}(\mathcal{N}_2',f).$$
We then evaluate ${\textrm{min-cut}}(\mathcal{N}_2',f)$. Note that $$\dfrac{|C|}{\log_{|\mathcal{A}|}W_{C,f}}<\infty$$ holds only if $|C|<\infty$, so we can restrict attention to finite cut sets. For a finite cut set $C$, let $C'= C\cap\left\{ e_1,\dots,e_4 \right\}$. We have $|C'|\leq|C|$ and $J_{C'}\subseteq J_C$, and we claim $I_{C'}=I_C$. Note that $I_{C'}\subseteq I_C$. Suppose that there exists $i\in I_C\setminus I_{C'}$; then there exists a path from $i$ to $\rho$ which is disjoint from $C'$ but shares a subset $D$ of edges with $C$. Then $D\subset {\mathcal{E}_{\mathrm{o}}}(i)$ and hence $|D|=1$. We simply replace the edge in $D$ by an arbitrary edge in ${\mathcal{E}_{\mathrm{o}}}(i)\setminus C$ and form a new path from $i$ to $\rho$. This is always possible, since $C\cap {\mathcal{E}_{\mathrm{o}}}(i)$ is finite while ${\mathcal{E}_{\mathrm{o}}}(i)$ is not. The newly formed path is disjoint from $C$, so $i\notin I_C$, a contradiction.
According to Lemma \[col:red\], we have $W_{C',f}\geq W_{C,f}$ and hence $\frac{|C'|}{\log_{|\mathcal{A}|}W_{C',f}}\leq\frac{|C|}{\log_{|\mathcal{A}|}W_{C,f}}$. Therefore we can consider only cut sets $C'\subseteq\left\{ e_1,e_2,e_3,e_4 \right\}$. We then have ${\textrm{min-cut}}(\mathcal{N}_2',f) = 1$, where the minimum is attained by the cut set $\left\{ e_2,e_4 \right\}$. For network $\mathcal{N}_2$, on the other hand, it has been proved in [@Appuswamy11] that $\mathcal{C}(\mathcal{N}_2,f)=\log_6 4 < 1$. Hence ${\textrm{min-cut}}(\mathcal{N}_2',f)=1>\mathcal{C}(\mathcal{N}_2',f)$.
Proof of Main Theorem {#sec:proof-main-theorem}
=====================
To prove Theorem \[theo1\], we first give the definition of F-extension and two lemmas.
\[def:3\]\[F-Extension\] Given a network $\mathcal{N}$ and a cut set $C\in\Lambda(\mathcal{N})$, define $D(C)\subseteq \mathcal{E}$ as $$D(C)=\bigcup_{ i\notin I_C}{\mathcal{E}_{\mathrm{o}}}( i).$$ Then the F-extension of $C$ is defined as $$F(C)=C\cup D(C).$$
\[lem1\] For every cut set $C$, $F(C)$ is a global cut set, i.e. $$\forall C\in\Lambda(\mathcal{N}), I_{F(C)}=S.$$
Clearly, $I_C\subseteq I_{F(C)}$, then it suffices to show that for all $ i\notin I_C$, we have $ i\in I_{F(C)}$. This is true, since ${\mathcal{E}_{\mathrm{o}}}( i)\subseteq F(C)$ and $i\in I_{{\mathcal{E}_{\mathrm{o}}}(
i)}$ imply $i\in I_{F(C)}.$
\[lem2\] Consider an $(n,k)$ network code in $\mathcal{N}=(G,S,\rho)$. For any global cut set $C$, $\psi(x)$ is a function of $g^{C}(x)$, i.e., $\psi(x) = \psi^C(g^C(x))$ for a certain function $\psi^C$.
Let $C$ be a global cut set of $\mathcal{N}$, and let $G_C$ be the subgraph of $G$ formed by the (largest) connected component of $G$ including $\rho$ after removing $C$ from $\mathcal{E}$. Let $S_C$ be the set of nodes in $G_C$ that do not have incoming edges. Since $G_C$ is also a DAG, $S_C$ is not empty. For each node $u\in S_C$, we have i) $u$ is not a source node in $\mathcal{N}$, since otherwise $C$ would not be a global cut set, and ii) all the incoming edges of $u$ in $G$ are in $C$, since otherwise $G_C$ could be enlarged. For each node $u$ in $G_C$ but not in $S_C$, the incoming edges of $u$ are either in $G_C$ or in $C$, since otherwise the cut set $C$ would not be global. If we can show that for any edge $e$ in $G_C$, $g^e(x)$ is a function of $g^C(x)$, then $\psi(x)=\varphi(g^{{\mathcal{E}_{\mathrm{i}}}(\rho)}(x))$ is a function of $g^C(x)$.
Suppose that $G_C$ has $K$ nodes. Fix an order on the set of nodes in $G_C$ that is consistent with the partial order induced by $G_C$, and number these nodes as $u_1<\ldots< u_K$, where $u_K=\rho$. Denote by ${\mathcal{E}_{\mathrm{o}}}(u|G_C)$ the set of outgoing edges of $u$ in $G_C$. We claim that $g^{{\mathcal{E}_{\mathrm{o}}}(u_i|G_C)}(x)$ is a function of $g^C(x)$ for $i=1,\dots,K$, which implies that for any edge $e$ in $G_C$, $g^e(x)$ is a function of $g^C(x)$. We prove this inductively. First $g^{{\mathcal{E}_{\mathrm{o}}}(u_1|G_C)}(x)$ is a function of $g^C(x)$ since $u_1\in S_C$ and hence all the incoming edges of $u_1$ in $G$ are in $C$. Assume that the claim holds for the first $k$ nodes in $G_C$, $k\geq
1$. For $u_{k+1}$, we have two cases: If $u_{k+1}\in S_C$, the claim holds since all the incoming edges of $u_{k+1}$ in $G$ are in $C$. If $u_{k+1}\notin S_C$, we know that ${\mathcal{E}_{\mathrm{i}}}(u_{k+1})\subset
\cup_{i=1}^k{\mathcal{E}_{\mathrm{o}}}(u_i|G_C) \cup C$. By the induction hypothesis, we have that $g^{{\mathcal{E}_{\mathrm{o}}}(u_{k+1}|G_C)}(x)$ is a function of $g^C(x)$. The proof is completed.
In the following proof of Theorem \[theo1\], it will be handy to extend the equivalence relation to a block of function inputs. For disjoint sets $I,J\subseteq S$ and $c\in\mathcal{A}^{1\times |J|}$ we say $a,b\in\mathcal{A}^{k\times |I|}$ are $(I,J,c)$-equivalent if for any $x,y\in\mathcal{A}^{k\times s}$ with $x_I=a,y_I=b,x_J=y_J=\left( c^\top,c^\top,\dots,c^\top \right)^{\top}$ and $x_{S\setminus (I\cup J)}=y_{S\setminus (I\cup J)}$, we have $f^{(k)}(x)=f^{(k)}(y)$. Then for the set $\mathcal{A}^{k\times |I|}$, the number of equivalence classes induced by this equivalence relation is $\left( W_{I,J,f}^{(c)}\right)^k$.
Suppose that we have a $(n,k)$ code with $$\label{eq:5}
\frac{k}{n} \log_{|\mathcal{B}|}|\mathcal{A}|>
{\textrm{min-cut}}(\mathcal{N},f).$$ We show that this code cannot compute $f(x)$ correctly for all $x\in
\mathcal{A}^{k\times s}$. Denote $$\label{eq:6}
C^*=\arg\min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}W_{C,f}}$$ and $$\label{eq:7}
c^*=\arg\max_{c\in\mathcal{A}^{|J_{C^*}|}}W^{(c)}_{I_{C^*},J_{C^*},f}.$$ By -, we have $$\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|>\frac{|C^*|}{\log_{|\mathcal{A}|}W_{I_{C^*},J_{C^*},f}^{(c^*)}},$$ which leads to $$\label{eq:3}
|\mathcal{B}|^{|C^*|n}<\left(W^{(c^*)}_{C^*,f}\right)^{k}.$$
Note that $g^{C^*}(x)$ only depends on $x_{K_{C^*}}$. By and the pigeonhole principle, there exist $a, b
\in \mathcal{A}^{k\times |I_{C^*}|}$ such that i) $a$ and $b$ are not $(I_{C^*},J_{C^*},c^*)$-equivalent and ii) $g^{C^*}(x)=g^{C^*}(y)$ for any $x,y\in\mathcal{A}^{k\times s}$ with $$\label{eq:4} \left\{
\begin{array}[]{l}
x_{I_{C^*}} = a,\quad y_{I_{C^*}} = b, \\
x_{J_{C^*}} = y_{J_{C^*}}=(c^{*\top},c^{*\top},\dots,c^{*\top})^\top, \\
x_{S\setminus K_{C^*}} = y_{S\setminus K_{C^*}}.
\end{array}
\right.$$ Fix $x,y\in\mathcal{A}^{k\times s}$ satisfying  and $f^{(k)}(x)\neq f^{(k)}(y)$. The existence of such $x$ and $y$ is due to i). Since $C^*$ and $D(C^*)$ are disjoint (see Definition \[def:3\]) and $x_i=y_i$ for any $i\notin I_{C^*}$, together with ii), we have $$g^{F(C^*)}(x) = g^{F(C^*)}(y).$$ Thus, applying Lemma \[lem2\] we have $\psi(x)=\psi(y)$. Therefore, the code cannot compute both $f^{(k)}(x)$ and $f^{(k)}(y)$ correctly. The proof is completed.
A. Giridhar and P. Kumar, “Computing and communicating functions over sensor networks,” *IEEE Journal on Selected Areas in Communications*, vol. 23, no. 4, pp. 755–764, Apr. 2005.
R. Appuswamy, M. Franceschetti, N. Karamchandani, and K. Zeger, “Network coding for computing: Cut-set bounds,” *IEEE Transactions on Information Theory*, vol. 57, no. 2, pp. 1015–1030, Feb. 2011.
H. Kowshik and P. Kumar, “Optimal function computation in directed and undirected graphs,” *IEEE Transactions on Information Theory*, vol. 58, no. 6, pp. 3407–3418, Jun. 2012.
A. Ramamoorthy and M. Langberg, “Communicating the sum of sources over a network,” *IEEE Journal on Selected Areas in Communications*, vol. 31, no. 4, pp. 655–665, Apr. 2013.
R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network information flow,” *IEEE Transactions on Information Theory*, vol. 46, no. 4, pp. 1204–1216, Jul. 2000.
S.-Y. R. Li, R. W. Yeung, and N. Cai, “Linear network coding,” *IEEE Transactions on Information Theory*, vol. 49, no. 2, pp. 371–381, Feb. 2003.
R. Koetter and M. Medard, “An algebraic approach to network coding,” *IEEE/ACM Transactions on Networking*, vol. 11, no. 5, pp. 782–795, Oct. 2003.
R. Appuswamy and M. Franceschetti, “Computing linear functions by linear coding over networks,” *IEEE Transactions on Information Theory*, vol. 60, no. 1, pp. 422–431, Jan. 2014.
B. Rai and B. Dey, “On network coding for sum-networks,” *IEEE Transactions on Information Theory*, vol. 58, no. 1, pp. 50–63, Jan. 2012.
[^1]: This work was supported in part by the National Basic Research Program of China Grant 2011CBA00300, 2011CBA00301, the National Natural Science Foundation of China Grant 61033001, 61361136003, 61471215. The work described in this paper was partially supported by a grant from University Grants Committee of the Hong Kong Special Administrative Region, China (Project No. AoE/E-02/08).
[^2]: A function $f:\mathcal{A}\rightarrow \mathcal{A}$ is identity if $f(x)=x$ for all $x\in \mathcal{A}$.
[^3]: One use of a network means the use of each link in the network at most once.
[^4]: No proof is provided for the upper bound (Lemma 3) in [@Kowshik12].
|
---
abstract: 'We present a descriptive analysis of Twitter data. Our study focuses on extracting the main side effects associated with HIV treatments. The crux of our work was the identification of personal tweets referring to HIV. We summarize our results in an infographic aimed at the general public. In addition, we present a measure of user sentiment based on hand-rated tweets.'
author:
- |
Cosme Adrover, Ph.D. Todd Bodnar Marcel Salathé, Ph.D.\
Center for Infectious Disease Dynamics\
Pennsylvania State University\
208 Mueller Lab\
University Park, PA 16802, U.S.A.
title: 'Targeting [HIV-related]{} Medication Side Effects and Sentiment Using Twitter Data'
---
Introduction
============
Twitter is a popular micro-blogging platform where users share thoughts and emotions. Every day, hundreds of millions of tweets are posted on Twitter. This offers a large potential source of information for translational medicine; for instance, regions with a high likelihood of disease outbreaks can be predicted from tweeted vaccination sentiment [@marcel:2011]. Moreover, this large pool of available data can be used to estimate the most common side effects of medications, as quoted by online users.
The study that we present is based upon tweets filtered by specific keywords related to HIV and HIV treatments. Our goal was to determine if there are common side effects related to a particular HIV treatment, and to establish overall user sentiment.
Through our study, we defined methods to target populations of interest. We used crowdsourcing via Amazon Mechanical Turk to rate tweets and create training samples for our machine learning algorithms. On the analytical side, these algorithms were used to identify our targeted community: users with HIV whose tweets included references to treatments, symptomatic descriptions, opinions, feelings, etc.
Datasets
========
We used tweets collected between September 2010 and August 2013. These tweets were filtered by at least one of the following keywords contained within the tweet: Sustiva, Stocrin, Viread, FTC, Ziagen, 3TC, Epivir, Retrovir, Viramune, Edurant, Prezista, Reyataz, Norvir, Kaletra, Isentress, Tivicay, Atripla, Trizivir, Truvada, Combivir, Kivexa, Epzicom, Complera, Stribild, HIV treatment, HIV drug, anti-hiv, triple therapy hiv, anti hiv. The sample size is 39,988,306 tweets.
Processing of tweets {#processing}
====================
For each collected tweet, we created a list of all the distinct tokens contained in the tweet. If at least one keyword matched at least one item in the list, we kept that tweet; otherwise, we discarded it. This step reduced our data sample to about 1.8 million tweets, mainly by removing compound words such as *giftcard* that had triggered the keyword FTC and represented a large source of noise. Moreover, the subsample of tweets containing the keyword FTC presented a large bias towards information related to the *Federal Trade Commission*, and we decided to discard this subsample for this study. After this further reduction of the data, the sample to analyze contained 316,081 tweets.
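The minimal Python sketch below (not the code actually used, which relied on PostgreSQL lexemes as described in the Appendix) illustrates the token-level keyword matching: a tweet is kept only if a keyword matches a whole token, so compound words such as *giftcard* no longer trigger the keyword FTC. The keyword subset and the tokenizer are illustrative assumptions.

```python
import re

KEYWORDS = {"truvada", "atripla", "ftc", "hiv treatment"}   # illustrative subset of the full list

def tokens(tweet):
    """Lower-cased set of distinct word tokens of a tweet."""
    return set(re.findall(r"[a-z0-9.\-']+", tweet.lower()))

def keep(tweet):
    toks = tokens(tweet)
    # multi-word keywords are checked against the raw lower-cased text
    return any(kw in toks or (" " in kw and kw in tweet.lower()) for kw in KEYWORDS)

assert keep("Started Truvada last week")      # whole-token match
assert not keep("Win a giftcard today!")      # 'ftc' buried in a compound word no longer matches
```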
Based on random trials of human-rated tweets, we discarded tweets containing the following words: *bit.ly, t.co, million, http, free, buy, news, de, e, za, que, en, lek, la, obat, da, majka, molim, hitno, mil, africa*; we also discarded tweets starting with *HIV*, tweets containing *3TC*, and tweets posted by users identified as non-English speakers on Twitter. We estimate that this removal led to a maximum loss of about 300 tweets from our targeted community.
Following the aforementioned processing of tweets, the sample was drastically reduced to 37,337 tweets, about 0.1$\%$ of the original sample size. In the next section, we describe how we identified tweets posted by our community of interest.
Identification of targeted community
====================================
Our goal was to identify the targeted community of users making personal posts about HIV within the remaining noise contained in our data. This noise mainly consists of tweets with an objective message, such as news regarding a particular HIV treatment. To remove the noise and attain our goal, we defined a set of features that transform each tweet into quantitative information. These features were selected based on their power to separate objective information from subjective sentences charged with personal references. Tab. 1 summarizes the description of each feature used in our analysis. Fig. \[variables\] displays two of these features obtained using crowdsource-rated data for both the targeted community and the noise. This figure shows that these features are indeed useful tools to distinguish the targeted community from noise, our main goal in this section [@featuremine].
![Distributions of tagnoun (top) and personalcount (bottom) for noise (black dots) and targeted community (red). The statistical uncertainties are represented on the vertical axis. The horizontal red lines are bins and not uncertainties.[]{data-label="variables"}](tagnoun.pdf "fig:"){height="2in"}\
![Distributions of tagnoun (top) and personalcount (bottom) for noise (black dots) and targeted community (red). The statistical uncertainties are represented on the vertical axis. The horizontal red lines are bins and not uncertainties.[]{data-label="variables"}](personalcount.pdf "fig:"){height="2in"}\
\[features\]
Feature Definition
---------------- --------------------------------------------------------------------------------------------------
personalcount Number of first person pronouns in tweet.
is\_notenglish Number of words contained in a list of most common words in tweets rated as not english.
tagnoun Number of nouns in tweet divided by the total number of words.
sis\_noise Ratio of a measure of similarity between tweet and noise corpus by its uncertainty.
sis\_signal Ratio of a measure of similarity between tweet and targeted community corpus by its uncertainty.
bigrams\_noise Weighted number of bigrams in tweet contained in corpus of noise tweets.
is\_english Ratio of number of words in English by number of words in foreign languages.
common\_noise Weighted number of words in tweet similar to common words in noise corpus.
common\_signal Weighted number of words in tweet similar to common words in targeted community corpus.
wordscount Number of words in tweet.
ncharacters Number of characters in tweet.
At the same time, we aimed to remove non-English tweets. The method that we propose applies the following requirements: *is\_english*$\geq$1, *ncharacters*$<$150, *in\_notenglish*$<$14 and $\frac{\emph{wordscount}}{\emph{in\_notenglish}}>1$. We estimated that these requirements lead to a 6$\%$ loss of targeted tweets, while removing 20$\%$ of all remaining noise and 94$\%$ of non-English tweets.
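As an illustration, the language cuts above can be expressed as a simple per-tweet filter; the sketch below assumes the features have already been computed and stored in a dictionary, and the guard against a zero *in\_notenglish* count is our assumption.

```python
def passes_language_cuts(t):
    """t: dict with keys 'is_english', 'ncharacters', 'in_notenglish', 'wordscount'."""
    ratio = t["wordscount"] / max(t["in_notenglish"], 1)   # guard against division by zero
    return (t["is_english"] >= 1 and
            t["ncharacters"] < 150 and
            t["in_notenglish"] < 14 and
            ratio > 1)

example = {"is_english": 2.0, "ncharacters": 98, "in_notenglish": 3, "wordscount": 17}
print(passes_language_cuts(example))   # True for this example tweet
```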
We utilized the TMVA library [@tmva] to define our machine learning classifier. A Support Vector Machines classifier, trained using the variables personalcount, tagnoun, sis\_noise, sis\_signal, bigrams\_noise, is\_english, common\_noise, common\_signal and ncharacters, allowed us to reduce the noise by 80$\%$ while keeping 90$\%$ of the targeted tweets (Fig. \[comparison\]). We refer to this point on the ROC curve as our working region: it retains a large fraction of interesting tweets while removing enough noise to keep the subsequent crowdsourced rating within our budget.
We were confident in the performance of our classifier, as the results obtained using a different testing (cross-check) sample agree within the statistical uncertainties (Fig. \[comparison\]). Tab. 2 summarizes the data sample sizes used for training, testing and validating our classifier. We also tested other algorithms such as Boosted Decision Trees and Artificial Neural Networks, but Support Vector Machines outperform both in our region of interest.
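For readers who prefer Python, the sketch below reproduces the spirit of this step with scikit-learn's SVC as a stand-in for the TMVA Support Vector Machine actually used; the feature matrix and labels are placeholders, not our data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 9))                  # placeholder for the nine features of each tweet
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)  # placeholder labels: 1 = targeted, 0 = noise

clf = SVC(probability=True).fit(X, y)
scores = clf.predict_proba(X)[:, 1]

fpr, tpr, _ = roc_curve(y, scores)
i = np.argmin(np.abs(tpr - 0.90))              # working point quoted in the text: ~90% signal efficiency
print("signal efficiency %.2f, noise rejection %.2f" % (tpr[i], 1 - fpr[i]))
```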
![Classifier efficiency on targeted tweets (Signal) versus noise rejection efficiency for two different testing samples. Gray bands represent statistical uncertainties.[]{data-label="comparison"}](comparison){width="2.5in" height="2in"}
\[yields\]
Data Type Training Testing Validation
----------- ---------- --------- ------------
Noise 603 603 603
Targeted 49 30 30
: Number of tweets used to train, test and validate our classifier, for both noise and targeted HIV-related tweets.
After using the trained classifier to reduce our sample of 37,337 tweets, we obtained 5,543 tweets. These were then sent for crowdsourced rating, leading to 1,642 tweets from our targeted community, posted by 518 unique users. Taking into account the analysis requirements, this final number of tweets represents approximately 75$\%$ of the starting targeted community.
A more detailed version of this section, as well as the relevant code, is available in the Appendix and at [cosme-adrover.com](http://www.cosme-adrover.com).
Analysis of targeted community tweets
=====================================
In the previous section we described how we identified users that tweet about their daily lives in the context of HIV. We present in Fig. \[drugstime1\] a visualization of HIV drugs mentioned by users from 09/09/2010 through 08/28/2013. In this plot, made using ggplot [@ggplot], we depict the seven most mentioned drugs separately and group the rest under the label ’*Other*’. Fig. \[drugstime1\] presents a peak during the first half of 2012. The total number of tweets is shared among all drugs roughly equally in the first two bins. This trend disappears after the third bin, where Atripla receives more explicit mentions. The only period where Atripla$\copyright$ is not ranked first in terms of mentions spans from May through August 2012, when Truvada$\copyright$ gained popularity.
![Drug mentions from 09/09/2010 until 08/28/2013. Each bin spans a total of 60 days.[]{data-label="drugstime1"}](final_drug_on_time){width="3.5in" height="2.5in"}
In Fig. \[drugstimenortsides\] we study drug appearances over time for tweets that are not re-tweets and that contain references to side effects. These side effects were hand-rated. Fig. \[drugstimenortsides\] presents a clear peak during March and April 2012. Moreover, we see that mentions of Truvada$\copyright$, used as part of a strategy to reduce the risk of HIV infection, are strongly reduced compared to the Spring-Summer 2012 period of Fig. \[drugstime1\].
![Drug mentions from 09/09/2010 until 08/28/2013 in tweets that are not re-tweets and have mentions to side effects. Each bin spans a total of 60 days.[]{data-label="drugstimenortsides"}](final_drug_on_time_withsides){width="3.5in" height="2.5in"}
We investigated possible reasons for the peaks in Figs. \[drugstime1\], \[drugstimenortsides\] and conclude that one possible explanation is that many new users shared their side effects for the first time during this period (Fig. \[drugstimenortsidesunique\]).
![Drug mentions from 09/09/2010 until 08/28/2013. Each bin spans a total of 60 days. Retweets are excluded, only tweets with mentions to side effects and from unique users over time are considered.[]{data-label="drugstimenortsidesunique"}](final_drug_on_time_withsides_unique){width="3.5in" height="2.5in"}
Study of Side Effects
---------------------
In this section we present an infographic (Fig. \[drugsides\]) that summarizes the most common side effects associated with certain HIV drugs. We designed this infographic to be accessible to the mainstream community. To avoid double counting, a *drug-effect* pair is considered only once for a given user. We only present pairs with at least three appearances in our samples. The numbers near the drug labels represent the number of times each drug is mentioned.
In conclusion, most users report problems regarding their sleep, be it nightmares, vivid dreams, or lack of sleep. They also report nausea and headaches, as well as symptoms comparable to the effects of illegal psychoactive drugs. About 10$\%$ of the selected relevant tweets indicate no side effects.
{width="9in"}
Analysis of user sentiment
--------------------------
With our isolated sample of HIV patients, we aimed to study sentiment over time. For this study, we hand-rated tweets from -5 to 5 in steps of 1, where -5 indicates an extremely negative feeling and 5 an extremely positive one.
For each of the 60-day bins presented in the previous section, we computed the sum of the sentiments of all tweets:
$$\Psi \equiv \sum_{i=1}^{N_{\mathrm{tweets\,in\,bin}}} \mathrm{sentiment}(\mathrm{tweet}_i)$$
We assign a systematic uncertainty of 1 to each rating, which leads to a total uncertainty in $\Psi$ of $\sqrt{N}$, where $N$ is the number of tweets in a given bin. Ideally, the systematic uncertainty would be derived from the distribution of several independent ratings per tweet; such an approach is beyond the scope of this study in terms of precision and budget.
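A minimal sketch of the binned score defined above, assuming tweet timestamps have been converted to days since the start of the survey, is the following.

```python
import numpy as np

def binned_sentiment(days, sentiments, bin_width=60):
    """days: days since the survey start; sentiments: hand ratings in [-5, 5]."""
    days, sentiments = np.asarray(days), np.asarray(sentiments)
    edges = np.arange(0, days.max() + bin_width, bin_width)
    idx = np.digitize(days, edges) - 1
    psi = np.array([sentiments[idx == b].sum() for b in range(len(edges) - 1)])
    n = np.array([(idx == b).sum() for b in range(len(edges) - 1)])
    return psi, np.sqrt(n)        # Psi and its sqrt(N) uncertainty, per 60-day bin

psi, err = binned_sentiment([3, 10, 70, 75, 130], [-2, 1, -3, -1, 0])
print(psi, err)                   # [-1 -4  0] with uncertainties [1.41 1.41 1.]
```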
Fig. \[sentimentall\] shows the computed $\Psi$ in each bin, defined as in Fig. \[drugstime1\], for tweets with no requirements imposed. This figure indicates an average negative sentiment, with departures from neutrality compatible with zero within the uncertainties. This compatibility is less evident in bins 12 through 16. We studied the effect on the sentiment of excluding retweets and conclude that the differences are minimal compared to the total sample. Moreover, we compute the correlation between sentiment and number of drug appearances to be lower than 2$\%$.
In Fig. \[sentimentallnortsides\] we exclude retweets and consider only tweets with explicitly mentioned side effects. The sentiment in this last figure clearly drifts towards negative values. Moreover, the dip in the distribution occurs at the same time as the peak of the distribution of mentions (Fig. \[drugstimenortsides\]).
Plots for specific appearances of Atripla$\copyright$, Truvada$\copyright$, and Complera$\copyright$ indicate overall negative sentiment with the exception of Complera (see Figs \[allatripla\], \[alltruvada\], \[allcomplera\]).
![Sentiment score $\Psi$ as a function of time. No requirements are applied to tweets.[]{data-label="sentimentall"}](all){width="3in" height="2in"}
![Sentiment score $\Psi$ as a function of time. Retweets are excluded and only tweets with explicit side effects are considered.[]{data-label="sentimentallnortsides"}](allnortsides){height="2in"}
![Sentiment score $\Psi$ as a function of time for tweets with mentions of Atripla$\copyright$. No requirements are applied to tweets.[]{data-label="allatripla"}](allatripla){height="2in"}
![Sentiment score $\Psi$ as a function of time for tweets with mentions of Truvada$\copyright$. No requirements are applied to tweets.[]{data-label="alltruvada"}](alltruvada){height="2in"}
![Sentiment score $\Psi$ as a function of time with mentions of Complera$\copyright$. No requirements are applied to tweets.[]{data-label="allcomplera"}](allcomplera){height="2in"}
Conclusions
===========
We have presented a descriptive analysis of tweets collected from September 2010 through August 2013 and filtered by HIV-related keywords. After applying data cleaning, machine learning and human rating, we identify 1,642 tweets and 518 unique users describing personal information regarding HIV.
This study explored ways in which to use Twitter data as a source of information for translational medicine. We presented an infographic summarizing the main side effects associated with HIV treatments. Our results indicate that the main concerns of Twitter users regarding HIV medication are related to sleeping issues. Many users report symptoms such as nausea, fatigue, and dizziness, at times likened to being under the effect of psychoactive drugs.
The sentiment associated with tweets of the identified targeted community shows no correlation with drug appearance. The overall underlying sentiment is negative and, for tweets that contain references to side effects (excluding retweets), not compatible with neutrality within the uncertainties.
Disclaimer
==========
The results presented in this paper should be taken as they are: a description of observations in data streamed from Twitter. In this paper, the authors report their findings and do not claim that the side effects quoted by Twitter users are directly caused by the listed drugs. Moreover, the authors declare no competing financial interests.
[10]{}
Salath[é]{}, Marcel and Khandelwal, Shashank, “Assessing vaccination sentiments with online social media: implications for infectious disease dynamics and control”, PLoS computational biology, 2011
Ryen W. White *et. al.*, “Web-Scale Pharmacovigilance: Listening to Signals from the Crowd”, J Am Med Inform Assoc, 2012
Hoecker, Andreas and Speckmayer, Peter and Stelzer, Joerg and Therhaag, Jan and von Toerne, Eckhard and Voss, Helge, “TMVA: Toolkit for Multivariate Data Analysis”, PoS, 2007, physics/0703039, arXiv
G. van Rossum and F.L. Drake (eds), “Python Reference Manual”, PythonLabs, Virginia, USA, 2001. Available at <http://www.python.org>
Hadley Wickham, “ggplot2: elegant graphics for data analysis”, Springer New York, 2009, 978-0-387-98140-6, <http://had.co.nz/ggplot2/book>
Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E., “Scikit-learn: Machine Learning in Python”, Journal of Machine Learning Research, 2011
PostgreSQL Global Development Group, “PostgreSQL”, <http://www.postgresql.org>, 2008
C. Adrover, “FeatureMine”, 2013, Available at <http://www.cosme-adrover.com>
R. Brun and F. Rademakers, “ROOT: An object oriented data analysis framework”, Linux Journal, Issue 51, July 1998, Metro Link Inc.
Bird, Steven, Edward Loper and Ewan Klein (2009). Natural Language Processing with Python. O’Reilly Media Inc.
“Scikit-learn: Machine Learning in Python”, Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E., Journal of Machine Learning Research, 2011
The dataset of tweets used in this paper was provided by Gnip and was 15 GB in size. We loaded several attributes of each tweet, and the tweet itself, into a PostgreSQL [@postgresql] database. Tab. 3 shows the tweet attributes created in the database that are relevant for the study presented here.
\[varsdatabase\]
Name Description Type
------------ ------------------------------ ------------------
tweet Posted tweet text
user\_id User’s identification number bigint
date Day that tweet was posted date
tweet\_id Specific tweet identifier bigint
user\_lang Language on user’s account char varying(15)
: Tweet attribute (left), attribute description (center) and type of variable defined in the database (right).
Processing of tweets {#processing-of-tweets .unnumbered}
====================
We loaded each tweet into a database and created an index to improve query throughput, using the GIN index type [@postgresql]. Then we applied the tsvector function [@postgresql] to each tweet, which transformed the tweet into a list of unique lexemes. This allowed us to check whether the keyword itself triggered the searching algorithm or was embedded within a compound word. We required at least one token to match one of our keywords. This requirement led to a sample of 1.8 million tweets. We then discarded tweets containing FTC, as mentioned in Sec. \[processing\], and estimated the most common tokens (Tab. 4).
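A minimal sketch of this indexing and lexeme-matching step, with illustrative table and column names and a hypothetical connection string, would look as follows (issued from Python, e.g. via psycopg2).

```python
import psycopg2

queries = [
    # full-text index over the tweet text
    """CREATE INDEX IF NOT EXISTS tweets_fts
           ON tweets USING GIN (to_tsvector('english', tweet));""",
    # keep only tweets in which a keyword matches a whole lexeme, so that
    # compound words such as 'giftcard' no longer trigger the keyword FTC
    """SELECT tweet_id, tweet FROM tweets
           WHERE to_tsvector('english', tweet) @@ to_tsquery('english', 'truvada | atripla');""",
]

conn = psycopg2.connect(dbname="tweets_db")    # hypothetical connection parameters
with conn, conn.cursor() as cur:
    for q in queries:
        cur.execute(q)
```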
\[tokens\]
Token Rows Total
----------- -------- --------
hiv 199159 227374
t.co 197078 202777
drug 94680 104102
rt 79181 85599
treatment 69529 73126
truvada 44325 49363
anti 44591 45885
onlin 27892 32538
buy 27363 30218
anti-hiv 29583 29900
prevent 28018 29151
new 26263 27667
fda 25850 27620
approv 24507 26773
retrovir 21734 25647
aid 23863 24804
bit.ly 22462 22932
news 16812 18168
generic 14851 17865
de 12959 17314
: Most common tokens (left column). Total number of appearances (right column) and number of rows or tweets it appears (middle column).
Hereafter signal denotes our targeted tweets posted by users with HIV, and noise denotes tweets about topics unrelated to our work, such as news. Our analysis started with loose criteria to reduce the noise in our sample. We selected three random samples of 500 tweets containing t.co, containing bit.ly, and starting with HIV, respectively. We also selected one random sample of 500 tweets not containing these words. We annotated all tweets in these random samples as either noise or signal; in the case of a slight indication of subjectivity, the tweet was rated as signal. We found 0, 1, and 1 possible signal tweets in the first three samples, respectively, and 24 possible signal tweets in the last sample, where tweets containing t.co or bit.ly or starting with HIV were excluded. From this last annotation we derived that 7.4 $\pm$ 2.7 possible signal tweets were expected to be found in 500 tweets taken from the original 316,081. The subsamples containing t.co or bit.ly, or starting with HIV, contained almost no possible signal tweets and were discarded from the following steps of the analysis. Although we expected to discard about 300 or 400 possible tweets, this cleaning process provided robustness to our procedure.
In a second step, we checked tweets containing http, news, or buy. We selected three random samples of 140 tweets and annotated zero signal tweets in each of the three cases. As we had found 24 possible signal tweets in a total of 500 tweets, we expected 6.7 $\pm$ 2.6 possible signal tweets per 140 tweets, assuming a Poisson distribution. We computed the probability of discarding a possible signal tweet when removing tweets containing http, news or buy to be $4 \times 10^{-6}$:
$$\label{eq:probabilities}
z > \frac{6.7-0}{2.6/\sqrt{3}} \rightarrow prob<0.000004,$$
where $z$ is the standard score and the probability is the corresponding tail of the cumulative normal distribution. After we discarded tweets containing either http, news, or buy, our dataset was reduced to 60,000 tweets.
In the final step we selected two samples of 150 tweets containing at least one of the words free, buy, de, e, za, que, en, lek, la, obat, da, majka, molim, hitno, mil, africa. We annotated zero possible signal tweets in both cases. We then estimated that the probability of losing possible signal tweets when removing tweets containing at least one of these words was less than $5 \times 10^{-6}$:
$$\label{eq:probabilities}
z > \frac{10.0-0}{3.2/\sqrt{2}} \rightarrow prob<0.000005$$
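The two tail probabilities quoted above can be checked numerically; the short sketch below uses scipy (an assumption of the sketch, not a dependency of the original analysis).

```python
from math import sqrt
from scipy.stats import norm

z1 = (6.7 - 0) / (2.6 / sqrt(3))   # first test: samples with http, news or buy
z2 = (10.0 - 0) / (3.2 / sqrt(2))  # second test: the extended word list

print(norm.sf(z1))   # ~4e-06, the first bound quoted above
print(norm.sf(z2))   # ~5e-06, the second bound quoted above
```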
At the end of our cleaning process, 37,337 tweets remained within our dataset.
Identification of targeted community {#identification-of-targeted-community-1 .unnumbered}
====================================
After we applied the cleaning process described in the previous section, our dataset contained a larger fraction of possible signal tweets. The main goal of our analysis was to obtain a pure sample of these tweets. In this section we detail the steps taken to reach such a sample.
Firstly, we quantified each tweet into a set of features and, secondly, we combined these features into a single classifier.
Feature extraction {#feature-extraction .unnumbered}
------------------
Hereafter we give a broader definition of all features extracted from each tweet (an illustrative sketch for a few of them is given after the list):
- *modalcount*: number of times the words “should”,\
“shoulda”, “can”, “could”, “may”, “might”, “must”, “ought”, “shall”, “would”, and “woulda” occur in the tweet;
- *futurecount*: number of times the words “going”, “will”, “gonna”, “should”, “shoulda”, “ll”, “d” occur in the tweet;
- *personalcount*: number of times the words “i”, “me”, “my”, “mine”, “ill”, “im”, “id”, “myself” occur in the tweet;
- *negative*: number of times the words “not”, “wont”, “nt”, “shouldnt”, “couldnt” occur in the tweet;
- *secondpron*: number of times the words “you”, “youll”, “yours”, “yourself” occur in the tweet;
- *thirdpron*: number of times the words “he”, “she”, “it”, “his”, “her”, “its”, “himself”, “him”, “herself”, “itself”, “they”, “their”, “them”, “themselves” occur in the tweet;
- *relatpron*: number of times the words “that”, “which”, “who”, “whose”, “whichever”, “whoever”, “whoever” occur in the tweet;
- *dempron*: number of times the words “this”, “these”, “that”, “those” occur in the tweet;
- *indpron*: number of times the words “anybody”, “anyone”, “anything”, “each”, “either”, “everyone”, “everything”, “neither”, “nobody”, “somebody”, “something”, “both”, “few”, “many”, “several”, “all”, “any”, “most”, “none”, “some” occur in the tweet;
- *intpron*: number of times the words “what”, “who”, “which”, “whom”, “whose” occur in the tweet;
- *percent*: number of $\%$ symbols in the tweet;
- *posnoise*: number of times the words “new”, “pill”, “state”, “states”, “stats”, “drug”, “people”, “approved”, “approve”, “approves”, “approval”, “approach”, “prevention”, “prevent”, “prevents”, “prevented” occur in the tweet;
- *is\_notenglish*: number of times words contained in a list of words extracted from annotated tweets as not English occur in the tweet;
- *regularpast*: number of words ending with $ed$ contained in the tweet;
- *gerund*: number of words ending with $ing$ contained in the tweet;
- *nment*: number of words ending with $ment$ contained in the tweet;
- *nfull*: number of words ending with $full$ contained in the tweet;
- *tagadj*: ratio of the number of adjectives tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagverb*: ratio of the number of verbs tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagprep*: ratio of the number of prepositions tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagnoun*: ratio of the number of nouns tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagconj*: ratio of the number of conjunctions tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagadv*: ratio of the number of adverbs tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagto*: ratio of the number of $to$ tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *tagdeterm*: ratio of the number of determinants tagged using NLTK [@nltk] in the tweet by the total number of words in the tweet;
- *sis\_noise*: ratio of the similarity of the tweet with a corpus of annotated noise tweets by its uncertainty. To compute the similarity we first create a sparsity matrix of the tokens in the annotated corpus, then count the number of times the token appears in the tweet and divide by the number of elements in the corpus. We use scikit-learn [@scikitlearn] library in several parts of the definition of sis\_noise;
- *sis\_signal*: ratio of the similarity of the tweet with a corpus of annotated signal tweets by its uncertainty. To compute the similarity we first create a sparsity matrix of the tokens in the annotated corpus, then count the number of times the token appears in the tweet and divide by the number of elements in the corpus. We use the scikit-learn [@scikitlearn] library in several parts of the definition of sis\_signal;
- *is\_english*: number of words in corpus of English words\
[@nltk] divided by number of words in corpuses of Spanish, Portuguese, French, German, Dutch, Italian, Russian, Swedish, and Danish [@nltk]. We add one in both numerator and denominator to avoid dividing by zero.
- *bigrams\_noise*: number of bigrams found in tweet that are contained in list of bigrams of noise annotated bigrams corpuses divided by the total number of bigrams from annotated corpuses;
- *bigrams\_signal*: number of bigrams found in tweet that are contained in list of bigrams of signal annotated bigrams corpuses divided by the total number of bigrams from annotated corpuses;
- *isolation*: number of keywords contained in tweet minus one;
- *common\_noise*: sum of the weights of each word contained in most common 25% of words in noise annotated tweets;
- *common\_signal*: sum of the weights of each word contained in most common 25% of words in signal annotated tweets;
- *wordscount*: number of words in tweet;
- *tweetlength*: number of characters in tweet.
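As anticipated above, the following minimal sketch (not the authors' code) shows how a few of these features can be computed for a single tweet, using NLTK for the part-of-speech tags; it assumes the relevant NLTK tagger data are installed.

```python
import re
import nltk   # assumes the NLTK part-of-speech tagger data are installed

PERSONAL = {"i", "me", "my", "mine", "ill", "im", "id", "myself"}

def features(tweet):
    words = re.findall(r"[a-z']+", tweet.lower())
    tags = [tag for _, tag in nltk.pos_tag(words)]
    n = max(len(words), 1)
    return {
        "personalcount": sum(w in PERSONAL for w in words),
        "tagnoun": sum(tag.startswith("NN") for tag in tags) / n,
        "wordscount": len(words),
        "ncharacters": len(tweet),
    }

print(features("I just started my new meds and I feel dizzy"))
```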
We used crowdsourcing via Amazon Mechanical Turk to rate tweets into three categories: signal, noise, and not English. Two workers rated each tweet, with an agreement of about 80% between workers. We used tweets with agreement to define our samples. Moreover, we used tweets rated as not English as a control sample in order to remove foreign-language tweets (see below).
We show all the variables in Fig. \[hiv:variables:1\]. These figures have been obtained with 109 signal events and 1,809 noise events. Several features show more separation power than others: for instance, sis\_noise and commonsignal fall in the former category, while nful and tagadj fall in the latter. Tab. 5 shows the computed separation power between signal and noise for each feature.
{width="1.5in"} \[hiv:variables:1\]
\[featuresimportance\]
Rank Variable Separation
------ ----------------- ------------------------
1 personalcount 2.853$\times 10^{-01}$
2 bigrams\_noise 2.689$\times 10^{-01}$
3 posnoise 2.389$\times 10^{-01}$
4 tagnoun 2.339$\times 10^{-01}$
5 sis\_noise 2.182$\times 10^{-01}$
6 ncharacters 1.971$\times 10^{-01}$
7 commonnoise 1.727$\times 10^{-01}$
8 tagconj 1.641$\times 10^{-01}$
9 wordcount 1.549$\times 10^{-01}$
10 sis\_signal 1.489$\times 10^{-01}$
11 tagdeterm 1.442$\times 10^{-01}$
12 tagadv 1.438$\times 10^{-01}$
13 tagprep 1.379$\times 10^{-01}$
14 commonsignal 1.350$\times 10^{-01}$
15 tagadj 1.229$\times 10^{-01}$
16 tagverb 1.138$\times 10^{-01}$
17 tagto 1.044$\times 10^{-01}$
18 is\_english 1.033$\times 10^{-01}$
19 in\_notenglish 7.112$\times 10^{-02}$
20 nment 6.890$\times 10^{-02}$
21 secondpron 6.648$\times 10^{-02}$
22 isolation 6.029$\times 10^{-02}$
23 regularpast 4.609$\times 10^{-02}$
24 bigrams\_signal 4.309$\times 10^{-02}$
25 modalcount 4.155$\times 10^{-02}$
26 ncount 3.524$\times 10^{-02}$
27 relatpron 3.280$\times 10^{-02}$
28 thirdpron 3.081$\times 10^{-02}$
29 gerund 2.707$\times 10^{-02}$
30 percent 2.656$\times 10^{-02}$
31 dempron 1.909$\times 10^{-02}$
32 intpron 1.646$\times 10^{-02}$
33 negative 1.283$\times 10^{-02}$
34 pharmacy 1.093$\times 10^{-02}$
35 indpron 6.787$\times 10^{-03}$
36 nful 5.266$\times 10^{-03}$
37 futurecount 5.259$\times 10^{-03}$
: Features ranked by its separation power between signal and noise.
Foreign Language Removal {#foreign-language-removal .unnumbered}
------------------------
We extracted features from tweets in order to feed a machine learning algorithm and separate noise from signal more efficiently. Nevertheless, even with the best semantic features, the machinery would have difficulties separating tweets that are not in English. In this section we propose a method to suppress almost all foreign-language tweets without losing much signal.
We recall that we used tweets rated as not English as our control sample. Fig. \[hiv:cuts:isenglish\] shows the distributions of is\_english for tweets rated as not English and for the rest. 60% of not-English tweets and 2.5% of the rest have values of is\_english below 1. Therefore, requiring is\_english$\geq$1 would reject 2.5% of signal tweets and 60% of not-English tweets. The assumption that the distributions of is\_english for signal tweets and English tweets are the same is validated by Fig. \[hiv:cuts:sigvseng\].
{width="3in"} \[hiv:cuts:isenglish\]
{width="3in"} \[hiv:cuts:sigvseng\]
Fig. \[hiv:cuts:innotenglish\] shows the distributions of in\_notenglish for tweets annotated as not English and the rest of annotated tweets.
{width="3in"} \[hiv:cuts:innotenglish\]
Almost all values of in\_notenglish are equal to wordcount for tweets annotated as not English. Fig. \[hiv:cuts:ratio\] shows the ratio of wordcount by in\_notenglish for tweets annotated as not English and the rest of tweets.
{width="3in"} \[hiv:cuts:ratio\]
In conclusion, Tab. 6 summarizes the yields before and after the requirements: is\_english$\ge$1, ncharacters$<$150, in\_notenglish$<$14, $\frac{wordcount}{in\_notenglish}>1$. These requirements lead to an estimated 6$\%$ signal loss, while removing 20$\%$ of all noise and 94$\%$ of not-English tweets.
\[hiv:cuts:table\]
Type Yield before Yield after
--------------------- -------------- -------------
Signal 94 88
Noise (all) 2717 2179
Noise (not English) 472 29
: Yields before and after requirements.
Machine learning classifier {#machine-learning-classifier .unnumbered}
---------------------------
The goal of the data curation described throughout this appendix is to reduce the original sample of tweets to a sample containing only signal. To define training samples for signal, noise and not English, we performed a crowdsourcing request of 2,000 tweets, each rated by two Amazon Mechanical Turk workers. We estimated that another request of about 5,000 $\pm$ 1,000 tweets to be annotated would not compromise our budget for future studies. Therefore, our goal was to reduce the data sample to the quoted 5,000 $\pm$ 1,000 tweets, which implied a noise reduction of the order of 80$\%$. Hereafter we detail the steps that we took to reach such a reduction.
We used the TMVA library [@tmva] to compute a machine learning classifier using all the features described above. We split our samples according to Tab. 2, and computed the signal efficiency versus noise rejection for four types of classifiers: Boosted Decision Trees with AdaBoost (BDT), Support Vector Machines (SVM), Boosted Decision Trees with Bagging (BDTG), and Artificial Neural Networks (MLP). The results are shown in Fig. \[hiv:ml:rocall38\], a ROC curve rotated ninety degrees anti-clockwise.
{width="3in"} \[hiv:ml:rocall38\]
Tab. 5 indicates that the features at the top of the ranking have quantifiably larger separation power than those at the bottom. We tested, step by step, whether removing the less performant features had an impact on the overall performance of the classifier. At the end of this procedure a set of nine variables was chosen, given the inherent simplicity of a classifier defined with fewer variables and its similar signal-noise separation power. Fig. \[rocallfinal\] shows the ROC obtained for the aforementioned machine learning algorithms, defined with the variables: personalcount, bigrams\_noise, tagnoun, sis\_noise, ncharacters, commonnoise, commonsignal, is\_english, sis\_signal.
{width="3in"} \[rocallfinal\]
In Fig. \[rocallfinal\], the SVM classifier shows the best performance in our region of interest. Moreover, this classifier shows less dependence on the number of variables than the BDT. We chose an SVM trained with the aforementioned nine variables as our classifier. We then compared the performance of our classifier on two different testing samples and verified good compatibility (Fig. \[comparison\]).
The output of our classifier is a real number between 0 and 1, with higher values indicating a higher probability of being signal. In order to estimate the threshold to apply to our sample, we used the annotated signal tweets and computed the 90% signal-efficiency threshold to be 0.45. We then parsed our entire sample of tweets through the classifier and kept only those tweets with classifier output larger than 0.45. As a result of this filtering, our remaining sample was reduced to 5,443 tweets, which we sent for crowdsourced rating. Finally, after crowdsourcing, our pure signal sample contained 1,642 annotated tweets.
|
---
abstract: 'We report direct evidence of pre-processing of the galaxies residing in galaxy groups falling into galaxy clusters drawn from the Local Cluster Substructure Survey (LoCuSS). 34 groups have been identified via their X-ray emission in the infall regions of 23 massive ($\rm \langle M_{200}\rangle = 10^{15}\,M_{\odot}$) clusters at $0.15<z<0.3$. Highly complete spectroscopic coverage combined with 24 $\rm\mu$m imaging from Spitzer allows us to make a consistent and robust selection of cluster and group members including star forming galaxies down to a stellar mass limit of $\rm M_{\star} = 2\times10^{10}\,M_{\odot}$. The fraction $\rm f_{SF}$ of star forming galaxies in infalling groups is lower and with a flatter trend with respect to clustercentric radius when compared to the rest of the cluster galaxy population. At $\rm R\approx1.3\,r_{200}$ the fraction of star forming galaxies in infalling groups is half that in the cluster galaxy population. This is direct evidence that star formation quenching is effective in galaxies already prior to them settling in the cluster potential, and that groups are favourable locations for this process.'
author:
- |
M. Bianconi$^{1,}\thanks{[email protected]}$, G. P. Smith$^{1}$, C. P. Haines$^{2}$, S. L. McGee$^{1}$, A. Finoguenov$^{3,4}$, E. Egami$^{5}$\
\
$^{1}$School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK\
$^{2}$INAF - Osservatorio Astronomico di Brera, via Brera 28, 20122 Milano, Italy\
$^{3}$Max-Planck-Institut für extraterrestrische Physik, Giessenbachstra[ß]{}e, 85748 Garching, Germany\
$^{4}$Department of Physics, University of Helsinki, Gustaf Hällströmin katu 2a, FI-0014 Helsinki, Finland\
$^{5}$Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA
bibliography:
- 'biblio.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'LoCuSS: Pre-processing in galaxy groups falling into massive galaxy clusters at z=0.2'
---
\[firstpage\]
galaxies: clusters: general – galaxies: groups: general – galaxies: evolution – galaxies: star formation
Introduction
============
{width="0.60\linewidth"}
Massive galaxy clusters appear last in the hierarchical ladder of cosmic structure formation, according to current $\Lambda$CDM models and observations [@frenk12]. Cluster evolution is driven by accretion, in the form of both baryonic and non-baryonic matter, of smooth continuous streams of small haloes, together with episodic merging of more massive systems. As a result, clusters are located in dynamically active and overdense regions of the Universe. Observationally, dense environments found in clusters are also characterised by a dearth of spiral galaxies compared to the field (e.g. @dressler80) and a star formation rate which shows a similar dependence on galaxy density (e.g @haines07 [@smith05]). While the accretion history must play a key role in shaping these galaxy populations, it is still unclear how. Indeed, the star formation transition between environments is not abrupt and the steady increase in the number of star forming galaxies goes together with the increase of distance from cluster centres (@wetzel12 [@haines15]). As shown in early studies [@couch98; @dressler99], the appearance of spiral galaxies with low star formation rates in the outskirts of clusters and the analysis of timescales of quenching processes, requires the presence of environmental effects accelerating the consumption of the gas reservoir prior to the settling of the galaxy into the cluster potential (so called pre-processing, @zabludoff96 [@fujita04]). Further observations [@haines15; @wetzel13] confirmed the deficit of star forming galaxies in the infalling regions of clusters, extending out to 5 virial radii, which can be successfully reproduced by simulations [@bahe13].
In numerical simulations, massive clusters have accreted up to 50% and 45% of their stellar mass and galaxies, respectively, through galaxy groups [@mcgee09]. Hence, being fundamental building blocks of both the mass and the galaxy population of clusters, galaxy groups could play a significant role in shaping the evolution of cluster galaxies, if confirmed to be favourable locations for star formation quenching. However, no clear scenario is accepted regarding the nature and effects of pre-processing. For example, @lopes14 argued that the main driver of galaxy evolution is the local environment, both in groups and clusters. On the other hand, @hou14 suggested that pre-processing is the dominant cause of the quenched fraction of galaxies in massive clusters. Despite this, the role of group pre-processing, both in isolation and around clusters, remains unclear and requires further investigation (see also @lietzen12).
Here we present a study of a new sample of recently discovered infalling groups [@haines17], focussing in particular on the star formation properties of the massive galaxies within them. The extensive coverage of the dataset in both area and wavebands is coupled with a consistent infrared-based object selection, resulting in a unique sample of unambiguously selected infalling group galaxies which is ideal for the study of pre-processing.
This paper is structured in the following manner. In Section \[data\], we present the datasets used. In Section \[analysis\], we present the methods and main results of the data analysis. In Section \[summary\], we discuss the results and present future prospects of the project. Throughout this work, we assume $\rm H_0= 70\, km \,s^{-1} Mpc^{-1}$, $\rm\Omega_M= 0.3$ and $\rm\Omega_{\Lambda}= 0.7$.
LoCuSS Groups {#data}
=============
The galaxy groups considered here have been discovered in the infall regions of a subsample of clusters within the Local Cluster Substructure Survey (LoCuSS). The LoCuSS survey targets X-ray luminous clusters from the *ROSAT* All Sky Survey catalogues (@ebeling98 [@ebeling00]) within a redshift range of $\rm 0.15\leq z \leq 0.30$. An extensive spectroscopic and photometric campaign targeted the first 30 clusters that had Subaru/Suprime-Cam ($V$ and $i'$ bands) imaging. In particular, wide-field ($\rm \approx$1 degree diameter) optical spectroscopy with MMT/Hectospec was performed, constituting the Arizona Cluster Redshift Survey (ACReS, @haines13). Additionally, these clusters have been imaged in the near-infrared ($J$ and $K$ bands) with UKIRT/WFCAM, covering $\rm 52'\times 52'$, in the mid-infrared at 24$\rm \mu m$ with Spitzer/MIPS, and in the far-infrared with Herschel/PACS and SPIRE, both covering $\rm 25'\times 25'$ (@haines10 [@smith10; @pereira10]). Archival *XMM*-Newton imaging is available for a subset of 23 out of these 30 clusters, extending out to $\rm \approx 2\,r_{200}$. These 23 clusters comprise the sample for the detection of infalling groups and are listed in Table 1 of @haines17, together with the complete description of the observational campaign and data reduction strategy. Hereafter we introduce the salient aspects used in the current work.
Groups are identified from their extended X-ray emission. The crucial step is the separation of the extended IGM emission from background and point sources. @haines17 followed the technique of @finoguenov09 [@finoguenov10; @finoguenov15], consisting of signal decomposition via wavelets [@vikhlinin98] to detect signals above 4$\sigma$ on scales of 8 and 16 arcsec, which are associated with point sources and subtracted from larger scales using the *XMM* PSF model [@finoguenov09]. Similarly, extended sources (e.g. groups) are associated with signals above 4$\sigma$ on scales of 32 and 64 arcsec, relative to a noise map generated on 32 arcsec scales.
In order to confirm the extended X-ray detections as groups, we compared them with our optical imaging and spectroscopy. Sub-arcsecond resolution $V$ and $i$ band imaging is available from Subaru [@okabe10]. Spectroscopy is available from ACReS data. The ACReS survey was designed by selecting spectroscopic targets using the tight (0.1 mag) scatter of cluster galaxies in near-infrared $J$-$K$ colour, following @haines09a [@haines09b]. As presented in @haines13, the colour cuts were tested by considering the $J$-$K$/$K$-band colour-magnitude diagrams of member galaxies for eight previously well-studied clusters, using only the literature redshifts (60-200 members per cluster). No a priori knowledge of the $J$-$K$ colour was used for the selection of these test galaxies. The $J$-$K$ colour cut is found to be highly effective ($\rm98\%$) in retrieving confirmed spectroscopic members, irrespective of galaxy star-formation history or rate, as both star-forming and quiescent galaxies lie along the same sequence in the $J$-$K$/$K$-band diagram. This allows for the selection of a stellar mass limited sample of galaxies, and for an effective removal of foreground stars with no additional a priori bias on galaxy spectroscopic type. The magnitude limit reached by the survey is $\rm M^*_K+1.5$, which corresponds to a mass limit of $\rm M_{*}\approx 2\times 10^{10} \, M_{\odot}$. In order to compensate for any selection bias of the spectroscopic targets due to intrinsic photometric properties and position on the plane of the sky, each galaxy is weighted by the inverse of the probability of being targeted by spectroscopy [@haines13], as proposed by @norberg02. The spectroscopic completeness of the K-band magnitude limited sample is $\rm 80\%$ in the *XMM* covered areas. The spectroscopic completeness increases to $\approx96\%$ when considering $\rm 24 \mu$m-detected sources [@haines13].
In order to associate galaxies with the intra-group medium (IGM) emission, a visual inspection of the Subaru optical data was performed to first identify the central group galaxy as a massive, early-type galaxy close to the centre of the X-ray emission. The galaxy companions are selected within $\rm 1000 \,km\,s^{-1}$ and 3 arcmin from the central group galaxy. When no clear central galaxy is present, a close ($\rm\approx 2 \, arcmin$ with $\rm \Delta v < 1000 \,km\,s^{-1}$) pair of galaxies is considered as the central object and the velocity and separation cuts are applied as before to select the other group galaxies. The analysis in @haines17 shows that this strategy allows us to identify groups out to $\rm z \sim 0.7$. Furthermore, all X-ray groups at $\rm z < 0.4$ (based on their photometric redshifts) have at least one spectroscopically confirmed galaxy member.
The result of the group identification and member selection compared to the LoCuSS cluster galaxies can be seen in Figure \[caustic\]. Here the number density of cluster and group galaxies is plotted in the clustercentric distance versus peculiar velocity phase space, highlighting also the fraction of members in groups with respect to cluster members as a function of clustercentric distance. A total of 34 groups, averaging $\approx$10 members each, are understood to be infalling onto clusters, given their relative positions and velocities with respect to the nearest cluster. @diaferio97 showed that the escape velocity of a galaxy cluster at a given radius can be traced by analysing the line-of-sight velocity distribution of galaxies with respect to their projected distance from the cluster centre (the so-called “caustic” technique). In particular, galaxies within the characteristic structure of the caustic lines are gravitationally bound to the cluster, hence infalling or orbiting the cluster potential. Our groups are all located within the caustic lines of the clusters (@haines17, Figure \[caustic\]), in areas of the diagram associated with recent or ongoing infall onto cluster haloes (@haines15 [@rhee17]). This suggests that the groups are on their first encounter with the cluster environment. Advanced stages of accretion are excluded, since ram-pressure stripping would have removed the IGM from the group haloes.
Analysis and Results {#analysis}
====================
In this work, we perform a careful group member selection by identifying galaxies with peculiar velocity $\rm \left | v \right |= \left |c(z-\bar{z})/(1+\bar{z})\right |< 500 \,km\,s^{-1}$, computed with respect to each group mean redshift ($\rm \bar{z}$, following @harrison79), and within each group $\rm r_{200}$. Each group’s X-ray luminosity $\rm L_{X}$ is used to obtain an estimate of $\rm M_{200}$ following @leauthaud10, and hence its $\rm r_{200}$. @haines17 showed that these masses are compatible with estimates from each group’s velocity dispersion. Five of the groups selected in @haines17 have been excluded from the current analysis: four for being part of highly disturbed systems (A115-8 and 10, A689-7 and 8), and A1758-8 for having mass and size comparable to a cluster rather than a group. Stellar masses were computed using the linear relation between stellar mass-to-light ratio and optical $g$-$i$ colour from @bell03, $$\rm \log(M_{\star}/L_K) = a_k + [b_k \times (g-i)]$$ where $\rm a_k = -0.211 $, $\rm b_k = 0.137 $. The relation is corrected by $\rm -0.15$ dex to be compatible with a @kroupa02 initial mass function (see also @haines15). The stellar mass of the group galaxies does not show any trend with redshift, with an average value of $\rm log M_{\star}[M_{\odot}]= 10.75$ across the considered redshift range. As shown by @bell03, the scatter in the relation is at most 0.1 dex, suggesting its robustness. The 24$\rm \mu$m luminosity of each source is obtained from a comparison between the observed flux and models of infrared spectral energy distributions by @rieke09, and subsequently converted into an obscured star formation rate following @rieke09: $$\rm SFR_{IR}\,[M_{\odot}yr^{-1}] = 7.8\times10^{-10}\,L_{24}\,[L_{\odot}]$$ As quoted by @rieke09, the SFR evaluated from $\rm 24 \,\mu m$ emission is accurate within 0.2 dex when compared to SFRs obtained from a linear combination of total infrared luminosity and $\rm H_{\alpha}$ emission. Furthermore, $\rm L_{24}$ is a reliable proxy for normal galaxies in recovering the total infrared luminosity, hence allowing for a solid estimate of the SFR (see also the review by @calzetti13). Type-1 AGN can contribute to dust heating at $\rm\approx1000\,K$, dominating the emission of the host galaxy at 24 $\rm\mu$m [@xu15]. In order to avoid any contamination of our star formation estimates, we excluded AGN candidates selected via optical broad lines and X-ray emission. Additionally, $\rm98\%$ of our 24 $\rm\mu$m galaxies are also detected at 100 $\rm\mu$m with Herschel. At this wavelength, the infrared emission is dominated by star formation, excluding a significant contribution from AGN. When considering clusters in the low redshift bin ($\rm 0.15 < z< 0.20$), $\rm 99\%$ of the sources above the $\rm SFR_{IR}$ threshold present emission in $\rm H_{\alpha}$. This indicates that the infrared emission is indeed due to star formation and not evolved stellar populations [@haines15]. Given the depth of the Spitzer observations, the lower limit of sensitivity for which star formation can be evaluated is $\rm SFR_{IR}\approx 2\,M_{\odot}yr^{-1}$. This limit is well matched to the average $\rm SFR_{IR}\approx 5\,M_{\odot}yr^{-1}$ of the galaxies within the LoCuSS survey, in which only four objects exceed $\rm SFR_{IR}\approx 40\,M_{\odot}yr^{-1}$ [@haines13].
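The stellar-mass and star-formation-rate conversions above are simple to apply in practice; the following minimal sketch (Python) uses only the coefficients and the IMF correction quoted in the text, while the example colour and luminosities are hypothetical.

```python
import numpy as np

# Coefficients quoted above (Bell et al. 2003), plus the -0.15 dex Kroupa IMF correction
a_k, b_k, imf_corr = -0.211, 0.137, -0.15

def stellar_mass(g_i, L_K):
    """Stellar mass [M_sun] from g-i colour and K-band luminosity L_K [L_sun]."""
    log_ml = a_k + b_k * g_i + imf_corr      # log10(M*/L_K)
    return 10.0**log_ml * L_K

def sfr_ir(L24):
    """Obscured SFR [M_sun/yr] from the 24-micron luminosity L24 [L_sun]."""
    return 7.8e-10 * L24

# hypothetical example galaxy
print(np.log10(stellar_mass(g_i=1.1, L_K=8e10)))   # ~ 10.7, close to the sample average
print(sfr_ir(L24=5e9))                              # ~ 4 M_sun/yr, above the 2 M_sun/yr limit
```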
The total number of selected infalling group members is 400, of which 375 satisfy our stellar mass cut ($\rm M_* > 2\times 10^{10} \,M_{\odot}$); 29/375 satisfy $\rm SFR > 2\,M_{\odot}\,yr^{-1}$ and are thus classified as star forming. Our main result is presented in Figure \[sf\_fraction\]. The fraction of massive star-forming galaxies $\rm f_{SF}$ in clusters (red points) and in the infalling groups (gray triangles) is plotted as a function of clustercentric distance, and compared to that of the coeval field population (blue square). The fraction of massive star-forming galaxies in clusters clearly decreases with decreasing projected clustercentric distance. Both groups and clusters depart from the field population. Noticeably, the fraction of massive star-forming cluster galaxies shows a steeper increase with clustercentric radius than that of group galaxies. This culminates at around $\rm 1.3\,r_{200}$, at which the fraction of star-forming group galaxies is about half of that in clusters, with a significance of $\rm \approx 2\sigma$. The fraction of star-forming galaxies does not depend on stellar mass within the mass range covered by our sample. We tested the sensitivity of our result to the adopted star formation threshold by gradually increasing it, which resulted only in a continuous decrease of $\rm f_{SF}$ in the field, clusters, and groups. Furthermore, we checked the effect of contamination of the group membership by cluster galaxies by relaxing the membership cuts in radius and velocity, obtaining an increase of the fraction of star-forming galaxies in groups. The flat trend of $\rm f_{SF}$ in groups suggests that star formation quenching is effective in groups prior to their arrival in clusters. We consider this as direct evidence of pre-processing. As groups are accreted, they populate clusters with galaxies similar to those found in cluster cores. We tested the dependence of $\rm f_{SF}$ in groups on the distance from the group centre, by removing galaxies at progressively smaller radii from the group X-ray peak. We did not find significant changes in the overall trend of $\rm f_{SF}$, other than an increase of the scatter.
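The star-forming fraction itself is a completeness-weighted ratio; a minimal sketch of its computation is given below (Python), where all catalogue columns are hypothetical toy data and only the mass and SFR limits are taken from the text.

```python
import numpy as np

def f_sf(m_star, sfr, weight, mass_lim=2e10, sfr_lim=2.0):
    """Completeness-weighted fraction of star-forming galaxies above the
    stellar-mass and SFR limits used in the text."""
    sel = m_star > mass_lim
    sf = sel & (sfr > sfr_lim)
    return weight[sf].sum() / weight[sel].sum()

# toy catalogue with hypothetical columns
rng = np.random.default_rng(1)
m_star = 10.0**rng.uniform(10.3, 11.3, size=400)     # stellar masses [M_sun]
sfr = rng.lognormal(mean=0.0, sigma=1.0, size=400)   # SFRs [M_sun/yr]
weight = 1.0 / rng.uniform(0.6, 1.0, size=400)       # inverse targeting probability
print(f_sf(m_star, sfr, weight))
```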
Discussion {#summary}
==========
In this study, we present direct evidence of group galaxies undergoing pre-processing, resulting in a fraction of star-forming galaxies that is lower than that of cluster galaxies at the same clustercentric distance. In clusters, the increase of the fraction of star-forming galaxies with clustercentric distance is due to the relative increase of the number of star-forming galaxies in the infall regions with respect to passive galaxies, which dominate cluster cores [@haines15]. In this work, we separate galaxies infalling within groups from the remaining cluster galaxies, which are not distinguished by their accretion history, i.e. whether accreted in isolation or in lower-mass groups. If pre-processing has occurred, we should expect the fraction of star-forming objects to be lower in groups than in the cluster infall regions. We find that the fraction of star-forming galaxies in groups does not change much with clustercentric radius, indeed suggesting that pre-processing has already occurred. Furthermore, the fraction of star-forming galaxies in groups is compatible with that of cluster cores, suggesting the effectiveness of pre-processing in quenching star formation. Hence, infalling groups appear to populate clusters with pre-processed galaxies, with significantly lower levels of star formation compared to isolated field objects.
Several physical processes have been suggested to be responsible for the fast ageing of galaxies as a result of pre-processing. Gravitational encounters can trigger violent episodes of star formation, with a subsequent consumption and expulsion of gas [@park09; @moore98; @dekel03]. Similarly, gas can be removed via hydrodynamical interactions between the infalling galaxy and the surrounding medium, as suggested by @gunn72 and @larson80 through ram-pressure stripping and starvation, respectively. In this work, we do not aim to disentangle the complex physical picture behind star formation quenching. Instead, we present a systematic study of a consistently selected sample of galaxy groups, showing that infalling groups are a favourable environment for pre-processing and that groups populate clusters with passive galaxies.
Group mass estimates allowed @haines17 to suggest a late (below $\rm z=0.223$) but relevant contribution of $\rm\approx16\%$ to the total cluster mass from the accretion of massive groups with $\rm M_{200}>10^{13.2} \,M_{\odot}$. These findings are corroborated by a comparison with different approaches to the analysis of simulated dark matter haloes, considering multiple cosmological models [@zhao09], different simulations [@vandenbosch14], or different Millennium simulation runs [@fakhouri10], resulting in a broad agreement on massive group accretion by clusters. Morphological analysis of Subaru images is currently underway and aims to identify structural parameter trends within the population of infalling galaxies. Additionally, high-resolution Hubble data would allow us to extend the analysis beyond the basic identification of discs and tidal features. A comparison with reference cluster and field objects would allow us to obtain insights on the different evolutionary paths. In particular, regular morphology and high bulge-to-disc ratio would suggest that quenching occurred at high redshift, whereas the presence of tidal features and, more generally, perturbed stellar components would highlight recent interactions in the clusters and infalling groups. A comparison with a sample of isolated groups from surveys (e.g. COSMOS, AEGIS) would also allow evolutionary biases and the influence of the large-scale environment on group galaxies to be quantified.
Acknowledgements {#acknowledgements .unnumbered}
================
MB, GPS, and SLM acknowledge support from the Science and Technology Facilities Council through grant number ST/N000633/1. CPH acknowledges financial support from PRIN INAF 2014.
\[lastpage\]
---
abstract: |
The coexistence of weak ferromagnetism and superconductivity in ErNi$_{2}$B$%
_{2}$C suggests the possibility of a spontaneous vortex phase (SVP) in which vortices appear in the absence of an external field. We report evidence for the long-sought SVP from the in-plane magnetic penetration depth $\Delta
\lambda (T)$ of high-quality single crystals of ErNi$_{2}$B$_{2}$C. In addition to expected features at the Néel temperature $T_{N}$ = 6.0 K and weak ferromagnetic onset at $T_{WFM}=2.3~$K, $\Delta \lambda (T)$ rises to a maximum at $T_{m}=0.45$ K before dropping sharply down to $\sim $0.1 K. We assign the 0.45 K-maximum to the proliferation and freezing of spontaneous vortices. A model proposed by Koshelev and Vinokur explains the increasing $\Delta \lambda (T)$ as a consequence of increasing vortex density, and its subsequent decrease below $T_{m}$ as defect pinning suppresses vortex hopping.
author:
- 'Elbert E. M. Chia'
- 'M. B. Salamon'
- Tuson Park
- 'Heon-Jung Kim'
- 'Sung-Ik Lee'
- Hiroyuki Takeya
bibliography:
- 'RNBC.bib'
- 'CeCoIn5.bib'
- 'Vortex.bib'
title: 'Observation of the spontaneous vortex phase in the weakly ferromagnetic superconductor ErNi$_{2}$B$_{2}$C: A penetration depth study'
---
It is now clear that the borocarbide superconductor ErNi$_{2}$B$_{2}$C develops weak ferromagnetism (WFM) below T$_{WFM}$ = 2.3 K while remaining a singlet superconductor [@Choi01; @Canfield96]. The question naturally arises: how do these two seemingly incompatible orders — ferromagnetism and superconductivity — coexist microscopically? Clearly superconductivity will be suppressed if the internal field $B_{in}$ generated by the ferromagnetic moment exceeds $H_{c}$ for a Type-I, or $H_{c2}$ for a Type-II, superconductor (SC). For a Type-II SC, however, vortices are predicted to appear *spontaneously* if $B_{in}$ lies in the range $%
H_{c1}<B_{in}\sim 4\pi M<H_{c2}$ [@Blount79; @Tachiki79; @Greenside81; @Kuper80]. In this spontaneous vortex phase (SVP), the vortex screening currents shield superconducting regions from the intrinsic magnetization. The vortices, however, may be qualitatively different from those generated by externally applied fields [@radzihovsky01]. In this Letter we report unusual features in the penetration depth data of a high quality single-crystal of ErNi$_{2}$B$_{2}$C that give strong evidence for the existence of the SVP.
There have been previous SVP reports that we consider inconclusive. Ng and Varma [@Ng97b], for example, interpreted small-angle neutron scattering (SANS) data on ErNi$_{2}$B$_{2}$C as a prelude to the SVP. In that experiment, Yaron et al. [@Yaron96] reported that the vortex-line lattice begins to tilt away from the *c*-axis (along which the magnetic field is applied) towards the *a-b* plane below $T_{WFM}$. However, the tilt can merely be a result of the vector sum of the applied field and the internal field produced by the ferromagnetic domains in the basal plane. Additional evidence was provided by SANS data [@Furukawa01] with the applied magnetic field in the basal plane. A large field was applied to align ferromagnetic domains. When the field was removed, the flux line lattice was found to persist below $T_{WFM}$ but disappear above it. However, owing to the low $T_{c}$ ($\sim $ 8.5 K) and increased pinning below $T_{WFM}$ [@Gammel00], trapped flux cannot be ruled out.
Among the magnetic members of the rare-earth (RE) nickel borocarbide family, RENi$_{2}$B$_{2}$C (RE = Ho, Er, Dy, etc.), ErNi$_{2}$B$_{2}$C is a particularly good candidate for study. Superconductivity arises at $T_{c}\approx 11$ K and persists when antiferromagnetic (AF) order sets in at $T_{N}$ $\approx $ 6 K [@Cho95]. In the AF state the Er spins are directed along the *b*-axis, forming a transversely polarized, incommensurate sinusoidal spin-density-wave (SDW) state, with modulation vector $\delta $ = 0.553$a^{\ast }$ (a$^{\ast }$ = 2$\pi $/$a$) [@Zarestky95]. The appearance of higher-order reflections at lower temperatures [@Choi01] signals the development of a square-wave modulation, with regular spin slips spaced by 20$a$. Below 2.3 K, WFM appears with $B_{in}\cong $0.1 T, approximately one Er magnetic moment per twenty unit cells, clearly correlated with spin slips.
The relative stability of various phases of a ferromagnetic superconductor was explored [@Greenside81] by Greenside *et al.* A spiral phase is not possible in the presence of strong uniaxial anisotropy and the spontaneous vortex phase is more stable than a linearly polarized state for small values of $\zeta
=[F_{FM}/F_{s}]$, the ratio of ferromagnetic to superconducting free-energy densities at $T=0$. For $\zeta =100$ and the ratio $\lambda /\gamma =10$, where $\gamma
=[3k_{B}T_{c}S/(2aM^{2}(S+1))]^{1/2}$ is related to the exchange stiffness, Greenside *et al.* find the SVP to be the most stable low-temperature phase; indeed, they suggest that the effect is most likely to be found in a dilute ferromagnetic superconductor, and that smaller values of $\zeta $ favor the SVP. In the case of ErNi$_{2}$B$_{2}$C, where only 5% of the Er atoms contribute to ferromagnetism, we have $F_{s}=-H_{c}^{2}/8\pi \approx -1.5\times 10^{5}$ erg/cm$^{3}$, where $H_{c}\approx 1900$ G from Ref. . The ferromagnetic energy density is $F_{FM}=-3Nk_{B}T_{c}S/(2(S+1))\approx -4.3\times 10^{6}$ erg/cm$^{3}$, where $N=1.5\times 10^{22}$ cm$^{-3}$ is the density of the (magnetic) Er atoms, and $S=3/2$ is the Er spin. This then gives $\zeta =30$, strongly favoring the SVP. The spin-stiffness length is $\gamma =100$ Å at low temperatures where $M\approx 88$ G, so that $\lambda /\gamma \approx 7$, close to the value assumed in Ref. . As ErNi$_{2}$B$_{2}$C is strongly Type II ($\lambda /\xi \approx 5$), we conclude that the SVP is the preferred state for coexisting ferromagnetism and superconductivity.
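These estimates are easy to check numerically; the short sketch below (Python) re-evaluates $F_{s}$, $F_{FM}$ and $\zeta$ in cgs units. Note that the quoted $F_{FM}\approx -4.3\times 10^{6}$ erg/cm$^{3}$ is recovered if the temperature entering the Greenside expression is taken to be the ferromagnetic ordering temperature $T_{WFM}=2.3$ K (an assumption on our part) rather than the superconducting $T_{c}$.

```python
import numpy as np

k_B   = 1.380649e-16   # erg/K
H_c   = 1900.0         # G
T_wfm = 2.3            # K (assumption: ferromagnetic ordering temperature enters F_FM)
S     = 1.5            # Er spin
N     = 1.5e22         # cm^-3, density of magnetic Er atoms

F_s  = -H_c**2 / (8 * np.pi)                       # ~ -1.4e5 erg/cm^3
F_FM = -3 * N * k_B * T_wfm * S / (2 * (S + 1))    # ~ -4.3e6 erg/cm^3
print(F_s, F_FM, F_FM / F_s)                       # zeta ~ 30
```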
We have measured the temperature dependence of the in-plane magnetic penetration depth $\Delta \lambda (T)=\lambda (T)-\lambda (T_{base})$, in single crystals of ErNi$_{2}$B$_{2}$C down to $T_{base}=0.12$ K using a tunnel-diode based, self-inductive technique at 21 MHz [@Bonalde2000] with a noise level of 2 parts in 10$^{9}$ and low drift. The magnitude of the ac field was estimated to be less than 40 mOe. The cryostat was surrounded by a bilayer Mumetal shield that reduced the dc field to less than 1 mOe. *The very small values of the ac and dc field in our system ensure that our measurement is essentially a zero-field one, thereby eliminating the possibility of trapped flux.* Details of sample growth and characterization are described in Ref. . The samples were then annealed according to conditions described in Ref. . The sample was mounted, using a small amount of GE varnish, on a single crystal sapphire rod. The other end of the rod was thermally connected to the mixing chamber of an Oxford Kelvinox 25 dilution refrigerator. The sample temperature is monitored using a calibrated RuO$_{2}$ resistor at low temperatures (*T*$_{base}$ – 1.8 K), and a calibrated Cernox thermometer at higher temperatures (1.3 K – 12 K).
The deviation $\Delta \lambda (T)=\lambda (T)-\lambda (0.12$ K$)$ is proportional to the change in resonant frequency $\Delta f(T)$ of the oscillator, with the proportionality factor $G$ dependent on sample and coil geometries. We determine $G$ for a pure Al single crystal by fitting the Al data to extreme nonlocal expressions and then adjust for relative sample dimensions [@Chia03]. Testing this approach on a single crystal of Pb, we found good agreement with conventional BCS expressions. The value of *G* obtained this way has an uncertainty of $\pm $10% because our sample, with approximate dimensions 1.2 $\times $ 0.9 $\times $ 0.4 mm$^{3}$, has a rectangular, rather than square, basal area [@Prozorov2000].
![Temperature dependence of the penetration depth $\Delta \lambda$(*T*) from 0.12 K to 13 K. Inset shows the low-temperature region. The arrows point to the features at $T_{N}$ and $T_{WFM}$.[]{data-label="fig:ENBCAllT"}](Fig1.eps){width="8cm"}
Figure \[fig:ENBCAllT\] shows the temperature-dependence of the in-plane penetration depth $\Delta \lambda (T)$. We see the following features: (1) onset of superconductivity at $T_{c}^{\ast }=11.3$ K, (2) a slight shoulder at $T_{N}=6.0$ K, (3) a broad peak at $T_{WFM}=2.3~$K, (4) another sharp peak at $T_{m}=0.45~$K, and (5) an eventual downturn below $T_{m}$. The features at $T_{N}$ and $T_{WFM}$ have not been seen in previous microwave measurements of $\Delta \lambda (T)$ on either thin-film [@Andreone99a] or single-crystal ErNi$_{2}$B$_{2}$C [@Jacobs95] but the former has been observed in SANS data [@Gammel99]. We show in a separate publication [@Chia05] that the feature at $T_{N}$ is only observed for relatively small non-magnetic scattering rates. The large value of $T_{c}$ and the resolvability of the features at $T_{N}$ and $T_{WFM}$ attest to the high purity of the samples. We show, for comparison, data for a sample grown by floating-zone methods in the inset to Fig. \[fig:ENBCWFM\]. No clear signal is seen at $T_{N}$, although there may be some sign of the Neel transition near 5 K. In place of the up-turn in the penetration depth, the signal levels off near $T_{WFM}$ before decreasing below 1 K. This suggests that spontaneous vortices at the surface of this sample are strongly pinned.
![($\bigcirc$) Temperature dependence of the penetration depth $%
\Delta \protect\lambda$(*T*) in the WFM phase. We’ve assumed $\protect%
\lambda (T_{base}) \approx \protect\lambda $(0)=700 Å from Ref. . The solid line is the fit of the data to Eqn. \[eqn:lambda2level\] for $n$=0.21, while the dotted line is that of $n$=0.125. The values of other parameters are mentioned in the text. The inset shows $\Delta \protect\lambda$ for a floating-zone-grown sample exhibiting no signals at $T_N$ or $T_{WFM}$[]{data-label="fig:ENBCWFM"}](Fig2c.eps){width="8cm"}
Figure \[fig:ENBCWFM\] shows the data below $T_{WFM}$. The strong upturn is a significant deviation from the normal monotonic decrease of the penetration depth with decreasing temperature. Because we expect the Meissner effect to vanish ($\lambda (T)$ to diverge) in the SVP in the absence of pinning [@Ng97b], it is natural to analyze the low-temperature data in the context of weakly pinned vortices in the low-frequency limit. We use a two-level tunneling model proposed by Koshelev and Vinokur (KV) [@Koshelev91]. This approach has been revisited by Korshunov [@Korshunov01] and applied to ultrathin cuprate films by Calame *et al.* [@Calame01]. At relatively high frequencies, small oscillations of the pinned lattice near equilibrium (Campbell regime) dominate absorption. At lower frequencies, jumps of lattice regions between different metastable states (two-level systems) come into play and determine the absorption. Both regimes are sensitive to the pinning strength, which depends on the value of the internal magnetic field $B_{in}$. At small field, vortices are pinned independent of one another. When $B_{in}$ exceeds a characteristic value $B_{p}$, pinning becomes collective [@Larkin79], and the vortex lattice splits into volumes that are pinned as a whole, with correlated regions having length $L_{c0}$ parallel to the field. We argue below that, in the WFM phase, $H_{c1}<B_{in},B_{p}\ll H_{c2}$ and that the frequency lies in the two-level regime.
In the two-level model, the pinning state is characterized by a large number of neighboring metastable configurations. If we shift any volume $V$ of the vortex lattice, then a finite probability exists that at some distance $u$ there is another state with an energy within $\Delta $ of the starting configuration and separated from it by a barrier $U$. At finite temperatures such regions of the lattice jump among metastable states. Under the action of an applied ac field of frequency $\omega $, the ac screening current exerts an ac Lorentz force on the vortex lattice, which induces jumps among nearby metastable states. The motion of the vortices, which carry magnetic fields, then increases the ac field penetration into the sample, resulting in an increase in the effective penetration depth $\lambda _{eff}$. The system exhibits typical Debye-type behavior with the properties determined by $\tau (T)\sim \tau _{0}\exp (U/T),$ the mean time between jumps. When the two-level response dominates over the Campbell behavior, the penetration depth is estimated [@Koshelev91] to behave as $$\lambda _{eff}^{2}(T)=\lambda _{\parallel }^{2}(T)+\frac{B_{in}^{2}(T)n_{tl}%
}{16\pi T}\left\langle \frac{V^{2}u^{2}}{(1+(\omega \tau (T))^{2})\cosh
^{2}(\Delta /T)}\right\rangle , \label{eqn:lambda2level}$$where $\langle $...$\rangle $ denotes an average over the distribution of two level systems. Here $\lambda _{\parallel }(T)$ is the London penetration depth in the absence of vortices; $B_{in}$, the internal magnetic field; and $n_{tl}$, the concentration of two-level systems. In the low-field region ($%
B_{in}<B_{p}$), the vortex lines move independently, and their presence does not change the penetration depth considerably ($\lambda _{eff}\approx
\lambda _{\parallel }$). However, in the collective pinning state ($%
B_{in}>B_{p}$), the jumping volume is not too small, and the characteristic distance at which the nearest metastable state exists is approximately the radius of the pinning force $u\approx \xi _{\parallel }$. As the temperature decreases, there is insufficient thermal energy to overcome the barrier $U$. No jumping takes place, the vortices are frozen, and hence there is no extra penetration. One therefore recovers the London penetration depth $\lambda
_{\parallel }$ at the lowest temperatures.
The solid line shows the fit of Eq. \[eqn:lambda2level\] to the data below $T_{WFM}$. In this fit, we follow KV and replace $\left\langle
...\right\rangle $ with values that characterize an effective number $%
n_{tl}^{eff}$ of active two-level systems. The values of the following quantities will be justified later: $B(T=0)\approx 1100$ G, $u\approx \xi
_{\parallel }\approx 150~$Å, $V=L_{c0}u^{2}=5.4\times 10^{-16}$ cm$^{3}$, and $\tau _{0}=2.2\times 10^{-9}$ s. The temperature-dependence of the internal magnetic field $B(T)$ can be obtained by fitting magnetization values in Ref. to the expression $$B(T)\sim \left( 1-\frac{T}{T_{WFM}}\right) ^{n} \label{eqn:Mfit}$$giving $n$=0.21. This value of $n$ is between the 2D-Ising value of 0.125 and the 3D-Ising value of 0.31, which is reasonable because in ErNi$_{2}$B$%
_{2}$C the spins lie on sheets normal to the $a$ axis and are confined to be along or anti-along the $b$ axis, yet there is also 3D behavior in the superconductivity. Because $U$ and $\Delta $ are strongly correlated, we make the reasonable assumption that all metastable states are equivalent ($%
\Delta =0$) and choose the energy barrier $U=0.49$ K and pinning density $%
n_{tl}^{eff}=1.61\times 10^{11}$ cm$^{-3}$ that best fit the peak in $%
\lambda _{eff}(T)$. This value of the barrier makes $\omega \tau \approx 1$ near 1 K. Note that this value of $U$ is close to the position of the peak at $T_{m}$ — this is reasonable since below this temperature, the vortices no longer have enough thermal energy to overcome the barrier to hop among metastable states; hence, one recovers the Meissner state with $\lambda $ decreasing. We expect $\lambda _{\parallel }(T)$ to exhibit a power-law temperature dependence at low temperatures from the combination of gap-minima observed in non-magnetic borocarbides and the increased pair-breaking as Er spins disorder. Consequently, we set $\lambda _{\Vert
}(T)=\lambda _{\parallel }(0)(1+bT^{2})$ with $b=0.036$ K$^{-2}$ the third adjustable parameter in the fit. For comparison, we show the (dotted-line) fit with $n=1/8$ (2D-Ising model) for which $U=0.49$ K, $n_{tl}^{eff}=1.57%
\times 10^{11}$ cm$^{-3}$, and $b=0.033$ K$^{-2}$. Both fits reproduced the qualitative features of the data, though the latter curve fits the data slightly better.
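To make the fitting procedure concrete, a minimal numerical sketch of Eq. \[eqn:lambda2level\] is given below (Python). It assumes a single effective two-level system in place of the bracket average, uses the parameter values quoted above, and is meant only to illustrate how the maximum near $T_{m}$ arises from the competition between the growing internal field and the exponentially slowing hop time.

```python
import numpy as np

# Parameter values quoted in the text (lengths in cm, fields in G, temperatures in K)
k_B   = 1.380649e-16        # erg/K
lam0  = 700e-8              # lambda_parallel(0)
b     = 0.036               # K^-2, coefficient of the T^2 term in lambda_parallel(T)
B0    = 1100.0              # internal field at T = 0
n_exp = 0.21                # exponent in B(T) ~ (1 - T/T_WFM)^n
T_WFM = 2.3
u     = 150e-8              # hop distance ~ xi_parallel
V     = 5.4e-16             # cm^3, jumping volume
tau0  = 2.2e-9              # s
U     = 0.49                # K, barrier height
n_tl  = 1.61e11             # cm^-3, effective density of two-level systems
omega = 2 * np.pi * 21e6    # rad/s, measurement frequency

def lambda_eff(T, Delta=0.0):
    """Effective penetration depth [cm]: the two-level expression with the
    bracket average collapsed to a single effective two-level system."""
    lam_L = lam0 * (1 + b * T**2)
    B = B0 * (1 - T / T_WFM)**n_exp
    tau = tau0 * np.exp(U / T)
    hop = (B**2 * n_tl / (16 * np.pi * k_B * T)) * V**2 * u**2 \
          / ((1 + (omega * tau)**2) * np.cosh(Delta / T)**2)
    return np.sqrt(lam_L**2 + hop)

T = np.linspace(0.15, 2.2, 500)
print(T[np.argmax(lambda_eff(T))])   # the quoted parameters place the maximum near 0.4-0.45 K
```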
To justify our application of the two-level model to our data, we evaluate various physical parameters in the model using standard expressions for the vortex state [@Blatter94]. We start with the measured quantities: zero-temperature in-plane penetration depth $\lambda _{\parallel }(0)\approx
700$ Å and coherence length $\xi _{\parallel }\approx 150$ Å (and $%
\kappa =\lambda _{\parallel }(0)/\xi _{\parallel }=4.7)$ from Ref. , ferromagnetic moment $M\approx 0.62\mu _{B}$/Er $%
\approx $ 100 Oe well below $T_{WFM}$ from Ref. , and the normal-state resistivity $\rho _{n}(T_{c}^{\ast })=5.8$ $\mu \Omega $ cm that we measured for our sample. The value of $M$ corresponds to an internal field $B\sim 4\pi M=1100$ G, which is greater than $H_{c1}\sim 500$ G, putting us in the mixed state.
In Table I, we give the expressions and values for the quantities that lead to the flux coherence length $L_{c0}$ ($19.5$ nm), the collective pinning field $B_{p}$ (20 Oe), the frequency $\omega _{cr}$ above which Campbell response is expected (27 MHz), and the jump-time prefactor $\tau _{0}$ (2.2 ns). Since we operate at 21 MHz, this puts us in the two-level regime. Note that $B_{p}$ is less than $H_{c1}$, suggesting that the mixed state of ErNi$_{2}$B$_{2}$C is always in the collective pinning regime. Based on these values, we estimate the maximum density of two-level volumes to be $%
\sim 2.4\times 10^{13}$cm$^{-3}$ at the lowest temperatures, indicating that approximately 1% are active in our frequency window.
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------
  Quantity                                  Expression                                                                 Value                                  Notes
  ----------------------------------------- -------------------------------------------------------------------------- -------------------------------------- ------------------------------------------------------------------------------
  Depairing current, $j_{s}$                $c\Phi_{0}/(12\sqrt{3}\pi^{2}\xi_{\parallel}\lambda_{\parallel}^{2})$      $1.5\times 10^{8}$ A/cm$^{2}$

  Viscous drag coefficient, $\eta$          $\approx\Phi_{0}^{2}/(2\pi\xi_{\parallel}^{2}\rho_{n}c^{2})$               $5.6\times 10^{-7}$ erg s cm$^{-3}$    Bardeen-Stephens model

  Bean-model critical current, $j_{c}$      $4cM_{h}/L$                                                                $1.9\times 10^{4}$ A/cm$^{2}$          Ref. [@Gammel00], $L=1$ mm, $M_{h}$ from hysteresis loop

  Flux coherence length, $L_{c0}$           $x^{2}=\frac{j_{s}}{j_{c}}\ln x$; $x=\frac{\lambda_{\perp}L_{c0}}{\lambda_{\parallel}\xi_{\parallel}}$             19.5 nm                                Refs. [@Cho95; @Koshelev91], $\lambda_{\perp}/\lambda_{\parallel}\approx 1.3$

  Jump time prefactor, $\tau_{0}$           $\eta\xi_{\parallel}c/(\Phi_{0}j_{c})$                                     $2.2\times 10^{-9}$ s                  Ref. [@Koshelev91]

  Collective pinning field, $B_{p}$         $\Phi_{0}(\ln x)^{4/3}/L_{c0}^{2}$                                         20 Oe                                  Ref. [@Koshelev91]

  Campbell crossover, $\omega_{cr}(B,T)$    $\approx T\Phi_{0}/(\eta BV\xi^{2})$                                       27 MHz                                 Ref. [@Koshelev91], $B=500$ Oe, $T=2$ K
  ---------------------------------------------------------------------------------------------------------------------------------------------------------------
: Vortex parameters for the two-level hopping regime
KV [@Koshelev91] also found the Campbell penetration depth to be $\sim $$%
B^{2}$, which is a monotonically increasing function with decreasing temperature, i.e. there is no peak at low temperatures. This is in agreement with our not being in the Campbell regime. We also measured $\lambda (T)$ with the ac field along the basal plane, finding features qualitatively similar to the present data, including the strength and position of the features at $T_{N}$, $T_{WFM}$ and $T_{m}$.
In conclusion, penetration depth data of single-crystal ErNi$_{2}$B$_{2}$C down to $\sim $0.1 K provide strong evidence for the existence of a spontaneous vortex phase below $T_{WFM}$. The high quality of our sample enables us to see features at $T_{N}$ and $T_{WFM}$ that have not been observed in previous studies of the penetration depth [@Andreone99a; @Jacobs95]. Other samples, such as that shown in the inset to Fig. \[fig:ENBCWFM\], show no clear signal at either $T_{N}$ or $T_{WFM}$, nor the upturn in penetration depth that we attribute to weakly pinned spontaneous vortices. As pointed out by Radzihovsky [@radzihovsky01], the SVP lattice is much softer than a conventional lattice, and therefore especially sensitive to quenched disorder. It may well be that the spontaneous vortices are glass-like rather than forming a lattice, and that ferromagnetic closure domains form at the surface. While these aspects may make spontaneous vortices difficult to detect by neutron scattering or surface magnetization, vortices will still have strong effects on the electrodynamics, as observed here.
This material is based upon work supported by the U.S. Department of Energy, Division of Materials Sciences under Award No. DEFG02-91ER45439, through the Frederick Seitz Materials Research Laboratory at the University of Illinois at Urbana-Champaign. Research for this publication was carried out in the Center for Microanalysis of Materials, University of Illinois at Urbana-Champaign. This work in Korea was supported by the Ministry of Science and Technology of Korea through the Creative Research Initiative Program.
---
abstract: 'An effective tight-binding (TB) Hamiltonian for monolayer GeSnH$_2$ is proposed, which has an inversion-asymmetric honeycomb structure. The low-energy band structure of our TB model agrees very well with previous [*ab initio*]{} calculations under biaxial tensile strain. We predict a phase transition upon 7.5% biaxial tensile strain, in agreement with DFT calculations. Upon 8.5% strain the system exhibits a band gap of 134 meV, suitable for room temperature applications. The topological nature of the phase transition is confirmed by: (1) the calculation of the $\mathbb{Z}_2$ topological invariant, and (2) quantum transport calculations of disordered GeSnH$_2$ nanoribbons, which allow us to determine the universality class of the conductance fluctuations.'
author:
- Zahra Aslani$^1$
- 'Esmaeil Taghizadeh Sisakht$^{1,2}$'
- Farhad Fazileh$^2$
- 'H. Ghorbanfekr-Kalashami$^2$'
- 'F. M. Peeters$^2$'
bibliography:
- 'GeSnH2.bib'
nocite: '[@*]'
title: 'Topological phase transition in GeSnH$_2$ induced by biaxial tensile strain: A tight-binding study'
---
\[introduction\]Introduction
============================
Topological insulators (TIs) have attracted a lot of attention in condensed matter physics and from the materials science community during the past decade [@qi2011topological; @hasan2010colloquium; @moore2010birth; @kane-mele2005; @kanez22015]. TIs are fascinating states of quantum matter with insulating bulk and topologically protected edge or surface states. In two-dimensional (2D) TIs, also known as quantum spin Hall (QSH) insulators [@kane-mele2005], the gapless edge states are topologically protected by time reversal symmetry (TRS) and are spin-polarized conduction channels that are robust against non-magnetic scattering. They are more robust against backscattering than three-dimensional (3D) TIs, making them better suited for coherent transport related applications, low-power electronics, and quantum computing applications [@chuang2014prediction].
There are currently only a few 3D TI compounds that are experimentally realized, such as Bi$_2$Se$_3$, Bi$_2$Te$_3$, and Sb$_2$Te$_3$ [@zhang2009topological]; and only HgTe/CdTe [@konig2007quantum] and InAs/GaSb [@knez2011evidence] quantum wells have been realized as 2D TIs. Also, due to the very small bulk band gaps (on the order of meV), these 2D systems exhibit the TI phase only at ultra-low temperatures. Therefore, there is a great need to find new 2D TIs with large energy band gaps. Following the advancements in graphene and similar materials, intensive efforts have been devoted to exploring 2D group-IV and V honeycomb systems, which can harbor 2D topological phases.
In terms of crystal structure, TIs can be generally divided into inversion-symmetric TIs (ISTIs) and inversion-asymmetric TIs (IATIs). Inversion asymmetry introduces many additional intriguing properties in TIs such as pyroelectricity, crystalline-surface-dependent topological electronic states, natural topological p-n junctions, and topological magneto-electric effects [@ma2015; @zhangh2016ydrogenated; @li2016robust]. Therefore, 2D IATI materials that are stable at room temperature would be highly promising candidates for future spintronics and quantum computing applications.
Also, it would be interesting to implement such features in group-IV honeycomb systems, enabling integration with devices based on other group-IV honeycomb elements. This would avoid issues such as contact resistances and would ease integration within traditional Si or Ge based devices.
An efficient method for the synthesis of suitable 2D nanomaterials with honeycomb structure is chemical functionalization. Hydrogenation and halogenation of the above systems for the realization of topological phases have been extensively explored.
Here we consider GeSnH$_2$, which is an inversion asymmetric hydrogenated bipartite honeycomb system. [*ab initio*]{} calculations have shown that monolayers (ML) of GeSn halide (GeSnX$_2$, X$=$F, Cl, Br, I) are large band gap 2D TIs with protected edge states forming QSH systems [@ma2015]. On the other hand hydrogenated ML GeSn (GeSnH$_2$) is a normal band insulator which can be transformed into a large band gap topological insulator via appropriate biaxial tensile strain [@ma2015].
Here, we propose a tight-binding (TB) model for a better understanding of the electronic band structure of GeSnH$_2$ near the Fermi level. The band structure of our TB model is fitted to the [*ab initio*]{} results, where we consider the cases with spin-orbit coupling (SOC) and without SOC. Within the linear regime of strain the band gap of our TB model agrees very well with the DFT results [@ma2015]. This TB model when including SOC predicts a band inversion at 7.5% biaxial tensile strain in agreement with DFT calculations [@ma2015]. We show the topological nature of the phase transition by the calculation of the $\mathbb{Z}_2$ topological invariant. Quantum transport of this system is calculated in order to examine the protection of the edge states against nonmagnetic scattering. It is shown that the conductance fluctuations of disordered nanoribbons for energies near the band gap belong to the universality class of the circular unitary ensemble ($\beta=2$), while for high energies and strong disorder the fluctuations follow the circular orthogonal ensemble ($\beta=1$).
This paper is organized as follows. In Sec. \[structure\], we introduce the crystal structure and lattice constants of monolayer GeSnH$_2$. Our proposed TB model is introduced in Sec. \[Model\], and the effect of strain on the electronic properties of GeSnH$_2$ is examined. In Sec. \[Phase\], the topological phase transition under strain is examined by looking at the nanoribbon band structure and by determining the $\mathbb{Z}_2$ topological invariant. In Sec. \[Transport\], electronic transport in disordered GeSnH$_2$ nanoribbons is examined and the protection of the chiral edge states against nonmagnetic scattering is verified. Also, we discuss the universality class of the conductance fluctuations. Our results are summarized in Sec. \[Conclusion\].
\[structure\]lattice structure
==============================
The ML GeSnH$_2$ prefers a buckled honeycomb lattice, analogous to its homogeneous counterparts germanene [@germanene2011] and stanene [@stanene2013]. Figs. \[Lattice\](a) and \[Lattice\](b) show the atomic structure of ML GeSnH$_2$ and its geometrical parameters. The 2D honeycomb lattice consists of two inequivalent sublattices made of Ge and Sn atoms, which are named A and B sublattices, respectively. Both Ge and Sn atoms exhibit $sp^3$ hybridization. One $sp^3$ orbital is passivated by a hydrogen atom and the other three are bonded to three neighboring Ge or Sn atoms. Thus, the unit cell of GeSnH$_2$ consists of four atoms: one Ge, one Sn, and two hydrogen atoms that are right above (below) the Ge (Sn) atoms. The lattice translation vectors are ${\bm a}_{1,2}=a({\sqrt{3}}/{2}\hat{y}\pm{{1}/{2}}\hat{x})$ with lattice constant $a=4.41$ Å, and the buckling height is $h=0.76$ Å [@ma2015; @zhangh2016ydrogenated]. Note that the $x$ and $y$ axes are taken to be along the armchair and zigzag directions, respectively; and the $z$ axis is in the normal direction to the plane of the GeSnH$_2$ film.
Fig. \[Lattice\](c) shows the hexagonal Brillouin zone of ML GeSnH$_2$ with primitive reciprocal lattice vectors ${\bm b}_{1,2}={2\pi}/{a}({\sqrt{3}/3}\hat{y}\pm\hat{x})$ and three high symmetry points $\Gamma$, K, M. The dynamical stability of ML GeSnH$_2$ was confirmed by the DFT calculated phonon spectrum [@ma2015].
![Top (a) and side (b) views of the ML GeSnH$_2$ structure. ${\bm a_1}$ and ${\bm a_2}$ are the lattice vectors, $a$ is the lattice constant and $h$ is the buckling height. (c) First Brillouin zone of the system with three symmetry points $\Gamma$, K and M and the reciprocal lattice vectors ${\bm b_1}$ and ${\bm b_2}$.[]{data-label="Lattice"}](lattice.eps){width=".48\textwidth"}
\[Model\]Low-energy effective TB Hamiltonian for monolayer GeSnH$_2$
==================================================================
The electronic structure and topological properties of 2D honeycomb ML GeSnH$_2$ were studied in Ref. [@ma2015] using first-principles calculations based on DFT. The DFT calculations including spin-orbit interaction predicted that a topological phase transition is induced in ML GeSnH$_2$ through the application of biaxial in-plane tensile strain. It was shown that the low-energy electronic structure of ML GeSnH$_2$ is determined exclusively by the $s$, $p_{x}$ and $p_{y}$ atomic orbitals of Ge and Sn atoms [@ma2015; @zhangh2016ydrogenated]. However, in order to examine the effect of random disorder on the electronic properties of this system and to confirm the topological nature of the phase transition, large system sizes are required. A limitation of the DFT calculations is that only small system sizes are manageable. Therefore, we need a TB model for ML GeSnH$_2$ that is able to describe the low-energy electronic structure of large systems.
In the following we will derive a low-energy TB model including spin-orbit coupling (SOC) and we will show that the results of the proposed TB model, even in the presence of biaxial strain, are in very good agreement with previous DFT calculations.
----------------------- ------------------------------------ --------------------------------------------------------------------------------------------------
$t_{ss}$ $V_{ss\sigma}$ $t_{ss}^0[1-2\epsilon\cos^2\phi_0]$
$t_{sp_x}$ $lV_{sp\sigma}$ $t_{sp_x}^0[1-2\epsilon\cos^2\phi_0+\eta\epsilon\tan\phi_0]$
$\overline{t}_{sp_x}$ $l\overline{V}_{sp\sigma}$ $\overline{t}_{sp_x}^0[1-2\epsilon\cos^2\phi_0+\eta\epsilon\tan\phi_0]$
$t_{sp_y}$ $mV_{sp\sigma}$ $t_{sp_y}^0[1-2\epsilon\cos^2\phi_0+\eta\epsilon\tan\phi_0]$
$\overline{t}_{sp_y}$ $m\overline{V}_{sp\sigma}$ $\overline{t}_{sp_y}^0[1-2\epsilon\cos^2\phi_0+\eta\epsilon\tan\phi_0]$
$t_{p_xp_x}$ $l^2V_{pp\sigma}+(1-l^2)V_{pp\pi}$ $t_{p_xp_x}^0[1-2\epsilon\cos^2\phi_0+2\eta\epsilon\tan\phi_0]-2\eta\epsilon\tan\phi_0V_{pp\pi}$
$t_{p_yp_y}$ $m^2V_{pp\sigma}+(1-m^2)V_{pp\pi}$ $t_{p_yp_y}^0[1-2\epsilon\cos^2\phi_0+2\eta\epsilon\tan\phi_0]-2\eta\epsilon\tan\phi_0V_{pp\pi}$
$t_{p_xp_y}$ $lm(V_{pp\sigma}-V_{pp\pi})$ $t_{p_xp_y}^0[1-2\epsilon\cos^2\phi_0+2\eta\epsilon\tan\phi_0]$
----------------------- ------------------------------------ --------------------------------------------------------------------------------------------------
\[TB\]Tight-Binding model Hamiltonian without SOC
--------------------------------------------------
To describe the low-energy spectrum and the electronic properties of ML GeSnH$_2$, we propose a TB model Hamiltonian involving the three outer-shell $s$, $p_x$ and $p_y$ atomic orbitals of Ge and Sn atoms. The effective TB Hamiltonian without SOC can be written in the second-quantized representation as $$\begin{aligned}
%\label{hamiltoni}
H_{0}&=\sum_{i\alpha}E_{i\alpha} c_{i\alpha}^{\dag} c_{i\alpha} \\
&+\sum _{\langle i,j\rangle ;\alpha ,\beta}t_{i\alpha ,j\beta}(c_{i\alpha}^{\dag} c_{j\beta} +h.c.).
\end{aligned}
\label{hamiltoni}$$ Here, $\alpha$, $\beta\in (s,p_x,p_y)$ are the orbital indices and $\langle i,j\rangle$ denotes the nearest-neighboring $i$th and $j$th atoms. $E_{i\alpha}$ is the on-site energy of $\alpha$th orbital of $i$th atom, $c_{i\alpha}^{\dag} (c_{i\alpha})$ is the creation (annihilation) operator of an electron in the $\alpha$th orbital of the $i$th atom, and $t_{i\alpha ,j\beta}$ is the nearest-neighbor hopping parameter between $\alpha$th orbital of $i$th atom and $\beta$th orbital of $j$th atom.
------- ------ ------ ------ ------- ------- ------ ------- ------
-1.51 3.33 2.35 3.69 -1.03 -5.47 5.23 -2.26 1.91
------- ------ ------ ------ ------- ------- ------ ------- ------
: Numerical values of the SK parameters obtained from a fitting to the $ab$ $initio$ results. The energy units are eV.[]{data-label="table2"}
The hopping parameters of Eq. (\[hamiltoni\]) are determined by the Slater-Koster (SK) [@slater-k1954] integrals as shown in the second column of table \[table1\], where $l=\cos\theta\cos\phi _0$ and $m=\sin\theta\cos\phi _0$ are direction cosines of the angles of the vector connecting two nearest-neighboring atoms with respect to $x$ and $y$ axes, respectively.
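For concreteness, a single nearest-neighbour hopping block built from these expressions can be written down in a few lines; the sketch below (Python) fills in only the entries listed in Table \[table1\] (unstrained case), the numerical SK values are placeholders rather than the fitted ones of Table \[table2\], and the sign convention of the $p$-to-$s$ entries is left to the chosen bond-direction convention.

```python
import numpy as np

def hopping_block(theta, phi0, Vss, Vsp, Vsp_bar, Vpps, Vppp):
    """Nearest-neighbour A->B hopping block in the (s, px, py) basis, using the
    unstrained Slater-Koster expressions of Table 1; theta is the in-plane bond
    angle and phi0 the buckling angle."""
    l = np.cos(theta) * np.cos(phi0)    # direction cosine along x
    m = np.sin(theta) * np.cos(phi0)    # direction cosine along y
    t = np.zeros((3, 3))
    t[0, 0] = Vss                                  # t_ss
    t[0, 1], t[0, 2] = l * Vsp, m * Vsp            # t_spx, t_spy
    t[1, 0], t[2, 0] = l * Vsp_bar, m * Vsp_bar    # tbar_spx, tbar_spy (sign left to bond convention)
    t[1, 1] = l**2 * Vpps + (1 - l**2) * Vppp      # t_pxpx
    t[2, 2] = m**2 * Vpps + (1 - m**2) * Vppp      # t_pypy
    t[1, 2] = t[2, 1] = l * m * (Vpps - Vppp)      # t_pxpy
    return t

# placeholder numbers only; the fitted SK values are those of Table 2
print(hopping_block(theta=np.pi / 2, phi0=0.29,
                    Vss=-1.0, Vsp=2.0, Vsp_bar=2.2, Vpps=3.0, Vppp=-0.8))
```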
We calculated the hopping parameters and on-site energies of the above Hamiltonian by minimizing the least-squares difference between the DFT band structure, obtained with the Heyd-Scuseria-Ernzerhof (HSE) approximation [@ma2015], and the band structure of our TB model. Our TB model Hamiltonian has nine fitting parameters; namely, four on-site orbital energies $(E_{Ge,s}, E_{Ge,p}, E_{Sn,s}, E_{Sn,p})$ and five SK parameters related to the hopping energies $(V_{ss\sigma}, V_{sp\sigma}, \overline{V}_{sp\sigma}, V_{pp\sigma}, V_{pp\pi})$. Note that $V_{sp\sigma} (\overline{V}_{sp\sigma})$ is the hopping integral between $s$ orbitals of atoms in sublattice A (B) and $p$ orbitals of atoms in sublattice B (A).
Table \[table2\] presents the obtained numerical values of the SK parameters. Using the optimized parameters, we can reproduce the three low-energy bands near the Fermi level ($s$ and $p_{xy}$ bands). Fig. \[bands1\](a) shows the TB low-energy bands of ML GeSnH$_2$, which are in good agreement with the DFT results. The band structure has a direct band gap of 1.155 eV at the $\Gamma$ point.
![The TB band structure of ML GeSnH$_2$ (a) without SOC and (b) with SOC. Symbols represent the DFT-HSE data taken from Ref. [@ma2015]. []{data-label="bands1"}](bands1.eps){width="47.00000%"}
\[SOC\]Spin-orbit coupling in ML GeSnH$_2$
----------------------------------------
In general, spin-orbit interaction can be written as [@liu2011low] $$H_{SOC}=\frac{\hbar}{4m_0^2c^2}(\bm{\nabla} V\times\bm{p})\cdot\bm{\sigma},
\label{HSOC}$$ where $\hbar$ is Planck's constant, $m_0$ is the rest mass of an electron, $c$ is the velocity of light, $V$ is the potential energy, $\bm{p}$ is the momentum, and $\bm{\sigma}$ is the vector of Pauli matrices. The major part of SOC in systems that consist of heavy atoms comes from the orbital motion of electrons close to the atomic nuclei. In such systems and within the central field approximation, the crystal potential $V(\bm{r})$ can be considered as an effective spherical atomic potential $V_i(\bm{r})$ located at the $i$th atom. Therefore, by substituting $\bm{\nabla} V_i(\bm{r})=({{dV_i}/{dr}}){{\bm r}/{r}}$ and $\bm{s}=({\hbar}/{2})\bm{\sigma}$ into Eq. (\[HSOC\]), the SOC term takes the form [@liu2011low], $$H_{SOC}=\lambda (r)\bm{L}\cdot\bm{s}.
\label{HSOC1}$$ The above equation can also be expressed in the form $$H_{SOC}=\lambda (r)\bigg(\frac{L_+s_-+L_-s_+}{2}+L_zs_z\bigg),
\label{HSOC2}$$ where $\lambda (r) =({1}/{2m_0^2c^2})({1}/{r})({dV}/{dr})$ is the effective atomic SOC constant, whose value depends on the specific atom, and $L_{\pm}$, $s_{\pm}$ are the raising and lowering operators for orbital angular momentum and spin, respectively. In the basis set of $\lvert s_{A}, p_{xA}, p_{yA}, s_{B}, p_{xB}, p_{yB}\rangle\otimes\lvert\uparrow ,\downarrow\rangle$, the matrix elements of the on-site SOC Hamiltonian for ML GeSnH$_2$ are given by $$\langle \alpha _i\vert H_{SOC}\vert\beta _i\rangle=\lambda _i\langle\frac{L_+s_-+L_-s_+}{2}+L_zs_z\rangle_{\alpha\beta}.$$ Here $\alpha _i$ and $\beta _i$ are the atomic orbitals and $\lambda _i$ is the on-site SOC strength of the $i$th atom. Note that since the two atoms in the unit cell of ML GeSnH$_2$ are different, we have two distinct SOC strengths $\lambda _A$ and $\lambda _B$ for atoms in sublattice A (Ge atoms) and sublattice B (Sn atoms), respectively.\
The resulting SOC Hamiltonian matrix in the above basis is given by $$H_{SOC}=\left(\begin{array}{cc}
H_{SOC}^{\uparrow\uparrow} & H_{SOC}^{\uparrow\downarrow} \\
H_{SOC}^{\downarrow\uparrow} & H_{SOC}^{\downarrow\downarrow}\\
\end{array}\right),$$ where, the elements are 6$\times$6 matrices $$H_{SOC}^{\uparrow\uparrow}=\frac{1}{2}\left(\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -i\lambda_{A} & 0 & 0 & 0\\
0 & i\lambda_{A} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -i\lambda_{B}\\
0 & 0 & 0 & 0 & i\lambda_ {B} & 0 \\
\end{array}\right),$$ $$\begin{split}
& H_{SOC}^{\downarrow\downarrow}=-H_{SOC}^{\uparrow\uparrow}~,\\
& H_{SOC}^{\uparrow\downarrow}=H_{SOC}^{\downarrow\uparrow}=0~.
\end{split}$$
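As a consistency check, this on-site SOC block can be encoded directly in a few lines; the sketch below (Python) builds the full $12\times 12$ matrix with the structure written above (a factor $1/2$ and a $\pm i\lambda$ coupling between $p_x$ and $p_y$ within each spin sector) and verifies its Hermiticity, using the fitted $\lambda_A$ and $\lambda_B$ values quoted in the next paragraph.

```python
import numpy as np

def h_soc(lam_A, lam_B):
    """12x12 on-site SOC matrix in the basis (s,px,py)_A, (s,px,py)_B  x  (up,down),
    reproducing the block structure written above."""
    def block(lam):                       # 3x3 spin-up block for one atom
        m = np.zeros((3, 3), dtype=complex)
        m[1, 2], m[2, 1] = -1j * lam, 1j * lam
        return m
    up = 0.5 * np.block([[block(lam_A), np.zeros((3, 3))],
                         [np.zeros((3, 3)), block(lam_B)]])
    zero = np.zeros((6, 6), dtype=complex)
    return np.block([[up, zero], [zero, -up]])

H = h_soc(0.296, 0.326)                   # fitted SOC strengths for Ge and Sn (eV)
print(np.allclose(H, H.conj().T))         # Hermiticity check -> True
```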
The value of the on-site effective SOC strength $\lambda$ is determined by fitting the TB energy bands to the DFT results. Here we fitted the energy bands obtained from our TB model to the one from the DFT+HSE approach [@ma2015]. The optimized numerical results using the hopping parameters from Table \[table2\] are $\lambda _{A}=0.296$ eV, $\lambda _{B}=0.326$ eV for Ge and Sn atoms, respectively. The TB energy bands of ML GeSnH$_2$ in the presence of spin-orbit interaction using the mentioned values of SOC strengths are in good agreement with the $ab$ $initio$ results as shown in Fig. \[bands1\](b).
Notice from Fig. \[bands1\](b) the spin splitting of the doubly degenerate bands. The splitting is clearest for the upper valence band close to the K symmetry point, where it reaches 92 meV. For the conduction band the spin splitting is 241 meV at the K symmetry point. All energy states of a system with inversion symmetry are spin-degenerate as long as TRS is preserved; the degeneracy can be lifted by breaking the inversion symmetry in such a system. The band gap of the system is reduced to 0.977 eV, demonstrating that ML GeSnH$_2$ is still a normal semiconductor.
\[S&E\]The effect of strain
---------------------------
The electronic properties of a system can be affected significantly by applying strain [@bir1974; @sun2010]. This is due to the fact that strain changes both the bond lengths and the bond angles. This in turn changes the SK parameters and hopping integrals that further affect the electronic band structure.
Based on knowledge from our previous works, we expect a topological phase transition upon the application of biaxial tensile strain [@ma2015]. Further, since the electronic band structure of this system under strain is available from $ab$ $initio$ calculations, the comparison of our TB band structure with DFT results will be another verification for the correctness of our TB model Hamiltonian.
Here, we investigate the effect of biaxial tensile strain on ML GeSnH$_2$. In our previous work we studied the effect of biaxial tensile strain on a related system where the low-energy electronic properties were dominated by $s$, $p_x$ and $p_y$ atomic orbitals [@rezaei2017GeCH3]. We follow the approach of Ref. [@rezaei2017GeCH3] and obtain the effect of strain on the hopping parameters which we list in table \[table1\]. Note that $t_{i\alpha,j\beta}^{0}$ indicates the unstrained hopping parameters.
In such systems, it is expected that the buckling angle changes linearly with strain as $\phi =\phi_0 -\eta\epsilon$ [@rezaei2017GeCH3]. Here, $\epsilon$ is the strength of the applied biaxial strain and $\phi_0$ is the initial buckling angle. However, our calculations show that the change of the buckling angle in this system produces a negligible effect on the hopping parameters and therefore we will use the initial buckling angle $\phi_0$ in the rest of our calculations.
The Hamiltonian for the strained system can be obtained by substituting the new hopping parameters (last column of table \[table1\]) in the original Hamiltonian Eq. (\[hamiltoni\]). The calculated TB energy spectra of ML GeSnH$_2$ are shown in Figs. \[bands2\](a-c) without SOC and Figs. \[bands2\](d-f) with SOC in the presence of 0%, 4%, and 8% biaxial tensile strain. As shown, these results are in good agreement with the DFT calculations [@ma2015].
![The TB band structure of ML GeSnH$_2$ without (upper panels) and with (lower panels) SOC in the presence of (a) and (d) 0%, (b) and (e) 4%, and (c) and (f) 8% biaxial tensile strain. Red dots indicate the DFT-HSE data taken from Ref. [@ma2015].[]{data-label="bands2"}](bands2_ab.eps "fig:"){width="48.00000%"} ![The TB band structure of ML GeSnH$_2$ without (upper panels) and with (lower panels) SOC in the presence of (a) and (d) 0%, (b) and (e) 4%, and (c) and (f) 8% biaxial tensile strain. Red dots indicate the DFT-HSE data taken from Ref. [@ma2015].[]{data-label="bands2"}](bands2_SOC_cd.eps "fig:"){width="48.00000%"}
![The calculated band gaps of ML GeSnH$_2$ at the $\Gamma$ point ($E_g^\Gamma$) and the global energy gap ($E_g^{global}$) as a function of biaxial strain (a) without SOC, and (b) with SOC. The DFT data are taken from [@ma2015].[]{data-label="gap-strain"}](gap_strain.eps "fig:"){width="45.00000%"} ![The calculated band gaps of ML GeSnH$_2$ at the $\Gamma$ point ($E_g^\Gamma$) and the global energy gap ($E_g^{global}$) as a function of biaxial strain (a) without SOC, and (b) with SOC. The DFT data are taken from [@ma2015].[]{data-label="gap-strain"}](gap_strain_SOC.eps "fig:"){width="46.00000%"}
In Figs. \[gap-strain\](a) and \[gap-strain\](b), we show the variation of the energy gap of ML GeSnH$_2$ as a function of biaxial tensile strain without and with SOC, respectively. As shown in Fig. \[gap-strain\](b), by applying biaxial tensile strain in the presence of SOC, the band gap located at the $\Gamma$ point decreases steadily and eventually a band inversion occurs at the critical value of 7.5% strain. Increasing the strain further reopens the gap, which reaches $\sim$ 134 meV at a reasonable strain of 8.5%. Upon increasing the strain even further, to 9.5%, the band gap with SOC becomes indirect. As shown in the TI region of Fig. \[gap-strain\](b), both $E_g^\Gamma$ and $E_g^{global}$ (global band gap) remain relatively unaffected by increasing the strain beyond 13%.
Remarkably, the value of the band gap is significantly larger than $k_BT$ at room temperature ($\sim$ 25 meV), and large enough to realize the QSH effect in ML GeSnH$_2$ even at room temperature. The excellent agreement between the results of our TB model and the DFT calculations in predicting the band inversion in this system upon biaxial tensile strain, implicitly confirms the validity of our proposed TB model.
![The 1D energy bands and the total conductance (in units of $G_0=e^2/h$) of z-GeSnH$_2$-NR for $N=26$ in the presence of (a) and (c) 5%, (b) and (d) 8.5% biaxial tensile strain.[]{data-label="band_zig"}](band_zig1_transport.eps){width="45.00000%"}
\[Phase\]Topological phase transition of monolayer GeSnH$_2$ under strain
=======================================================================
Using our TB model including spin-orbit interaction, we showed that ML GeSnH$_2$ is a normal insulator (NI) with a direct band gap of 0.977 eV. We showed that applying biaxial tensile strain modifies its electronic spectrum and that a band inversion takes place at $\epsilon=7.5\%$.
One way to show that there is a topological phase transition at the critical strain of $\epsilon=7.5\%$, is to perform calculations of the 1D band structure of nanoribbons with zigzag edges in the presence of biaxial tensile strain, and verify the existence of gapless helical edge states. Here, we study the edge state energy bands by cutting a 2D film of ML GeSnH$_2$ into nanoribbons. In our calculations, the Sn and Ge atoms at the edges of the nanoribbon are passivated by hydrogen atoms.
The width of a zigzag GeSnH$_2$ nanoribbon (z-GeSnH$_2$-NR) is defined by N, which is the number of zigzag chains across the ribbon width. In order to reduce the interaction between the two edges, the width of the nanoribbons is taken to be at least 10 nm.
We calculated the band structure of the corresponding nanoribbons. Note that the change of the hopping parameters and the on-site energies of the edge Ge or Sn atoms caused by the passivation procedure is negligible. Therefore, we can use the results in Table \[table1\] and \[table2\] for the on-site energies of Ge and Sn atoms and also for the hopping parameters corresponding to Ge-Sn bonds in unstrained or strained systems. We have two different on-site energies of hydrogen atoms $E_{H_0}^s$ and $E_{H_1}^s$, which are pertinent to the Hydrogen atoms that are introduced to passivate the Ge and Sn atoms on each edge, respectively. We can write the hopping parameters related to the H-Ge and H-Sn bonds as $$t_{H,X}^{ss}=V_{H,X}^{ss},~ t_{H,X}^{sp_{y}}=\pm V_{H,X}^{sp},~ (X=Ge,~Sn),$$ where $+(-)$ denotes the lower (upper) H-X edge bonds. The numerical value of the mentioned hopping parameters and on-site energies of the hydrogen atoms can be obtained by a fitting procedure. The results are shown in Table \[table3\].
------- ------ ------- ------ ------- -------
-4.87 2.05 -4.21 3.63 -1.84 -2.40
------- ------ ------- ------ ------- -------
: Numerical values of the on-site energy of Hydrogen atoms and the hopping parameters related to the H-Ge and H-Sn bonds. The energy is in units of eV.[]{data-label="table3"}
Figs. \[band\_zig\](a) and \[band\_zig\](b) show the energy bands of z-GeSnH$_2$-NR with $N=26$ in the presence of 5% and 8.5% biaxial tensile strains, respectively. Gapless conducting edge bands are seen for strain of $\epsilon=8.5\%$. This is consistent with our proposal for the topological phase transition at $\epsilon=7.5\%$.
It is now well established that, for all time-reversal-invariant 2D band insulators, a change in the $\mathbb{Z}_2$ topological invariant from zero to one indicates a topological phase transition from a NI to a TI phase. In our previous works [@sisakht2016strain; @rezaei2017GeCH3], we successfully used the algorithm of Fukui and Hatsugai [@fukui1] to calculate the $\mathbb{Z}_2$ number and characterize the topology of the energy bands. In this work, we implemented the same procedure to confirm the existence of two distinct topological phases in the electronic properties of ML GeSnH$_2$. We found that the value of $\mathbb{Z}_2$ switches from zero to one at the critical strain of $7.5\%$, which confirms the topological nature of the phase transition.
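While the full $\mathbb{Z}_2$ computation requires the time-reversal constraint over half the Brillouin zone, its core ingredient in the Fukui-Hatsugai construction, the lattice field strength built from link variables of the occupied states [@fukui1], can be sketched compactly. The sketch below (Python) computes a Chern number for a generic two-band test Hamiltonian (not the GeSnH$_2$ model); the $\mathbb{Z}_2$ index follows from the same plaquette construction restricted to half the BZ.

```python
import numpy as np

def chern_number(h_k, nk=40, nocc=1):
    """Fukui-Hatsugai lattice Chern number of the lowest `nocc` bands of a
    Bloch Hamiltonian h_k(kx, ky) defined on a 2*pi-periodic Brillouin zone."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    frames = np.empty((nk, nk), dtype=object)          # occupied eigenvector frames
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_k(kx, ky))
            frames[i, j] = v[:, :nocc]
    total = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            u1 = np.linalg.det(frames[i, j].conj().T @ frames[ip, j])
            u2 = np.linalg.det(frames[ip, j].conj().T @ frames[ip, jp])
            u3 = np.linalg.det(frames[ip, jp].conj().T @ frames[i, jp])
            u4 = np.linalg.det(frames[i, jp].conj().T @ frames[i, j])
            total += np.angle(u1 * u2 * u3 * u4)       # plaquette field strength
    return int(round(total / (2.0 * np.pi)))

# sanity check on a generic gapped two-band model (not the GeSnH2 Hamiltonian)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
test = lambda kx, ky, m=-1.0: np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz
print(chern_number(test))   # |C| = 1 in the topological phase of this test model
```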
\[Transport\]Electronic Transport in Disordered GeSnH$_2$ Nanoribbons
===================================================================
Transport measurement is a different approach to confirm the existence of helical gapless edge states, which is an important signature of the TI phase. Therefore, we next study the transport properties of ML GeSnH$_2$ nanoribbons in the presence of strain. To this end, we calculate the conductance of z-GeSnH$_2$-NR using the Landauer formalism [@datta1995; @datta2005]. As is standard, the z-GeSnH$_2$-NR is divided into three regions: the left and right leads and the middle scattering region. We initially assume the ribbon to be perfect in order to satisfy the condition of ballistic transport. In the Landauer approach, the total conductance $G_c(E)$ per spin of a nanoscopic device at energy $E$ is given by $$G_{c}(E)=(\frac{e^2}{h})Tr[\Gamma _L(E)G_D^R(E)\Gamma _R(E)G_D^A(E)],$$ where $\Gamma _{L(R)}=i[\Sigma _{L(R)}(E)-\Sigma^\dagger_{L(R)}(E)]$, with $\Sigma_{L(R)}(E)$ being the self energy of the left (right) lead. $G_D^R(E)$ is the retarded Green’s function of the device and $G_D^A(E)= {G_D^R}^\dagger(E)$. The retarded Green’s function of the device is given by $$G_D^R(E) = {[E-H_D-\Sigma_L^R(E)-\Sigma_R^R(E)]}^{-1}.$$ Here $H_D$ is the Hamiltonian for the device region. The numerically calculated total conductance, in units of $G_0={e^2}/{h}$, for $N=26$ as a function of energy is shown in Figs. \[band\_zig\](c) and \[band\_zig\](d) in the presence of 5% and 8.5% biaxial tensile strain, respectively. For strains less than $7.5\%$ ($\epsilon<7.5\%$), the z-GeSnH$_2$-NR has nonzero conductance only above a threshold energy corresponding to the minimum of the conduction band, beyond which conducting channels open and the conductance increases in steps of $2G_0$. As shown in Fig. \[band\_zig\](d), the total conductance at zero energy changes from 0 to 2 upon applying biaxial tensile strains larger than $7.5\%$ ($\epsilon>7.5\%$). The non-zero conductance in the gap originates from the zero-energy edge states and indicates a topological phase transition from NI to TI in ML GeSnH$_2$. These conducting edge states are protected by TRS; the resulting robustness of the quantized conductance against backscattering by disorder holds great promise for spintronics applications [@gusev2011disorder; @van2017effect].
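The two expressions above can be combined into a few lines of NumPy. The following is a minimal sketch under the assumption of known lead self-energies; the four-site chain in the example is a placeholder standing in for the actual z-GeSnH$_2$-NR Hamiltonian and self-energies.

```python
import numpy as np

def conductance(E, H_D, sigma_L, sigma_R, eta=1e-6):
    """Landauer conductance G_c(E) in units of e^2/h, computed as
    Tr[Gamma_L G^R Gamma_R G^A] from the device Hamiltonian H_D and
    the retarded self-energies of the left and right leads."""
    dim = H_D.shape[0]
    G_R = np.linalg.inv((E + 1j * eta) * np.eye(dim) - H_D - sigma_L - sigma_R)
    G_A = G_R.conj().T
    Gamma_L = 1j * (sigma_L - sigma_L.conj().T)   # lead broadening matrices
    Gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(Gamma_L @ G_R @ Gamma_R @ G_A).real

# Placeholder device: a 4-site chain coupled to wide-band leads at its ends
H_D = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
sigma_L = np.zeros((4, 4), complex); sigma_L[0, 0] = -0.5j
sigma_R = np.zeros((4, 4), complex); sigma_R[3, 3] = -0.5j
print(conductance(0.0, H_D, sigma_L, sigma_R))    # ~0.64 for this toy chain
```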
It is instructive to further consider the effect of disorder on the electronic transport properties of this system. The disorder may originate from unwanted dislocations or other defects. Such calculations provide further evidence for our proposal of a strain-induced TI transition. In TIs, the edge states are robust against weak disorder, and only strong disorder can affect the electronic properties.
Here we introduce the disorder in our TB Hamiltonian model $H_{TB}=H_0+H_{SOC}$ using the so-called Anderson disorder model [@lee1981anderson] as $$H_{D}=\sum_{i,\alpha}W_{i} c_{i,\alpha}^{\dag} c_{i,\alpha},
\label{HSOC}$$ where $W_i$ is a random number uniformly distributed over the range $[-\frac{W}{2}, \frac{W}{2}]$. We assume a disordered z-GeSnH$_2$-NR with $N=34$ ($\sim$ 13 nm width) and 51 nm length as the middle region, which is sandwiched between two semi-infinite perfect leads, and calculate the conductance averaged over 100 different realizations. The width of the ribbon is chosen large enough in order to avoid finite-size effects. Fig. \[Conduct\_Devi\](a) shows the average conductance of z-GeSnH$_2$-NR as a function of disorder strength $W$ in the presence of biaxial strain $\epsilon=8.5\%$ at three different values of the Fermi energy. The energy $E_F=0.1$ eV corresponds to the middle of the band gap, while $E_F=0.35$ eV and $E_F=1.4$ eV are two energies in the bulk. It can be seen that the averaged quantized conductance at $E_F=0.1$ eV, which originates from the gapless edge states, is insensitive to weak disorder. The mean conductance decreases only for strong disorder of strength $W>1.5$ eV, which is a signature of a topological insulator.
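A sketch of how the disorder average described here might be organized is given below. It reuses the `conductance` helper from the previous sketch, and the clean Hamiltonian and self-energies are again the toy placeholders rather than the actual nanoribbon matrices.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_anderson_disorder(H_clean, W, rng):
    """Add uniform on-site energies W_i in [-W/2, W/2] (Anderson disorder)."""
    H = H_clean.astype(complex)
    H[np.diag_indices_from(H)] += rng.uniform(-W / 2, W / 2, H.shape[0])
    return H

def disorder_averaged_conductance(E, H_clean, sigma_L, sigma_R, W, n_real=100):
    """Mean and standard deviation of G_c over n_real disorder realizations,
    using the `conductance` routine sketched above."""
    samples = np.array([conductance(E, add_anderson_disorder(H_clean, W, rng),
                                    sigma_L, sigma_R)
                        for _ in range(n_real)])
    return samples.mean(), samples.std()

# Placeholder usage with the toy chain defined in the previous sketch:
print(disorder_averaged_conductance(0.0, H_D, sigma_L, sigma_R, W=1.0))
```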
![(a) The mean conductance $\langle G_{c}\rangle$ and (b) the standard deviation of conductance $\delta G_{c}$ of z-GeSnH$_2$-NR with $N=34$ (13 nm width) and 51 nm length as a function of disorder strength $W$.[]{data-label="Conduct_Devi"}](ucf.eps){width="45.00000%"}
The universal conductance fluctuations (UCF) [@lee1985UCF] are a mesoscopic phenomenon caused by the quantum interference of electrons. The UCF are of order ${e^2}/{h}$ and correspond to the deviation of the conductance from its ballistic value $G=nG_0$. The standard deviation of the conductance in the diffusive regime at zero temperature depends only on the dimensionality and universality class of the disordered mesoscopic system and is independent of details such as the disorder strength $W$, the conductance $G_c$ and the sample size. The UCF take universal values for three symmetry classes, labelled by $\beta$ = 1, 2 and 4. In systems that preserve TRS and spin rotational symmetry (SRS), the symmetry index is $\beta =1$ (orthogonal ensemble); $\beta =2$ corresponds to the case where TRS is broken (unitary ensemble); and $\beta =4$ applies to systems that preserve TRS but have broken SRS (symplectic ensemble) [@hu2017numerical; @hsu2018conductance]. We study numerically the conductance fluctuations in the disordered z-GeSnH$_2$-NR in the presence of biaxial tensile strain. The standard deviation of the conductance, $\delta G_{c}={\langle {~(G_{c}-\langle G_c\rangle )}^2\rangle}^{\frac{1}{2}}$, of disordered z-GeSnH$_2$-NR is plotted as a function of disorder strength $W$ at three different Fermi energies in Fig. \[Conduct\_Devi\](b). There are no conductance fluctuations at $E_F=0.1$ eV for weak disorder, revealing the robustness of the helical edge states against disorder in the QSH phase. Away from this quantized regime, $\delta G_c$ approaches the value $\sim 0.52~e^2/h$, which corresponds to the UCF of the symmetry class $\beta=2$. Our total model Hamiltonian preserves TRS, and the reason that the UCF follow the symmetry class $\beta=2$ is the following: since there are no spin-flip terms (Rashba SOC term) in our model Hamiltonian, we can block-diagonalize the Hamiltonian matrix with respect to the spin degrees of freedom, with zero off-diagonal terms. We are then dealing with two isolated and identical Hamiltonians corresponding to spin-up and spin-down states. Each of these blocks lacks TRS due to the intrinsic SOC terms, and consequently the total UCF follows the $\beta=2$ symmetry class. A similar discussion can be found in Refs. [@hsu2018conductance; @choe2015universal], except that in Ref. [@hsu2018conductance] spin is not a good quantum number and the Hamiltonian was block-diagonalized with respect to the sublattice degrees of freedom. For high energies and strong disorder, the standard deviation approaches the value $\sim 0.73~e^2/h$, which belongs to the symmetry class $\beta =1$. To understand this, we compare the localization length with the spin relaxation length (the latter originates from the intrinsic SOC terms) [@kaneko2010symmetry]. As long as the disorder is weak and the localization length is much larger than the spin relaxation length, the SOC is significant and the system lacks SRS. When the disorder is strong enough that the localization length is much smaller than the spin relaxation length, the SOC terms can be ignored, SRS is preserved, and the system follows the $\beta=1$ symmetry class. Finally, beyond $W\sim 6$ eV, where $\langle G_c\rangle$ approaches the value of $0.3~e^2/h$, the conductance fluctuations decrease and approach the superuniversal curve [@qiao2010universal], which is independent of dimensionality and symmetry. The superuniversal regime lies beyond $W=7$ eV and is not shown in Fig. \[Conduct\_Devi\].
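A small post-processing helper, continuing the disorder-average sketch above, can make the comparison with these quoted class values explicit. The reference numbers below are the values cited in the text, and the helper (with toy input samples) is purely illustrative.

```python
import numpy as np

# UCF values quoted above (in units of e^2/h) for the two relevant classes
UCF_REFERENCE = {"beta=2 (unitary)": 0.52, "beta=1 (orthogonal)": 0.73}

def nearest_ucf_class(conductance_samples):
    """Standard deviation of the samples and the closest quoted UCF class."""
    delta_g = float(np.std(conductance_samples))
    label = min(UCF_REFERENCE, key=lambda k: abs(UCF_REFERENCE[k] - delta_g))
    return delta_g, label

print(nearest_ucf_class([1.9, 2.6, 1.4, 2.1, 3.0]))  # toy samples, not real data
```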
\[Conclusion\]Conclusion
========================
In summary, we constructed an effective TB model, without and with SOC, for ML GeSnH$_2$, which reproduces the electronic spectrum of this system near the Fermi level in excellent agreement with the DFT results. Including SOC decreases the band gap from 1.155 eV to 0.977 eV. In the presence of biaxial tensile strain, our proposed TB model correctly predicts the evolution of the band spectrum and also predicts a topological phase transition from the NI to the TI phase in ML GeSnH$_2$ at 7.5% biaxial tensile strain. The global bulk gap, which is topologically protected, is 134 meV at a reasonable strain of 8.5%. This bulk gap exceeds the thermal energy at room temperature and is large enough to make ML GeSnH$_2$ suitable for room-temperature spintronics applications.
The strong SOC and the applied mechanical strain are two essential factors that induce the topological phase transition from the NI to the QSH phase in ML GeSnH$_2$. More interestingly, ML GeSnH$_2$ is a strain-induced TI with inversion asymmetry, which makes it a promising candidate for understanding intriguing topological phenomena like magneto-electric effects. The TI nature of ML GeSnH$_2$ for strain $\epsilon>7.5\%$ was confirmed by calculating the $\mathbb{Z}_2$ topological invariant. We also showed the existence of topologically protected gapless edge states in a typical z-GeSnH$_2$-NR in the presence of biaxial strain $\epsilon>7.5\%$.
In addition, by calculating the electronic transport, we found topologically protected gapless edge states in a typical z-GeSnH$_2$-NR at a biaxial strain of $\epsilon=8.5\%$ in the presence of disorder. The conductance fluctuations reach the universal value of the unitary class $\beta=2$. For high Fermi energies and strong disorder the conductance fluctuations follow the orthogonal ensemble $\beta=1$.
Acknowledgements: This work was supported by the FLAG-ERA project TRANS-2D-TMD.
[P. Moshin]{}
> The generating functionals of Green’s functions with composite and external fields are considered in the framework of BV and BLT quantization methods for general gauge theories. The corresponding Ward identities are derived and the gauge dependence is investigated.
Introduction
============
The most general rules for manifestly covariant quantization of gauge theories in the path integral approach are provided by the BV formalism [@BV], based on the principle of BRST symmetry [@BRST], as well as by its Sp(2)-covariant version, the BLT quantization scheme [@BLT], based on the principle of extended BRST symmetry [@extBRST]. These methods currently underlie the study of quantum properties of arbitrary (general) gauge theories in the Lagrangian formalism, either immediately providing the corresponding basis (e.g. derivation of the Ward identities, study of renormalization and gauge dependence [@LVT], analysis of unitarity conditions [@LM]) or playing a key role in the interpretation of alternative quantization methods (e.g. triplectic [@BMS], superfield [@LMR], osp(1,2)-covariant quantization [@GLM]). In particular, the methods [@BV; @BLT] have been used to analyse the quantum structure of general gauge theories with composite fields [@LO; @LOR] (in the BV and BLT formalisms, respectively); for these theories the corresponding Ward identities were derived and the related issue of gauge dependence was investigated. Also based on the use of the quantization methods [@BV; @BLT] was the recent study of Ref. [@ext], which carried out an investigation of the Ward identities and gauge dependence for general gauge theories with external fields.
In particular, the studies of Refs. [@LOR; @ext] revealed the fact that the gauge dependence for theories with composite or external fields is described with the help of certain fermionic operators in the BV formalism, and doublets of fermionic operators in the BLT formalism. At the same time, a remarkable feature, commonly shared by both types of theories, consists in the property of (generalized) nilpotency of the fermionic operators in question.
The purpose of this paper is to provide an extension of the studies of Refs. [@LO; @LOR; @ext], which is aimed at incorporating composite fields, simultaneously with external ones, into arbitrary quantum gauge theories. The reason to consider this very general setting consists in the following. The procedure for constructing the effective action with composite fields [@comp] (see also Ref. [@ref]) offers a wide range of applications to quantum field theory models. Among the most important of them we find the study of such phenomenologically relevant theories as the Standard Model [@StMod] and models of the inflationary Universe [@infUniv], as well as SUSY theories [@SUSY] and the theory of strings [@string]. Recent activities in supersymmetric YM theories and other gauge models have signalled the relevance of their extension to the case of composite fields on an external background. The external field approach is applied mainly to handle certain difficulties in the path integral formulation of quantum theories by means of lifting the functional integration from a part of the variables, which are afterwards considered as external parameters. In the absence of a consistent theory of quantum gravity, this approach currently has the status of an indispensable tool providing insight into numerous problems which arise in the physics of black holes, as well as permitting one to incorporate gravitational effects into the cosmology of the early Universe.
In this paper, the generating functionals of Green’s functions with composite and external fields combined are considered in the framework of the BV (section 2) and BLT (section 3) quantization methods for general gauge theories. For these functionals we derive the corresponding Ward identities and investigate the most general form of gauge dependence. It is revealed that combining composite and external fields into one scheme gives rise to a radical change in the character of both the Ward identities and gauge dependence (as compared to those obtained in Refs. [@LO; @LOR; @ext]). In particular, it is shown that in the most general case of a quantum theory with composite and external fields the operators describing the gauge dependence suffer from the violation of (generalized) nilpotency.
We use De Witt’s condensed notations [@DeWitt] as well as notations adopted in Refs. [@BV; @BLT]. The invariant tensor of the group Sp(2), which is a constant antisymmetric second-rank tensor, is denoted as $\varepsilon^{ab}$ ($a=1, 2$), with the normalization $\varepsilon^{12}=1$. Symmetrization over Sp(2) indices is denoted as $A^{\{ab\}}=A^{ab}+A^{ba}$. Derivatives with respect to sources and antifields are understood as acting from the left, and those with respect to fields, as acting from the right (unless otherwise specified); left-hand derivatives with respect to the fields are labelled by the subscript “${\it l}\,$" ($\delta_l/\delta\phi$ stands for the left-hand derivative with respect to the field $\phi$).
Quantum Gauge Theories with Composite and External Fields in the BV Formalism
=============================================================================
We first recall that the quantization of a gauge theory within the BV approach [@BV] requires introducing a complete set of fields $\phi^A$ and a set of corresponding antifields $\phi^*_A$ (which play the role of sources of BRST transformations), with Grassmann parities $$\varepsilon(\phi^A)\equiv\varepsilon_A,\;\;
\varepsilon(\phi^*_A)=\varepsilon_A+1.$$
The content of the configuration space of the fields $\phi^A$ (composed of the initial classical fields, the (anti)ghost pyramids and the Lagrange multipliers) is determined by the properties of the original classical theory, i.e. by the linear dependence (for reducible theories) or independence (for irreducible theories) of the generators of gauge transformations.
In terms of the variables $\phi^A$ we define composite fields $\sigma^m(\phi)$, i.e. $$\begin{aligned}
\sigma^m(\phi)=
\sum_{k \geq 2}\frac{1}{k!}\Lambda^m_{A_1...A_k}\phi^{A_1}...\phi^{A_k},
\;\;\varepsilon(\sigma^m)\equiv\varepsilon_m,\end{aligned}$$ where $\Lambda^m_{A_1...A_k}$ are some field-independent coefficients.
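As a minimal illustration (assuming, for simplicity, purely bosonic fields and a single quadratic composite field with symmetric coefficients), one has $$\sigma(\phi)=\frac{1}{2}\,\Lambda_{AB}\,\phi^{A}\phi^{B},\qquad
\Lambda_{AB}=\Lambda_{BA}
\;\;\Longrightarrow\;\;
\sigma_{,C}(\phi)\equiv\frac{\delta\sigma(\phi)}{\delta\phi^{C}}
=\Lambda_{CB}\,\phi^{B},$$ which is the simplest instance of the derivatives $\sigma^m_{,A}$ entering the Ward identities below.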
The extended generating functional $Z(J,L,\phi^*)$ of Green’s functions with composite fields is constructed within the BV quantization approach by the rule (see, for example, Ref. [@LO]) $$\begin{aligned}
Z(J,L,\phi^*)&=&\int d\phi\;\exp\bigg\{\frac{i}{\hbar}\bigg(
S_{\rm ext}(\phi,\phi^{*})+
J_A\phi^A+L_m\sigma^m(\phi)\bigg)\bigg\},\end{aligned}$$ where $J_A$ are the usual sources of the fields $\phi^A$, $\varepsilon(J_A)=\varepsilon_A$; $L_m$ are the sources of the composite fields $\sigma^m(\phi)$, $\varepsilon(L_m)=\varepsilon_m$; and $S_{\rm ext}=
S_{\rm ext}(\phi,\phi^{*})$ is the gauge-fixed quantum action defined in the usual manner as $$\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\}=\exp\bigg(\hat{T}(\Psi)\bigg)
\exp\bigg\{\frac{i}{\hbar}S\bigg\}.$$
In eq. (2), $S=S(\phi,\phi^*)$ is a bosonic functional satisfying the equation $$\frac{1}{2}(S,S)=i\hbar\Delta S,$$ or equivalently $$\Delta\exp\bigg\{\frac{i}{\hbar}S\bigg\}=0,$$ with the boundary condition $$\begin{aligned}
S|_{\phi^{*}=\hbar=0}={\cal S},\end{aligned}$$ where $\cal S$ is the original gauge-invariant classical action. At the same time, the operator $\hat{T}(\Psi)$ has the form $$\hat{T}(\Psi)=[\Delta,\Psi]_{+}\;,$$ where $\Psi$ is a fermionic gauge-fixing functional.
In eqs. (3)–(5) we use the standard definition of the antibracket, given for two arbitrary functionals $F=F(\phi,\phi^*)$, $G=G(\phi,\phi^*)$ by the rule $$\begin{aligned}
(F,\;G)=\frac{\delta F}{\delta\phi^A}\frac{\delta G}{\delta
\phi^*_A}-(-1)^{(\varepsilon(F)+1)(\varepsilon(G)+1)}
\frac{\delta G}{\delta\phi^A}\frac{\delta F}
{\delta\phi^*_A}\,,\end{aligned}$$ as well as the usual definition of the operator $\Delta$ $$\begin{aligned}
\Delta=(-1)^{\varepsilon_A}\frac{\delta_l}{\delta\phi^A}\frac
{\delta}{\delta\phi^{*}_A}\,,\end{aligned}$$ possessing the property of nilpotency $\Delta^2=0$.
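As a quick illustration of this property in the simplest setting (a single bosonic field $\phi$, $\varepsilon(\phi)=0$, with its Grassmann-odd antifield $\phi^*$), the operator reduces to $$\Delta=\frac{\delta_l}{\delta\phi}\frac{\delta}{\delta\phi^{*}}
\;\;\Longrightarrow\;\;
\Delta^{2}=\frac{\delta_l^{2}}{\delta\phi^{2}}
\left(\frac{\delta}{\delta\phi^{*}}\right)^{2}=0,$$ since the square of the derivative with respect to the Grassmann-odd antifield vanishes identically; the sign factor $(-1)^{\varepsilon_A}$ plays no role in this one-field example.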
It is well-known that the gauge-fixing (2), (5) represents a particular case of transformation corresponding to any fermionic operator (chosen for $\Psi$) and describing the arbitrariness in solutions of eq. (3) (or eq. (4)), namely, $$\begin{aligned}
{\Delta}\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\}=0.\end{aligned}$$ In what follows we consider the most general case of gauge-fixing, corresponding to an arbitrary operator-valued fermionic functional $\Psi$ (clearly, it should be chosen in such a way as to ensure the existence of the functional integral).
Let us now consider the following representation of the generating functional $Z(J,L,\phi^*)$ in eq. (1): $$\begin{aligned}
Z(J,L,\phi^*)=\int d\psi\;{\cal Z}({\cal J},L,\psi,\phi^*)\exp\bigg
(\frac{i}{\hbar}{\cal Y}\psi\bigg),\end{aligned}$$ where $$\begin{aligned}
{\cal Z}({\cal J},L,\psi,\phi^*)=\int d\varphi\;\exp\bigg\{\frac{i}
{\hbar}\bigg(S_{\rm ext}(\varphi,\psi,\phi^{*})+{\cal J}\varphi
+L\sigma(\varphi,\psi)\bigg)
\bigg\}.\end{aligned}$$ Given this, we have assumed the decomposition $$\phi^A=(\varphi^i,\;\psi^\alpha),\;\;\;
J_A=({\cal J}_i,\;{\cal Y}_\alpha),\\$$ $$\varepsilon(\varphi^i)\equiv\varepsilon_i,\;\;\;
\varepsilon(\psi^\alpha)\equiv\varepsilon_\alpha.$$ In what follows we refer to ${\cal Z}={\cal Z}({\cal J},L,\psi,\phi^*)$ as the extended generating functional of Green’s functions with composite fields on the background of external fields $\psi^\alpha$. Clearly, the validity of the functional integral in eq. (1) implies the existence of the integral in eq. (7) without any restriction on the structure of the subspace $\psi^\alpha$. At the same time, the composite fields $\sigma^m (\varphi,\psi)$, according to their original definition, may be considered as given by $$\begin{aligned}
\sigma^m(\varphi, \psi)=
\sum_{k \geq 2}\frac{1}{k!}\tilde \Lambda^m_{i_1\ldots i_k} (\psi)
\varphi^{i_1} \ldots \varphi^{i_k},\end{aligned}$$ with the arising coefficients now depending on the external fields $\psi^\alpha$. This, as will be shown more explicitly below, leads to additional peculiarities, which are absent in the studies of Refs. [@LO; @LOR; @ext].
The Ward identities for a general gauge theory with composite and external fields considered in the BV quantization scheme can be obtained as direct consequence of equation (6) for the gauge-fixed quantum action $S_{\rm ext}$. Namely, integrating eq. (6) over the fields $\varphi^i$ with the weight functional $$\begin{aligned}
\exp\bigg\{\frac{i}{\hbar}\bigg[{\cal J}_i\varphi^i+L_m
\sigma^m(\varphi,\psi)
\bigg]\bigg\},\end{aligned}$$ we have $$\begin{aligned}
\int d\varphi\;\exp\bigg[
\frac{i}{\hbar}\bigg({\cal J}_i\varphi^i+
L_m\sigma^m(\varphi,\psi)
\bigg)
\bigg]
\Delta\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}(\varphi,\psi,\phi^{*})
\bigg\}=0.\end{aligned}$$ Next, performing in eq. (8) integration by parts, with allowance for the relation $$\exp\bigg\{\frac{i}{\hbar}\bigg(
{\cal J}_i\varphi^i+L_m\sigma^m(\varphi,\psi)
\bigg)\bigg\}
\Delta=$$ $$\begin{aligned}
=\bigg(\Delta-\frac{i}{\hbar}{\cal J}_i
\frac{\delta}{\delta\varphi^{*}_i}-
\frac{i}{\hbar}L_m\sigma^m_{,A}(\varphi,\psi)
\frac{\delta}{\delta\phi^{*}_A}\bigg)
\exp\bigg\{\frac{i}{\hbar}\bigg({\cal J}_i\varphi^i+
L_m\sigma^m(\varphi,\psi)\bigg)\bigg\},\end{aligned}$$ $$\sigma^m_{,A}(\varphi,\psi)\equiv\frac{\delta}{\delta\phi^{A}}
\sigma^m(\varphi,\psi),$$ we arrive at the following Ward identities for ${\cal Z}={\cal Z}({\cal J},L,\psi,\phi^*)$: $$\begin{aligned}
\hat{\omega}{\cal Z}=0,\end{aligned}$$ where $\hat{\omega}$ stands for the operator $$\begin{aligned}
\hat{\omega}=i\hbar\Delta_\psi+{\cal J}_i\frac{\delta}
{\delta\varphi^{*}_i}+L_m\sigma^m_{,A}\bigg(\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}},\psi\bigg)
\frac{\delta}{\delta\phi^{*}_A}\,,\;\;\;
\Delta_\psi\equiv(-1)^{\varepsilon_\alpha}\frac{\delta_l}{\delta\psi^
\alpha}\frac{\delta}{\delta\psi^{*}_\alpha}\,.\end{aligned}$$
At the same time, the fact that the composite fields $\sigma^m(\varphi,\psi)$ now depend on the external fields $\psi^\alpha$ implies that the nilpotency of $\hat{\omega}$ becomes in general violated, i.e. $$ \hat{\omega}^2=i\hbar(-1)^{\varepsilon_i}L_m
\sigma^m_{,i\alpha}\left(\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}},\psi\right)
\frac{\delta}{\delta\psi^*_\alpha}\frac{\delta}{\delta\varphi^*_i}\;,$$ where $$\sigma^m_{,i\alpha}\left(\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}},\psi\right)\equiv
\left.
\frac{\delta}{\delta\psi^\alpha}\sigma^m_{,i}(\varphi,\psi)
\right|_{ \varphi=\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}}}\;.$$
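Note that this obstruction is controlled entirely by the mixed derivative $\sigma^m_{,i\alpha}$: if the composite fields do not depend on the external fields, $\sigma^m=\sigma^m(\varphi)$, then $$\sigma^m_{,i\alpha}=0
\;\;\Longrightarrow\;\;
\hat{\omega}^{2}=0,$$ and the familiar nilpotency of the pure composite-field case (considered below) is recovered.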
In terms of the generating functional ${\cal W}={\cal W}({\cal J},L,\psi,\phi^*)$, $${\cal Z}=\exp\bigg\{\frac{i}{\hbar}{\cal W}\bigg\},$$ of connected Green’s functions with composite and external fields, the identities (10) take on the form $$\begin{aligned}
\hat{\Omega}{\cal W}=\frac{\delta{\cal W}}{\delta\psi^\alpha}
\frac{\delta{\cal W}}{\delta\psi^*_\alpha}\,,\end{aligned}$$ $$\hat{\Omega}=
i\hbar\Delta_\psi+{\cal J}_i\frac{\delta}
{\delta\varphi^{*}_i}+L_m\sigma^m_{,A}\bigg(\frac{\delta{\cal W}}
{\delta{\cal J}}
+\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi\bigg)
\frac{\delta}{\delta\phi^{*}_A}.$$ In order to introduce the (extended) generating functional of 1PI vertex functions with composite and external fields we make use of the standard definition [@comp] of the generating functional of vertex functions with composite fields, which admits of a natural generalization to the case of external fields. Namely, let us introduce the generating functional in question by means of the following Legendre transformation with respect to the sources ${\cal J}_i$, $L_m$: $$\Gamma(\varphi,\Sigma,\psi,\phi^*)={\cal W}({\cal J},L,\psi,\phi^*)-
{\cal J}_i\varphi^i-L_m\bigg(\Sigma^m+\sigma^m(\varphi,\psi)\bigg),$$ where $$\varphi^i=\frac{\delta{\cal W}}{\delta{\cal J}_i},\;\;\;
\Sigma^m=\frac{\delta{\cal W}}{\delta L_m}
-\sigma^m\bigg(\frac{\delta{\cal W}}{\delta{\cal J}},\psi\bigg).$$ Given this, we have $${\cal J}_i=-\frac{\delta\Gamma}{\delta\varphi^i}+
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,i}(\varphi,\psi),\;\;\;
L_m=-\frac{\delta\Gamma}{\delta\Sigma^m}\,.$$ From the definition of the Legendre transformation it follows that $$\begin{aligned}
\left.\frac{\delta}{\delta{\cal J}_i}\right|_{L,\psi,\phi^*}&=&
\left.\frac{\delta\varphi^j}{\delta{\cal J}_i}
\frac{\delta_l}{\delta\varphi^j}\right|_{\Sigma,\psi,\phi^*}
+
\left.\frac{\delta\Sigma^m}{\delta{\cal J}_i}
\frac{\delta_l}{\delta\Sigma^m}\right|_{\varphi,\psi,\phi^*},\\
\left.\frac{\delta}{\delta L_m}\right|_{{\cal J},\psi,\phi^*}&=&
\left.\frac{\delta\varphi^i}{\delta L_m}
\frac{\delta_l}{\delta\varphi^i}\right|_{\Sigma,\psi,\phi^*}
+
\left.\frac{\delta\Sigma^m}{\delta L_m}
\frac{\delta_l}{\delta\Sigma^m}\right|_{\varphi,\psi,\phi^*},\\
\left.\frac{\delta_l}{\delta\psi^\alpha}\right|_{{\cal J},L,\phi^*}&=&
\left.\frac{\delta_l}{\delta\psi^\alpha}\right|_{\varphi,\Sigma,\phi^*}+
\left.\frac{\delta_l\varphi^i}{\delta\psi^\alpha}
\frac{\delta_l}{\delta\varphi^i}\right|_{\Sigma,\psi,\phi^*}
+
\left.\frac{\delta_l\Sigma^m}{\delta\psi^\alpha}
\frac{\delta_l}{\delta\Sigma^m}\right|_{\varphi,\psi,\phi^*},\\
\left.\frac{\delta}{\delta\phi^*_A}\right|_{{\cal J},L,\psi}&=&
\left.\frac{\delta}{\delta\phi^*_A}\right|_{\varphi,\Sigma,\psi}+
\left.\frac{\delta\varphi^i}{\delta\phi^*_A}
\frac{\delta_l}{\delta\varphi^i}\right|_{\Sigma,\psi,\phi^*}
+
\left.\frac{\delta\Sigma^m}{\delta\phi^*_A}
\frac{\delta_l}{\delta\Sigma^m}\right|_{\varphi,\psi,\phi^*}.\end{aligned}$$ Then, by virtue of eq. (12) and the relations $$\frac{\delta{\cal W}}{\delta\psi^\alpha}=
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi),\;\;\;
\frac{\delta{\cal W}}{\delta\phi^*_A}=
\frac{\delta\Gamma}{\delta\phi^*_A}\,,$$ we arrive at the following Ward identities for the functional $\Gamma(\varphi,\Sigma,\psi,\phi^*)$: $$\begin{aligned}
&&
\frac{1}{2}(\Gamma,\Gamma)+\frac{\delta\Gamma}{\delta\Sigma^m} \bigg(\sigma^m_{,A}(\hat{\varphi},\psi)
-\sigma^m_{,A}(\varphi,\psi)\bigg)\frac{\delta\Gamma}{\delta\phi^*_A}
=\nonumber\\
&& =
i\hbar\bigg\{
\Delta_\psi\Gamma-
(-1)^{\varepsilon_\alpha\varepsilon_\rho}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\frac{\delta\Gamma}{\delta\psi^*_\alpha}\nonumber\\
&&
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg) \!
\bigg]
\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\frac{\delta\Gamma}{\delta\psi^*_\alpha}\nonumber\\
&&
-
(-1)^{\varepsilon_\alpha\varepsilon_m}\sigma^m_{,\alpha}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\frac{\delta\Gamma}{\delta\psi^*_\alpha}\bigg\}.\end{aligned}$$ In eq. (13), we have assumed the notation $$\frac{\delta\Gamma}{\delta\phi^A}\equiv
\bigg(
\frac{\delta\Gamma}{\delta\varphi^i},\,
\frac{\delta\Gamma}{\delta\psi^\alpha}
\bigg)$$ and introduced the operator $\hat{\varphi}^i$ $$\hat{\varphi}^i=\varphi^i+i\hbar(G^{''-1})^{i\sigma}
\frac{\delta_l}{\delta\Phi^\sigma},$$ where $$\frac{\delta_l N_\sigma}{\delta\Phi^\rho}\equiv-(G^{''})_{\rho\sigma},
\;\;\;\;
(G^{''-1})^{\rho\delta}(G^{''})_{\delta\sigma}=\delta^\rho_\sigma\,,$$ $$\begin{aligned}
\Phi^\sigma=(\varphi^i,\Sigma^m),\;\;\;
N_\sigma=
\bigg(-\frac{\delta\Gamma}{\delta\varphi^i}+
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,i}(\varphi,\psi),
-\frac{\delta\Gamma}{\delta\Sigma^m}\bigg).\end{aligned}$$
The reader may profit by considering the above results in the particular cases of theories where either composite or external fields alone are present.
Let us first turn to a quantum theory with composite fields only, which corresponds to the generating functional $Z=Z(J,L,\phi^*)$ of Green’s functions in eq. (1).
The Ward identities for the functional (1) can be derived from eqs. (10), (11) which determine the Ward identities for the generating functional (7) of Green’s functions with composite and external fields combined. Thus, we have $$J_A\frac{\delta Z}{\delta\phi^*_A}
+L_m\sigma^m_{,A}\left(\frac{\hbar}{i}
\frac{\delta}{\delta J}\right)\frac{\delta Z}{\delta\phi^*_A}=0.$$ The above relation is obtained by integrating eq. (10) over the external fields $\psi^\alpha$ with the weight functional $\exp{\left\{i/\hbar{\cal Y}_\alpha\psi^\alpha\right\}}$.
The corresponding Ward identities for the generating functional $W=W(J,L,\phi^*)$ of connected Green’s functions are consequently given by $$J_A\frac{\delta W}{\delta\phi^*_A}+L_m\sigma^m_{,A}
\left(\frac{\delta W}{\delta J}+
\frac{\hbar}{i}\frac{\delta}{\delta J}\right)
\frac{\delta W}{\delta\phi^*_A}=0,$$ which, in turn, implies the following Ward identities for the generating functional $\Gamma=\Gamma(\phi,\Sigma,\phi^*)$ of vertex functions: $$\frac{1}{2}(\Gamma,\Gamma)+\frac{\delta\Gamma}
{\delta\Sigma^m}\Biggl(\sigma^m_{,A}({\hat \phi})-\sigma^m_{,A}(\phi) \Biggr)\frac{\delta\Gamma}{\delta\phi^{*}_{A}}=0.$$ Here, $\hat{\phi}^A$ stand for the operators $$\hat{\phi}^A=\phi^A+i\hbar(Q^{''-1})^{Ap}
\frac{\delta_l}{\delta F^p},$$ defined as $$\frac{\delta_l E_q}{\delta F^p}\equiv-(Q^{''})_{pq},
\;\;\;\;
(Q^{''-1})^{pr}(Q^{''})_{rq}=\delta^p_q\,,$$ $$\begin{aligned}
F^p=(\phi^A,\Sigma^m),\;\;\;
E_p=
\bigg(-\frac{\delta\Gamma}{\delta\phi^A}+
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,A}(\phi),
-\frac{\delta\Gamma}{\delta\Sigma^m}\bigg).\end{aligned}$$
Note that the above Ward identities for a quantum theory with composite fields coincide with the results obtained in Ref. [@LO].
We next consider the case of a quantum theory with external fields only, evidently corresponding to the assumptions $\sigma^m(\phi)=0$, $L=0$. Within these restrictions, the generating functional (7) of Green’s functions, obviously, reduces to ${\cal Z}={\cal Z}({\cal J},\psi,\phi^*)$, while its Ward identities, determined by eqs. (10), (11), accordingly take on the form $$\left(i\hbar\Delta_\psi+{\cal J}_i\frac{\delta}
{\delta\varphi^{*}_i}\right){\cal Z}=0.$$ This implies the following Ward identities for the generating functional ${\cal W}={\cal W}({\cal J},\psi,\phi^*)$ of connected Green’s functions: $$\left(i\hbar\Delta_\psi+{\cal J}_i\frac{\delta}
{\delta\varphi^{*}_i}\right){\cal W}=
\frac{\delta{\cal W}}{\delta\psi^\alpha}
\frac{\delta{\cal W}}{\delta\psi^*_\alpha}\,.$$ Finally, one readily establishes the fact that the Ward identities for the corresponding generating functional $\Gamma=\Gamma(\varphi,\psi,\phi^*)$ of vertex functions can be represented as $$\frac{1}{2}(\Gamma,\Gamma)=i\hbar\Delta_\psi\Gamma-
i\hbar(\Gamma^{''-1})^{ij}
\bigg(\frac{\delta_l}{\delta\varphi^j}
\frac{\delta\Gamma}{\delta\psi^\alpha}\bigg) \bigg(\frac{\delta}{\delta\psi^*_\alpha}
\frac{\delta\Gamma}{\delta\varphi^i}\bigg),$$ where $$(\Gamma^{''-1})^{ik}(\Gamma^{''})_{kj}=\delta^i_j\,,\;\;\;
(\Gamma^{''})_{ij}\equiv\frac{\delta_l}{\delta\varphi^i}
\frac{\delta\Gamma}{\delta\varphi^j}\,.$$
The above Ward identities for a quantum theory with external fields coincide with the results of Ref. [@ext].
Consider now the change of the generating functionals ${\cal Z}$, ${\cal W}$, $\Gamma$ with composite and external fields under a variation of the gauge fermion $\Psi$ chosen in the most general form of an operator-valued functional, i.e. $$\delta\Psi\bigg(\phi^A,\,\phi^*_A;\,
\frac{\delta_l}{\delta\phi^A},\,\frac{\delta}{\delta\phi^*_A}\bigg)=
\delta\Psi\bigg(\varphi^i,\,\psi^\alpha,\,\phi^*_A;\,
\frac{\delta_l}{\delta\varphi^i},\,\frac{\delta_l}{\delta\psi^\alpha},\,
\frac{\delta}{\delta\phi^*_A}\bigg).$$ Clearly, the gauge enters the generating functional ${\cal Z}$ only in the gauge-fixed quantum action $S_{\rm ext}$, eq. (2), whose variation reads $$\delta\bigg(\!\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\}\bigg)
=\hat{T}(\delta X)\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\},$$ where $\delta X$ is related to $\delta\Psi$ through a certain linear operator-valued transformation. The explicit form of $\delta X$ is not essential for the following treatment; nevertheless it is always possible to choose the operator ordering in such a way that $$\begin{aligned}
&&
\delta X\bigg(\varphi^i,\psi^\alpha,\phi^*_A;\,
\frac{\delta_l}{\delta\varphi^i},\frac{\delta_l}{\delta\psi^\alpha},
\frac{\delta}{\delta\phi^*_A}\bigg)=
\delta X^{(0)}\bigg(\varphi^i,\psi^\alpha,\phi^*_A;\,
\frac{\delta_l}{\delta\psi^\alpha},\frac{\delta}{\delta\phi^*_A}\bigg)
\nonumber\\
&&
+
\sum_{N=1}\frac{\delta_l}{\delta\varphi^{i_1}}\ldots
\frac{\delta_l}{\delta\varphi^{i_N}}
\delta X^{(i_1\ldots i_N)}\bigg(\varphi^i,\psi^\alpha,\phi^*_A;\,
\frac{\delta_l}{\delta\psi^\alpha},\frac{\delta}{\delta\phi^*_A}\bigg).\end{aligned}$$ With allowance for eqs. (5)–(7), the variation of the functional ${\cal Z}({\cal J},L,\psi,\phi^*)$ reads $$\begin{aligned}
\delta{\cal Z}({\cal J},L,\psi,\phi^*)=
\int d\varphi\;\exp\bigg(\frac{i}{\hbar}{\cal J}_i\varphi^i+
L_m\sigma^m(\varphi,\psi)\bigg)\,\times\nonumber\\
\times
\Delta\!\bigg(\delta X\exp\bigg\{\frac{i}{\hbar}
S_{\rm ext}(\varphi,\psi,\phi^{*})\bigg\}\bigg).\end{aligned}$$ Then, performing in eq. (15) integration by parts and taking the relations (9), (11), (14) into account, we transform the variation of the functional ${\cal Z}({\cal J},L,\psi,\phi^*)$ into the form $$\begin{aligned}
\delta{\cal Z}=\frac{1}{i\hbar}\hat{\omega}\delta\tilde{X}{\cal Z},\end{aligned}$$ where $$\delta\!\tilde{X}=\delta\! X \! \bigg(
\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}_i},\,
\psi^\alpha\!,\,\phi^*_A; (-1)^{\varepsilon_i}\frac{1}{i\hbar}{\cal J}_i,
\frac{\delta_l}{\delta\psi^\alpha}
+
\frac{1}{i\hbar}(-1)^{\varepsilon_\alpha}L_m\sigma^m_{,\alpha}
\bigg(
\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi\bigg),\,
\frac{\delta}{\delta\phi^*_A}\bigg).$$
In terms of the generating functional ${\cal W}({\cal J},L,\psi,\phi^*)$ eq. (16) can be represented as follows: $$\begin{aligned}
\delta{\cal W}=-\hat{Q}\langle\delta\tilde{X}\rangle,\end{aligned}$$ where $\langle\delta\tilde{X}\rangle$ is the vacuum expectation of the operator-valued functional $\delta\tilde{X}$ $$\begin{aligned}
&&\hspace*{-1cm}
\langle\delta\tilde{X}\rangle=\delta X
\bigg(\frac{\delta{\cal W}}{\delta{\cal J}_i}+\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}_i},\,\psi^\alpha,\,\phi^*_A\,;
(-1)^{\varepsilon_i}\frac{1}{i\hbar}{\cal J}_i,\nonumber\\
&&\hspace*{-1cm}
\frac{i}{\hbar}\frac{\delta_l{\cal W}}{\delta\psi^\alpha}+
\frac{\delta_l}{\delta\psi^\alpha}
+
\frac{1}{i\hbar}(-1)^{\varepsilon_\alpha}L_m\sigma^m_{,\alpha}
\bigg(
\frac{\delta{\cal W}}{\delta{\cal J}}
+ \frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi
\bigg),
\frac{i}{\hbar}\frac{\delta{\cal W}}{\delta\phi^*_A}+
\frac{\delta}{\delta\phi^*_A}\bigg),\end{aligned}$$ and $\hat{Q}$ stands for an operator given by the rule $$\begin{aligned}
\hat{Q}=\exp\bigg\{\!\!-\frac{i}{\hbar}{\cal W}\bigg\}\hat{\omega}
\exp\bigg\{\frac{i}{\hbar}{\cal W}\bigg\}.\end{aligned}$$
From eq. (18) it follows, in particular, that the breakdown of nilpotency in the case of $\hat{\omega}$ is also inherited by $\hat{Q}$, i.e. $$\hat{Q}^2=i\hbar(-1)^{\varepsilon_i}L_m
\sigma^m_{,i\alpha}
\left(\frac{\delta{\cal W}}{\delta{\cal J}}
+
\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi\right)
\left[
\left(\frac{\delta}{\delta\psi^*_\alpha}
+
\frac{i}{\hbar}
\frac{\delta{\cal W}}{{\delta\psi^*_\alpha}}\right) \!
\left(\frac{\delta}{\delta\varphi^*_i}
+
\frac{i}{\hbar}
\frac{\delta{\cal W}}{{\delta\varphi^*_i}}\right)
\right].$$
By virtue of the Ward identities (12) for the functional ${\cal W}({\cal J},L,\psi,\phi^*)$, eq. (18) admits of the representation $$\begin{aligned}
\hat{Q}=\hat{\Omega}-\frac{\delta{\cal W}}{\delta\psi^\alpha}
\frac{\delta}{\delta\psi^*_\alpha}-(-1)^{\varepsilon_\alpha}
\frac{\delta{\cal W}}{\delta\psi^*_\alpha}\frac{\delta_l}
{\delta\psi^\alpha}\,.\end{aligned}$$
In order to derive the form of gauge dependence of $\Gamma=\Gamma(\varphi,\Sigma,\psi,\phi^*)$, we observe that $\delta\Gamma=\delta{\cal W}$. Hence, $$\begin{aligned}
\delta\Gamma=-\hat{q}
\langle\langle\delta\tilde{X}\rangle\rangle,\end{aligned}$$ where $\langle\langle\delta\tilde{X}\rangle\rangle$ and $\hat{q}$ are the values related through the Legendre transformation to $\langle\delta\tilde{X}\rangle$ and $\hat{Q}$ in eq. (17). Namely, the functional $\langle\langle\delta\tilde{X}\rangle\rangle$ has the form $$\begin{aligned}
&&\hspace*{-0.7cm}
\langle\langle\delta\tilde{X}\rangle\rangle
=\delta X\bigg(\hat{\varphi}^i,\;
\psi^\alpha,\;
\phi^*_A;\;
\frac{i}{\hbar}(-1)^{\varepsilon_i}\bigg(
\frac{\delta\Gamma}{\delta\varphi^i}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,i}(\varphi,\psi)
\bigg),\nonumber\\
&&\hspace*{-0.7cm}
\frac{\delta_l}{\delta\psi^\alpha}
+
\frac{i}{\hbar}(-1)^{\varepsilon_\alpha}\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)\nonumber\\
&&\hspace*{-0.7cm}
-(-1)^{\varepsilon_\alpha(\varepsilon_\rho+1)}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}\nonumber\\
&&\hspace*{-0.7cm}
-
(-1)^{\varepsilon_\alpha(\varepsilon_m+1)}
\sigma^m_{,\alpha}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
+
\frac{i}{\hbar}(-1)^{\varepsilon_\alpha}
\frac{\delta\Gamma}{\delta\Sigma^m}
\sigma^m_{,\alpha}(\hat{\varphi},\psi)\nonumber\\
&&\hspace*{-0.7cm}
+(-1)^{\varepsilon_\alpha+
\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg)
\bigg] \sigma^m_{,i}(\varphi,\psi)\frac{\delta}{\delta\Sigma^m},
\nonumber\\
&&\hspace*{-0.7cm}
\frac{\delta}{\delta\phi^*_A}
+
\frac{i}{\hbar}\frac{\delta\Gamma}{\delta\phi^*_A}
-
(-1)^{\varepsilon_\rho(\varepsilon_A+1)}
(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}\frac{\delta\Gamma}{\delta\phi^*_A}
\bigg)\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.7cm}
+
(-1)^{\varepsilon_i(\varepsilon_A+\varepsilon_m)}
(G^{''-1})^{i\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}\frac{\delta\Gamma}{\delta\phi^*_A}
\bigg)\sigma^m_{,i}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}\bigg).\end{aligned}$$ Meanwhile, the operator $\hat{q}$ admits of the representation $$\begin{aligned}
&&\hspace*{-0.7cm}
\hat{q}=i\hbar\bigg\{
(-1)^{\varepsilon_\alpha}
\frac{\delta_l}{\delta\psi^\alpha}
-
(-1)^{\varepsilon_\alpha\varepsilon_m}
\sigma^m_{,\alpha}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\nonumber\\
&&\hspace*{-0.7cm}
-(-1)^{\varepsilon_\alpha\varepsilon_\rho}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.7cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
- \frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\sigma^m_{,i}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\bigg\}\times
\nonumber\\
&&\hspace*{-0.7cm}
\times
\bigg\{
\frac{\delta}{\delta\psi^*_\alpha}
-
(-1)^{\varepsilon_\rho(\varepsilon_\alpha+1)}(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\psi^*_\alpha}\bigg)
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.7cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m)}
(G^{''-1})^{i\sigma}
\bigg(
\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\psi^*_\alpha}\bigg)
\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\bigg\}
\nonumber\\
&&\hspace*{-0.7cm}
-
\bigg[
\frac{\delta\Gamma}{\delta\phi^A}
-
\bigg(
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,A}(\varphi,\psi)
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,A}(\hat{\varphi},\psi)
\bigg)
\bigg]\times
\nonumber\\
&&\hspace*{-0.7cm}
\times
\bigg\{
\frac{\delta}{\delta\phi^*_A}
-(-1)^{\varepsilon_\rho(\varepsilon_A+1)}(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\phi^*_A}
\bigg)
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.7cm}
+ (-1)^{\varepsilon_i(\varepsilon_A+\varepsilon_m)}(G^{''-1})^{i\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\phi^*_A}
\bigg)
\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\bigg\}
\nonumber\\
&&\hspace*{-0.7cm}
-
\frac{\delta\Gamma}{\delta\psi^*_\alpha}
\bigg\{
(-1)^{\varepsilon_\alpha}
\frac{\delta_l}{\delta\psi^\alpha}
-
(-1)^{\varepsilon_\alpha\varepsilon_m}
\sigma^m_{,\alpha}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\nonumber\\
&&\hspace*{-0.7cm}
-(-1)^{\varepsilon_\alpha\varepsilon_\rho}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.7cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\sigma^m_{,i}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\bigg\}.\nonumber\\\end{aligned}$$ At the same time, the algebraic properties of $\hat{Q}$, eq. (18), obviously imply the breakdown of nilpotency for the operator $\hat{q}$ in the general case of a theory with composite and external fields.
Quantum Theories with Composite and External Fields in the BLT Approach
=======================================================================
Consider now the quantum properties of general gauge theories with composite and external fields in the framework of the BLT formalism [@BLT]. For this purpose we remind that the quantization rules [@BLT] imply introducing a set of fields $\phi^A$ and a set of the corresponding antifields $\phi^*_{Aa}$, $\bar{\phi}_A$, with $$\varepsilon(\phi^A)=\varepsilon_A,\;\;
\varepsilon(\phi^*_{Aa})=\varepsilon_A+1,\;\;
\varepsilon(\bar{\phi}_A)=\varepsilon_A.$$ The doublets of antifields $\phi^*_{Aa}$ play the role of sources of BRST and antiBRST transformations, while the antifields $\bar{\phi}_A$ are the sources of mixed BRST and antiBRST transformations. The structure of the configuration space of the fields $\phi^A$ in the BLT approach is identical (for any given gauge theory) with that of the BV formalism. Given this, when considered in the framework of the BLT method, the fields are combined into irreducible, completely symmetric, Sp(2)-tensors [@BLT].
The extended generating functional $Z(J,L,\phi^*,\bar{\phi})$ of Green’s functions with composite fields is constructed within the BLT formalism by the rule (see, for example, Ref. [@LOR]) $$\begin{aligned}
Z(J,L,\phi^*,\bar{\phi})&=&\int d\phi\;\exp\bigg\{\frac{i}{\hbar}\bigg(
S_{\rm ext}(\phi,\phi^{*},\bar{\phi})+J_A\phi^A+L_m\sigma^m(\phi)
\bigg)\bigg\},\end{aligned}$$ where $S_{\rm ext}=S_{\rm ext}(\phi,\phi^{*},\bar{\phi})$ is the gauge-fixed quantum action defined as $$\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\}=\exp\bigg(\!\!-i\hbar
\hat{T}(F)\bigg)\exp\bigg\{\frac{i}{\hbar}S\bigg\}.$$ In eq. (23), $S=S(\phi,\phi^*,\bar{\phi})$ is a bosonic functional satisfying the equations $$\begin{aligned}
\frac{1}{2}(S,S)^a+V^aS=i\hbar\Delta^aS,\end{aligned}$$ or equivalently $$\begin{aligned}
\bar{\Delta}^a\exp\bigg\{\frac{i}{\hbar}S\bigg\}=0,\;\;\;
\bar{\Delta}^a\equiv\Delta^a+\frac{i}{\hbar}V^a,\end{aligned}$$ with the boundary condition $$\begin{aligned}
S|_{\phi^{*}=\bar{\phi}=\hbar=0}={\cal S}\end{aligned}$$ (again ${\cal S}$ is the classical action). At the same time, $\hat{T}(F)$ is an operator of the form $$\hat{T}(F)=\frac{1}{2}\varepsilon_{ab}[\bar{\Delta}^b,[\bar{\Delta}^a,
F]_{-}]_{+}\,,$$ where $F$ is a bosonic (generally, operator-valued) gauge-fixing functional.
In eqs. (24)–(26) we use the extended antibrackets, defined for two arbitrary functionals $F=F(\phi,\phi^*,\bar{\phi})$, $G=G(\phi,\phi^*,\bar{\phi})$ by the rule [@BLT] $$\begin{aligned}
(F,G)^a=\frac{\delta F}{\delta\phi^A}\frac{\delta G}{\delta
\phi^{*}_{Aa}}-
(-1)^{(\varepsilon(F)+1)(\varepsilon(G)+1)}
\frac{\delta G}{\delta\phi^A}\frac{\delta F} {\delta\phi^{*}_{Aa}},\end{aligned}$$ as well as the operators $\Delta^a$, $V^a$ $$\begin{aligned}
\Delta^a=(-1)^{\varepsilon_A}\frac{\delta_l}{\delta\phi^A}\frac
{\delta}{\delta\phi^{*}_{Aa}}\;,\;\;
V^a=\varepsilon^{ab}\phi^{*}_{Ab}\frac{\delta}{\delta\bar\phi_A}\;\end{aligned}$$ with the properties $$\begin{aligned}
\Delta^{\{a}\Delta^{b\}}=0,\;\;V^{\{a}V^{b\}}=0,\;\;
\Delta^{\{a}V^{b\}}+V^{\{a}\Delta^{b\}}=0.\end{aligned}$$ From the above algebraic properties, obviously, follow the identities $[\hat{T}(F),\,\bar{\Delta}^a]=0$, which imply that the functional $S_{\rm ext}$ in eq. (23) satisfies equations of the same form as in (25), i.e. $$\begin{aligned}
\bar{\Delta}^a\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\}=0.\end{aligned}$$
Consider now the following representation of the generating functional $Z(J,L,\phi^*,\bar{\phi})$ in eq. (22): $$\begin{aligned}
Z(J,L,\phi^*,\bar{\phi})=\int d\psi\;{\cal Z}
({\cal J},L,\psi,\phi^*,\bar{\phi})
\exp\bigg(\frac{i}{\hbar}{\cal Y}\psi\bigg),\end{aligned}$$ where ${\cal Z}({\cal J},L,\psi,\phi^*,\bar{\phi})$ is the (extended) generating functional of Green’s functions with composite fields on the background of external fields $\psi^\alpha$ $$\begin{aligned}
{\cal Z}({\cal J},L,\psi,\phi^*,\bar{\phi})=\int d\varphi\;
\exp\bigg\{\frac{i}
{\hbar}\bigg(S_{\rm ext}(\varphi,\psi,\phi^{*},\bar{\phi})+ {\cal J}\varphi+L\sigma(\varphi,\psi)\bigg)\bigg\}\end{aligned}$$ with $$\phi^A=(\varphi^i,\;\psi^\alpha),\;\;J_A=({\cal J}_i,\;{\cal Y}_\alpha).$$
In order to obtain the Ward identities for a general gauge theory with composite and external fields in the BLT quantization formalism, we apply a procedure quite similar to that presented in the case of the BV approach. Namely, by virtue of eq. (27) for the gauge-fixed quantum action $S_{\rm ext}$ (23), we have $$\begin{aligned}
\int d\varphi\;\exp\bigg[\frac{i}{\hbar}\bigg({\cal J}_i\varphi^i
+L_m\sigma(\varphi,\psi)\bigg)\bigg]
\bar{\Delta}^a\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}(\varphi,\psi,\phi^{*},
\bar{\phi})\bigg\}=0.\end{aligned}$$ As a result of integration by parts in eq. (29), with allowance for the relations $$\begin{aligned}
\exp\bigg\{\frac{i}{\hbar}\bigg(
{\cal J}_i\varphi^i+L_m\sigma^m(\varphi,\psi)
\bigg)\bigg\}
\bar{\Delta}^a=\nonumber\\\end{aligned}$$ $$=
\bigg(
\bar{\Delta}^a-\frac{i}{\hbar}{\cal J}_i
\frac{\delta}{\delta\varphi^{*}_{ia}}-
\frac{i}{\hbar}L_m\sigma^m_{,A}(\varphi,\psi)
\frac{\delta}{\delta\phi^{*}_{Aa}}\bigg)
\exp\bigg\{\frac{i}{\hbar}
\bigg({\cal J}_i\varphi^i+
L_m\sigma^m(\varphi,\psi)
\bigg)
\bigg\},$$ we find that the Ward identities for the functional ${\cal Z}={\cal Z}({\cal J},L,\psi,\phi^*,\bar{\phi})$ have the form $$\begin{aligned}
\hat{\omega}^a{\cal Z}=0,\end{aligned}$$ where $$\hat{\omega}^a=i\hbar{\Delta}^a_\psi-V^a
+
{\cal J}_i\frac{\delta}{\delta\varphi^{*}_{ia}}
+
L_m\sigma^m_{,A}\bigg(\frac{\hbar}{i} \frac{\delta}{\delta{\cal J}},\psi\bigg)
\frac{\delta}{\delta\phi^{*}_{Aa}}\,,$$ $${\Delta}^a_\psi\equiv(-1)^{\varepsilon_\alpha}\frac{\delta_l}
{\delta\psi^\alpha}\frac{\delta}{\delta\psi^{*}_{\alpha a}}.$$ Note that the operators $\hat{\omega}^a$ satisfy the relations $$\hat{\omega}^{\{a}\hat{\omega}^{b\}}=
i\hbar(-1)^{\varepsilon_i}L_m
\sigma^m_{,i\alpha}\left(\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}},\psi\right)
\frac{\delta}{\delta\psi^*_{\alpha\{a}}
\frac{\delta}{\delta\varphi^*_{ib\}}}\;,$$ which imply the violation of the property $\hat{\omega}^{\{a}\hat{\omega}^{b\}}=0$ of generalized nilpotency.
In terms of the generating functional ${\cal W}={\cal W}({\cal J},L,\psi,\phi^*,\bar{\phi})$, $${\cal Z}=\exp\bigg\{\frac{i}{\hbar}{\cal W}\bigg\},$$ of connected Green’s functions, the Ward identities (31) are represented as $$\begin{aligned}
\hat{\Omega}^a{\cal W}=\frac{\delta{\cal W}}{\delta\psi^\alpha}
\frac{\delta{\cal W}}{\delta\psi^*_{\alpha a}}\,,\end{aligned}$$ $$\hat{\Omega}^a=
i\hbar\Delta_\psi^a
-V^a
+{\cal J}_i\frac{\delta}
{\delta\varphi^{*}_{ia}}+L_m\sigma^m_{,A}\bigg(\frac{\delta{\cal W}}
{\delta{\cal J}}
+\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi\bigg)
\frac{\delta}{\delta\phi^{*}_{Aa}}.$$ Consequently, the Ward identities for the generating functional of 1PI vertex functions $$\Gamma(\varphi,\Sigma,\psi,\phi^*,\bar{\phi})=
{\cal W}({\cal J},L,\psi,\phi^*,\bar{\phi})
-
{\cal J}_i\varphi^i
-
L_m\bigg(\Sigma^m+\sigma^m(\varphi,\psi)\bigg),$$ $$Œ \varphi^i=\frac{\delta{\cal W}}{\delta{\cal J}_i},\;\;\;
\Sigma^m=\frac{\delta{\cal W}}{\delta L_m}
-\sigma^m\bigg(\frac{\delta{\cal W}}{\delta{\cal J}},\psi\bigg)$$ have the form $$\begin{aligned}
&&\hspace*{-0.3cm}
\frac{1}{2}(\Gamma,\Gamma)^a+V^a\Gamma+
\frac{\delta\Gamma}{\delta\Sigma^m}
\bigg(\sigma^m_{,A}(\hat{\varphi},\psi)
-\sigma^m_{,A}(\varphi,\psi)\bigg)\frac{\delta\Gamma}{\delta\phi^*_{Aa}}
=
\nonumber\\
&&\hspace*{-0.3cm}
=
i\hbar\bigg\{
\Delta^a_\psi\Gamma-
(-1)^{\varepsilon_\alpha\varepsilon_\rho}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\frac{\delta\Gamma}{\delta\psi^*_{\alpha a}}
\nonumber\\
&&\hspace*{-0.3cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\frac{\delta\Gamma}{\delta\psi^*_{\alpha a}}
\nonumber\\
&&\hspace*{-0.3cm}
-
(-1)^{\varepsilon_\alpha\varepsilon_m}\sigma^m_{,\alpha}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\frac{\delta\Gamma}{\delta\psi^*_{\alpha a}}\bigg\}.\end{aligned}$$
As in the previous section, let us consider the particular cases corresponding to theories with either composite or external fields only.
Namely, for a quantum theory with composite fields, considered in the absence of external fields, we obtain, by virtue of eq. (31), (32), the following Ward identities for the generating functional $Z=Z(J,L,\phi^*,\bar{\phi})$ of Green’s functions in eq. (22): $$J_A\frac{\delta Z}{\delta\phi^*_{Aa}}
+
L_m\sigma^m_{,A}\left(\frac{\hbar}{i}
\frac{\delta}{\delta J}\right)\frac{\delta Z}
{\delta\phi^*_{Aa}}-V^aZ=0.$$ Again, they can be obtained by integrating eq. (31) over the external fields $\psi^\alpha$ with the weight functional $\exp\{i/\hbar\,{\cal Y}_\alpha\psi^\alpha\}$.
The corresponding generating functional $W=W(J,L,\phi^*,\bar{\phi})$ of connected Green’s functions satisfies the Ward identities of the form $$J_A\frac{\delta W}{\delta\phi^*_{Aa}}
+
L_m\sigma^m_{,A}\left(\frac{\delta W}{\delta J}
+
\frac{\hbar}{i}\frac{\delta}{\delta J}\right)
\frac{\delta W}{\delta\phi^*_{Aa}}-V^aW=0.$$ At the same time, the Ward identities for the generating functional $\Gamma=\Gamma(\phi,\Sigma,\phi^*,\bar{\phi})$ of vertex functions are given by $$\frac{1}{2}(\Gamma,\Gamma)^a+V^a\Gamma+\frac{\delta\Gamma}
{\delta\Sigma^m}\Biggl(\sigma^m_{,A}({\hat \phi})-\sigma^m_{,A}(\phi)
\Biggr)\frac{\delta\Gamma}{\delta\phi^{*}_{Aa}}=0.$$
Note that the above Ward identities for a quantum theory with composite fields considered in the BLT formalism coincide with the results obtained in Ref. [@LOR].
Next, for a theory with external fields in the absence of composite fields ($\sigma^m(\phi)=0$, $L_m=0$) the generating functional (28) of Green’s functions evidently reduces to ${\cal Z}={\cal Z}({\cal J},\psi,\phi^*,\bar{\phi})$, which satisfies, by virtue of eqs. (31), (32), the following Ward identities: $$\left(i\hbar{\Delta}^a_\psi+{\cal J}_i\frac{\delta} {\delta\varphi^{*}_{ia}}-V^a\right){\cal Z}=0.$$ Meanwhile, the Ward identities for the corresponding generating functional ${\cal W}={\cal W}({\cal J},\psi,\phi^*,\bar{\phi})$ of connected Green’s functions, accordingly, take on the form $$\left(i\hbar{\Delta}^a_\psi+{\cal J}_i\frac{\delta}
{\delta\varphi^{*}_{ia}}-V^a\right){\cal W}=
\frac{\delta{\cal W}}{\delta\psi^\alpha}
\frac{\delta{\cal W}}{\delta\psi^*_{\alpha a}}\,.$$ Finally, in the case of the generating functional $\Gamma=\Gamma(\varphi,\psi,\phi^*,\bar{\phi})$ of vertex functions the Ward identities in question can be represented as $$\frac{1}{2}(\Gamma,\Gamma)^a+V^a\Gamma=i\hbar\Delta_\psi^a\Gamma
-i\hbar(\Gamma^{''-1})^{ij}
\bigg(\frac{\delta_l}{\delta\varphi^j}
\frac{\delta\Gamma}{\delta\psi^\alpha}\bigg)
\bigg(\frac{\delta}{\delta\psi^*_{\alpha a}}
\frac{\delta\Gamma}{\delta\varphi^i}\bigg).$$
The above Ward identities for a quantum theory with external fields considered in the framework of the BLT formalism coincide with the corresponding results of Ref. [@ext].
Let us now study the gauge dependence of the above generating functionals with composite and external fields, considered in the BLT quantization approach, under the most general variation $$\delta F\bigg(\varphi^i,\psi^\alpha,\phi^*_{Aa},\bar{\phi}_A;\,
\frac{\delta_l}{\delta\varphi^i},\frac{\delta_l}{\delta\psi^\alpha},
\frac{\delta}{\delta\phi^*_{Aa}},\frac{\delta}{\delta\bar{\phi}_A}\bigg)$$ of the gauge boson. From eq. (23) it follows that $$\delta\bigg(\!\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\}\bigg)
=-i\hbar\hat{T}(\delta Y)\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}\bigg\},$$ where $\delta Y$, related to $\delta F$ through a linear (operator-valued) transformation, always admits of the representation $$\begin{aligned}
&&
\hspace*{-0.5cm}
\delta Y\bigg(\varphi^i,\psi^\alpha,\phi^*_{Aa},\bar{\phi}_A;\,
\frac{\delta_l}{\delta\varphi^i},\frac{\delta_l}{\delta\psi^\alpha},
\frac{\delta}{\delta\phi^*_{Aa}},\frac{\delta}{\delta\bar{\phi}_A}\bigg)=
\nonumber\\
&&
\hspace*{-0.5cm}
=\delta Y^{(0)}\bigg(\varphi^i,\psi^\alpha,\phi^*_{Aa},
\bar{\phi}_A;\,
\frac{\delta_l}{\delta\psi^\alpha},\frac{\delta}{\delta\phi^*_{Aa}},
\frac{\delta}{\delta\bar{\phi}_A}\bigg)
\nonumber\\
&&
\hspace*{-0.5cm}
+
\sum_{N=1}\frac{\delta_l}{\delta\varphi^{i_1}}\ldots
\frac{\delta_l}{\delta\varphi^{i_N}}
\delta Y^{(i_1\ldots i_N)}
\bigg(\varphi^i,\psi^\alpha,\,\phi^*_{Aa},\bar{\phi}_A;\,
\frac{\delta_l}{\delta\psi^\alpha},
\frac{\delta}{\delta\phi^*_{Aa}},\frac{\delta}{\delta\bar{\phi}_A}\bigg).\end{aligned}$$ Taking eqs. (27), (28) and (35) into account, we have $$\begin{aligned}
\delta{\cal Z}({\cal J},L,\psi,\phi^*,\bar{\phi})
=\frac{i\hbar}{2}
\varepsilon_{ab}\int d\varphi\;\exp\bigg[\frac{i}{\hbar}\bigg(
{\cal J}_i\varphi^i+L_m\sigma^m(\varphi,\psi)\bigg)\bigg]\times
\nonumber\\
\times
\bar{\Delta}^a\bar{\Delta}^b\bigg(\delta Y
\exp\bigg\{\frac{i}{\hbar}S_{\rm ext}(\varphi,\psi,\phi^{*},\bar{\phi})
\bigg\}\bigg),\end{aligned}$$ and therefore, with allowance for eqs. (30), (32), (35), integration by parts in eq. (36) yields $$\begin{aligned}
\delta{\cal Z}=\frac{i}{2\hbar}
\varepsilon_{ab}\hat{\omega}^b\hat{\omega}^a\delta\tilde{Y}{\cal Z},\end{aligned}$$ where $$\begin{aligned}
&&
\delta\tilde{Y}=\delta Y\bigg(
\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}_i},\,
\psi^\alpha,\,\phi^*_{Aa}\,,\bar{\phi}_{A}\,;
(-1)^{\varepsilon_i}\frac{1}{i\hbar}{\cal J}_i,
\\
&&
\frac{\delta_l}{\delta\psi^\alpha}
+
\frac{1}{i\hbar}(-1)^{\varepsilon_\alpha}L_m\sigma^m_{,\alpha}
\bigg(
\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi\bigg),\,
\frac{\delta}{\delta\phi^*_{Aa}}\,,\frac{\delta}{\delta\bar{\phi}_{A}}\bigg).\end{aligned}$$ Transforming eq. (37) in terms of the generating functional of connected Green’s functions ${\cal W}({\cal J},L,\psi,\phi^*,\bar{\phi})$, we arrive at the relation $$\begin{aligned}
\delta{\cal W}=\frac{1}{2}\varepsilon_{ab}\hat{Q}^b\hat{Q}^a
\langle\delta\tilde{Y}\rangle,\end{aligned}$$ where $$\langle\delta\tilde{Y}\rangle=\delta Y
\bigg(\frac{\delta{\cal W}}{\delta{\cal J}_i}+\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}_i},\,\psi^\alpha,\,\phi^*_{Aa}\,,
\bar{\phi}_A;\,
(-1)^{\varepsilon_i}\frac{1}{i\hbar}{\cal J}_i,$$ $$\frac{i}{\hbar}\frac{\delta_l{\cal W}}{\delta\psi^\alpha}+
\frac{\delta_l}{\delta\psi^\alpha}
+
\frac{1}{i\hbar}(-1)^{\varepsilon_\alpha}L_m\sigma^m_{,\alpha}
\bigg(
\frac{\delta{\cal W}}{\delta{\cal J}}
+
\frac{\hbar}{i}\frac{\delta}{\delta{\cal J}},\psi
\bigg)\!,
\frac{i}{\hbar}\frac{\delta{\cal W}}{\delta\phi^*_{Aa}}+
\frac{\delta}{\delta\phi^*_{Aa}},
\frac{i}{\hbar}\frac{\delta{\cal W}}{\delta\bar{\phi}_A}+
\frac{\delta}{\delta\bar{\phi}_A}\bigg),$$ while the operators $\hat{Q}^a$ are defined by $$\begin{aligned}
\hat{Q}^a=\exp\bigg\{\!\!-\frac{i}{\hbar}{\cal W}\bigg\}\hat{\omega}^a
\exp\bigg\{\frac{i}{\hbar}{\cal W}\bigg\}\end{aligned}$$ and admit of the representation $$\begin{aligned}
\hat{Q}^a=\hat{\Omega}^a-\frac{\delta{\cal W}}{\delta\psi^\alpha}
\frac{\delta}{\delta\psi^*_{\alpha a}}-(-1)^{\varepsilon_\alpha}
\frac{\delta{\cal W}}{\delta\psi^*_{\alpha a}}\frac{\delta_l}
{\delta\psi^\alpha}\,.\end{aligned}$$ Note that the operators $\hat{Q}^a$ satisfy the relations $$\begin{aligned}
\hat{Q}^{\{a}\hat{Q}^{b\}}
&=&i\hbar(-1)^{\varepsilon_i}L_m
\sigma^m_{,i\alpha}
\left(\frac{\delta{\cal W}}{\delta{\cal J}}
+
\frac{\hbar}{i}
\frac{\delta}{\delta{\cal J}},\psi\right)\times\\ &&\times
\left[
\left(\frac{\delta}{\delta\psi^*_{\alpha\{a}}
+
\frac{i}{\hbar}
\frac{\delta{\cal W}}{{\delta\psi^*_{\alpha\{a}}}\right)
\left(\frac{\delta}{\delta\varphi^*_{ib\}}}
+
\frac{i}{\hbar}
\frac{\delta{\cal W}}{\delta\varphi^*_{ib\}}}\right)
\right],\end{aligned}$$ which follow from the properties of $\hat{\omega}^a$ and imply, in particular, the breakdown of generalized nilpotency also in the case of $\hat{Q}^a$.
Finally, with allowance for eq. (38), the variation of the generating functional $\Gamma=\Gamma(\varphi,\Sigma,\psi,\phi^*,\bar{\phi})$ of vertex Green’s functions is given by $$\begin{aligned}
\delta\Gamma=\frac{1}{2}\varepsilon_{ab}\hat{q}^b\hat{q}^a
\langle\langle\delta\tilde{Y}\rangle\rangle.\end{aligned}$$ The functional $\langle\langle\delta\tilde{Y}\rangle\rangle$ has the form $$\begin{aligned}
&&\hspace*{-0.7cm}
\langle\langle\delta\tilde{Y}\rangle\rangle
=\delta Y\bigg(\hat{\varphi}^i,\;
\psi^\alpha,\;\phi^*_{Aa},\;\bar{\phi}_A;\;
\frac{i}{\hbar}(-1)^{\varepsilon_i}\bigg(
\frac{\delta\Gamma}{\delta\varphi^i}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,i}(\varphi,\psi)
\bigg),
\\
&&\hspace*{-0.7cm}
\frac{\delta_l}{\delta\psi^\alpha}
+
\frac{i}{\hbar}(-1)^{\varepsilon_\alpha}\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\\
&&\hspace*{-0.7cm}
-(-1)^{\varepsilon_\alpha(\varepsilon_\rho+1)}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\\
&&\hspace*{-0.7cm}
-
(-1)^{\varepsilon_\alpha(\varepsilon_m+1)}
\sigma^m_{,\alpha}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
+
\frac{i}{\hbar}(-1)^{\varepsilon_\alpha}
\frac{\delta\Gamma}{\delta\Sigma^m}
\sigma^m_{,\alpha}(\hat{\varphi},\psi)
\\
&&\hspace*{-0.7cm}
+(-1)^{\varepsilon_\alpha+\varepsilon_i(\varepsilon_\alpha+
\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\sigma^m_{,i}(\varphi,\psi)\frac{\delta}{\delta\Sigma^m},
\\
&&\hspace*{-0.7cm}
\frac{\delta}{\delta\phi^*_{Aa}}
+
\frac{i}{\hbar}\frac{\delta\Gamma}{\delta\phi^*_{Aa}}
-
(-1)^{\varepsilon_\rho(\varepsilon_A+1)}
(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}\frac{\delta\Gamma}
{\delta\phi^*_{Aa}}\bigg)\frac{\delta_l}{\delta\Phi^\rho}
\\
&&\hspace*{-0.7cm}
+
(-1)^{\varepsilon_i(\varepsilon_A+\varepsilon_m)}
(G^{''-1})^{i\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}\frac{\delta\Gamma}
{\delta\phi^*_{Aa}}\bigg)\sigma^m_{,i}(\varphi,\psi)\frac{\delta_l}
{\delta\Sigma^m},
\\
&&\hspace*{-0.7cm}
\frac{\delta}{\delta\bar{\phi}_A}
+ \frac{i}{\hbar}\frac{\delta\Gamma}{\delta\bar{\phi}_A}
-
(-1)^{\varepsilon_\rho\varepsilon_A}
(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\bar{\phi}_A}
\bigg)\frac{\delta_l}{\delta\Phi^\rho}
\\
&&\hspace*{-0.7cm}
+
(-1)^{\varepsilon_i(\varepsilon_A+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\bar{\phi}_A}
\bigg)\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}\bigg).\end{aligned}$$ At the same time, the operators $\hat{q}^a$ can be represented as $$\begin{aligned}
&&\hspace*{-0.5cm}
\hat{q}^a=i\hbar\bigg\{
(-1)^{\varepsilon_\alpha}
\frac{\delta_l}{\delta\psi^\alpha}
-
(-1)^{\varepsilon_\alpha\varepsilon_m}
\sigma^m_{,\alpha}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\nonumber\\
&&\hspace*{-0.5cm}
-(-1)^{\varepsilon_\alpha\varepsilon_\rho}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.5cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg) \bigg]
\sigma^m_{,i}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\bigg\}\times
\nonumber\\
&&\hspace*{-0.5cm}
\times
\bigg\{
\frac{\delta}{\delta\psi^*_{\alpha a}}
-
(-1)^{\varepsilon_\rho(\varepsilon_\alpha+1)}(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\psi^*_{\alpha a}}\bigg)
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.5cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m)}
(G^{''-1})^{i\sigma}
\bigg(
\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\psi^*_{\alpha a}}\bigg)
\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\bigg\}
\nonumber\\
&&\hspace*{-0.5cm}
-
\bigg[
\frac{\delta\Gamma}{\delta\phi^A}
-
\bigg(
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,A}(\varphi,\psi)
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,A}(\hat{\varphi},\psi)
\bigg)
\bigg]\times
\nonumber\\
&&\hspace*{-0.5cm}
\times
\bigg\{
\frac{\delta}{\delta\phi^*_{Aa}}
-(-1)^{\varepsilon_\rho(\varepsilon_A+1)}(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\phi^*_{Aa}}
\bigg)
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.5cm}
+
(-1)^{\varepsilon_i(\varepsilon_A+\varepsilon_m)}(G^{''-1})^{i\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma} \frac{\delta\Gamma}{\delta\phi^*_{Aa}}
\bigg)
\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}
\bigg\}
\nonumber\\
&&\hspace*{-0.5cm}
-
\frac{\delta\Gamma}{\delta\psi^*_{\alpha a}}
\bigg\{
(-1)^{\varepsilon_\alpha}
\frac{\delta_l}{\delta\psi^\alpha}
-
(-1)^{\varepsilon_\alpha\varepsilon_m}
\sigma^m_{,\alpha}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\nonumber\\
&&\hspace*{-0.5cm}
-(-1)^{\varepsilon_\alpha\varepsilon_\rho}
(G^{''-1})^{\rho\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^m}\sigma^m_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.5cm}
+
(-1)^{\varepsilon_i(\varepsilon_\alpha+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg[
\frac{\delta_l}{\delta\Phi^\sigma}
\bigg(
\frac{\delta\Gamma}{\delta\psi^\alpha}
-
\frac{\delta\Gamma}{\delta\Sigma^n}\sigma^n_{,\alpha}(\varphi,\psi)
\bigg)
\bigg]
\sigma^m_{,i}(\varphi,\psi)\frac{\delta_l}{\delta\Sigma^m}
\bigg\}
\nonumber\\
&&\hspace*{-0.5cm}
-\varepsilon^{ab}\phi^*_{Aa}\bigg\{
\frac{\delta}{\delta\bar{\phi}_A}
-
(-1)^{\varepsilon_\rho\varepsilon_A}
(G^{''-1})^{\rho\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma} \frac{\delta\Gamma}{\delta\bar{\phi}_A}
\bigg)\frac{\delta_l}{\delta\Phi^\rho}
\nonumber\\
&&\hspace*{-0.5cm}
+
(-1)^{\varepsilon_i(\varepsilon_A+\varepsilon_m+1)}
(G^{''-1})^{i\sigma}
\bigg(\frac{\delta_l}{\delta\Phi^\sigma}
\frac{\delta\Gamma}{\delta\bar{\phi}_A}
\bigg)\sigma^m_{,i}(\varphi,\psi)
\frac{\delta_l}{\delta\Sigma^m}\bigg\}.\end{aligned}$$ Clearly, the above values $\hat{q}^a$ and $\langle\langle\delta\tilde{Y}\rangle\rangle$ are the Legendre transforms of the corresponding values $\hat{Q}^a$ and $\langle\delta\tilde{Y}\rangle$ in eq. (38). As far as the operators $\hat{q}^a$ are concerned, this implies, in particular, the violation of their generalized nilpotency in the general case of a theory with composite and external fields.
Conclusion
==========
In this paper we have presented an extension of the studies of arbitrary quantum gauge theories with composite [@LO; @LOR] and external [@ext] fields to the case of theories with composite and external fields combined. Namely, we considered the generating functionals of Green’s functions with composite and external fields in the framework of the BV [@BV] and BLT [@BLT] quantization methods for general gauge theories. For these functionals we obtained, using the technique developed in refs. [@LOR; @ext], the corresponding Ward identities, eqs. (10), (12), (13), (31), (33), (34), and derived the explicit dependence on the most general form of gauge-fixing, eqs. (16), (17), (20), (37), (38), (40). The gauge dependence is described with the help of fermionic operators (11), (19), (21) in the BV method, as well as with the help of doublets of fermionic operators (32), (39), (41) in the framework of the BLT formalism, which bears remarkable similarity to the cases where only composite [@LOR] or external [@ext] fields were present.
At the same time, it should be noted that the most general scheme combining composite and external fields presents some essentially new features, which set it apart from the previous cases [@LO; @LOR; @ext]. First of all, we refer to the fact that the Ward identities and the gauge dependence take, in the case under consideration, an entirely different form and do not follow from those derived earlier. (The latter, naturally, can be obtained from the general case as the corresponding limits.) Another notable point is that in the most general case of composite and external fields combined, the operators describing the gauge dependence do not possess the algebraic properties of (generalized) nilpotency which hold for their counterparts [@LOR; @ext]. The lack of generalized nilpotency has been traced back to expressions containing $\sigma^m_{, i \alpha} (\varphi, \psi)$, i.e. to the explicit dependence of the composite fields upon the external ones. Of course, this fact deserves further consideration, especially with respect to the effect it may have on the unitarity of the theory. This, however, has to be postponed to another paper.
Let us finally remark that an extension of these considerations to the manifestly osp(1,2)-covariant quantization [@GLM], despite being much more complicated, would be desirable.
Acknowledgments {#acknowledgments .unnumbered}
===============
The authors would like to thank P.M. Lavrov, S.D. Odintsov, D. Mülsch and A.A. Reshetnyak for useful discussions. One of the authors (PM) is grateful for the kind hospitality extended to him by the Graduate College “Quantum Field Theory” at the Center of Theoretical Sciences (NTZ) of Leipzig University during the winter half-year 1997–1998.
The work has been partially supported by the Russian Foundation for Basic Research (RFBR), project 96–02–16017.
[99]{} I.A. Batalin and G.A. Vilkovisky, [*Phys. Lett.*]{} [**102B**]{} (1981) 27; Phys. Rev. [**D28**]{} (1983) 2567. C. Becchi, A. Rouet and R. Stora, [*Commun. Math. Phys.*]{} [**42**]{} (1975) 12; I.V. Tyutin, Lebedev Inst. preprint No. 39 (1975). I.A. Batalin, P.M. Lavrov and I.V. Tyutin, [*J. Math. Phys.*]{} [**31**]{} (1990) 1487; [**32**]{} (1991) 532; [**32**]{} (1991) 2513. G. Curci and R. Ferrari, [*Phys. Lett.*]{} [**B63**]{} (1976) 91; I. Ojima, [*Prog. Theor. Phys.*]{} [**64**]{} (1979) 625; B.L. Voronov, P.M. Lavrov and I.V. Tyutin, [*Sov. J. Nucl. Phys.*]{} [**36**]{} (1982) 498; P.M. Lavrov and I.V. Tyutin, [*Sov. J. Nucl. Phys.*]{} [**41**]{} (1985) 1658; P.M. Lavrov, [*Mod. Phys. Lett.*]{} [**6A**]{} (1991) 2051. P.M. Lavrov and I.V. Tyutin, [*Teor. Mat. Fiz.*]{} [**79**]{} (1989) 359;\
P.M. Lavrov and P.Yu. Moshin, [*Nucl. Phys.*]{} [**B486**]{} (1997) 565. I. Batalin, R. Marnelius and A. Semikhatov, [*Nucl. Phys.*]{} [**B446**]{} (1995) 249; I. Batalin and R. Marnelius, [*Nucl. Phys.*]{} [**B465**]{} (1996) 521. P.M. Lavrov, P.Yu. Moshin and A.A. Reshetnyak, [*Mod. Phys. Lett.*]{} [**10A**]{} (1995) 2687; P.M. Lavrov, [*Phys. Lett.*]{} [**B366**]{} (1996) 160; I.A. Batalin, K. Bering and P.H. Damgaard, preprint hep-th/9708140. B. Geyer, P. M. Lavrov and D. Mülsch, [*osp(1,2)-covariant Lagrangian quantization of irreducible massive gauge theories*]{}, hep-th/9712024, to appear in [*JMP*]{}. P.M. Lavrov and S.D. Odintsov, [*Int. J. Mod. Phys.*]{} [**A4**]{} (1989) 5205. P.M. Lavrov, S.D. Odintsov and A.A. Reshetnyak, [*J. Math. Phys.*]{} [**38**]{} (1997) 3466. S. Falkenberg, B. Geyer, P. Lavrov and P. Moshin, [*On the structure of quantum gauge theories with external fields*]{}, hep-th/9710050. J.M. Cornwall, R. Jackiw and E. Tomboulis, [*Phys. Rev.*]{} [**D10**]{} (1974) 2428. R.W. Haymaker, [*Rivista Nuovo Cim.*]{} [**14**]{} (1991) 1; T. Inagaki, T. Muta and S.D. Odintsov, [*Progr. Theor. Phys. Suppl.*]{} [**127**]{} (1997) 93. W. Bardeen, C. Hill and M. Lindner, [*Phys. Rev.*]{} [**D41**]{} (1990) 1647. S.W. Hawking and I.G. Moss, [*Nucl. Phys.*]{} [**B224**]{} (1983) 180; S.D. Odintsov, [*Europhys. Lett.*]{} [**7**]{} (1988) 1. N. Seiberg, [*The power of duality*]{}, hep-th/9506077. A.A. Tseytlin, [*Phys. Lett.*]{} [**B178**]{} (1986) 34. B.S. de Witt, [*Dynamical Theory of Groups and Fields*]{} (Gordon and Breach, New York, 1965).
[^1]: E-mail: [email protected]
---
abstract: 'In 2011 Deshouillers and Ruzsa ([@deshruzsa]) gave arguments suggesting that the sequence of the last nonzero digit of $n!$ in base 12 is not automatic. This statement was proved a few years later by Deshouillers in [@desh2]. In this paper we provide an alternative proof that lets us generalize the problem and give an exact characterization of the bases in which the sequence of the last nonzero digits of $n!$ is automatic.'
address: |
Graduate of Institute of Mathematics\
Jagiellonian University\
Kraków, Poland
author:
- Eryk Lipka
title: 'Automaticity of the sequence of the last nonzero digits of $n!$ in a fixed base'
---
Introduction
============
Let ${\left(\ell_{b}\left(n!\right)\right)_{n\in \mathbb{N}}}$ be the sequence of last nonzero digits of $n!$ in base $b$. In this paper we answer the question for which values of $b$ this sequence is automatic. It was known that ${\left(\ell_{b}\left(n!\right)\right)_{n\in \mathbb{N}}}$ is automatic in many cases, including bases that are primes or prime powers; one can also prove that ${\left(\ell_{b}\left(n!\right)\right)_{n\in \mathbb{N}}}$ is automatic for some small bases with more prime factors, like 6 or 10. In general, for a base of the form $b=p_1^{a_1}p_2^{a_2}$ where $p_1\neq p_2,\; p_1, p_2 \in \mathbb{P},\; a_1,a_2\in\mathbb{N}_+$, it can be shown that ${\left(\ell_{b}\left(n!\right)\right)_{n\in \mathbb{N}}}$ is automatic when $a_1\left(p_1-1\right)\neq a_2\left(p_2-1\right)$. The smallest base for which the answer is unclear is 12. This is the case analysed by Deshouillers and Ruzsa in [@deshruzsa]. They conjectured that ${\left(\ell_{12}\left(n!\right)\right)_{n\in \mathbb{N}}}$ cannot be automatic, despite the fact that it is equal to some automatic sequence nearly everywhere. An attempt to prove that conjecture was made by Deshouillers in his paper [@desh1], and a few years later he answered the question by proving the following stronger result
[(Deshouillers [@desh2])]{} For $a \in \{3, 6, 9\}$, the characteristic sequence of $\{n\, ; \ell_{12}\left(n!\right) = a\}$ is not automatic.
Another way of proving a similar fact (but for $a \in \{4,8\}$) was provided recently by Byszewski and Konieczny in [@kuba]. It seems that both proofs can be generalized to all cases when $a_1\left(p_1-1\right)=a_2\left(p_2-1\right)$; however, it is not obvious whether, or how, they can be extended to bases with more than two prime factors. This was our main motivation for writing this paper, and we provide a complete characterization of the bases in which this sequence is automatic, including those with many prime factors.
In this paper we will use the following notation: the string of digits of $n$ in base $k$ will be denoted $\left[n\right]_k$, by $v_b\left(n\right)$ we mean the largest integer $t$ such that $b^t|n$, and $s_b\left(n\right)$ is the sum of the digits of $n$ in base $b$. This paper is composed of two main parts: first we recall some basic facts about automatic sequences for readers not familiar with the topic, and in the latter part we present our results about the automaticity of ${\left(\ell_{b}\left(n!\right)\right)_{n\in \mathbb{N}}}$.
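To make the notation concrete, here is a small illustrative sketch in Python (the function names are ours, not part of the paper) of $\left[n\right]_k$, $v_b\left(n\right)$, $s_b\left(n\right)$ and a brute-force computation of $\ell_b\left(n!\right)$; the printed values can be compared with the sequences discussed below.

```python
# Illustrative helpers for the notation above; names are ours.

def digits(n, k):
    """Base-k digit string [n]_k, most significant digit first."""
    if n == 0:
        return [0]
    d = []
    while n > 0:
        d.append(n % k)
        n //= k
    return d[::-1]

def v(b, n):
    """Largest t such that b**t divides n (for n > 0)."""
    t = 0
    while n % b == 0:
        n //= b
        t += 1
    return t

def s(b, n):
    """Sum of the base-b digits of n."""
    return sum(digits(n, b))

def ell_factorial(b, n):
    """ell_b(n!): the last nonzero digit of n! in base b, by brute force."""
    f = 1
    for i in range(2, n + 1):
        f *= i
    while f % b == 0:
        f //= b
    return f % b

print([ell_factorial(12, n) for n in range(1, 21)])  # first terms of (ell_12(n!))
```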
We would like to thank Piotr Miska and Maciej Ulas for proofreading and helpful suggestions while preparing this paper.
Basics of automatic sequences
=============================
In this section we give a short summary of topics from the theory of automatic sequences that we will use later. If the reader is interested in getting more insight into this topic, we strongly recommend the book by Allouche and Shallit [@alloucheshallit], which covers all important topics in this area.
A [**deterministic finite automaton with output**]{} is a 6-tuple $\left(Q, \Sigma, \rho, q_0, \Delta, \tau \right)$ such that
- $Q$ is a [**finite**]{} set of states;
- $\Sigma$ is an input alphabet;
- $\rho: Q\times\Sigma \rightarrow Q$ is a transition function;
- $q_0 \in Q$ is an initial state;
- $\Delta$ is an output alphabet (finite set);
- $\tau:Q\rightarrow\Delta$ is an output function.
The transition function can be generalized to take strings of characters instead of single ones. For a string $s_1s_2s_3\ldots$ we define $\rho\left(q, s_1s_2s_3\ldots\right)=\rho\left(\ldots\rho\left(\rho\left(q, s_1\right), s_2\right)\ldots\right)$.
For any finite alphabet $\Sigma$, a function $f: \Sigma^* \rightarrow \Delta$ is called a [**finite-state function**]{} if there exists a deterministic finite automaton with output $\left(Q, \Sigma, \rho, q_0, \Delta, \tau \right)$ such that $f(\omega) = \tau\left(\rho\left(q_0,\omega\right)\right)$.
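The following minimal Python sketch (class and variable names are ours) mirrors this definition: a DFAO is just a transition table, an initial state and an output map, and a finite-state function is evaluated by running the automaton on a word.

```python
class DFAO:
    """A deterministic finite automaton with output (Q, Sigma, rho, q0, Delta, tau)."""

    def __init__(self, transitions, q0, output):
        self.transitions = transitions  # rho, as a dict: (state, symbol) -> state
        self.q0 = q0                    # initial state
        self.output = output            # tau, as a dict: state -> output symbol

    def run(self, word):
        """Return tau(rho(q0, word)) for a word over the input alphabet."""
        q = self.q0
        for symbol in word:
            q = self.transitions[(q, symbol)]
        return self.output[q]

# Example: the two-state automaton computing s_2(n) mod 2 (the Thue-Morse sequence)
# when fed the binary digits of n.
tm = DFAO(
    transitions={(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0},
    q0=0,
    output={0: 0, 1: 1},
)
print([tm.run([int(d) for d in format(n, "b")]) for n in range(8)])  # [0, 1, 1, 0, 1, 0, 0, 1]
```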
\[reverse\] If $f: \Sigma^* \rightarrow \Delta$ is a finite-state function then the function $g: \Sigma^* \rightarrow \Delta$ defined as $g\left(\omega\right) = f\left(\omega^R\right)$ is also finite-state. ($ ^R$ denotes taking the reverse of a word).
(Sketch) Let $\left(Q, \Sigma, \rho, q_0, \Delta, \tau \right)$ be an automaton related to $f$; we will define another automaton $\left(Q', \Sigma, \rho', q_0', \Delta, \tau' \right)$. Let $Q' = \Delta^Q$ be the set of all functions from $Q$ to $\Delta$ and $q_0'\equiv \tau$. For any $g\in Q'$ we define $\tau'\left(g\right)=g\left(q_0\right)$, and for any $\sigma \in \Sigma, q\in Q$ we put $\rho'\left(g,\sigma\right)\left(q\right)=g\left(\rho\left(q,\sigma\right)\right)$. By induction on the length of the word $\omega\in\Sigma^*$ one can prove that the equation $$\rho'\left(g,\omega\right)\left(q\right)=g\left(\rho\left(q,\omega^R\right)\right)$$ holds for any $g \in Q', q\in Q$. And finally $$g\left(\omega\right)=\tau'\left(\rho'\left(q_0',\omega\right)\right)=\rho'\left(q_0',\omega\right)\left(q_0\right)=\tau\left(\rho\left(q_0,\omega^R\right)\right)=f\left(\omega^R\right).$$
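The construction in this proof can be made explicit. Below is a small sketch of ours (not the paper's; it reuses the DFAO class and the digits helper introduced above) that builds the reversed automaton by exploring only the reachable functions $Q\to\Delta$, together with a toy check.

```python
def reverse_automaton(A):
    """Sketch of the construction from Lemma [reverse]: states of the new automaton are
    functions Q -> Delta, encoded as tuples indexed by a fixed ordering of the states of A."""
    states = sorted({q for (q, _) in A.transitions})
    alphabet = sorted({sym for (_, sym) in A.transitions})
    index = {q: i for i, q in enumerate(states)}
    start = tuple(A.output[q] for q in states)            # q_0' = tau
    transitions, output, todo = {}, {}, [start]
    while todo:
        g = todo.pop()
        output[g] = g[index[A.q0]]                        # tau'(g) = g(q_0)
        for sym in alphabet:
            # rho'(g, sym)(q) = g(rho(q, sym))
            h = tuple(g[index[A.transitions[(q, sym)]]] for q in states)
            transitions[(g, sym)] = h
            if h not in output and h not in todo:
                todo.append(h)
    return DFAO(transitions, q0=start, output=output)

# Example: an automaton that outputs the last base-10 digit it has read; its reversal
# outputs the first digit, i.e. the leading digit of n when fed [n]_10.
last_digit = DFAO(
    transitions={(q, d): d for q in range(10) for d in range(10)},
    q0=0,
    output={q: q for q in range(10)},
)
first_digit = reverse_automaton(last_digit)
assert all(first_digit.run(digits(n, 10)) == digits(n, 10)[0] for n in range(1, 500))
```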
$\left(a\left(n\right)\right)_{n\in\mathbb{N}}$ is a [**$k$-automatic sequence**]{} if the function $\left[n\right]_k \mapsto a_n$ is finite-state. By Lemma \[reverse\] it is not important whether we read the representation of $n$ from the right or from the left side.
Now we present some simple examples of sequences that are automatic.
\[simpleexa\] The sequence $a_n = n \pmod{m}$ is $k$-automatic for any $k\geq 2, m\in \mathbb{Z}_+$. In order to see this it is enough to take $Q=\left\{0,1,\ldots,m-1\right\}, \rho\left(q,\sigma\right) = kq+\sigma \pmod{m}$ and read the input “from left to right”.\
The sequence $a_n = s_k\left(n\right) \pmod{m}$ is $k$-automatic for any $k\geq 2, m\in \mathbb{Z}_+$. Take $Q=\left\{0,1,\ldots,m-1\right\}$ and $\rho\left(q,\sigma\right) = q+\sigma \pmod{m}$.\
For any $k\geq 2$ and $x\in \mathbb{N}$, the characteristic sequence $a_n = \delta_x\left(n\right)$ is $k$-automatic. An automaton that computes it can be constructed by taking $\lceil \log_k\left(x\right) \rceil$ states that count how many digits were correct, plus one “sinkhole” state that absorbs all numbers other than $x$.
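As an illustration, the first automaton of Example \[simpleexa\] can be written down directly; the sketch below (names ours) reuses the DFAO class and the digits helper from the previous section and checks it against the definition.

```python
def residue_automaton(k, m):
    """DFAO computing n mod m from the base-k digits of n, read from the most significant digit."""
    transitions = {(q, d): (k * q + d) % m for q in range(m) for d in range(k)}
    output = {q: q for q in range(m)}
    return DFAO(transitions, q0=0, output=output)

aut = residue_automaton(k=3, m=5)
assert all(aut.run(digits(n, 3)) == n % 5 for n in range(500))
```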
We can also obtain automatic sequences by modifying existing ones.
If $\left(a\left(n\right)\right)_{n\in\mathbb{N}}$ is a $k$-automatic sequence then so is $b_n = f(a_n)$ for any function $f$ defined on the image of $a_n$. The only difference will be in the output function of the related automaton.\
If $\left(a\left(n\right)\right)_{n\in\mathbb{N}}, \left(b\left(n\right)\right)_{n\in\mathbb{N}}$ are $k$-automatic sequences, then so is $c_n = f(a_n, b_n)$ for any function $f$ as long as it is well defined on all possible pairs $(a_n, b_n)$. To obtain such an automaton $\left(Q_c, \Sigma, \rho_c, q_c, \Delta_c, \tau_c \right)$ we can take the “product” of the automata $\left(Q_a, \Sigma, \rho_a, q_a, \Delta_a, \tau_a \right)$, $\left(Q_b, \Sigma, \rho_b, q_b, \Delta_b, \tau_b \right)$ defined by
- $Q_c=Q_a \times Q_b$;
- $\rho_c\left(\left(a,b\right),\sigma\right) = \left(\rho_a\left(a,\sigma\right),\rho_b\left(b,\sigma\right)\right)$;
- $q_c = \left(q_a,q_b\right)$;
- $\Delta_c=f(\Delta_a\times \Delta_b)$;
- $\tau_c\left(a,b\right) = f\left(\tau_a\left(a\right),\tau_b\left(b\right)\right)$.
This can be easily generalized to the case with $f$ taking any finite number of sequences as an input.
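A direct transcription of this product construction is sketched below (names ours; it assumes both automata read the same input alphabet and reuses the helpers defined earlier).

```python
def product_automaton(A, B, f):
    """The 'product' automaton computing f(a_n, b_n); A and B must share the input alphabet."""
    states_A = {q for (q, _) in A.transitions}
    states_B = {q for (q, _) in B.transitions}
    alphabet = {sym for (_, sym) in A.transitions}
    transitions = {((a, b), sym): (A.transitions[(a, sym)], B.transitions[(b, sym)])
                   for a in states_A for b in states_B for sym in alphabet}
    output = {(a, b): f(A.output[a], B.output[b]) for a in states_A for b in states_B}
    return DFAO(transitions, q0=(A.q0, B.q0), output=output)

# e.g. the sequence of pairs (n mod 2, n mod 5) is 3-automatic:
prod = product_automaton(residue_automaton(3, 2), residue_automaton(3, 5), lambda u, v: (u, v))
assert all(prod.run(digits(n, 3)) == (n % 2, n % 5) for n in range(500))
```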
By combining the above examples together we can get some additional facts.
Let $k\in \mathbb{N}_{\geq2}$ be fixed, then:
- the characteristic sequence of a finite set is $k$-automatic;
- if the sequence $\left(a\left(n\right)\right)_{n\in\mathbb{N}}$ differs from $\left(b\left(n\right)\right)_{n\in\mathbb{N}}$ only in finitely many terms and one of them is $k$-automatic, then so is the other one;
- every periodic sequence is $k$-automatic;
- every ultimately periodic sequence is $k$-automatic.
Of course this does not exhaust all possible automatic sequences, but it is enough to give some insight and will be useful in our work. We should also note the relation between automaticity in different bases.
\[automatonpower\] Sequence $\left(a\left(n\right)\right)_{n\in\mathbb{N}}$ is $k$-automatic if and only if it is $k^m$-automatic for all $m\in\mathbb{N}_{\geq 2}$.
(Sketch) If we have a $k$-automaton generating a sequence, then we can easily manipulate it to create a $k^m$-automaton generating the same sequence; the main idea is to take the new transition function to be the $m$-fold composition of the original transition function with itself (a digit in base $k^m$ can be seen as $m$ digits in base $k$).\
On the other hand, let $Q$ be the set of states of the $k^m$-automaton generating a sequence, and $\rho$ be its transition function. We take $Q' = Q\times\{0,1,\ldots,k^{m-1}-1\}\times\{0,1,\ldots,m-1\}$ and $$\rho'\left(\left(q,r,s\right),\sigma\right)=\begin{cases}
(q,kr+\sigma,s+1) & \text{if } s<m-1\\
(\rho(q,kr+\sigma),0,0) & \text{if } s=m-1\end{cases}.$$ In this way we accumulate base-$k$ digits until we collect $m$ of them and then use the original transition function.
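The first direction of this proof (from a $k$-automaton to a $k^m$-automaton) is easy to implement; here is a hedged sketch (names ours, reusing the helpers above) in which each base-$k^m$ digit is fed to the original automaton as its $m$ base-$k$ digits. For the residue automaton used in the check, the leading zeros introduced by this padding do not affect the result.

```python
def power_base_automaton(A, k, m):
    """From a k-DFAO build a k^m-DFAO for the same sequence: a digit D in base k^m
    is processed as the m base-k digits it encodes (left-padded with zeros)."""
    states = {q for (q, _) in A.transitions}
    transitions = {}
    for q in states:
        for D in range(k ** m):
            word = [(D // k ** (m - 1 - i)) % k for i in range(m)]  # m base-k digits of D
            p = q
            for d in word:
                p = A.transitions[(p, d)]
            transitions[(q, D)] = p
    return DFAO(transitions, q0=A.q0, output=A.output)

A2 = residue_automaton(2, 7)           # a base-2 automaton for n mod 7
A8 = power_base_automaton(A2, 2, 3)    # the induced base-8 automaton
assert all(A8.run(digits(n, 8)) == n % 7 for n in range(500))
```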
New results
===========
Let us start with some facts that we will be using in our proofs.
(Legendre’s formula [@legendre]) \[legendre\] For any prime $p$ and positive integers $a,n$, we have $$v_{p^a}(n!) = \left\lfloor\frac{n-s_p\left(n\right)}{a\left(p-1\right)}\right\rfloor.$$
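A quick numerical sanity check of Legendre's formula (the helper names are ours; the digit-sum function `s` is the one sketched in the notation section):

```python
def v_prime_power_factorial(p, a, n):
    """v_{p^a}(n!) computed from v_p(n!) = sum_{i>=1} floor(n / p^i)."""
    vp, q = 0, p
    while q <= n:
        vp += n // q
        q *= p
    return vp // a

for (p, a, n) in [(2, 1, 10), (3, 2, 100), (5, 3, 1234)]:
    assert v_prime_power_factorial(p, a, n) == (n - s(p, n)) // (a * (p - 1))
```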
\[sum\] (Result from [@stewart]) For any positive integers $b, c$ such that $\frac{\ln(b)}{\ln(c)}\not\in\mathbb{Q}$ there exists a constant $d$ such that for each integer $n>25$ there holds $$s_b\left(n\right) + s_{c}\left(n\right) > \frac{\log\log n}{\log \log \log n + d}-1.$$
The next proposition is a known fact, but we have not found it clearly stated anywhere; it can be easily proven using Dirichlet’s approximation theorem or the equidistribution theorem.
\[word\] For any positive integers $a, b, c$ such that $\frac{\ln(b)}{\ln(c)}\not\in\mathbb{Q}$ there exist infinitely many triples of non-negative integers $d,e,f$ with $1\leq f<b^e$ such that $$c^d = a\cdot b^e +f.$$ In other words, there are infinitely many powers of $c$ whose base-$b$ notation starts with a given string of digits.
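For instance (a small experiment, not part of the proof; names and parameters ours), one can search for powers of $c$ whose base-$b$ expansion begins with the digits of $a$:

```python
def power_prefix_exponents(a, b, c, max_d=200):
    """Exponents d <= max_d with [c^d]_b beginning with [a]_b (uses the digits helper above)."""
    prefix = digits(a, b)
    return [d for d in range(1, max_d + 1) if digits(c ** d, b)[: len(prefix)] == prefix]

print(power_prefix_exponents(a=7, b=10, c=2))  # powers of 2 whose decimal expansion starts with 7
```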
After this introduction we can finally state our results. The following lemma and theorem are the main steps in proving when ${\left(\ell_{b}\left(n!\right)\right)_{n\in \mathbb{N}}}$ is not automatic.
\[alwaysword\] Let $P$ be a non-empty finite set of prime numbers and $p$ be its biggest element. Let $a>0, k>1$ be integers. Then there exists an integer $a'$ such that $\max_{i\in P}\left\{s_i\left(a'\right)\right\}=s_p\left( a'\right)$ and $\left[a\right]_k$ is a prefix of $\left[a'\right]_k$.
If $k$ is not a power of $p$, then by Proposition \[word\] there exist infinitely many triples $(d,e,f)$ of non-negative integers with $1\leq f+1<k^e$ such that $$p^d = a\cdot k^e + (f+1).$$ Furthermore we have $$s_p\left(p^d-1\right) = d\left(p-1\right)>\left(p-1\right)\frac{\ln\left(p^d-1\right)}{\ln(p)},$$ and from the definition of $s_q$, for any prime $q$ the following holds $$s_q\left(p^d-1\right) < \left(q-1\right)\left(\frac{\ln\left(p^d-1\right)}{\ln(q)}+1\right).$$ Since $p$ is the biggest number in $P$, for any $q\in P, q\neq p$, we have $$s_q\left(p^d-1\right) - s_p\left(p^d-1\right)< \ln\left(p^d-1\right)\left(\frac{q-1}{\ln\left(q\right)}-\frac{p-1}{\ln\left(p\right)}\right) + q -1.$$ The right-hand side of this inequality is negative for $d$ big enough, so, because $0\leq f<k^e$, we can take $a' = p^d-1$. When $k=p^t$ we can notice that for any integer $d$ $$s_p\left( a\cdot p^{td}+p^{td}-1\right) =s_p\left(a\right)+ td\left(p-1\right)>\left(p-1\right)\left(\frac{\ln\left(a\cdot p^{td}+p^{td}-1\right)}{\ln(p)}-\frac{\ln\left(a\right)}{\ln(p)}-1\right),$$ and by a similar argument it is enough to take $a'=a\cdot p^{td}+p^{td}-1$ for $d$ sufficiently large.
\[sets\] Let $P$ be a finite set of prime numbers with at least two elements and $p$ be its biggest element, also let $c>0$ be a real number. Let us define sets $$\begin{aligned}
A_- &= \left\{n\in\mathbb{Z}_+:\;\max_{i\in P}\left\{s_i\left(n\right)\right\}=s_p\left(n\right)\right\},\\
A_+ &= \left\{n\in\mathbb{Z}_+:\;\max_{i\in P}\left\{s_i\left(n\right)\right\}-s_p\left(n\right)\geq c\right\}.\end{aligned}$$ Then there does not exist a deterministic finite automaton with output that assigns one value to the integers in $A_-$ and another value to those in $A_+$.
Let us suppose that we have such an automaton $\left(Q, \Sigma_k, \rho, q_0, \Delta, \tau \right)$ for some $k$. Because $Q$ is finite, there exists some internal state $\mathcal{S}\in Q$ such that for infinitely many positive integers $c_1 < c_2 < \ldots$ we have $\rho\left(q_0,\left[p^{c_i}\right]_k\right) = \mathcal{S}$. Now, by Lemma \[alwaysword\], there exists an integer $a'\in A_-$ which can be obtained from $p^{c_1}$ by appending some suffix of digits. Hence we can fix positive integers $e,f < k^e$ such that $a'=p^{c_1}\cdot k^e+f$. Let the sequence of digits $\left(f_1, f_2, \ldots, f_e\right)$ be the representation of $f$ in base $k$, possibly with leading zeros added. By $\mathcal{T}\in Q$ we denote the internal state such that $$\mathcal{T} = \rho\left(q_0, \left[a'\right]_k\right) = \rho\left(\mathcal{S},f_1f_2\ldots f_e\right).$$ This means that for every $i\in \mathbb N _+$ we have $$\rho\left(q_0, \left[p^{c_i}\cdot k^e +f\right]_k\right) =\rho\left(q_0, \left[p^{c_i}\right]_kf_1f_2\ldots f_e\right) =\rho\left(\mathcal{S},f_1f_2\ldots f_e\right) = \mathcal{T},$$ and this implies that $\tau\left(\rho\left(q_0, \left[p^{c_i}\cdot k^e +f\right]_k\right)\right)=\tau\left(\mathcal{T}\right)$ does not depend on the value of $i$.\
On the other hand, when $c_i > \left\lceil \log_p(f)\right \rceil$ we have $s_p\left(p^{c_i}\cdot k^e +f\right)=s_p\left(k^e\right)+s_p\left(f\right)$, which is a constant. However, due to Proposition \[sum\] we know that for any $q\in P, q \neq p$, the value of $s_q\left(p^{c_i}\cdot k^e+f\right)$ tends to infinity as $c_i$ grows. Hence, for $c_i$ big enough, $p^{c_i}\cdot k^e+f \in A_+$.\
All but finitely many integers of the form $p^{c_i}\cdot k^e +f$ are elements of $A_+$, but at least one (namely $p^{c_1}\cdot k^e +f=a'$) is an element of $A_-$. This proves that such an automaton cannot assign different values to the members of those two sets.
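The mechanism behind this proof can also be observed numerically. In the toy computation below (the parameters are our choice, and it reuses the digit-sum helper `s` sketched earlier) we take $P=\{2,3\}$, $p=3$, and the family $n = 3^{c}\cdot 10 + 7$: the base-3 digit sum stays constant once $c$ is large enough, while the base-2 digit sum keeps growing on average, so the family eventually lies in $A_+$.

```python
# Toy illustration of the proof mechanism (parameters ours): along n = 3^c * 10 + 7
# the base-3 digit sum is eventually constant while the base-2 digit sum keeps growing.
for c in range(5, 61, 5):
    n = 3 ** c * 10 + 7
    print(c, s(3, n), s(2, n))
```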
Now we will show that $\ell_b\left(n!\right)$ can be automatic for some $b$.
\[bpa\] If $b=p^a, p\in \mathbb{P}, a\in \mathbb{N}$ then the sequence $\left(\ell_b\left(n!\right)\right)_{n\in\mathbb{N}}$ is $b$-automatic.
First, we notice that $\ell_b\left(xy\right)=\ell_b\left(\ell_b\left(x\right)\ell_b\left(y\right)\right)$, so $$\ell_b\left((bn)!\right) =\ell_b\left(\ell_b\left(n!\right)\prod_{i=n+1}^{bn} \ell_b\left(i\right) \right).$$ Because $\ell_b\left(bx\right) = \ell_b\left(x\right)$ we can rewrite the product in the following way $$\ell_b\left((bn)!\right) =\ell_b\left(\ell_b\left(n!\right)\prod_{\substack{i=1 \\ b \nmid i}}^{bn} \ell_b\left(i\right) \right)=\ell_b\left(\ell_b\left(n!\right)\prod_{i=1}^{n} \ell_b\left(\prod_{j=1}^{j=b-1}j\right) \right).$$ We denote $m_i=\ell_b\left(i!\right)$ and obtain $\ell_b\left((bn)!\right) =\ell_b\left(\ell_b\left(n!\right) m_{b-1}^n \right)$. Now we take the string of digits $n_1n_2\ldots n_l = \left[n\right]_b$ and obtain the following formula $$\ell_b\left(n!\right)=\ell_b\left(\left(n_1n_2\ldots n_l\right)!\right)=\ell_b\left(m_{n_l}\ell_b\left(\left(n_1n_2\ldots n_{l-1}\right)!\right)m_{b-1}^{\left(n_1n_2\ldots n_{l-1}\right)}\right),$$ which by iteration leads to $$\label{eq:factorial1}
\ell_b\left(\left(n_1n_2\ldots n_l\right)!\right)=\ell_b\left(m_{n_1}m_{n_2}\ldots m_{n_l} \ell_b\left(m_{b-1}^r\right)\right),$$ where $r=\left(n_1n_2\ldots n_{l-1}\right) +\ldots +\left(n_1n_2\right) + \left(n_1\right)$. Now, by Euler’s Theorem $m_{b-1}^{\varphi(b)}\equiv m_{b-1}^{p^a-p^{a-1}}\equiv1 \pmod b$ so we only need to know the value of $r \pmod{p^a-p^{a-1}}$. $$\label{eq:factorial2}
r =\sum_{i=1}^{l-1}\left(b^{i-1}\sum_{j=1}^{l-i}n_j\right)
\equiv\sum_{i=1}^{l-1}n_i + p^{a-1}\sum_{i=1}^{l-2}\left(l-1-i\right)n_i \pmod{\left(p^a-p^{a-1}\right)}.$$ Finally, we can define an automaton $\left(Q, \Sigma_b, \rho, q_0, \Delta, \tau \right)$ generating the sequence $\left(\ell_b\left(n!\right)\right)_{n\in\mathbb{N}}$ in the following way:
- the input alphabet $\Sigma_b=\{0,1,2,\ldots,b-1\}$;
- the output alphabet $\Delta = \{1,2,\ldots,b-1\}$;
- the set of states $Q=\Delta\times\Sigma_{p^a-p^{a-1}}\times\Sigma_{p-1}$;
- the initial state $q_0=\left(1,0,0\right)$;
- the output function $\tau\left(u,v,w\right)=\ell_b\left(u \cdot m_{b-1}^{v+p^{a-1}w}\right)$;
- the transition function $$\rho\left(\left(u,v,w\right),s\right)=\left(\ell_b\left(u\cdot m_s\right),\left(v+s\right) \pmod{\left(p^a-p^{a-1}\right)},\left(w+v\right) \pmod{\left(p-1\right)}\right).$$
With such a definition we have $\rho\left(q_0, \left[n\right]_b\right) = \left(u,v,w\right)$ where
- $u = \ell_b\left(m_{n_1}m_{n_2}\ldots m_{n_l}\right)$;\
- $v = \sum_{i=1}^{l-1}n_i \pmod {\left(p^a-p^{a-1}\right)}$;\
- $w = \sum_{i=1}^{l-2}\left(l-1-i\right)n_i \pmod{\left(p-1\right)}$.
Hence, using equations (\[eq:factorial1\]) and (\[eq:factorial2\]), we see that $\ell_b\left(n!\right) =\tau\left(u,v,w\right)$.
Now we are ready to prove the following
Let $b=p_1^{a_1}p_2^{a_2}\ldots$ with $a_1\left(p_1-1\right)\ge a_2\left(p_2-1\right)\ge \ldots$. The sequence $\left(\ell_b\left(n!\right)\right)_{n\in\mathbb{N}}$ is $p_1$-automatic if $a_1\left(p_1-1\right)> a_2\left(p_2-1\right)$ or $b=p_1^{a_1}$ and not automatic otherwise.
Let $n \gg 0$. For $b={p_1}^{a_1}$ the sequence is $b$-automatic from Lemma \[bpa\]; by Lemma \[automatonpower\] it is also $p_1$-automatic. If $b$ has more than one prime factor and $a_1\left(p_1-1\right)> a_2\left(p_2-1\right)$ we take $b' = \frac{b}{p_1^{a_1}}$ so $p_1 \nmid b'$. From Proposition \[legendre\] and the definition of $v_{b'}$ we have $$v_{b'}\left(n!\right)=\min_{i>1}v_{p_i^{a_i}}\left(n!\right)=\min_{i>1}\left\lfloor\frac{n-s_{p_i}\left(n\right)}{a_i\left(p_i-1\right)}\right\rfloor > \left\lfloor\frac{n-s_{p_1}\left(n\right)}{a_1\left(p_1-1\right)}\right\rfloor=v_{p_1^{a_1}}\left(n!\right),$$ which leads to $b'|\ell_b\left(n!\right)$. Thus $\ell_b\left(n!\right)\in\left\{b',2b',3b',\ldots\left(p_1^{a_1}-1\right)b'\right\}$, so the value of $\ell_b\left(n!\right)$ can be computed from the value of $\ell_b\left(n!\right) \pmod{p_1^{a_1}}$. We also know that there exist integers $c_1, c_2$ satisfying the equation $$n! = b^{\left(v_{p_1^{a_1}}\left(n!\right)\right)} \ell_b\left(n!\right) + b^{\left(v_{p_1^{a_1}}\left(n!\right)+1\right)} c_1 = p_1^{a_1\left(v_{p_1^{a_1}}\left(n!\right)\right)} \ell_{p_1^{a_1}}\left(n!\right) + p_1^{a_1\left(v_{p_1^{a_1}}\left(n!\right)+1\right)} c_2.$$ After division of the above equality by $p_1^{a_1\left(v_{p_1^{a_1}}\left(n!\right)\right)}$ we obtain the following $$\left(b'\right)^{\left(v_{p_1^{a_1}}\left(n!\right)\right)} \ell_b\left(n!\right) + p_1^{a_1} \left(b'\right)^{\left(v_{p_1^{a_1}}\left(n!\right)+1\right)} c_1 = \ell_{p_1^{a_1}}\left(n!\right) + p_1^{a_1} c_2.$$ Now, we can notice that $\ell_b\left(n!\right){\left(b'\right)}^{v_{p_1^{a_1}}\left(n!\right)} \equiv \ell_{p_1^{a_1}}\left(n!\right) \pmod{p_1^{a_1}}$, hence to finish this part of the proof we just need to construct a $p_1$-automaton that returns the value of $v_{p_1^{a_1}}\left(n!\right) \pmod{\varphi(p_1^{a_1})}$. By Proposition \[legendre\] this value can be computed from $\left(n-s_{p_1}(n)\right) \pmod{\varphi(p_1^{a_1})\cdot a_1(p_1-1)}$, and such an expression is $p_1$-automatic as we already mentioned in Example \[simpleexa\].
Now, in the last case, when $a_1\left(p_1-1\right)=a_2\left(p_2-1\right)$, let $I = \{i : a_i\left(p_i-1\right) =a_1\left(p_1-1\right)\}$. Without loss of generality we can assume $p_1=\max_{i\in I} p_i$. By Legendre’s formula (Proposition \[legendre\]) we have $$\begin{aligned}
\max_{i\in I}s_{p_i}\left(n\right)=s_{p_1}\left(n\right) &\Rightarrow v_{p_1^{a_1}}\left(n!\right) = \min_{i\in I} v_{p_i^{a_i}}\left(n!\right) \Rightarrow p_1^{a_1} \nmid \ell_b\left(n!\right),\\
\max_{i\in I}s_{p_i}\left(n\right)> a_1(p_1-1) + s_{p_1}\left(n\right) &\Rightarrow v_{p_1^{a_1}}\left(n!\right) > \min_{i\in I} v_{p_i^{a_i}}\left(n!\right) \Rightarrow p_1^{a_1} | \ell_b\left(n!\right).\end{aligned}$$ Hence, by Theorem \[sets\], there is no finite automaton that can, for a given $n$, tell whether $p_1^{a_1}$ divides $\ell_b\left(n!\right)$ or not. This completes the proof, as a finite automaton generating the sequence $\left(\ell_b\left(n!\right)\right)_{n\in\mathbb{N}}$ would have to distinguish those two sets.
[1]{} J.-P. Allouche, J. Shallit, *Automatic Sequences. Theory, Applications, Generalizations*, Cambridge University Press, Cambridge, 2003;
J. Byszewski, J. Konieczny, *A density version of Cobham’s theorem*, arXiv:1710.07261;
F. M. Dekking, *Regularity and irregularity of sequences generated by automata*, Séminaire de Théorie des Nombres, Bordeaux 1979-1980, Exposé 9;
J.-M. Deshouillers, *A footnote to The least non zero digit of n! in base 12*, Uniform Distribution Theory 7 (2012), 71-73;
J.-M. Deshouillers, *Yet another footnote to The least non zero digit of n! in base 12*, Uniform Distribution Theory 11 (2016), 163-167;
J.-M. Deshouillers, Imre Ruzsa, *The least non zero digit of n! in base 12*, Publ. Math. Debrecen 79 (2011), 395-400;
A. M. Legendre, *Théorie des nombres*, Firmin Didot frères, Paris, 1830;
C. L. Stewart, *On the representation of an integer in two different bases*, J. Reine Angew. Math. 319 (1980), 63-72;
---
abstract: 'Let $f:S\to B$ be a finite cyclic covering fibration of a fibered surface. We study the lower bound of the slope $\lambda_{f}$ when the relative irregularity $q_{f}$ is positive.'
author:
- Hiroto Akaike
title: Slope inequalities for irregular cyclic covering fibrations
---
Introduction {#introduction .unnumbered}
============
Let $f:S\to B$ be a surjective morphism from a smooth projective surface $S$ to a smooth projective curve $B$ with connected fibers. We call it a fibration of genus $g$ when a general fiber is a curve of genus $g$. A fibration is called relatively minimal when no fiber contains a $(-1)$-curve. Here we call a smooth rational curve $C$ with $C^2=-n$ a $(-n)$-curve. A fibration is called [*smooth*]{} when all fibers are smooth, [*isotrivial*]{} when all of the smooth fibers are isomorphic, and [*locally trivial*]{} when it is smooth and isotrivial.
Assume that $f:S\to B$ is a relatively minimal fibration of genus $g \geq 2$. We denote by $K_{f}=K_{S}-f^{\ast}K_{B}$ a relative canonical divisor. We associate three relative invariants with $f$: $$\begin{aligned}
&K_{f}^2=K_{S}^2-8(g-1)(b-1),\\
&\chi_{f}:=\chi(\mathcal{O}_{S})-(g-1)(b-1),\\
&e_{f}:=e(S)-4(g-1)(b-1),\end{aligned}$$ where $b$ and $e(S)$ respectively denote the genus of the base curve $B$ and the topological Euler-Poincaré characteristic of $S$. Then the following are well-known:
- (Noether) $12\chi_{f}=K_{f}^2+e_{f}$.
- (Arakelov) $K_{f}$ is nef.
- (Ueno) $\chi_{f}\geq0$ and $\chi_{f}=0$ if and only if $f$ is locally trivial.
- (Segre) $e_{f}\geq0$ and $e_{f}=0$ if and only if $f$ is smooth.
When $f$ is not locally trivial, we put $$\lambda_{f}:=\frac{K_{f}^2}{\chi_{f}}$$ and call it the [*slope*]{} of $f$ according to [@Xiao2], in which Xiao succeeded in giving its effective lower bound as $$\lambda_{f} \geq \frac{4(g-1)}{g}.$$ Another invariant we are interested in is the [*relative irregularity*]{} of $f$ defined by $q_{f}:=q(S)-b$, where $q(S):={\mathrm{dim}}H^1(S,\mathcal{O}_{S})$ denotes the irregularity of $S$ as usual. When $q_{f}$ is positive, we call $f$ an [*irregular*]{} fibration. Xiao showed in [@Xiao2] that $\lambda_f\geq 4$ holds for irregular fibrations. It seems to be a general rule that the lower bound of the slope goes up when the relative irregularity gets bigger.
In the present paper, we consider primitive cyclic covering fibrations of type $(g,h,n)$ introduced in [@Eno], where Enokizono gave the lower bound of the slope for them. Note that it is nothing more than a hyperelliptic fibration when $h=0$ and $n=2$. Recall that Lu and Zuo obtained the lower bound of the slope for irregular double covering fibrations in [@X.Lu2] and [@X.Lu1]. Inspired by their results, we try to generalize them to irregular primitive cyclic covering fibrations with $n\geq 3$. We give the lower bound of the slope for those of type $(g,0,n)$ in Theorems \[prop2.3\] and \[thm4.7\], and for those of type $(g,h,n)$ with $h\geq 1$ in Theorem $\ref{thm2.7}$.
The key observation for the proof is Proposition \[prop2.2\]. We apply it to the anti-invariant part of the Albanese map with respect to the action of the Galois group canonically associated to the cyclic covering fibration, and derive the “negativity” of the ramification divisor when $q_f>0$. Recall that $\chi_f$ and (the essential part of) $K_f^2$ can be expressed in terms of the so-called $k$-th [*singularity index*]{} $\alpha_k$ defined for each non-negative integer $k$. The negativity referred to above can be used to get some non-trivial restrictions on $\alpha_0$, which is the most difficult one to handle among all $\alpha_k$’s. Thanks to such information together with an analysis of the Albanese map, we can obtain the desired slope inequalities.
We also give a small contribution to the modified Xiao’s conjecture that $q_{f} \leq \lceil \frac{g+1}{2} \rceil$ holds, posed by Barja, González-Alonso and Naranjo in [@Bar1]. It is known to be true for fibrations of maximal Clifford index [@Bar1] and for hyperelliptic fibrations [@X.Lu2] among others. We show in Theorem \[xiaoconj\] that $q_f\leq (g+1-n)/2$ holds, when $f$ is a primitive cyclic covering fibration of type $(g,0,n)$ under some additional assumptions. For the history around the conjecture, see the introduction of [@Bar1].
The author expresses his sincere gratitude to Professor Kazuhiro Konno for suggesting this problem, and for his valuable advice and support. The author also thanks Dr. Makoto Enokizono for his precious advice and for allowing him to use Proposition \[prop2.2\] freely.
Primitive cyclic covering fibrations
====================================
We recall the basic properties of primitive cyclic covering fibrations, most of which can be found in [@Eno].
Let $f:S \to B$ be a relatively minimal fibration of genus $g\geq2$. We call it a primitive cyclic covering fibration of type $(g,h,n)$, when there exist a $($not necessarily relatively minimal$)$ fibration $\tilde{{\varphi}}:{\widetilde{W}}\to B$ of genus $h\geq0$ and a classical $n$-cyclic covering $$\tilde{\theta}:{\widetilde{S}}=
\mathrm{Spec}_{{\widetilde{W}}}\left(\bigoplus_{j=0}^{n-1} \mathcal{O}_{{\widetilde{W}}}(-j{\widetilde{\mathfrak{d}}})\right)\to{\widetilde{W}}$$ branched over a smooth curve ${\widetilde{R}}\in |n{\widetilde{\mathfrak{d}}}|$ for some $n\geq2$ and ${\widetilde{\mathfrak{d}}}\in {\mathrm{Pic}}({\widetilde{W}})$ such that $f$ is the relatively minimal model of $\tilde{f}=\tilde{{\varphi}}\circ\tilde{\theta}$.
Let $f:S \to B$ be a primitive cyclic covering fibration of type $(g,h,n)$. Let $\widetilde{F}$ and $\widetilde{\Gamma}$ be general fibers of $\tilde{f}$ and $\tilde{{\varphi}}$, respectively. Then the restriction map $\tilde{\theta}|_{\widetilde{F}}:\widetilde{F} \to \widetilde{\Gamma}$ is a classical $n$-cyclic covering branched over ${\widetilde{R}}\cap\widetilde{{\Gamma}}$. By the Hurwitz formula for $\tilde{\theta}|_{\widetilde{F}}$, we get $$r:={\widetilde{R}}.\widetilde{{\Gamma}}=\frac{2\bigl(g-1-n(h-1)\bigr)}{n-1}.$$ From ${\widetilde{R}}\in |n{\widetilde{\mathfrak{d}}}|$, it follows that $r$ is a multiple of $n$.
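As a small numerical aside (not part of the paper; the parameter choices below are ours), one can tabulate which genera are admissible for a fixed pair $(h,n)$, i.e. for which $g$ the quantity $r$ is a positive multiple of $n$:

```python
# List the genera g (in a small range) for which r = 2(g-1-n(h-1))/(n-1) is a positive
# multiple of n; here n = 3 and h = 0 are sample values chosen by us.
n, h = 3, 0
for g in range(2, 20):
    num = 2 * (g - 1 - n * (h - 1))
    if num > 0 and num % (n - 1) == 0 and (num // (n - 1)) % n == 0:
        print(g, num // (n - 1))   # pairs (g, r)
```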
Let $\tilde{\psi}: {\widetilde{W}}\to W$ be the contraction morphism to a relatively minimal model $W\to B$ of $\tilde{{\varphi}}:{\widetilde{W}}\to B$. Since $\tilde{\psi}$ is a composite of blowing-ups, we can write $\tilde{\psi}=\psi_{1}\circ\cdots\circ\psi_{N}$, where $\psi_{i}:W_{i}\to W_{i-1}$ denotes the blowing-up at $x_{i} \in W_{i-1}\;(i=1,\cdots,N)$, $W_{0}=W$ and $W_{N}={\widetilde{W}}$. We define a reduced curve $R_{i}$ inductively as $R_{i-1}=(\psi_{i})_{\ast}R_{i}$ starting from $R_{N}={\widetilde{R}}$ down to $R_{0}=R$. We also put $E_{i}=\psi_{i}^{-1}(x_{i})$ and $m_{i}=\mathrm{mult}_{x_{i}}R_{i-1}\;(i=1,\cdots,N)$.
\[lem1.2\] In the above situation, the following hold for any $i=1,\cdots,N$.
$(1)$ Either $m_{i} \in n{\mathbb{Z}}$ or $n{\mathbb{Z}}+1$. Furthermore, $m_{i}\in n{\mathbb{Z}}$ if and only if $E_{i}$ is not contained in $R_{i}$.
$(2)$ $R_{i}={\psi}_{i}^{\ast}R_{i-1}-n[\frac{m_{i}}{n}]E_{i}$, where $[t]$ denotes the greatest integer not exceeding $t$.
$(3)$ There exists ${\mathfrak{d}}_{i} \in {\mathrm{Pic}}(W_{i})$ such that ${\mathfrak{d}}_{i}=\psi_{i}^{\ast}{\mathfrak{d}}_{i-1}-[\frac{m_{i}}{n}]E_{i}$ and $R_{i}\sim n{\mathfrak{d}}_{i}$, ${\mathfrak{d}}_{N}={\widetilde{\mathfrak{d}}}$.
By [@Eno], we can assume the following for any primitive cyclic covering fibrations. Let ${\tilde{\sigma}}$ be a generator of the covering transformation group of $\tilde{\theta}$, and $\sigma$ the automorphism of $S$ over $B$ induced by ${\tilde{\sigma}}$. Then the natural morphism $\rho:{\widetilde{S}}\to S$ is a minimal succession of blowing-ups that resolves all isolated fixed points of $\sigma$.
We must pay special attention when $h=0$, since we have various relatively minimal models for $\tilde{{\varphi}}:{\widetilde{W}}\to B$. Using elementary transformations, one can show the following.
\[lem1.4\] Let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,0,n)$. Then there is a relatively minimal model of $\tilde{{\varphi}}:{\widetilde{W}}\to B$ such that $$\mathrm{mult}_{x}R_{h}\leq\frac{r}{2}=\frac{g}{n-1}+1$$ for all $x \in R_{h}$, where $R_{h}$ denotes the ${\varphi}$-horizontal part of $R$. Moreover if $\mathrm{mult}_{x}R>\frac{r}{2}$, then $\mathrm{mult}_{x}R \in n{\mathbb{Z}}+ 1$.
When $h=0$, we always assume that a relatively minimal model of $\tilde{{\varphi}}:{\widetilde{W}}\to B$ is as in the above lemma.
Let the situation be the same as in Lemma \[lem1.4\]. If $x$ is a singular point of $R$ and $m=\mathrm{mult}_{x}R$, then $$n\biggl[\frac{m}{n}\biggr]\leq\frac{r}{2}.$$ \[lem1.4’\]
When $m \in n{\mathbb{Z}}$, the inequality clearly holds by Lemma $\ref{lem1.4}$. If $m \in n{\mathbb{Z}}+1$, then $n[\frac{m}{n}]+1=m$. From Lemma $\ref{lem1.4}$, we have $m \leq \frac{r}{2}+1$. So we get $n[\frac{m}{n}]\leq\frac{r}{2}$.
In closing this section, we give an easy lemma that will be useful in the sequel.
\[lem3.2\] Let $\pi:C_{1}\to C_{2}$ be a surjective morphism between smooth projective curves. Let $R_{\pi}$ and $\Delta$ be the ramification divisor and the branch locus of $\pi$, respectively. Then, $$(\deg(\pi)-1)\sharp\Delta \geq \deg R_{\pi},$$ where $\sharp \Delta$ denotes the cardinality of $\Delta$ as a set of points.
We put $\Delta=\{Q_{1},\;\dots\;,Q_{\sharp \Delta}\}$. For any $Q_{i} \in \Delta$, we put $\pi^{-1}(Q_i)=\{P_{1}^{i},\;\dots\;,P_{j_{i}}^{i}\}$. Note that $\deg(\pi) = r(P_{1}^{i})+\;\cdots\;+r(P_{j_{i}}^{i})$ for any $i=1,\;\dots\;,\sharp \Delta$, where $r(P)$ denotes the ramification index of $\pi$ around $P\in C_{1}$. Then, from the property of ramification divisor, $$\begin{aligned}
\deg R_{\pi} &= \sum_{i=1}^{\sharp \Delta}\sum_{j=1}^{j_i}(r(P_j^{i})-1)\\
&= \deg(\pi)\sharp \Delta-(j_{1}+\;\cdots\;+j_{\sharp \Delta})\\
& \leq (\deg(\pi)-1)\sharp \Delta,\end{aligned}$$ which is what we want.
Singularity indices and the formulae for $K_{f}^2$ and $\chi_{f}$.
==================================================================
We let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,h,n)$ and freely use the notation in the previous section. We obtain a classical $n$-cyclic covering $\theta_{i}:S_{i} \to W_{i}$ branched over $R_{i}$ by setting $$S_{i} = \mathrm{Spec}\;(\bigoplus_{j=0}^{n-1}\mathcal{O}_{W_{i}}(-j {\mathfrak{d}}_{i}))$$ Since $R_{i}$ is reduced, $S_{i}$ is a normal surface. There exists a natural birational morphism $S_{i}\to S_{i-1}$. Set $S'=S_{0}$, $\theta=\theta_{0}$, ${\mathfrak{d}}={\mathfrak{d}}_{0}$ and $f'={\varphi}\circ \theta $. Then we have a commutative diagram: $$\xymatrix{
{\widetilde{S}}=S_N \ar[d]^{\widetilde{\theta}} \ar[r] \ar@(ur,ul)[rrrr]^{\rho} & S_{N-1} \ar[d]^{\theta_{N-1}} \ar[r] & \cdots \ar[r] & S_0=S^{\prime} \ar[d]^{\theta} & S \ar[ddl]^f\\
{\widetilde{W}}=W_N \ar[r]^{\psi_N} \ar[drrr]^{\widetilde{\varphi}} & W_{N-1} \ar[r]^{\psi_{N-1}} & \cdots \ar[r]^{\psi_1} & W_0=W \ar[d]^{\varphi} \\
& & & B}$$ The well-known formulae for cyclic coverings give us $$\begin{aligned}
& K_{\tilde{f}}^2=n(K_{\tilde{{\varphi}}}^2+2(n-1)\tilde{{\mathfrak{d}}}.K_{\tilde{{\varphi}}}+(n-1)^2 \tilde{{\mathfrak{d}}}^2),
\label{(1.1)}\\
& \chi_{\tilde{f}}=n\chi_{\tilde{{\varphi}}}+\frac{1}{2}\sum_{j=0}^{n-1}j\tilde{{\mathfrak{d}}}(j\tilde{{\mathfrak{d}}}+K_{\tilde{{\varphi}}}),
\label{(1.2)}\end{aligned}$$ (see e.g., [@Eno]). From Lemma $\ref{lem1.2}$ and a simple calculation, we get $$\begin{aligned}
\tilde{{\mathfrak{d}}}^2={\mathfrak{d}}^2-\sum_{i=1}^{N}\biggl[ \frac{m_{i}}{n} \biggr]^2,
\label{(1.3)} \\
\tilde{{\mathfrak{d}}}.K_{\tilde{{\varphi}}}={\mathfrak{d}}.K_{{\varphi}}+\sum_{i=1}^{N}\biggl[ \frac{m_{i}}{n} \biggr]
\label{(1.4)}\end{aligned}$$ and $$K_{\tilde{{\varphi}}}^2=K_{{\varphi}}^2-N.
\label{(1.5)}$$
\[def1.1\] Let $\Gamma_p$ and $F_p$ respectively denote fibers of ${\varphi}:W\to B$ and $f:S\to B$ over a point $p\in B$. For any fixed $p\in B$, we consider all singular points $($including infinitely near ones$)$ of $R$ on $\Gamma_{p}$. For any positive integer $k$, we let $\alpha_{k}(F_{p})$ be the number of singular points of multiplicity either $kn$ or $kn+1$ among them, and call it the $k$-th [*singularity index*]{} of $F_{p}$. We put $\alpha_{k}:=\sum_{p\in B}\alpha_{k}(F_{p})$ and call it the $k$-th singularity index of the fibration. We also put $\alpha_{0}:=(K_{\tilde{{\varphi}}}+{\widetilde{R}}){\widetilde{R}}$ and call it the ramification index of $\tilde{{\varphi}}|_{{\widetilde{R}}}:{\widetilde{R}}\to B$.
By a simple calculation, we get
$$\begin{aligned}
& N=\sum_{k\geq1}\alpha_{k},
\label{(1.6)} \\
& \sum_{i=1}^{N} \biggl[ \frac{m_{i}}{n} \biggr]=\sum_{k\geq1}k \alpha_{k},
\label{(1.7)} \\
& \sum_{i=1}^{N} \biggl[ \frac{m_{i}}{n} \biggr]^2=\sum_{k\geq1}k^2 \alpha_{k}
\label{(1.8)}\end{aligned}$$
and $$\alpha_{0}=(K_{{\varphi}}+R)R-\sum_{k\geq1}nk(nk-1)\alpha_{k}.
\label{(1.9)}$$
Substituting (\[(1.3)\]) through (\[(1.8)\]) into (\[(1.1)\]) and (\[(1.2)\]), one gets $$K_{\tilde{f}}^2=n(K_{{\varphi}}^2+2(n-1){\mathfrak{d}}.K_{{\varphi}}+(n-1)^2 {\mathfrak{d}}^2)-n\sum_{k\geq1}((n-1)k-1)^2\alpha_{k}$$ and $$\chi_{\tilde{f}}=n\chi_{\tilde{{\varphi}}}+\frac{n(n-1)}{4}{\mathfrak{d}}.K_{{\varphi}}+\frac{n(n-1)(2n-1)}{12} {\mathfrak{d}}^2-\frac{n(n-1)}{12}\sum_{k\geq1}((2n-1)k^2-3k)\alpha_{k}.$$ Since $K_{f}^2\geq K_{\tilde{f}}^2$, $\chi_{\tilde{f}}=\chi_{f}$ and $\chi_{\tilde{{\varphi}}}=\chi_{{\varphi}}$, we obtain $$K_{f}^2\geq n(K_{{\varphi}}^2+2(n-1){\mathfrak{d}}.K_{{\varphi}}+(n-1)^2 {\mathfrak{d}}^2)-n\sum_{k\geq1}((n-1)k-1)^2\alpha_{k}
\label{(1.10)}$$ and $$\chi_{f}=n\chi_{{\varphi}}+\frac{n(n-1)}{4}{\mathfrak{d}}.K_{{\varphi}}+\frac{n(n-1)(2n-1)}{12} {\mathfrak{d}}^2\\
-\frac{n(n-1)}{12}\sum_{k\geq1}((2n-1)k^2-3k)\alpha_{k}.
\label{(1.11)}$$
We treat the cases $h=0$ and $h>0$ separately.
\[prop1.2\] Let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,0,n)$ and let $\alpha_{i}$ $(i\geq 0 )$ be the singularity index in Definition $\ref{def1.1}$. Then, $$\begin{aligned}
(r-1)K_{f}^2 & \geq \frac{(r-2)n-r}{n} (n-1)\alpha_{0}\label{(1.12)} \\
& + \sum_{k\geq 1}^{nk\leq \frac{r}{2}} \biggl( \frac{n^2-1}{n} nk(r-1-(nk-1))-(r-1)n \biggr) \alpha_{k},
\nonumber\end{aligned}$$ $$\begin{aligned}
(r-1)\chi_{f}= \frac{(2r-3)n-r}{12n} (n-1)\alpha_{0} + \sum_{k\geq 1}^{nk\leq \frac{r}{2}} \biggl( \frac{n^2-1}{12n}nk(r-1-(nk-1)) \biggr) \alpha_{k}.
\label{(1.13)}\end{aligned}$$
Note that if ${\alpha}_{k}>0$, then $nk\leq \frac{r}{2}$ from Corollary \[lem1.4’\].
We find that $R \equiv -\frac{r}{2}K_{{\varphi}}+M_0\Gamma$ for some $M_0 \in \frac{1}{2} {\mathbb{Z}}$, where the symbol $\equiv$ means the numerical equivalence, since ${\varphi}:W \to B$ is a $\mathbb{P}^1$-bundle and we have $K_{W}.\Gamma = -2$ and ${\widetilde{R}}.\widetilde{{\Gamma}}=R.{\Gamma}= r$. Hence we get $$\begin{aligned}
R.K_{{\varphi}}=-2M_0,
\label{(1.14)}\end{aligned}$$ $$\begin{aligned}
R^2=2rM_0.
\label{(1.15)}\end{aligned}$$ Therefore we have $$(K_{{\varphi}}+R).R=2(r-1)M_0.$$ From this equality and (\[(1.9)\]), we get $$\begin{aligned}
2(r-1)M_0=\alpha_{0}+\sum_{k\geq 1}^{nk\leq \frac{r}{2}}nk(nk-1)\alpha_{k}.
\label{(1.16)}\end{aligned}$$ On the other hand, substituting (\[(1.14)\]) and (\[(1.15)\]) for (\[(1.10)\]), we get $$K_{f}^2\geq \frac{(r-2)n-r}{n}2(n-1)M_0 -n \sum_{k\geq 1}^{nk\leq \frac{r}{2}} ((n-1)k-1)^2\alpha_{k}.$$ Multiplying this by $r-1$ and substituting $(\ref{(1.16)})$ for it, we get (\[(1.12)\]). Similarly one can show (\[(1.13)\]).
When $h>0$, we have the following:
\[prop1.3\] Let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,h,n)$ such that $h\geq1$ and $\alpha_{i}$ $(i\geq 0 )$ the singularity index in Definition $\ref{def1.1}$. Put $$t:=2(g-1)-(h-1)(n+1),$$ $$T:=\left\{
\begin{array}{lcr}
2(g-1)K_{{\varphi}}.R, & \text{if }h=1, \\
-\frac{\bigl((g-1-n(h-1))K_{{\varphi}}-(n-1)(h-1)R\bigr)^2}{(n-1)(h-1)}, &\text{if }h\geq2.
\end{array}
\right.$$ Then $t>0$ and $T\geq 0$. Furthermore, $$\begin{aligned}
\numberwithin{equation}{section}
tK_{f}^2\geq tx'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+ty'T+tz'{\alpha}_{0}+\sum_{k\geq1}a_{k}{\alpha}_{k}
\label{1.17}\end{aligned}$$ and $$\begin{aligned}
\numberwithin{equation}{section}
t\chi_{f}=nt\chi_{{\varphi}}+ t\bar{x}'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+t\bar{y}'T+t\bar{z}'{\alpha}_{0}+\sum_{k\geq1}\bar{a}_{k}{\alpha}_{k}
\label{1.18}\end{aligned}$$ hold, where $$\begin{aligned}
x'&=\frac{(g-1)(n-1)\bigl((g-1)(n+1)-2n(h-1)\bigr)}{nt},
\quad\bar{x}'=\frac{(n-1)(n+1)(g-1-n(h-1))^2}{12nt},\\
y'&=\frac{(n-1)\bigl(2-\frac{n-1}{n}\bigr)}{t},
\quad\bar{y}'=\frac{(n-1)(n+1)}{12nt},\\
z'&=\frac{2(n-1)^2(g-1)}{nt},
\quad\bar{z}'=\frac{(n-1)\bigl(2(2n-1)(g-1)-n(n+1)(h-1)\bigr)}{12nt},\end{aligned}$$ $$\begin{aligned}
a_{k}=12\bar{a_{k}}-nt,\quad
\bar{a}_{k}=\frac{k}{12}(n^{2}-1)\bigl(2(g-1)+n(h-1)((n-1)k-2)\bigr).\end{aligned}$$ In $(\ref{1.17})$ and $(\ref{1.18})$, the quantity $\frac{K_{{\varphi}}^2}{(n-1)(h-1)}$ is understand to be zero, when $h=1$.
We get $t>0$ from $r\geq 0$, $n\geq 2$ and $g\geq 2$. We shall show that $T\geq0$. If $h=1$, by the canonical bundle formula, we have $$K_{{\varphi}}\equiv\chi(\mathcal{O}_{W}){\Gamma}+ \sum_{i=1}^{l}\bigl(1-\frac{1}{k_{i}}\bigr){\Gamma}$$ where $\{ k_{i}\mid i=1, \dots,l \}$ denotes the set of multiplicities of all multiple fibers of ${\varphi}$, $k_{i}\geq 2$. Hence we get $$\begin{aligned}
\numberwithin{equation}{section}
K_{{\varphi}}.R \geq \chi(\mathcal{O}_{W}){\Gamma}.R=\frac{2(g-1)}{n-1}\chi(\mathcal{O}_{W})
\label{1.19}\end{aligned}$$ Since $W$ is an elliptic surface, we have $\chi(\mathcal{O}_{W})\geq0$ and, hence, $T\geq0$. If $h\geq2$, we consider the intersection matrix $$\left(
\begin{array}{ccc}
K_{{\varphi}}^2 & K_{{\varphi}}.{\mathfrak{d}}& K_{{\varphi}}.{\Gamma}\\
K_{{\varphi}}.{\mathfrak{d}}& {\mathfrak{d}}^2 & {\mathfrak{d}}.{\Gamma}\\
K_{{\varphi}}.{\Gamma}& {\mathfrak{d}}.{\Gamma}& 0
\end{array}
\right)$$ for $\{K_{{\varphi}},\;{\mathfrak{d}},\;{\Gamma}\}$. Since we have $K_{{\varphi}}^2\geq0$ by Arakelov’s theorem, it is not negative definite. Hence its determinant is non negative by the Hodge index theorem, and we get $$\begin{aligned}
\numberwithin{equation}{section}
2(K_{{\varphi}}.{\mathfrak{d}})({\mathfrak{d}}.{\Gamma})(K_{{\varphi}}.{\Gamma})-{\mathfrak{d}}^2(K_{{\varphi}}.{\Gamma})^2-({\mathfrak{d}}.{\Gamma})^2K_{{\varphi}}^2\geq 0.
\label{1.20}\end{aligned}$$ Since $$\begin{aligned}
{\mathfrak{d}}.{\Gamma}&=\frac{r}{n}=\frac{2(g-1-n(h-1))}{n(n-1)},\;\;
K_{{\varphi}}.{\Gamma}=2(h-1),\end{aligned}$$ the inequality $(\ref{1.20})$ is equivalent to $$2\bigl(g-1-n(h-1)\bigr)K_{{\varphi}}.{\mathfrak{d}}-n(n-1)(h-1){\mathfrak{d}}^2
\geq \frac{1}{n(n-1)(h-1)}\bigl(g-1-n(h-1)\bigr)^2K_{{\varphi}}^2.$$ So we get $$0\geq \bigl((g-1-n(h-1))K_{{\varphi}}-(n-1)(h-1)R\bigr)^2$$ and, hence, $T \geq 0$.
Now, by a direct calculation, one has $$n\bigl(K_{{\varphi}}^2+\frac{2(n-1)}{n}K_{{\varphi}}.R+(\frac{n-1}{n})^2R^2\bigr)
= x'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+y'T+z'(K_{{\varphi}}+R)R$$ and $$n\chi_{{\varphi}}+\frac{n-1}{4}K_{{\varphi}}.R+\frac{(n-1)(2n-1)}{12n}R^2=
n\chi_{{\varphi}}+\bar{x}'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+\bar{y}'T+\bar{z}'(K_{{\varphi}}+R)R.$$ Hence we obtain from $(\ref{(1.10)})$ and $(\ref{(1.11)})$ that $$\begin{aligned}
\numberwithin{equation}{section}
K_{f}^2\geq x'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+y'T+z'(K_{{\varphi}}+R)R-n\sum_{k\geq1}\bigl((n-1)k-1\bigr)^2{\alpha}_{k}
\label{1.21}\end{aligned}$$ and $$\begin{aligned}
\numberwithin{equation}{section}
\chi_{f}=n\chi_{{\varphi}}+\bar{x}'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+\bar{y}'T+\bar{z}'(K_{{\varphi}}+R)R-\frac{n(n-1)}{12}\sum_{k\geq1}\bigl((2n-1)k^2-3k\bigr){\alpha}_{k}.
\label{1.22}\end{aligned}$$ From ($\ref{(1.9)}$) and $(\ref{1.21})$, we get $$tK_{f}^2\geq tx'\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+ty'T+tz'{\alpha}_{0}+
\sum_{k\geq1}\bigl(2(n-1)^2(g-1)k(nk-1)-nt((n-1)k-1)^2\bigr){\alpha}_{k}$$ Since one sees $a_{k}=2(n-1)^2(g-1)k(nk-1)-nt((n-1)k-1)^2$, we obtain $(\ref{1.17})$. Similarly, we obtain $(\ref{1.18}).$
Slope inequality for irregular cyclic covering fibrations.
==========================================================
The purpose of this section is to show the slope inequalities for irregular cyclic covering fibrations of type $(g,h,n)$, $n\geq 3$. We begin in a somewhat more general setting.
\[def2.1\] Let $\tilde{\theta}:\tilde{S} \to \tilde{W} $ be a finite Galois cover $($not necessarily primitive cyclic$)$ between smooth projective varieties with Galois group $G$. Let $\alpha:{\widetilde{S}}\to {\mathrm{Alb}}({\widetilde{S}})$ be the Albanese map. For any ${\tilde{\sigma}}\in G$, we denote by $\alpha({\tilde{\sigma}}):{\mathrm{Alb}}({\widetilde{S}})\to {\mathrm{Alb}}({\widetilde{S}})$ the morphism induced from ${\tilde{\sigma}}:{\widetilde{S}}\to {\widetilde{S}}$ by the universality of the Albanese map. We put $${\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}}):={\mathrm{Im}}\{\alpha({\tilde{\sigma}})-1:{\mathrm{Alb}}({\widetilde{S}})\to {\mathrm{Alb}}({\widetilde{S}})\}$$ and let $$\alpha_{{\tilde{\sigma}}}:{\widetilde{S}}\to {\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}})$$ be the morphism defined by $\alpha_{{\tilde{\sigma}}}:=(\alpha({\tilde{\sigma}})-1) \circ \alpha$.
The following is due to Makoto Enokizono.
\[prop2.2\] Suppose that $G$ is a cyclic group generated by ${\tilde{\sigma}}$ in the above situation. If $q_{\widetilde{\theta}}:=q(\widetilde{S})-q(\widetilde{W})>0$, then the following hold.
$(1)$ $\dim {\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}})=q_{\widetilde{\theta}}$.
$(2)$ If ${\mathrm{Fix}}(G):=\{x\in{\widetilde{S}}\mid {\tilde{\sigma}}(x)=x \}\neq \emptyset$, then it is contracted by $\alpha_{{\tilde{\sigma}}}$ to the identity element $0 \in {\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}})$.
$(3)$ If $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve, then the geometric genus of $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$ is not less than $q_{\tilde{\theta}}$.
Firstly, we show (1). By the construction of $\alpha({\tilde{\sigma}})-1:{\mathrm{Alb}}({\widetilde{S}})\to {\mathrm{Alb}}({\widetilde{S}})$, we get the following commutative diagram: $$\nonumber
\vcenter{ \xymatrix{
H^0({\mathrm{Alb}}({\widetilde{S}}),\Omega_{{\mathrm{Alb}}({\widetilde{S}})}^1) \ar[r]^{({\alpha}({\tilde{\sigma}})-1)^{\ast}} \ar[d]_{{\alpha}^{\ast}}
& H^0({\mathrm{Alb}}({\widetilde{S}}),\Omega_{{\mathrm{Alb}}({\widetilde{S}})}^1) \ar[d]^{{\alpha}^{\ast}} \\
H^0({\widetilde{S}},\Omega_{{\widetilde{S}}}^1) \ar[r]_{{\tilde{\sigma}}^{\ast}-1}
& H^0({\widetilde{S}},\Omega_{{\widetilde{S}}}^1).} }$$ Since $\alpha^*$ is an isomorphism, we have $\dim {\mathrm{Ker}}(\alpha({\tilde{\sigma}})-1)^{\ast} = \dim {\mathrm{Ker}}({\tilde{\sigma}}^{\ast}-1)$. Since $G$ is a cyclic group generated by ${\tilde{\sigma}}$, we see that ${\mathrm{Ker}}({\tilde{\sigma}}^{\ast}-1)$ coincides with the $G$-invariant part $H^0({\widetilde{S}},\Omega^1_{{\widetilde{S}}})^{G}$ of ${H}^0({\widetilde{S}},\Omega^1_{{\widetilde{S}}})$. On the other hand, since $\tilde{\theta}^{\ast}:\mathrm{H}^0({\widetilde{W}},\Omega^1_{{\widetilde{W}}})\to \mathrm{H}^0({\widetilde{S}},\Omega^1_{{\widetilde{S}}})^{G}$ is an isomorphism, we have $\dim({\mathrm{Ker}}(\alpha({\tilde{\sigma}})-1)^{\ast} )= q({\widetilde{W}})$ and, hence, $$\dim({\mathrm{Im}}(\alpha({\tilde{\sigma}})-1)^{\ast}) = q({\widetilde{S}})-q({\widetilde{W}}).$$ It follows that ${\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}})$ is of dimension $q_{\widetilde{\theta}}$.
Secondly, we show (2). We take a point $x_{0}$ in ${\mathrm{Fix}}(G)$ as the base point of the Albanese map $\alpha:{\widetilde{S}}\to {\mathrm{Alb}}({\widetilde{S}}).$ Let $x \in {\mathrm{Fix}}(G)$. Note that we have $$\alpha_{{\tilde{\sigma}}}(x)=(\alpha({\tilde{\sigma}})-1)(\alpha(x))=\alpha({\tilde{\sigma}})(\alpha(x))-\alpha(x)$$ and that $\alpha({\tilde{\sigma}})(\alpha(x))-\alpha(x)$ is the function given by $\omega \mapsto \int_{x_{0}}^{x}{\tilde{\sigma}}^ {\ast}\omega - \int_{x_{0}}^{x}\omega $ for $\omega \in \mathrm{H}^0({\widetilde{S}},\Omega^1_{{\widetilde{S}}})$ modulo periods. Since $x$ and $x_{0} $ are both in ${\mathrm{Fix}}(G)$, we find $$\int_{x_{0}}^{x}{\tilde{\sigma}}^ {\ast}\omega - \int_{x_{0}}^{x}\omega=\int_{{\tilde{\sigma}}(x_{0})}^{{\tilde{\sigma}}(x)}\omega - \int_{x_{0}}^{x}\omega=0.$$ Hence we get $\alpha_{{\tilde{\sigma}}}(x)=\alpha({\tilde{\sigma}})(\alpha(x))-\alpha(x)=0$. Since $x$ can be taken arbitrarily in ${\mathrm{Fix}}(G)$, we get (2).
When $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve, one can check easily that the geometric genus of $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$ is not less than $q_{\tilde{\theta}}$, by (1) and the universality of the Abel-Jacobi map for the normalization of $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$. Hence (3).
The slope inequality in the case of $h=0$
-----------------------------------------
We consider the primitive cyclic covering fibration $f:S \to B$ of type $(g,0,n)$ with $q_{f}>0$. Since ${\varphi}:W \to B$ is a ruled surface, we have $q(W)=b$ and it follows that $q_{\tilde{\theta}}=q_{f}$. We apply Proposition \[prop2.2\] to the cyclic covering $\tilde{\theta}:{\widetilde{S}}\to {\widetilde{W}}$ to find that its ramification divisor ${\mathrm{Fix}}(G)$ is contracted to a point by ${\alpha}_{{\tilde{\sigma}}}: {\widetilde{S}}\to {\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}})$, where ${\tilde{\sigma}}$ denotes a generator of the Galois group $G:=\mathrm{Gal}({\widetilde{S}}/{\widetilde{W}})$. So if $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$ is a surface (resp. a curve), from Mumford’s theorem (resp. Zariski’s Lemma), the intersection form is negative definite (resp. negative semi-definite) on ${\mathrm{Fix}}(G)$, and, in particular, we get $${\mathrm{Fix}}(G)^2 < 0\quad(\textrm{resp.}\leq0).$$ Hence, in either case, we have $$\begin{aligned}
{\widetilde{R}}^2 \leq 0,
\label{(2.1)}\end{aligned}$$ since $\tilde{\theta}^*{\widetilde{R}}=n{\mathrm{Fix}}(G)$.
Here, we remark the following.
\[lem\_added\] Let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,0,n)$. If it is not locally trivial and $q_f>0$, then $r\geq 2n$.
We assume that $r<2n$ and show that this leads us to a contradiction.
Recall that $r$ is a multiple of $n$. If $r<2n$, then $r\leq n$ and $R$ has to be smooth by Lemmas \[lem1.2\] and \[lem1.4\]. On the other hand, since $q_f>0$, we already know from (\[(2.1)\]) that the self-intersection number of any irreducible component $C$ of $R$ is non-positive.
Let $C_0$ be the minimal section with $C_0^2=-e$ and $\Gamma$ a fiber of $\varphi:W\to B$. Note that we can choose the normalized vector bundle of rank $2$ associated with $W$ so that there is no effective divisor numerically equivalent to $C_0-c\Gamma$ for any positive integer $c$. Put $C\equiv aC_0+b\Gamma$ with two integers $a$, $b$. We have $a\geq 0$. If $a=0$, then we have $b=1$, that is, $C$ is a single fiber by its irreducibility. So we may assume that $a$ is positive.
We have $C^2=a(2b-ae)\leq 0$. Hence $2b\leq ae$. Furthermore, since $C$ is irreducible and $a>0$, we have $(K_\varphi+C)C\geq 0$ by the Hurwitz formula applied to the normalization of $C$. Since $K_\varphi\equiv -2C_0-e\Gamma$, we have $$0 \leq (K_\varphi+C)C =(a-1)(2b-ae)\leq 0,$$ by $2b\leq ae$ and $a\geq 1$. Hence we get $(K_\varphi+C)C=0$ and either $a=1$ or $2b=ae$. In particular, as the first equality shows, $C$ is smooth and $\varphi|_C:C\to B$ is unramified. Furthermore, we get $C^2=0$ when $a\geq 2$ by $2b=ae$. In this case, we also have $b\leq 0$, because $0\leq CC_0=b-ae=-b$. If $a=1$ and $2b<e$, then $b\geq 0$ and it follows from $CC_0=b-e<-b\leq 0$ that we have $C=C_0$ by the irreducibility of $C$. We remark here that we have $(a_1C_0+b_1\Gamma)(a_2C_0+b_2\Gamma)=0$ when $a_i>0$ and $2b_i=a_ie$ for $i=1,2$.
In summary, the only possibilities left for smooth $R$ are (i) $R$ consists of several fibers (including the case $R=0$), (ii) $R$ is the minimal section with $R^2<0$, and (iii) $R$ consists of several smooth curves with self-intersection numbers $0$ which are unramified over $B$ (via $\varphi$). If (i) or (ii) is the case, then we have either $g=0$ or $r=1$, either of which is absurd. If (iii) is the case, then $f:S\to B$ is a locally trivial fibration, which again contradicts our assumption.
From ${\widetilde{R}}^2 = R^2 - \sum_{k \geq 1} n^2k^2 \alpha_{k}$, (\[(1.15)\]) and (\[(2.1)\]), we get $$2rM_0 \leq \sum_{k \geq 1}^{nk\leq \frac{r}{2}} n^2k^2 \alpha_{k}.$$ Hence from this and (\[(1.16)\]), we get $$\begin{aligned}
\alpha_{0} \leq \sum_{k \geq 1}^{nk\leq \frac{r}{2}} \frac{nk}{r}(r-1-(nk-1))\alpha_{k}.
\label{(2.2)}\end{aligned}$$
\[prop2.3\] Let $f:S \to B$ be a primitive cyclic covering fibration of type $(g,0,n)$ which is not locally trivial and $q_{f}>0$. Then $$\lambda_{f} \geq \lambda_{g,n}^1:= 8- \frac{8(g+n-1)}{(n-1)(2g-(n-1)(n-2))}\biggl(=8-\frac{4r}{(n-1)(r-n)}\biggr).$$
For $\lambda \in \mathbb{R}$, we put $$A(\lambda):= \frac{n-1}{n}\biggl((r-2)n-r-\lambda\frac{(2r-3)n-r}{12}\biggr).$$ From Proposition $\ref{prop1.2}$, we get $$(r-1)(K_{f}^2-\lambda_{g,n}^1\chi_{f})\geq A(\lambda_{g,n}^1)\alpha_{0}
+ \sum_{k\geq1}^{nk\leq \frac{r}{2}}a_{k}\alpha_{k}-\sum_{k\geq1}^{nk\leq \frac{r}{2}}\lambda_{g,n}^1\bar{a}_{k}\alpha_{k}$$ where $$a_{k}:=(n^2-1)k(r-1-(nk-1))-(r-1)n,$$ $$\bar{a}_{k}:=\frac{n^2-1}{12}k(r-1-(nk-1)).$$ We can check that $A(\lambda_{g,n}^1)\leq 0$ as follows. Since $r\geq 2n$ by Lemma \[lem\_added\], a calculation shows that the inequality $$A(\lambda_{g,n}^1)= \frac{n-1}{n}\biggl((r-2)n-r-\bigl(8-\frac{4r}{(n-1)(r-n)}\bigr)\frac{(2r-3)n-r}{12}\biggr) \leq0$$ is equivalent to $$(r-n)(-(n-1)^2+2)+2n^2-3n-r\leq 0,$$ and we can check easily its validity. Therefore $A(\lambda_{g,n}^1) \leq 0$. Hence from $(\ref{(2.2)})$, we get $$\begin{aligned}
(r-1)(K_{f}^2-\lambda_{g,n}^1\chi_{f})\geq \sum_{k \geq1}^{nk\leq\frac{r}{2}}\biggl(\frac{(n-1)(r-1)}{4r}(8-\lambda_{g,n}^1)nk(r-nk)-(r-1)n\biggr)\alpha_{k}.
\label{2.2'}\end{aligned}$$ For any integer $k$ satisfying $\frac{r}{2n} \geq k \geq1$, we have $nk(r-nk)\geq n(r-n)$. Since we have $$\begin{aligned}
\frac{(n-1)(r-1)}{4r}(8-\lambda_{g,n}^1)n(r-n)-(r-1)n=0,\end{aligned}$$ the coefficient of ${\alpha}_{k}$ in the right hand side of $(\ref{2.2'})$ is not negative. Therefore, we get $K_{f}^2-\lambda_{g,n}^1\chi_{f}\geq 0$ as desired.
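The step “a calculation shows that the inequality … is equivalent to …” and the subsequent sign claim can also be verified mechanically. The sketch below (sympy assumed; an aid for the reader, not part of the original proof) exhibits the exact identity behind the equivalence together with the factorization that makes the sign evident for $n\geq3$ and $r\geq 2n$.

```python
# Check of the equivalence used for A(lambda^1_{g,n}) in the proof above; sympy assumed.
import sympy as sp

n, r = sp.symbols('n r', positive=True)

lam1 = 8 - 4*r/((n - 1)*(r - n))                                  # lambda^1_{g,n}
A = (n - 1)/n*((r - 2)*n - r - lam1*((2*r - 3)*n - r)/12)         # A(lambda^1_{g,n})
bracket = (r - n)*(-(n - 1)**2 + 2) + 2*n**2 - 3*n - r            # displayed expression

# Exact identity: A(lambda^1) = r/(3n(r-n)) * bracket, so the two inequalities are equivalent.
print(sp.simplify(A - r*bracket/(3*n*(r - n))))   # prints 0

# The bracket factors as n(n-2)(n+2-r), which is <= 0 once n >= 3 and r >= 2n > n+2.
print(sp.factor(bracket))
```

Since $r/(3n(r-n))>0$, the inequality $A(\lambda_{g,n}^1)\leq 0$ is indeed equivalent to the displayed one, and the factorization settles its validity.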
The slope inequality in the case of $h\geq 1$.
----------------------------------------------
Before showing the slope inequality when $h\geq1$ and $n\geq 3$, we study the upper bound of ${\alpha}_{0}$.
Recall that we decomposed $\psi$ into a succession of blowing-ups $\psi_{i}$ as $$\psi:{\widetilde{W}}=W_{N} \overset{\psi_{N}}{\to} W_{N-1} \to \cdots \to W_1
\overset{\psi_{1}}{\to}W_{0} = W.$$ We define the order of a blowing-up $\psi'$ appearing in $\psi$ as follows. If the center of $\psi'$ is a point on the branch locus of multiplicity $m'$, we put $${\mathrm{ord}}(\psi'):=\biggl[ \frac{m'}{n} \biggr].$$ Moreover we introduce a partial order on the blowing-ups $\psi'$ and $\psi''$ appearing in $\psi$, $$\psi' \geq \psi''
\overset{\mathrm{def}}{\Longleftrightarrow}
{\mathrm{ord}}(\psi')\geq {\mathrm{ord}}(\psi'').$$
\[lem2.4\] Assume that $n\geq 3$. Let $x_{j}$ $( \in R_{j} \subset W_{j})$ be a singular point infinitely near to $x_{i} \in R_{i}$. Then the multiplicities satisfy $m_{j} \leq m_{i}$.
Though this can be found in [@Eno], Lemma 3.7, when $n=0$, we shall give a proof for the convenience of readers.
Let $x_{i+1}$ be the singular point of $R_{i+1}$ infinitely near to $x_{i} \in R_{i}$. If $m_{i} \in n{\mathbb{Z}}$, then $R_{i+1}$ coincides with $\widehat{R_{i}}$, the proper transform of $R_{i}$ by $\psi_{i+1}$, by Lemma $\ref{lem1.2}$. Hence $m_{i+1}\leq m_{i}$ in this case. If $m_{i} \in n{\mathbb{Z}}+1$, then $R_{i+1}=\widehat{R_{i}}+E_{i+1}$. Hence we get $m_{i+1}\leq m_{i}+1 \in n{\mathbb{Z}}+2$. From Lemma $\ref{lem1.2}$ and the assumption $n\geq 3$, we get $m_{i+1}\leq m_{i}$.
From Lemma \[lem2.4\], we can reorder those blowing-ups appearing in $\psi$ so that $\psi_{i}\geq\psi_{j}$ holds whenever $i<j$. We put $$M:=\mathrm{max}\{{\mathrm{ord}}(\psi') \mid \text{$\psi'$ is a blowing up in $\psi$} \}.$$ Then we can decompose $\psi$ as $$\psi:{\widetilde{W}}=\widehat{W}_{M} \overset{\hat{\psi}_{M}}{\to} \widehat{W}_{M-1}
\quad \cdots \quad\overset{\hat{\psi}_{1}}{\to} {\widehat{W}}_{0} = W$$ in such a way that ${\mathrm{ord}}(\psi')=M+1-i$ holds for any $\psi'$ appearing in $\hat{\psi}_{i}$.
\[lem2.5\] Let $\psi'$ be a blowing-up appearing in $\hat{\psi}_{i}$ and $\widetilde{D}$ the proper inverse image of the exceptional curve of $\psi'$ on ${\widetilde{S}}$. Then the geometric genus of $\widetilde{D}$ satisfies $$g(\widetilde{D})\leq \frac{(n-1)(n(M-i)+n-2)}{2}.$$
Let $m'$ be the multiplicity of the singular point blown up by $\psi'$, and $\widetilde{E}$ the proper transform of the exceptional curve of $\psi'$ on ${\widetilde{W}}$.
When $m'\in n{\mathbb{Z}}+1$, since $\widetilde{E}$ is contained in ${\widetilde{R}}$, $\widetilde{D}$ is a smooth rational curve.
Assume that $m'\in n{\mathbb{Z}}$. From $m'=n(M+1-i)$, the intersection number of the exceptional curve of $\psi'$ and the branch locus is $n(M+1-i)$. Hence the intersection number of their proper transforms on ${\widetilde{W}}$ is at most $n(M+1-i)$. On the other hand, we consider the composite $\pi:\widetilde{D'} \to \widetilde{D} \overset{\tilde{\theta}|_{\widetilde{D}}}{\to} \widetilde{E}$, where $\widetilde{D'} \to \widetilde{D}$ is the normalization of $\widetilde{D}$, and let $B_{\pi}$ be the branch locus of $\pi$. From the Hurwitz formula for $\pi$ and Lemma $\ref{lem3.2}$, we get $$2g(\widetilde{D'})-2+2n \leq (n-1)\sharp B_{\pi}.$$ Since $\tilde{\theta}|_{\tilde{D}}$ is totally ramified, $$\begin{aligned}
\sharp B_{\pi} \leq \sharp B_{\tilde{\theta}|_{\tilde{D}}} \leq \widetilde{E}.{\widetilde{R}}\leq n(M+1-i).\end{aligned}$$ Therefore we get $$g(\widetilde{D})=g(\widetilde{D'}) \leq \frac{(n-1)(n(M-i)+n-2)}{2},$$ which is what we want.
\[prop2.6\] Let $f:S \to B$ be a primitive cyclic covering fibration of type $(g,h,n)$ such that $q_{\tilde{\theta}}>0$, $h\geq1$, $n\geq 3$ and let $\alpha_{i}$ $(i\geq 0 )$ be the singularity index in Definition $\ref{def1.1}$. Then, $$\begin{aligned}
\nonumber
&2\bigl(g-1-n(h-1)\bigr){\alpha}_{0}\\
&\leq\bigl(g-1-n(h-1)\bigr)^2 \frac{K_{{\varphi}}^2}{(n-1)(h-1)}+T+
\sum_{k\geq1}\frac{12n}{(n-1)(n+1)}\bar{a}_{k}{\alpha}_{k},
\label{2.4}\end{aligned}$$ where $\bar{a}_{k}$ is defined in Proposition $\ref{prop1.3}$. If the image ${\alpha}_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve and $\nu(q_{\tilde{\theta}})\geq 1$, where $$\nu(x):=\biggl[ \frac{2(x-1)}{n(n-1)}-\frac{n-2}{n}\biggr],$$ then $$\begin{aligned}
\nonumber
&2\bigl(g-1-n(h-1)\bigr)\bigl({\alpha}_{0}+\sum_{k=1}^{\nu(q_{\tilde{\theta}})}nk(nk-1){\alpha}_{k}\bigr) \\
&\leq\bigl(g-1-n(h-1)\bigr)^2 \frac{K_{{\varphi}}^2}{(n-1)(h-1)}+T+
\sum_{k\geq\nu(q_{\tilde{\theta}})+1}\frac{12n}{(n-1)(n+1)}\bar{a_{k}}{\alpha}_{k}.
\label{2.5}\end{aligned}$$ In $(\ref{2.4})$ and $(\ref{2.5})$, the quantity $\frac{K_{{\varphi}}^2}{(n-1)(h-1)}$ is understood to be zero when $h=1$.
Firstly assume that ${\alpha}_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve of geometric genus $g'$. In this case, by Proposition \[prop2.2\], we have $g' \geq q_{\tilde{\theta}}$ and see that any curve of geometric genus less than $g'$ on $\widetilde{S}$ is contracted by ${\alpha}_{{\tilde{\sigma}}}$. Hence, we know from Lemma $\ref{lem2.5}$ that for any $1\leq i \leq M$ satisfying $$\frac{(n-1)(n(M-i)+n-2)}{2}\leq g'-1,$$ the proper transform of the exceptional curve of $\hat{\psi}_{i}$ to ${\widetilde{S}}$ is contracted by ${\alpha}_{{\tilde{\sigma}}}$. Then, since $q_{\tilde{\theta}}\leq g'$, for any $1\leq i\leq M$ satisfying $$\frac{(n-1)(n(M-i)+n-2)}{2}\leq q_{\tilde{\theta}}-1,$$ the same holds true. So we conclude that the total inverse image of ${\widehat{R}}_{M-\nu(q_{\tilde{\theta}})}$ in ${\widetilde{S}}$ is contracted by ${\alpha}_{{\tilde{\sigma}}}$, where ${\widehat{R}}_{M-\nu(q_{\tilde{\theta}})}\subset {\widehat{W}}_{M-\nu(q_{\tilde{\theta}})}$ is the image of ${\widetilde{R}}$. Therefore, the total inverse image of ${\widehat{R}}_{M-\nu(q_{\tilde{\theta}})}$ forms a negative semi-definite configuration. In particular, we have $$\begin{aligned}
{\widehat{R}}_{M-\nu(q_{\tilde{\theta}})}^2 \leq 0.
\label{2.6}\end{aligned}$$ By the construction, we have $$\begin{aligned}
{\widehat{R}}_{M-\nu(q_{\tilde{\theta}})}^2&=
R^2-\sum_{k>\nu(q_{\tilde{\theta}})}n^2k^2{\alpha}_{k}\\
&=\hat{x}\frac{K_{{\varphi}}^2}{(n-1)(h-1)}+\hat{y}T+\hat{z}(K_{{\varphi}}+R)R
-\sum_{k>\nu(q_{\tilde{\theta}})}n^2k^2{\alpha}_{k},\end{aligned}$$ where $$\hat{x}=-\frac{\bigl(g-1-n(h-1)\bigr)^2}{t},
\;\hat{y}=-\frac{1}{t},
\;\hat{z}=\frac{\bigl(g-1-n(h-1)\bigr)}{t}.$$ Hence from $(\ref{(1.9)})$, $(\ref{2.6})$ and the above equality, we get $$t\hat{z}\bigl({\alpha}_{0}+\sum_{k=1}^{\nu(q_{\tilde{\theta}})}nk(nk-1){\alpha}_{k} \bigr)
\leq -t\hat{x}\frac{K_{{\varphi}}^2}{(n-1)(h-1)}-t\hat{y}T+
\sum_{k > \nu(q_{\tilde{\theta}})}t\bigl( n^2k^2-nk(nk-1)\hat{z}\bigr){\alpha}_{k}.$$ This is nothing more than $(\ref{2.5})$.
Even when ${\alpha}_{{\tilde{\sigma}}}({\widetilde{S}})$ is not a curve, we have ${\widetilde{R}}^2\leq0$ by Proposition $\ref{prop2.2}$. Using this instead of (\[2.6\]), we get $(\ref{2.4})$ by a similar argument as above.
\[thm2.7\] Let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,h,n)$ which is not locally trivial and such that $q_{\tilde{\theta}}$ and $h$ are both positive, $n\geq 3$. Put $$F(g,h,l)=(g-1)^2-2n(g-1)\bigl((h+1)(n-1)(l+1)-1 \bigr)-\bigl((l+1)(n-1)^2-1\bigr)^2 n^2(h^2-1).$$
[(i)]{} If $F(g,h,0)\geq0$, then $$\lambda_{f}\geq\lambda_{g,h,n}:=8-\frac{2(n+1)\bigl(g-1-n(h-1)\bigr)}{3\bar{a}_{1}}.$$
[(ii)]{} Assume that ${\alpha}_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve and $\nu(q_{\tilde{\theta}})\geq1$. If $F(g,h,\nu(q_{\tilde{\theta}}))\geq0$, then $$\lambda_{f}\geq\lambda_{g,h,n,q_{\tilde{\theta}}}:=8-\frac{2(n+1)\bigl(g-1-n(h-1)\bigr)}{3\bar{a}_{\nu(q_{\tilde{\theta}})+1}}.$$
Here we restrict ourselves to the case that ${\alpha}_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve and show (ii) only, since (i) can be shown similarly. From (\[1.17\]) and (\[1.18\]), we obtain $$\begin{aligned}
\nonumber
t(K_{f}^2-\lambda_{g,h,n,q_{\tilde{\theta}}} \chi_{f})& \geq t(x'-\lambda_{g,h,n,q_{\tilde{\theta}}}\bar{x}')\frac{K_{{\varphi}}^2}{(n-1)(h-1)}
+t(y'-\lambda_{g,h,n,q_{\tilde{\theta}}}\bar{y}')T \\
& +t(z'-\lambda_{g,h,n,q_{\tilde{\theta}}}\bar{z}'){\alpha}_{0}
+\sum_{k\geq1}(a_{k}-\lambda_{g,h,n,q_{\tilde{\theta}}}\bar{a}_{k}){\alpha}_{k}
-n\lambda_{g,h,n,q_{\tilde{\theta}}} t\chi_{{\varphi}}
\label{2.7}\end{aligned}$$ To apply $(\ref{2.5})$ to the above inequality, we have to check that $z'-\lambda_{g,h,n,q_{\tilde{\theta}}} \bar{z}'\leq0$ in advance. Since $$\begin{aligned}
\label{2.7''}
\lambda_{g,h,n,q_{\tilde{\theta}}}
&\geq8-\frac{2(n+1)\bigl(g-1-n(h-1)\bigr)}{3\bar{a}_{1}}\\
\nonumber
&=8-\frac{8\bigl(g-1-n(h-1)\bigr)}{(n-1)\bigl( 2(g-1)+n(h-1)(n-3)\bigr)},\end{aligned}$$ it is sufficient to see that $$8-\frac{8\bigl(g-1-n(h-1)\bigr)}{(n-1)\bigl( 2(g-1)+n(h-1)(n-3)\bigr)}
\geq\frac{24(n-1)(g-1)}{2(2n-1)(g-1)-n(n+1)(h-1)},$$ which is equivalent to $$\begin{aligned}
(g-1-n(h-1))\bigl(16(g-1)n(n-2)+8n(n+1)(h-1)((n-1)(n-3)+1)\bigr)\geq 0,\end{aligned}$$ the validity of which can be checked directly. Therefore we get $z'-\lambda_{g,h,n,q_{\tilde{\theta}}}\bar{z}'\leq0$. Applying $(\ref{2.5})$ to $(\ref{2.7})$, we obtain $$\begin{aligned}
&t(K_{f}^2-\lambda_{g,h,n,q_{\tilde{\theta}}}\chi_{f}) \\
&\geq\frac{n-1}{12}t\biggl(12(g-1)-
\frac{3}{2}\lambda_{g,h,n,q_{\tilde{\theta}}}\bigl(g-1-n(h-1)\bigr)\biggr)
\frac{K_{{\varphi}}^2}{(n-1)(h-1)}-n\lambda_{g,h,n,q_{\tilde{\theta}}}t\chi_{{\varphi}}\\
&+\frac{(n-1)(8-\lambda_{g,h,n,q_{\tilde{\theta}}})}{8\bigl(g-1-n(h-1)\bigr)}tT\\
&+\frac{nt}{12}\sum_{k=1}^{\nu(q_{\tilde{\theta}})}
\big((n-1)k((2n-1)k-3)\lambda_{g,h,n,q_{\tilde{\theta}}}-12((n-1)k-1)^2\bigr){\alpha}_{k}\\
&+t\sum_{k>\nu(q_{\tilde{\theta}})}
\biggl( \frac{(8-\lambda_{g,h,n,q_{\tilde{\theta}}})3n}{2(n+1)\bigl(g-1-n(h-1)\bigr)}
\bar{a}_{k}-n\biggr){\alpha}_{k}.\end{aligned}$$ We will show that $\big((n-1)k((2n-1)k-3)\lambda_{g,h,n,q_{\tilde{\theta}}}-12((n-1)k-1)^2\bigr)
\geq0$ for $1\leq k \leq \nu(q_{\tilde{\theta}}).$ Note that $$\begin{aligned}
&(n-1)k((2n-1)k-3)\lambda_{g,h,n,q_{\tilde{\theta}}}-12((n-1)k-1)^2\\
=&\frac{1}{n^2}\biggl(nk\bigl( (n-1)(nk-1)(\lambda_{g,h,n,q_{\tilde{\theta}}} (2n-1)-12(n-1))+
(n^2-1)(12-\lambda_{g,h,n,q_{\tilde{\theta}}}) \bigr)-12n^2\biggr).\end{aligned}$$ Firstly we will show that $\lambda_{g,h,n,q_{\tilde{\theta}}}
\geq \frac{12(n-1)}{2n-1}$. From $(\ref{2.7''})$, it is sufficient to check that $$8-\frac{8\bigl(g-1-n(h-1)\bigr)}{(n-1)\bigl( 2(g-1)+n(h-1)(n-3)\bigr)}
\geq \frac{12(n-1)}{2n-1}.$$ A calculation shows that it is equivalent to $$8n(n-2)(g-1)+(4(n^2-1)(n-3)+8(2n-1))n(h-1)\geq 0,$$ which holds true clearly. So we have shown $\lambda_{g,h,n,q_{\tilde{\theta}}}
\geq \frac{12(n-1)}{2n-1}$ and it follows that $$(n-1)k((2n-1)k-3)\lambda_{g,h,n,q_{\tilde{\theta}}}-12((n-1)k-1)^2$$ is increasing in $k$. Evaluating at $k=1$, we get $$\begin{aligned}
&(n-1)k((2n-1)k-3)\lambda_{g,h,n,q_{\tilde{\theta}}}-12((n-1)k-1)^2 \\
&\geq2(n-2)\bigl((n-1)\lambda_{g,h,n,q_{\tilde{\theta}}}-6(n-2)\bigr)\\
&\geq0,\end{aligned}$$ by $\lambda_{g,h,n,q_{\tilde{\theta}}}\geq \frac{12(n-1)}{2n-1}$. Since $$\begin{aligned}
\frac{(8-\lambda_{g,h,n,q_{\tilde{\theta}}})3n}{2(n+1)\bigl(g-1-n(h-1)\bigr)}\bar{a}_{k}-n
\geq
\frac{(8-\lambda_{g,h,n,q_{\tilde{\theta}}})3n}{2(n+1)\bigl(g-1-n(h-1)\bigr)}
\bar{a}_{\nu(q_{\tilde{\theta}})+1}-n=0
\label{2.10}\end{aligned}$$ holds for any $k\geq\nu(q_{\tilde{\theta}})+1$, we obtain $$\begin{aligned}
\nonumber
& K_{f}^2-\lambda_{g,h,n,q_{\tilde{\theta}}}\chi_{f} \\
\geq & \;\frac{n-1}{12}\biggl(12(g-1)-
\frac{3}{2}\lambda_{g,h,n,q_{\tilde{\theta}}}\bigl(g-1-n(h-1)\bigr)\biggr)
\frac{K_{{\varphi}}^2}{(n-1)(h-1)} \label{2.12} \\
&-n\lambda_{g,h,n,q_{\tilde{\theta}}}\chi_{{\varphi}}
+\frac{(n-1)(8-\lambda_{g,h,n,q_{\tilde{\theta}}})}{8\bigl(g-1-n(h-1)\bigr)}T\nonumber\end{aligned}$$
If $F(g,h,\nu(q_{\tilde{\theta}}))\geq0$ and $h=1$, then by $(\ref{1.19})$ we have $T=2(g-1)K_{{\varphi}}.R\geq \frac{4(g-1)^2}{n-1}\chi_{{\varphi}}$. Hence it follows from $(\ref{2.12})$ that $$K_{f}^2-\lambda_{g,1,n,q_{\tilde{\theta}}}\chi_{f}\geq
\frac{n+1}{3\bar{a}_{\nu(q_{\tilde{\theta}})+1}}F(g,1,\nu(q_{\tilde{\theta}}))\chi_{{\varphi}}\geq0$$ which gives us (ii) for $h=1$. If $F(g,h,\nu(q_{\tilde{\theta}}))\geq 0$ and $h\geq 2$, then we can use Xiao’s slope inequality $K_{{\varphi}}^2 \geq \frac{4(h-1)}{h}\chi_{{\varphi}}$ and $T\geq 0$, to get $$K_{f}^2-\lambda_{g,h,n,q_{\tilde{\theta}}}\chi_{f}
\geq \frac{n+1}{3h\bar{a}_{\nu(q_{\tilde{\theta}})+1}}
F(g,h,\nu(q_{\tilde{\theta}}))\chi_{{\varphi}}
\geq0$$ from $(\ref{2.12})$. Hence we have shown (ii) also for $h\geq 2$.
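The two elementary equivalences invoked in the proof above (the step “A calculation shows that the inequality … is equivalent to …” and the analogous step for $\lambda_{g,h,n,q_{\tilde{\theta}}}\geq\frac{12(n-1)}{2n-1}$) reduce to polynomial identities once the denominators, which are positive in the range considered, are cleared. A small sympy sketch (an aid for the reader, not part of the original proof):

```python
# Checks of the two "equivalent to" steps in the proof above; sympy assumed available.
import sympy as sp

n, g, h = sp.symbols('n g h', positive=True)
G  = g - 1 - n*(h - 1)
D1 = (n - 1)*(2*(g - 1) + n*(h - 1)*(n - 3))        # denominator in (2.7'')
D2 = 2*(2*n - 1)*(g - 1) - n*(n + 1)*(h - 1)        # denominator on the right-hand side

# Step 1:  8 - 8G/D1 >= 24(n-1)(g-1)/D2, cleared of the positive denominators D1, D2.
lhs1 = 8*D1*D2 - 8*G*D2 - 24*(n - 1)*(g - 1)*D1
rhs1 = G*(16*(g - 1)*n*(n - 2) + 8*n*(n + 1)*(h - 1)*((n - 1)*(n - 3) + 1))
print(sp.expand(lhs1 - rhs1))   # prints 0

# Step 2:  8 - 8G/D1 >= 12(n-1)/(2n-1), cleared of the positive denominators D1, 2n-1.
lhs2 = 8*(2*n - 1)*D1 - 8*(2*n - 1)*G - 12*(n - 1)*D1
rhs2 = 8*n*(n - 2)*(g - 1) + (4*(n**2 - 1)*(n - 3) + 8*(2*n - 1))*n*(h - 1)
print(sp.expand(lhs2 - rhs2))   # prints 0
```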
Special irregular cyclic covering fibrations of ruled surfaces.
===============================================================
Let $f:S\to B$ be a primitive cyclic covering fibration of type $(g,0,n)$ with $q_f>0$ and suppose that it is not locally trivial. Let $\alpha_{{\tilde{\sigma}}}:{\widetilde{S}}\to {\mathrm{Alb}}_{{\tilde{\sigma}}}({\widetilde{S}})$ be the morphism defined as in Definition \[def2.1\] for the generator ${\tilde{\sigma}}$ of the covering transformation group $G$ of $\tilde{\theta}:\widetilde{S}\to \widetilde{W}$. Moreover we assume that there is a component $C$ of ${\mathrm{Fix}}(G)$ such that $C^2=0$. Note then that $\alpha_{{\tilde{\sigma}}}({\widetilde{S}})$ is a curve by Proposition \[prop2.2\].
\[prop3.1\] In the above situation, there are fibrations $\tilde{f'}:{\widetilde{S}}\to B'$ and $\tilde{{\varphi}}':{\widetilde{W}}\to \mathbb{P}^1$, and a morphism $\theta':B' \to \mathbb{P}^1$, where $B'$ is a smooth curve, such that ${\widetilde{R}}$ is $\tilde{{\varphi}'}$-vertical, $q_{f}\leq g(B')$, and they fit into the commutative diagram: $$\nonumber
\vcenter{ \xymatrix{
{\widetilde{S}}\ar[r]^{\tilde{\theta}} \ar[d]_{\tilde{f}'}
& {\widetilde{W}}\ar[d]^{\tilde{{\varphi}}'} \\
B' \ar[r]_{\theta'}
& {\mathbb{P}}^1 . } }$$
We can obtain $\tilde{f'}:{\widetilde{S}}\to B'$ from the Stein factorization of $\alpha_{{\tilde{\sigma}}}:{\widetilde{S}}\to \alpha_{{\tilde{\sigma}}}({\widetilde{S}})$. Hence we have $g(B')\geq q_f$ by Proposition \[prop2.2\], (3). We will show that the automorphism ${\tilde{\sigma}}:{\widetilde{S}}\to {\widetilde{S}}$ induces an automorphism of $B'$. Suppose that there is a fiber $F'$ of $\tilde{f'}$ such that ${\tilde{\sigma}}^{\ast}F'$ has a $\tilde{f}'$-horizontal component. Let $F_{C}'$ be the fiber of $\tilde{f}'$ which contains the curve $C$ with $C^2=0$. Then, from Zariski’s lemma, we see that $F_{C}'=aC$ for some positive integer $a$ and it follows that $F_{C}'={\tilde{\sigma}}^{\ast}F_{C}'$, since $C$ is a component of ${\mathrm{Fix}}(G)$. Hence $$0<({\tilde{\sigma}}^{\ast}F'.F_{C}')=({\tilde{\sigma}}^{\ast}F'.{\tilde{\sigma}}^{\ast}F_{C}')=(F'.F_{C}')=0,$$ a contradiction. Therefore ${\tilde{\sigma}}$ maps fibers to fibers, and descends to an automorphism ${\tilde{\sigma}}_{B'}:B' \to B'$. Furthermore we have the commutative diagram $$\nonumber
\vcenter{ \xymatrix{
{\widetilde{S}}\ar[r]^{\tilde{\theta}} \ar[d]_{\tilde{f}'}
& {\widetilde{W}}\ar[d]^{\tilde{{\varphi}}'} \ar[r]^{\tilde{{\varphi}}}& B\\
B' \ar[r]_{\theta'}
& D' ,& } }$$ where $\theta':B'\to D':=B'/\langle{\tilde{\sigma}}_{B'}\rangle$ denotes the quotient map. In order to complete the proof, it suffices to see that $D'=\mathbb{P}^1$. This can be shown as follows. Any general fiber of $\tilde{{\varphi}}$ is $\tilde{{\varphi}'}$-horizontal by ${\widetilde{R}}.{\widetilde{\Gamma}}>0$. Since a general fiber of $\tilde{{\varphi}}$ is ${\mathbb{P}}^1$, we see that ${\mathbb{P}}^1$ dominates $D'$ and it follows that $D'={\mathbb{P}}^1$.
The contraction $\psi:{\widetilde{W}}\to W$ is composed of several blowing-ups. We decompose it as $\psi=\check{\psi}\circ\bar{\psi}$ as follows. Let $\bar{\psi}:{\widetilde{W}}\to \overline{W}$ be the longest succession of blowing-downs such that we still have the morphism $\bar{{\varphi}'}$ satisfying $\tilde{{\varphi}'}=\bar{{\varphi}'}\circ\bar{\psi}$. Then we have the following commutative diagram. $$\xymatrix{
& {\mathbb{P}}^1 & \\
{\widetilde{W}}\ar[ru]^{\tilde{{\varphi}}'} \ar[r]^{\bar{\psi}} \ar[rd]_{\tilde{{\varphi}}}
& \overline{W} \ar[d]^{\bar{{\varphi}}} \ar[r]^{\check{\psi}} \ar[u]^{\bar{{\varphi}}'}
& W \ar[ld]^{{\varphi}}\\
& B &
}$$ Let $\overline{R}:=\bar{\psi}_{\ast}{\widetilde{R}}$ be the image of ${\widetilde{R}}$ by $\bar{\psi}$.
\[prop3.3\] The morphism $\check{\psi}:\overline{W}\to W$ is not the identity map.
We will prove this by contradiction. Suppose that $\check{\psi}$ is the identity map. As one sees from the proof of Lemma \[lem\_added\], any irreducible curve $D$ on $W$ with $D^2\leq 0$ is smooth, and $\varphi|_D:D\to B$ is an unramified covering when $D^2=0$ and $D$ is not a fiber of $\varphi$. Hence, any irreducible fiber of $\bar{{\varphi}'}:W\to {\mathbb{P}}^1$ has to be smooth. Suppose that there is a fiber of $\bar{{\varphi}'}$ whose reduced scheme is reducible, and take an irreducible component $D_0$. Then $D_0^2<0$ and, from the proof of Lemma \[lem\_added\], we conclude that $D_0$ coincides with the minimal section. Since the minimal section is unique, we cannot have such a reducible fiber. Therefore, a singular fiber of $\bar{\varphi'}$, if any, is a multiple fiber whose support is a smooth irreducible curve. Since $R$ is a reduced divisor with support in fibers of $\bar{\varphi'}$ by Proposition \[prop3.1\], we see that $R$ is smooth and ${\varphi}|_{R}:R\to B$ is unramified. Then $f:S\to B$ is a locally trivial fibration, which contradicts our assumption.
Assume that $\theta':B' \to \mathbb{P}^1$ is branched over $\Delta \subset \mathbb{P}^1$. For any $y \in \Delta $, let $\widetilde{\Gamma_{y}'}=\sum \tilde{n}_{C}C$ be the fiber of $\tilde{{\varphi}'}$ over $y$, and put $${\widetilde{R}}_{all}:= \tilde{{\varphi}'}^{\ast}\Delta,\quad
{\widetilde{R}}_{r}:=\sum_{y\in \Delta,\;C\subset \widetilde{\Gamma_{y}'},\;\tilde{n}_{C}=1}C.$$
In the above situation, ${\widetilde{R}}_{r} \preceq {\widetilde{R}}.$ \[lem3.4\]
We put $$G_{\tilde{f'}}:=\{\tau \in G \mid \tau(\widetilde{F}')=\widetilde{F}' \text{ for any fiber } \widetilde{F}'
\text{ of }\tilde{f'}\}.$$ Since ${\tilde{f'}}\circ \tau = {\tilde{f'}}$ for any $\tau \in G_{{\tilde{f'}}}$, the morphism ${\tilde{f'}}$ factors through the quotient ${\widetilde{S}}/G_{{\tilde{f'}}}$, and $\tilde{\theta}$ induces the morphism $\pi:{\widetilde{S}}/G_{{\tilde{f'}}} \to {\widetilde{W}}$, so that we have the following commutative diagram. $$\begin{aligned}
\nonumber
\xymatrix{
{\widetilde{S}}\ar[rd] \ar@/^18pt/[rrd]^{\tilde{\theta}} \ar@/_18pt/[ddr]_{\tilde{f}'}& & \\
& {\widetilde{S}}/G_{\tilde{f'}} \ar[d] \ar[r]^{\pi}
& {\widetilde{W}}\ar[d]^{\tilde{{\varphi}}'} \\
& B' \ar[r]_{\theta'} & {\mathbb{P}}^1
}\end{aligned}$$ Note that the degree of $\pi$ is equal to that of $\theta'$. We claim that ${\mathrm{Fix}}(G_{{\tilde{f'}}})={\mathrm{Fix}}(G)$. This can be seen as follows. It is clear that ${\mathrm{Fix}}(G_{{\tilde{f'}}})\supset{\mathrm{Fix}}(G)$. If there is a point $x \in {\mathrm{Fix}}(G_{{\tilde{f'}}})\setminus{\mathrm{Fix}}(G)$, then we have ${\tilde{\sigma}}(x)\neq x$ for the generator ${\tilde{\sigma}}$ of $G$. On the other hand, since $G_{{\tilde{f'}}}$ is a subgroup of $G$ of order $n/{\mathrm{deg}}\,\theta '$, we have $G_{{\tilde{f'}}}=\langle{\tilde{\sigma}}^{{\mathrm{deg}}\,\theta '}\rangle$. Hence the stabilizer of $x$ in $G$ contains the non-trivial subgroup $G_{{\tilde{f'}}}$, although ${\tilde{\sigma}}(x)\neq x$, so that $x$ is a ramification point of $\tilde{\theta}$ which is not fixed by all of $G$. This contradicts the fact that $\tilde{\theta}: {\widetilde{S}}\to {\widetilde{W}}$ is totally ramified. Therefore ${\mathrm{Fix}}(G_{{\tilde{f'}}})={\mathrm{Fix}}(G)$. Hence ${\widetilde{S}}/G_{{\tilde{f'}}}$ is smooth. Let $R_{\pi}$ be the branch locus of $\pi$. Since $\tilde{\theta}$ is totally ramified, one can check easily that $R_{\pi}={\widetilde{R}}$. Hence it is sufficient to prove that ${\widetilde{R}}_{r}\preceq R_{\pi}$. Let $C$ be any component of ${\widetilde{R}}_{r}$. We can take analytic local coordinates $(U_{{\mathbb{P}}^1},x)$ on $\mathbb{P}^1$, $(U_{{\widetilde{W}}},y,z)$ on ${\widetilde{W}}$ and $(U_{B'},w)$ on $B'$ such that $\tilde{{\varphi}'}(C)$ is defined by $x=0$, $C$ is defined by $y=0$, $\theta'^{\ast}x=w^{{\mathrm{deg}}\,\theta'}$ and $\tilde{{\varphi}}'^{\ast}x=y$. $U_{B'}\times_{\mathbb{P}^1}U_{{\widetilde{W}}}$ is defined by $y=w^{{\mathrm{deg}}\,\theta'}$ in $U_{B'}\times U_{{\widetilde{W}}}$. So $ U_{B'}\times_{\mathbb{P}^1}U_{{\widetilde{W}}} \to U_{{\widetilde{W}}}$ is ramified over $C\cap U_{{\widetilde{W}}}$ and $U_{B'}\times_{\mathbb{P}^1}U_{{\widetilde{W}}} $ is smooth. Hence the natural morphism ${\widetilde{S}}/G_{{\tilde{f'}}} \to B'\times_{\mathbb{P}^1}{\widetilde{W}}$ is an isomorphism around $U_{B'}\times_{\mathbb{P}^1}U_{{\widetilde{W}}} $. Therefore we get $C\preceq R_{\pi}={\widetilde{R}}$.
We write $\check{\psi}=\check{\psi}_{1}\circ\;\cdots\;\circ\check{\psi}_{u}$, where $\check{\psi}_{i}:\check{W_{i}}\to\check{W}_{i-1}$ is a blowing-up at $\check{x}_{i-1}\in \check{W}_{i-1}$ with exceptional curve $\check{\mathcal{E}_{i}}\subset{\check{W}}_{i}$, ${\check{W}}_{0}=W$ and ${\check{W}}_{u}=\overline{W}$. Let $\check{R}_{i}$ be the image of $\overline{R}$ in ${\check{W}}_{i}$, and let $\check{x_{i}}$ be a singular point of $\check{R_{i}}$ of multiplicity $\check{m_{i}}$.
\[lemma3.5\] Assume that $n \geq 3$. For $1\leq i \leq u-1$, we have $\check{m}_{i} \geq \frac{2q_{f}}{n-1}+2$. Moreover, if equality holds for some $\check{m}_{i}$, then $ \frac{2q_{f}}{n-1}+2 \in n{\mathbb{Z}}$ and $\deg \theta'=n$.
Let $\mathcal{E}\subset\overline{W}$ be any $(-1)$-curve contracted by $\check{\psi}$. Note that $\bar{{\varphi}'}|_{\mathcal{E}}:\mathcal{E} \to \mathbb{P}^1$ is surjective and $\mathcal{E}.\overline{R} \in n{\mathbb{Z}}$. We will show that $\mathcal{E}.\overline{R} \geq \frac{2q_{f}}{n-1}+2$. If $n\geq \frac{2q_f}{n-1}+2$, then the assertion is clear from Lemma \[lem1.2\], (1). Hence we may assume that $n<\frac{2q_f}{n-1}+2$ in the following. In particular, we have $q_f\geq n-1$.
By Lemma $\ref{lem3.4}$, it is enough to show that $\mathcal{E}.\overline{R}_{r} \geq \frac{2q_{f}}{n-1}+2$, where $\overline{R}_{r}$ is the image of ${\widetilde{R}}_{r}$. Let $\mathcal{R}_{\theta'}$ be the ramification divisor of $\theta':B' \to {\mathbb{P}}^1$. Since $g(B')\geq q_f>0$, we have $\deg \theta'>1$. From the Hurwitz formula, we get $$\begin{aligned}
\deg \mathcal{R}_{\theta'}=2g(B')-2+2\deg \theta'.
\label{(3.4)}\end{aligned}$$ By Lemma $\ref{lem3.2}$ and $(\ref{(3.4)})$, we get $(\deg\theta'-1)\sharp \Delta \geq 2g(B')-2+2\deg \theta'$, that is, $$\begin{aligned}
\sharp \Delta \geq \frac{2}{\deg \theta'-1}g(B')+2.\end{aligned}$$ We put $\overline{R}_{all}:=\bar{{\varphi}}'^{\ast}\Delta$. For any $p\in\mathcal{E}\cap\overline{R}_{all}$, let $r_{p}:=I_{p}(\mathcal{E}.\overline{R}_{all})$ be the local intersection number. Since $\overline{R}_{all}=\bar{{\varphi}}'^{\ast}\Delta$ consists of $\sharp \Delta$ fibers of $\bar{{\varphi}'}$, one has $$\begin{aligned}
\nonumber
\sum_{\mathcal{E}\cap\overline{R}_{all}} r_{p}=(\mathcal{E}.\overline{R}_{all}) & =\deg(\overline{{\varphi}'}|_{\mathcal{E}})\sharp \Delta \\
& \geq (\deg \bar{{\varphi}'}|_{\mathcal{E}})\left(\frac{2}{\deg\theta'-1}g(B')+2\right).
\label{(3.5)}\end{aligned}$$ By definition, $r_{p}\geq2$ for any $p\in(\mathcal{E}\cap\overline{R}_{all})\setminus(\mathcal{E}\cap\overline{R}_{r})$. On the other hand, by the Hurwitz formula for $\bar{{\varphi}'}|_{\mathcal{E}}:\mathcal{E} \to {\mathbb{P}}^1$, one has $$2\deg \bar{{\varphi}'}|_{\mathcal{E}}-2 = \deg \mathcal{R}_{\bar{{\varphi}}'|_{\mathcal{E}}} \geq \sum_{\mathcal{E}\cap\overline{R}_{all}} (r_{p}-1).$$ Hence, $$\begin{aligned}
2\deg \bar{{\varphi}'}|_{\mathcal{E}}-2&\geq\sum_{\mathcal{E}\cap\overline{R}_{all}} (r_{p}-1)
=\sum_{(\mathcal{E}\cap\overline{R}_{all})\setminus(\mathcal{E}\cap\overline{R}_{r})}(r_{p}-1) +\sum_{\mathcal{E}\cap\overline{R}_{r}}(r_{p}-1)\\
&\geq \sum_{(\mathcal{E}\cap\overline{R}_{all})\setminus(\mathcal{E}\cap\overline{R}_{r})}\frac{r_{p}}{2}
+\sum_{\mathcal{E}\cap\overline{R}_{r}}\frac{(r_{p}-1)}{2}
=\sum_{\mathcal{E}\cap\overline{R}_{all}}\frac{r_{p}}{2}-\frac{\sharp(\mathcal{E}\cap\overline{R}_{r})}{2}\\
&\geq (\deg \bar{{\varphi}'}|_{\mathcal{E}})\left(\frac{1}{\deg \theta'-1}g(B')+1\right)-\frac{\sharp(\mathcal{E}\cap\overline{R}_{r})}{2},\end{aligned}$$ where the last inequality comes from $(\ref{(3.5)})$.
Note that we have $g(B')\geq q_f\geq n-1\geq \deg \theta'-1$. Therefore $$\begin{aligned}
(\mathcal{E}.\overline{R})\geq(\mathcal{E}.\overline{R}_{r})\geq \sharp(\mathcal{E}\cap\overline{R}_{r})
& \geq (\deg \bar{{\varphi}'}|_{\mathcal{E}})\left(\frac{2}{\deg \theta'-1}g(B')-2\right)+4\\
&\geq\frac{2}{\deg \theta'-1}g(B')+2\\
&\geq\frac{2}{\deg \theta'-1}q_{f}+2.\end{aligned}$$ For any $\check{x_{i}}$, let $\check{x}_{i+j_{i}}$ be the last infinitely near singular point blown up by $\check{\psi}$, $\mathcal{E}_{i+j_{i}}$ the exceptional curve. Then, from the above argument, we get $$\frac{2}{\deg \theta'-1}q_{f}+2 \leq (\mathcal{E}_{i+j_{i}}.\overline{R})=\check{m}_{i+j_{i}}\leq \check{m_{i}}.$$ Moreover if the equality signs hold everywhere, then we get $$\check{m_{i}}= \frac{2}{\deg \theta'-1}q_{f}+2= (\mathcal{E}_{i+j_{i}}.\overline{R})\in n{\mathbb{Z}}.$$ Since $\deg \theta'\leq n$, we are done.
\[xiaoconj\] Let $f:S\to B$ be a locally non-trivial primitive cyclic covering fibration of type $(g,0,n)$ with $q_{f}>0$ and $n\geq 3$. Assume that there is a component C of ${\mathrm{Fix}}({\tilde{\sigma}})$ such that $C^2=0$. Then, $$\begin{aligned}
q_{f}\leq \frac{g-n+1}{2}.
\end{aligned}$$
By Proposition $\ref{prop3.3}$, there is a singular point $x$ of $R$ which is blown up by $\check{\psi}:\overline{W} \to W$. If $m:=\mathrm{mult}_{x}R \leq \frac{r}{2}$, then $$\begin{aligned}
\frac{2q_{f}}{n-1}+2\leq m \leq \frac{r}{2}=\frac{g}{n-1}+1\end{aligned}$$ from Lemma $\ref{lemma3.5}$. So we get $q_{f}\leq (g-n+1)/2$. If $m>\frac{r}{2}$, then we have $m\in n{\mathbb{Z}}+1$ from Lemma $\ref{lem1.4}$. Let $\check{x}$ be the last singular point, infinitely near to $x$, blown up by $\check{\psi}$ and $\check{m}$ its multiplicity. Then it holds that $\check{m} \in n{\mathbb{Z}}$. Indeed, if $\check{m} \in n{\mathbb{Z}}+1$, the exceptional curve arising from $\check{x}$ is contained in the branch locus. This contradicts the definition of $\check{\psi}$. So we get $\check{m}+1 \leq m$. Therefore we get $$\begin{aligned}
\frac{2q_{f}}{n-1}+3\leq \check{m}+1 \leq m\leq \frac{r}{2}+1=\frac{g}{n-1}+2\end{aligned}$$ from Lemma $\ref{lem1.4}$ and Lemma $\ref{lemma3.5}$. It follows $q_f\leq (g-n+1)/2$.
Therefore, the Modified Xiao’s Conjecture is true in this particular case.
Now, we turn our attention to the slope. Let $\check{\alpha}_{k}$ be the number of the singular points of $R$ with multiplicity $nk$ or $nk+1$ appearing in $\check{\psi}$. Then $\check{\alpha}_{k}\geq0$ and by Lemma $\ref{lemma3.5}$ one has $$\begin{aligned}
\check{\alpha}_{k}=0
\label{(3.6)}\end{aligned}$$ for any $k$ satisfying $nk+1 \leq \frac{2q_{f}}{n-1}+2$. We put $\bar{\alpha}_{k}:={\alpha}_{k}-{\check{\alpha}}_{k}$; then, by $(\ref{(3.6)})$, $$\begin{aligned}
{\bar{\alpha}}_{k}={\alpha}_{k}
\label{(3.7)}\end{aligned}$$ for any $k$ satisfying $nk+1 \leq \frac{2q_{f}}{n-1}+2$. By the construction of $\bar{{\varphi}'}$, $\overline{R}$ is contained in fibers of $\bar{{\varphi}'}$, hence we get $$\overline{R}^2\leq0.$$ On the other hand, we have $$\overline{R}^2=R^2-\sum_{\frac{2q_{f}}{n-1}+2\leq nk}^{nk\leq\frac{r}{2}}n^2 k^2 {\check{\alpha}}_{k}$$ As $R^2=2rM_{0}$ by ($\ref{(1.15)}$), we get $$\begin{aligned}
2rM_{0}\leq\sum_{\frac{2q_{f}}{n-1}+2\leq nk}^{nk\leq\frac{r}{2}}n^2 k^2 {\check{\alpha}}_{k}
\label{(3.8)}\end{aligned}$$ Hence $$\begin{aligned}
{\alpha}_{0}+\sum_{k\geq1}^{nk+1 \leq \frac{2q_{f}}{n-1}+2}nk(nk-1){\alpha}_{k}&\leq{\alpha}_{0}+ \sum_{k\geq 1}^{nk\leq\frac{r}{2}}nk(nk-1)\bar{\alpha}_{k}\\
&\leq\sum_{\frac{2q_{f}}{n-1}+2\leq nk}^{nk\leq\frac{r}{2}}\frac{nk}{r}(r-1-(nk-1)){\check{\alpha}}_{k}\\
&\leq\sum_{\frac{2q_{f}}{n-1}+2\leq nk}^{nk\leq\frac{r}{2}}\frac{nk}{r}(r-1-(nk-1)){\alpha}_{k},\end{aligned}$$ where the first and the last inequalities above follow immediately from ${\bar{\alpha}}_{k}\geq0$ and $(\ref{(3.7)})$, and the second one follows from ($\ref{(1.16)})$ and ($\ref{(3.8)})$. Hence we have shown:
\[prop3.7\] Under the same assumptions as in Proposition $\ref{prop3.1}$, $${\alpha}_{0}+\sum_{k\geq1}^{nk+1 \leq \frac{2q_{f}}{n-1}+2}nk(nk-1){\alpha}_{k}\leq\sum_{\frac{2q_{f}}{n-1}+2\leq nk}^{nk\leq\frac{r}{2}}\frac{nk}{r}(r-1-(nk-1)){\alpha}_{k}.$$
Using this, we will prove the following:
Let $f:S \to B$ be a locally non-trivial primitive cyclic covering fibration of type $(g,0,n)$ such that there is a component $C \subset {\mathrm{Fix}}(G)$ with $C^2 = 0$. If $q_f>0$ and $n\geq 3$, then $$\begin{aligned}
\lambda_{f}\geq\lambda_{g,n,q_{f}}^{2}:=8-\frac{2n(g+n-1)}{(g-q_{f})(q_{f}+n-1)}
\biggl(=8-\frac{4rn}{(n-1)(2+\frac{2q_{f}}{n-1})(r-(2+\frac{2q_{f}}{n-1}))}\biggr).
\label{(3.9)}\end{aligned}$$ \[thm4.7\]
We first remark that, for two real numbers $x$, $y$ with $x+y\leq r$, we have $x(r-x)\geq y(r-y)$ if and only if $x\geq y$. Since we have $n+(2+2q_f/(n-1))\leq r$ by the proof of Theorem \[xiaoconj\] and $r\geq 2n$, this observation works for $x=n$, $y=2+2q_f/(n-1)$.
\(i) The case of $n\geq 2+\frac{2q_{f}}{n-1}$, i.e., $\frac{(n-2)(n-1)}{2}\geq q_{f}$.
Since $n\geq 2+\frac{2q_{f}}{n-1}$, we get $$n(r-n)\geq\bigl( 2+\frac{2q_{f}}{n-1} \bigr) \bigl( r-(2+\frac{2q_{f}}{n-1}) \bigr).$$ Therefore, $\lambda_{g,n}^1 \geq \lambda_{g,n,q_{f}}^2$, and $(\ref{(3.9)})$ follows from Theorem $\ref{prop2.3}$.
\(ii) The case of $n< 2+\frac{2q_{f}}{n-1}$.
In this case, we have $$\begin{aligned}
\bigl( 2+\frac{2q_{f}}{n-1} \bigr) \bigl( r-(2+\frac{2q_{f}}{n-1}) \bigr)\geq n(r-n)\end{aligned}
$$ and, hence, $\lambda_{g,n}^1 \leq \lambda_{g,n, q_f}^2$. Then we have $A(\lambda_{g,n,q_{f}}^2)\leq0$, since the function $A(\lambda)$ defined in the proof of Proposition \[prop2.3\] is decreasing in $\lambda$ and we have already proved $A(\lambda_{g,n}^1)\leq0$ there.
From Proposition \[prop1.2\], we have $$\begin{aligned}
(r-1)(K_{f}^2-\lambda_{g,n,q_{f}}^2\chi_{f})\geq A(\lambda_{g,n,q_{f}}^2){\alpha}_{0}+
\sum_{k\geq1}^{nk \leq \frac{r}{2}}(a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k})\alpha_{k}
\label{(3.10)}\end{aligned}$$ where $a_{k}$ and $\bar{a_{k}}$ are the same as in the proof of Proposition \[prop2.3\]. Applying Proposition \[prop3.7\] to (\[(3.10)\]), we get $$\begin{aligned}
(r-1)(K_{f}^2-\lambda_{g,n,q_{f}}^2\chi_{f})&\geq\sum_{k\geq 1}^{nk< \frac{2q_f}{n-1}+2}
(-A(\lambda_{g,n,q_{f}}^2)nk(nk-1)+
a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k})\alpha_{k}\\
&+\sum_{2+\frac{2q_{f}}{n-1}\leq nk}^{nk \leq \frac{r}{2}}(A(\lambda_{g,n,q_{f}}^2)\frac{nk}{r}(r-nk)+a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k})\alpha_{k}.\end{aligned}$$ First, we will show $$\begin{aligned}
-A(\lambda_{g,n,q_{f}}^2)nk(nk-1)+
a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k}\geq0
\label{3.12'}\end{aligned}$$ for any positive integer $k$ with $nk+1\leq2+\frac{2q_{f}}{n-1}$. By a simple calculation, we get $$\begin{aligned}
\nonumber
&-A(\lambda_{g,n,q_{f}}^2)nk(nk-1)+
a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k}\\
=&nk\biggl( \frac{(n-1)(r-1)}{12n}(nk-1)(\lambda_{g,n,q_{f}}^2 (2n-1)-12(n-1))\label{(3.12)} \\
&\hspace{4cm} +\frac{(n^2-1)(r-1)}{12n}(12-\lambda_{g,n,q_{f}}^2)\biggr)-(r-1)n. \nonumber\end{aligned}$$ We claim that $\lambda_{g,n,q_{f}}^2\geq\frac{12(n-1)}{2n-1}$. It is equivalent to $$4(n+1)q_{f}(g-n+1-q_f)+(2n-4)g-2n(n-1)(2n-1)\geq 0.$$ From $g-n+1\geq 2q_{f}$ and $q_f\geq \frac{(n-2)(n-1)}{2}+1$, we easily see that it holds true. Since $\lambda_{g,n,q_{f}}^2\geq \frac{12(n-1)}{2n-1}$, the right hand side of (\[(3.12)\]), which is increasing in $k$, is not less than $$\begin{aligned}
&n\biggl( \frac{(n-1)(r-1)}{12n}(n-1)(\lambda_{g,n,q_{f}}^2 (2n-1)-12(n-1))+
\frac{(n^2-1)(r-1)}{12n}(12-\lambda_{g,n,q_{f}}^2) \biggr)-(r-1)n\\
&=\frac{n(n-1)(r-1)}{6}\biggl(\lambda_{g,n,q_{f}}^2(n-2)-6(n-3)\biggr)-(r-1)n\\
&\geq n(r-1)\frac{n(n-2)}{2n-1}\\
&\geq0.\end{aligned}$$ Therefore we get (\[3.12’\]).
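The rewriting “By a simple calculation, we get …” above holds as an identity in $\lambda$ and $k$; it can be checked symbolically as follows (sympy assumed; an addition for the reader, with $a_k$ and $\bar{a}_k$ as in the proof of Proposition \[prop2.3\]).

```python
# Check of the algebraic rewriting (3.12); it holds identically in lam and k.  sympy assumed.
import sympy as sp

n, r, k, lam = sp.symbols('n r k lam', positive=True)

A    = (n - 1)/n*((r - 2)*n - r - lam*((2*r - 3)*n - r)/12)    # A(lambda)
a_k  = (n**2 - 1)*k*(r - 1 - (n*k - 1)) - (r - 1)*n            # a_k
ab_k = (n**2 - 1)/12*k*(r - 1 - (n*k - 1))                     # \bar{a}_k

lhs = -A*n*k*(n*k - 1) + a_k - lam*ab_k
rhs = n*k*((n - 1)*(r - 1)/(12*n)*(n*k - 1)*(lam*(2*n - 1) - 12*(n - 1))
           + (n**2 - 1)*(r - 1)/(12*n)*(12 - lam)) - (r - 1)*n
print(sp.simplify(lhs - rhs))   # prints 0
```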
Secondly, we will show $$\begin{aligned}
A(\lambda_{g,n,q_{f}}^2)\frac{nk}{r}(r-nk)+a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k}\geq0
\label{3.14'}\end{aligned}$$ for any positive integer $k$ satisfying $\frac{r}{2}\geq nk\geq 2+\frac{2q_{f}}{n-1}$. By a simple calculation, we get $$\begin{aligned}
\nonumber
&A(\lambda_{g,n,q_{f}}^2)\frac{nk}{r}(r-nk)+a_{k}-\lambda_{g,n,q_{f}}^2\bar{a}_{k}\\
=&\frac{(n-1)(r-1)}{4r}nk(r-nk)(8-\lambda_{g,n,q_{f}}^2)-(r-1)n
\label{(3.14)}\end{aligned}$$ Since we have $$nk(r-nk)\geq (2+\frac{2q_{f}}{n-1})\bigl( r-(2+\frac{2q_{f}}{n-1}) \bigr)$$ for any positive integer $k$ satisfying $2+\frac{2q_{f}}{n-1}\leq nk \leq \frac{r}{2}$, the right hand side of $(\ref{(3.14)})$ is not less than $$\begin{aligned}
\frac{(n-1)(r-1)}{4r}(2+\frac{2q_{f}}{n-1})\bigl( r-(2+\frac{2q_{f}}{n-1}) \bigr)
(8-\lambda_{g,n,q_{f}}^2)-(r-1)n=0.\end{aligned}$$ In sum, we have shown $K_{f}^2-\lambda_{g,n,q_{f}}^2\chi_{f}\geq0$.
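Two further steps in this proof can also be verified mechanically: the claimed equivalence for $\lambda_{g,n,q_{f}}^2\geq\frac{12(n-1)}{2n-1}$, and the vanishing in the last display together with the agreement of the two expressions for $\lambda_{g,n,q_{f}}^2$ given in the statement of the theorem. A sympy sketch (added for the reader, not part of the original proof), writing $q$ for $q_{f}$ and using $r=\frac{2(g+n-1)}{n-1}$:

```python
# Checks for the proof above; sympy assumed available.
import sympy as sp

n, g, q = sp.symbols('n g q', positive=True)
r = 2*(g + n - 1)/(n - 1)
y = 2 + 2*q/(n - 1)

lam2_a = 8 - 2*n*(g + n - 1)/((g - q)*(q + n - 1))   # lambda^2_{g,n,q_f}, first form
lam2_b = 8 - 4*r*n/((n - 1)*y*(r - y))               # lambda^2_{g,n,q_f}, second form

print(sp.simplify(lam2_a - lam2_b))                                            # prints 0
print(sp.simplify((n - 1)*(r - 1)/(4*r)*y*(r - y)*(8 - lam2_a) - (r - 1)*n))   # prints 0

# lambda^2 >= 12(n-1)/(2n-1), cleared of the positive factor (g-q)(q+n-1)(2n-1),
# versus the displayed polynomial inequality:
lhs = 4*(n + 1)*(g - q)*(q + n - 1) - 2*n*(2*n - 1)*(g + n - 1)
rhs = 4*(n + 1)*q*(g - n + 1 - q) + (2*n - 4)*g - 2*n*(n - 1)*(2*n - 1)
print(sp.expand(lhs - rhs))                                                    # prints 0
```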
An example.
===========
We construct relatively minimal primitive cyclic covering fibrations of type $(g,0,n)$ with relative irregularity $q_{f}$ satisfying $g+n-1=m(q_{f}+n-1)$ for any integer $m \geq 2$. Hence, when $m=2$, this shows that the bound on $q_f$ in Theorem \[xiaoconj\] is sharp. Also, our examples show that the slope bound (\[(3.9)\]) in Theorem \[thm4.7\] is sharp.
Let ${\varphi}:W:={\mathbb{P}}(\mathcal{O}_{{\mathbb{P}}^{1}}\bigoplus \mathcal{O}_{{\mathbb{P}}^{1}}(e))\to B:={\mathbb{P}}^1$ be the Hirzebruch surface of degree $e \geq 0$. Denote by ${\Gamma}$ and $C_{0}$ a fiber of ${\varphi}$ and the section with $C_{0}^2=-e$, respectively. We know that $mC_{0}+b{\Gamma}$ is very ample if and only if $b>me$. So we take $b_{0}$ with $b_{0}>me$.
We take two general members $D,D'$ of $|mC_{0}+b_{0}{\Gamma}|$ which intersect each other transversely. Let $\Lambda$ be the pencil generated by $D$ and $D'$. Then $\Lambda$ defines the rational map ${\varphi}_{\Lambda}:W \dashrightarrow {\mathbb{P}}^1$. Let $\psi$ be a minimal succession of blowing-ups which eliminates the base points of $\Lambda$. We get a relatively minimal fibration $\tilde{{\varphi}'}:{\widetilde{W}}\to{\mathbb{P}}^1$ by putting $\tilde{{\varphi}'}={\varphi}_{\Lambda} \circ \psi$. Denote by ${\widetilde{\Gamma}}'$ a general fiber of $\tilde{{\varphi}}'$ and by $K_{{\widetilde{W}}}$ a canonical divisor of ${\widetilde{W}}$. By a simple calculation, we get $$\begin{aligned}
K_{{\widetilde{W}}}^2=8-x,\;K_{{\widetilde{W}}}.{\widetilde{\Gamma}}'=\frac{m-1}{m}x-2m,\;{\widetilde{\Gamma}}'^2=0,
\label{5.1}\end{aligned}$$ where $x$ is the number of blowing-ups in $\psi$. Note that $x=(mC_{0}+b_{0}{\Gamma})^2$.
Let $\Delta \subset {\mathbb{P}}^1$ be a set of $\frac{2q}{n-1}+2$ general points, where $q$ is an integer satisfying $\frac{2q}{n-1}+2\in n{\mathbb{Z}}$. Then there is a divisor ${\mathfrak{d}}'$ on ${\mathbb{P}}^1$ such that $n{\mathfrak{d}}'\sim\Delta$. Let ${\widetilde{R}}=(\tilde{{\varphi}}')^{\ast}\Delta$ be the sum of the fibers of $\tilde{{\varphi}}'$ over the points of $\Delta$. Since $\Delta$ is general, we can assume that ${\widetilde{R}}$ is both reduced and smooth.
We consider a classical cyclic $n$-covering $$\theta':B'=
\mathrm{Spec}_{{\mathbb{P}}^1}\biggl(\bigoplus_{j=0}^{n-1} \mathcal{O}_{{\mathbb{P}}^1}(-j{\mathfrak{d}}')\biggr)\to{\mathbb{P}}^1 .$$ Since $\Delta$ is general, we can assume that the fiber product ${\widetilde{S}}:=B'\times_{{\mathbb{P}}^1}{\widetilde{W}}$ is smooth. Noting that the morphism $\tilde{\theta}:{\widetilde{S}}\to {\widetilde{W}}$ induced by $\theta'$ is nothing but the natural one $${\widetilde{S}}=\mathrm{Spec}_{{\widetilde{W}}}\biggl(\bigoplus_{j=0}^{n-1}
\mathcal{O}_{{\widetilde{W}}}(-j(\tilde{{\varphi}}')^{\ast}{\mathfrak{d}}')\biggr)\to {\widetilde{W}},$$ one gets a commutative diagram $$\nonumber
\vcenter{ \xymatrix{
B & \\
{\widetilde{S}}\ar[u]^{\tilde{f}} \ar[r]^{\tilde{\theta}} \ar[d]_{\tilde{f}'} & {\widetilde{W}}\ar[lu]_{\tilde{{\varphi}}} \ar[d]^{\tilde{{\varphi}}'} \\
B' \ar[r]_{\theta'} & {\mathbb{P}}^1 } }$$ where $\tilde{f}:=\tilde{{\varphi}} \circ \tilde{\theta}$.
By the construction, we get $q_f=q=g(B')$. From the formulae $$\begin{aligned}
K_{{\widetilde{S}}}^2=n(K_{{\widetilde{W}}}+\frac{n-1}{n}{\widetilde{R}})^2,\quad
\chi(\mathcal{O}_{{\widetilde{S}}})=
n\chi(\mathcal{O}_{{\widetilde{W}}})+\frac{1}{2}\sum_{j=1}^{n-1}\frac{1}{n}j{\widetilde{R}}(\frac{1}{n}j{\widetilde{R}}+K_{{\widetilde{W}}}),\end{aligned}$$ and $(\ref{5.1})$, we get $$\begin{aligned}
K_{{\widetilde{S}}}^2=\left(4\frac{m-1}{m}\left(\frac{q}{n-1}+1\right)(n-1)-n\right)x+8\left(n-m(n-1)\left(\frac{q}{n-1}+1\right)\right),
\label{5.2}\end{aligned}$$ $$\begin{aligned}
\chi(\mathcal{O}_{{\widetilde{S}}})=\frac{m-1}{2m}\left(\frac{q}{n-1}+1\right)(n-1)x+n
-m(n-1)\left(\frac{q}{n-1}+1\right).
\label{5.3}\end{aligned}$$ Let $g$ be the genus of the fibration $\tilde{f}:{\widetilde{S}}\to B$. Then it is easy to see that $$\begin{aligned}
\frac{2g}{n-1}+2=m\left(\frac{2q}{n-1}+2\right).\end{aligned}$$ Hence we get $$\begin{aligned}
&K_{\tilde{f}}^2=K_{{\widetilde{S}}}^2-8(g-1)(g(B)-1)=\left(4\frac{m-1}{m}\left(\frac{q}{n-1}+1\right)(n-1)-n\right)x,\\
&\chi_{\tilde{f}}=\chi(\mathcal{O}_{{\widetilde{S}}})-(g-1)(g(B)-1)=
\frac{m-1}{2m}\left(\frac{q}{n-1}+1\right)(n-1)x.\end{aligned}$$ Therefore we get $$\begin{aligned}
\lambda_{\tilde{f}}=\frac{K_{\tilde{f}}^2}{\chi_{\tilde{f}}}=
8-\frac{2n(g+n-1)}{(g-q_{f})(q_{f}+n-1)}\end{aligned}$$ by $q=q_{f}$.
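The slope value just obtained can be confirmed mechanically from $(\ref{5.2})$, $(\ref{5.3})$ and the relation $g+n-1=m(q+n-1)$; a short sympy sketch (added for the reader, not in the original text):

```python
# Check that the constructed family attains the bound of Theorem [thm4.7]; sympy assumed.
import sympy as sp

n, m, q, x = sp.symbols('n m q x', positive=True)
g = m*(q + n - 1) - n + 1     # from  2g/(n-1) + 2 = m(2q/(n-1) + 2)

# K_f^2 and chi_f as computed above (the constant terms of (5.2), (5.3) cancel against
# 8(g-1) and (g-1), respectively).
Kf2  = (4*(m - 1)/m*(q/(n - 1) + 1)*(n - 1) - n)*x
chif = (m - 1)/(2*m)*(q/(n - 1) + 1)*(n - 1)*x

slope  = Kf2/chif
target = 8 - 2*n*(g + n - 1)/((g - q)*(q + n - 1))
print(sp.simplify(slope - target))   # prints 0
```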
We remark that $\tilde{f}$ is relatively minimal. In fact the singular points of $R$, the image of ${\widetilde{R}}$ in W, are all of multiplicity $\frac{2q_{f}}{n-1}+2\in n{\mathbb{Z}}$ and can be resolved by a single blowing-up. So there is no $\tilde{{\varphi}}$-vertical $(-n)$-curve in ${\widetilde{W}}$. Therefore there is no $\tilde{f}$-vertical $(-1)$-curve in ${\widetilde{S}}$.
[9]{} M. A. Barja, V. González-Alonso, and J. C. Naranjo, Xiao’s conjecture for general fibred surfaces, J. reine angew. Math. **739** (2018), 297–308. M. Enokizono, Slopes of fibered surfaces with a finite cyclic automorphism, Michigan Math. J. **66** (2017), 125–154. X. Lu and K. Zuo, On the slope conjecture of Barja and Stoppino for fibred surfaces, preprint (arXiv:1504.06276v1 \[math.AG\]). X. Lu and K. Zuo, On the slope of hyperelliptic fibrations with positive relative irregularity, Trans. Amer. Math. Soc. **369** (2017), 909–934. G. Xiao, Fibred algebraic surfaces with low slope, Math. Ann. **276** (1987), 449–466.
Department of Mathematics, Graduate School of Science, Osaka University,
1–1 Machikaneyama, Toyonaka, Osaka 560-0043, Japan
e-mail address: [email protected]
---
abstract: 'The $\Omega$-phase of the liquid sodium $\alpha$-$\Omega$ dynamo experiment at NMIMT in cooperation with LANL has been successfully demonstrated to produce a high toroidal field $B_{\phi}$ that is $\simeq 8\times B_r$, where $B_r$ is the radial component of an applied poloidal magnetic field. This enhanced toroidal field is produced by the rotational shear in stable Couette flow within liquid sodium at magnetic Reynolds number $Rm \simeq 120$. The small turbulence in stable Taylor-Couette flow is caused by Ekman flow with an estimated turbulence energy fraction of $(\delta v/v)^2 \sim 10^{-3}$. This high $\Omega$-gain in low turbulence flow contrasts with a smaller $\Omega$-gain in higher turbulence shear flows. This result supports the ansatz that large scale astrophysical magnetic fields can be created by semi-coherent large scale motions in which turbulence plays only a smaller diffusive role that enables magnetic flux linkage.'
author:
- 'Stirling A. Colgate$^1$,$^2$'
- 'Howard Beckley$^2$, (deceased), Jiahe Si$^2$, Joe Martinic$^2$, David Westpfahl$^2$, James Slutz$^2$, Cebastian Westrom$^2$, Brianna Klein$^2$, Paul Schendel$^2$, Cletus Scharle$^2$, Travis McKinney$^2$, Rocky Ginanni$^2$, Ian Bentley$^2$, Timothy Mickey$^2$, Regnar Ferrel$^2$'
- 'Hui Li$^1$ Vladimir Pariev$^1$, John Finn$^3$'
title: |
High Magnetic Shear Gain in a Liquid Sodium Stable Couette Flow Experiment\
A Prelude to an $\alpha - \Omega$ Dynamo
---
Major efforts are spent world-wide on astrophysical phenomena that depend upon magnetic fields, e.g., planetary, solar, and stellar magnetic fields, X-rays, cosmic rays, TeV gamma rays, jets and radio lobes powered by active galactic nuclei (AGNs), and in the interpretation of Faraday rotation maps of galaxy clusters. Yet, so far, there is no universally accepted explanation for the inferred large-scale magnetic fields. A process, called the $\alpha-\Omega$ dynamo [@parker55; @mof78; @krause80], has been proposed which involves two orthogonal conducting fluid motions, shear and helicity. When the two motions are comparable, it is often described as a stretch, twist, and fold or “fast” dynamo, and supposedly can be produced by turbulence alone [@nor92], [@zeldovich83]. The problem is that a turbulent dynamo must create these two orthogonal motions out of the turbulent motions themselves. Fluid turbulence maximizes entropy and the diffusion of the flux of momentum [@tob08], so that a field twisted one way by turbulent eddies at one moment of time may be partially twisted the opposite way at the next, leaving only a net bias, and hence the process is partially diffusive.
One possible way to achieve dynamo action is to use naturally occurring large scale near-stable rotational shear flows (as in AGN accretion disks and in stars) in combination with transverse, transient, rotationally coherent plumes. Such plumes may be driven either by star–disk collisions in AGNs [@par07a] or by large scale (density scale height) convective elements at the base of the convective zone in stars [@cha60; @wil94; @mestel99]. These flows can stretch and wind up an embedded, transverse magnetic flux through a large number of turns (the $\Omega$ effect). In both types of shear flows the turbulence is expected to be relatively small because of the stability imposed by either an angular momentum gradient or an entropy gradient. (Positive work must be done to turn over an eddy in such a gradient.) The advantage of a large $\Omega$-gain is that the corresponding $\alpha$-gain (the helicity deformation necessary to convert a fraction of the toroidal flux back into poloidal flux) can be correspondingly smaller. Parker, the originator of the $\alpha - \Omega$ dynamo concept for astrophysical dynamos, invoked cyclonic motions to produce the helicity [@parker79]. Moffatt [@mof78] countered that a typical atmospheric cyclone makes very many turns before dissipating, therefore making the sign of the resulting poloidal flux incoherent. Moffatt sought motions where the rotation angle was finite, $\sim \pi/4$. Willette [@wil94] and the first author suggested that buoyant plumes were a unique solution to the problem of coherent dynamo helicity. Beckley et al. [@bec03] demonstrated experimentally this property of finite, $\sim \pi/4$, coherent rotation of a driven plume before it dissipated through turbulent diffusion in the background fluid.
Two liquid sodium dynamo experiments have produced positive exponential gain, but the flows were constrained by rigid walls unlike astrophysical flows [@stig01; @gail00]. The rigid walls restrict the turbulent eddy size to the distance from the wall (the log-law of the wall [@lan59]), thereby producing low turbulence. The three recent experiments use the Dudley-James [@dud89] or von Karman flow (counter-rotating flow converging at the mid-plane and driven by two counter-rotating propellers or two vaned turbine impellers, respectively, [@for02; @norn06; @Spen07; @lat01; @pet03; @odi98; @pef00]). Turbulence is induced by the Helmholtz instability at the shearing mid-plane. This combination of coherent motions and unconstrained shear-driven (relatively high level) turbulence resulted in a maximum $\Omega$-gain of only $\sim \times 2$. Recognizing the enhanced resistivity of the mid-plane turbulence, the team at Wisconsin added a mid-plane baffle to reduce the turbulence, following which the $\Omega$-gain increased to $\simeq \times 4$ [@rah10]. The von Karman Sodium 2 (VKS2) experiment in the same geometry produced exponential gain [@mon07]; however, the dynamo action was explained not by turbulence but primarily by the production of helicity by the large coherent vortices produced by the radial rigid vanes of the impeller [@lag08]. (Ferromagnetism added additional gain to this source of helicity.)
The New Mexico $\alpha-\Omega$ dynamo (NMD) experiment is a collaboration between the New Mexico Institute of Mining and Technology and Los Alamos National Laboratory [@col02; @col06]. It is designed to explore this possibility in the laboratory using coherent fluid motions in low-turbulence, high-shear Couette flow in the annular volume between two coaxial cylinders with $R_2/R_1 = 2$ rotating at different angular velocities (see Figure 1), closely analogous to natural fluid motions that occur in astrophysical bodies [@Bra98; @par07a; @par07b]. The results of the first phase ($\Omega$-phase) of this two-phase experiment are presented here.
Fig. 1 also indicates the Ekman flow, a thin fluid layer flowing along the boundaries of the annular volume and including a small radial return flow from inner to outer cylinder through the fluid volume. The Ekman flow produces both a torque and a small but finite level of turbulence [@ji01]. This turbulence adds a small turbulent resistivity to the resistivity of metallic sodium. Five pressure sensors are distributed radially at one end. They measure the pressure distribution, giving information on the actual angular velocity distribution of the rotational flow.
![A schematic drawing of the fluid flow of the experiment shows the inner cylinder of $R_1 = 15.2$ cm rotating at $\Omega_1/2\pi = 68$Hz relative to the outer or confining cylinder of $R_{2} = 30.5$ cm at $\Omega_2 = \Omega_1/4 $. Liquid sodium in the annular space undergoes Couette rotational flow. In addition, a thin layer in contact with the cylinder walls and end walls undergoes Ekman flow, circulating orthogonally around and then through the Couette flow. Pressure sensors in the end wall and the magnetic probe housing are also shown schematically.[]{data-label="fig1"}](f1.pdf){width="2in"}
For the magnetic measurements, a 50 kW AC induction motor (20kW is used for stable Couette flow) drives a gear train with clutches and power take-off that rotates the two cylinders at the fixed ratio of $\Omega_1 = 4 \Omega_2$. The outer cylinder can also be disengaged from the gear train by a clutch, and a DC motor is used to accelerate or brake the outer cylinder independently from the driven inner cylinder. This allows different $\Omega_1 /\Omega_2$ ratios to be explored. The DC motor housing (stator) is mounted on bearings. A torque arm with two force sensors ($+-$) connects the motor stator to ground, so that the torque on the DC motor can be measured separately from the drive of the inner cylinder. This arrangement allows us to measure the torque between the two cylinders due to the Ekman flow along the surface of the end plates which rotate more slowly at $\Omega_2$.
In particular, when the inner cylinder is driven at higher speed by the AC motor, the Ekman fluid torque tries to spin up or accelerate the outer cylinder. Two torques counteract this acceleration: 1) the friction in the bearings that support the rotation of the outer cylinder and 2) the torque on the DC motor when used as a generator, or brake. (The generated power is dissipated in a resistor.) In Fig. 2 (left) the crosses are the measurements of the braking force exerted by the DC motor torque arm; the dashed line is the calibrated bearing torque (measured by disengaging the inner cylinder drive and rotating the outer cylinder with the DC motor alone). The sum of these two torques is equal to the Ekman fluid torque spinning up the outer cylinder.
![(left) The lower crosses are the measured DC generator torques; the dashed line is the measured bearing torque, and the sum of the two is the top solid curve, the torque due to the Ekman flow. (right) Shown are the measured and theoretical pressure distributions. The pressure measurement starts at $R_1$ and extends to $R_2$. The finite pressure at $R_1$, $P_1 \simeq 100$ psi, is due to the centrifugal pressure of rotating fluid in the narrow gap between the inner cylinder rotating at $\Omega_1$ and the end wall rotating at $\Omega_2$ (dot-dash curve). This thin end-gap is filled with rotating fluid, which extends from the shaft seal (where $P=1$ atm) at radius $R_3 \simeq R_1/3$ to $R_1$, the radius at which the Couette flow in the annular volume is initiated. From there, the pressure in the annular volume is extrapolated assuming either $\Omega \propto R^{-2}$ or $\Omega \propto R^{-3/2}$. The measurements fall along the $R^{-3/2}$ pressure distribution curve, indicating less shear than ideal Couette flow.[]{data-label="fig2"}](f2.pdf){width="3.5in"}
The experiment was originally designed on the supposition that the primary torque between the two cylinders would be due to the radial Ekman flow at the end walls and would be small, $\sim 1/10$ of the pipe-flow wall friction stress, $\tau = \rho C_D v^2$. In Fig. 2 (left) the minimum Ekman flow torque occurs at $\Omega_1/\Omega_2 \lesssim 4$, somewhat less than the limit of stable Couette flow. The torque value is $2\times 10^8$ dyne-cm, close to the estimate obtained by taking each Ekman layer thickness [@ji01] as $\Delta \simeq R_1 /Re^{1/2} \simeq 5 \times 10^{-3}$ cm and the average radial flow velocity within the layer as $v_r \simeq (\Omega_1 R_1) /2$. In Fig. 2 (right), the measured pressures are compared to the calculated pressures corresponding to two different angular velocity power laws. The upper (dashed) curve corresponds to ideal, maximum-shear, stable Couette flow, $\Omega \propto r^{-2}$, but the experimental points follow the lower solid curve corresponding to $\Omega \propto r^{-3/2}$. The Ekman flow torque has distorted the angular velocity profile and reduced the shear relative to the ideal Couette flow.
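The numbers in this estimate can be checked quickly. The short R sketch below is our own order-of-magnitude check, not part of the experiment's analysis; it reproduces the Ekman-layer thickness and the mean radial speed from the dimensions and rotation rate given above, taking the Reynolds number $Re \simeq 10^7$ quoted in the next paragraph as an input rather than deriving it from the sodium viscosity.

```r
# Order-of-magnitude check of the Ekman-layer estimates quoted in the text (cgs units).
R1     <- 15.2            # inner cylinder radius [cm]
f1     <- 68              # inner cylinder rotation frequency [Hz]
Omega1 <- 2 * pi * f1     # inner cylinder angular velocity [rad/s]
Re     <- 1e7             # Reynolds number of the annular flow, as quoted in the text

Delta <- R1 / sqrt(Re)    # Ekman layer thickness: ~5e-3 cm
v_r   <- Omega1 * R1 / 2  # average radial speed within the layer [cm/s]
cat(sprintf("Delta ~ %.1e cm,  v_r ~ %.0f cm/s\n", Delta, v_r))
```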
Turbulent flow at such a high Reynolds number, $Re \simeq 10^7$, is still well beyond current simulation capability. The Ekman flow is a flux of fluid of reduced angular momentum deposited at the inner cylinder. The torque reducing this angular momentum is friction with the end walls, which in turn reduces the angular velocity of the Couette flow in the annular volume. Unstable flow, i.e., turbulent friction with the inner cylinder surface, counteracts this torque by speeding the flow back up. The torque in the Couette volume is constant, independent of radius and axial position. This shear stress is maintained constant in two laminar sublayers [@sch60] with $Re \simeq 100$ and in two turbulent boundary layers described by a log-law-of-the-wall solution [@lan59]. When the distance from the walls reaches the eddy scale set by the radial gradient, a transition takes place to a scale-independent eddy-size turbulent torque connecting the two regions at the inner and outer boundaries. The mean velocity distribution in the above analysis gives a reduced shear, $\Omega / \Omega_1 \propto (R_1/R)^{1.64}$, rather than the stable Couette flow profile.
![A schematic drawing of the poloidal magnetic flux produced by two coils (left). Superimposed (right) are the flux lines from the Maxwell calculation [@max80], with the iron shield and steel shaft included. This radial flux crosses the high shear of the Couette flow producing the enhanced toroidal field. The $B_r$ field from the Maxwell calculations agrees with the calibrated probe measurements to $10\%$. []{data-label="fig3"}](f3.pdf){width="3in"}
Fig. 3 shows the magnetic field configuration. The magnetic field Hall detector probe, internal to the liquid sodium at the mid-plane, is the primary diagnostic of the experiment. It consists of six 3-axis Hall-effect magnetic field detectors at the mid-plane in the annular space between the two cylinders, contained in an aerodynamically shaped housing. (The fluid friction drag produced by this housing, primarily Ekman flow, is estimated to be $\sim 0.1$ of the end-wall Ekman torque.) The $\Omega$-gain was then measured using an applied, calibrated $B_r$ magnetic field as a function of the coil currents. Because of the high gain in $B_{\phi}$ and the lack of perfect orthogonality of the Hall detectors, the measured $B_r$ would be expected to be contaminated by a small fraction of the much larger $B_{\phi}$ field.
![ (left) The lower curve shows the applied radial field $B_r \simeq 12$ G and the upper curves show the measured toroidal magnetic field for four over-laid experiments at $\Omega_1 = 68$ Hz. The right panel shows the ratio $B_{\phi}/B_r$.[]{data-label="fig4"}](f4.pdf){width="3.5in"}
![The top trace shows the toroidal field $B_{\phi}\sim 750$G vs. time from an applied radial field of $\sim 250$ G. The $\Omega$-gain is now reduced to $\times 3$. The bottom traces are the AC motor torque and current respectively, each showing a $50\%$ increase due to back reaction. The delay of several seconds in the back reaction torque corresponds to the spin-down time for the Couette flow to reach a new, modified velocity profile.[]{data-label="fig5"}](f5.pdf){width="2in"}
Fig. 4 confirms that the measurements are repeatable by showing four overlaid experiments in which the $\Omega$-gain ratio of $\times 8$ is reproduced, all with a low bias field of $ B_r\simeq 12$ G. The $B_{\phi}$ produced by the Couette flow shear is about $\simeq 8 \times B_r$. Note that the ideal $\Omega$-gain could be as high as $\sim R_m/4\pi \simeq \times 20$ [@par07b]; the reduced shear in the sodium (i.e., not exactly a Couette profile) partially accounts for the reduction in the $\Omega$-gain. We then tested whether an increase in the velocity shear in the Couette volume might increase the $\Omega$-gain ratio, as one might naively expect. The velocity shear is increased by slowing the outer cylinder, using the DC motor as a generator or brake. The ratio $\Omega_1/\Omega_2$ was thereby increased by 20%, but the $\Omega$-gain remained constant. This suggests that the gain expected from the added shear is balanced by the greater flux diffusion due to the increased turbulence of the now unstable Couette flow (see Fig. 2). Roughly 100 electronic samples are recorded and averaged during the several-second recording cycle of each of the four magnetic experiments. The time variation between runs reflects slight changes in the Couette flow relaxation time and hence in the angular velocity distribution. The repeatability among these four runs, as well as several earlier runs in the previous six months, gives us confidence that the conclusion of high $\Omega$-gain is valid.
The expected back-reaction should occur at a larger value of $B_r$, where the dissipation and torque of the enhanced $B_{\phi}$ affect the flow field and the added torque shows up as additional current and torque in the AC drive motor. This effect is shown in Fig. 5, where, in addition, the expected reduction in the $\Omega$-gain ratio, from $\times 8$ to $\sim \times 3$, is observed when the applied poloidal field is $B_r \simeq 250$ G. The back-reaction stress modifies the Couette flow profile so as to reduce the $\Omega$-gain and hence the back-reaction stress, until a new steady state is achieved at reduced $B_{\phi}$. Fig. 5 shows the measured difference in power, $\simeq 10$ kW, due to the back reaction. We estimate that the magnetic field energy density $B_{\phi}^2/8\pi$ is dissipated at a rate $\simeq (B_{\phi}^2/8\pi)\langle\Omega\rangle \times(Vol)$, where the volume of high $B_{\phi}$ is estimated as 1/3 of the length, $Vol \simeq (L/3) (\pi/2) (R_2^2 - R_1^2)$. This gives a power of $\sim 8$ kW, roughly the measured 10 kW.
In conclusion, a large $\Omega$-gain in a low-turbulence shear flow of a conducting fluid has been demonstrated. This is likely the mechanism of the $\Omega$-gain of a coherent $\alpha-\Omega$ astrophysical dynamo. Driven plumes are planned to be used in the $\alpha$ phase of the NMD experiment.
This experiment has been funded by NSF, Univ. of Calif. & LANL, IGPP LANL, and NMIMT.
Beckley, HF., Colgate, SA., Romero, VD., and Ferrel, R., 2003, Astrophys. J., 599, 702
Brandenburg, A. 1998, in “Theory of Black Hole Accretion Disks”, edited by M.A. Abramowicz, G. Bjornsson, & J.E. Pringle, Cambridge University Press
Chandrasekhar, S., 1961, Hydrodynamic and Hydromagnetic Stability. Oxford: Clarendon
Colgate, S.A., 2006, Astron. Nachr. / AN, 327, No. 5/6, 456–460
Colgate, S.A., Pariev, V.I., Beckley, H.F., Ferrel R., Romero V.D., and Weatherall, J.C. 2002, Magnetohydrodynamics, 38, 129.
Dudley, M.L., James, R.W.: 1989, Proc. R. Soc. London, Ser. A 425, 407
Forest, C.B., Bayliss, R.A., Kendrick, R.D., Nornberg, M.D., O’Connell, R., Spence, E.J.: 2002, Magnetohydrodynamics 38, 107
Gailitis, A., Lielausis, O., Platacis, E., et al.: 2001, Phys. Rev. Lett. 84, 4365
Ji, H., Goodman, J. & Kageyama, A., 2001, MNRAS, 325, L1
Krause, F., & Rädler, K.H. 1980, Mean-Field Magnetohydrodynamics and Dynamo Theory. (Oxford: Pergamon Press)
Laguerre, R., Nore, C., Riberiro, A., Guermond, J.L., Plunian, F., 2008, PRL, 101, 104501
Landau, L.D. & Lifshitz, E.M., 1959, Fluid Mechanics. Pergamon Press, London
Lathrop, D.L., Shew,W. L., and Sisan, D.R., 2001, Plasma Phys. & Controlled Fusion 43, A151
Ansoft Maxwell corp, 1980 (http://www.ansoft.com/)
Mestel, L. 1999, Stellar Magnetism. (Oxford: Clarendon)
Moffatt, H.K. 1978, Magnetic Field Generation in Electrically Conducting Fluids. (Cambridge: Cambridge University Press)
Monchaux, R ; Berhanu, M ; Bourgoin, M ; Moulin, M ; Odier, P ; et al., 2007, Phys. Rev. Lett. 98, 044502
Nordlund, A.; Brandenburg, A.; Jennings, R.L.; Rieutord, M.; and others., 1992, APJ, 392, 647.
Nornberg, M.D., Spence, E.J., Kendrick, R.D., Jacobson, C.M., Forest, C.B., 2006, arXiv:physics/0510265v2
Odier,P., Pinton, J.-F. , and Fauve, S., 1998, Phys. Rev. E 58, 7397.
Pariev, V.I., and Colgate, S.A., 2007, APJ, 658: 114-128.
Pariev, V.I., Colgate, S.A., and Finn, J.M., 2007, APJ, 658: 129-160
Parker, E.N. 1955, APJ, 121, 29
Parker, E.N. 1979, Cosmical Magnetic Fields, their Origin and their Activity. (Oxford: Claredon)
Peffley, N.L., Cawthorne, A. B. and Lathrop, D.P., 2000, Phys Rev. E, 61, 5287
Petrelis, F., Bourgoin, M., Marie, L., Burguete, J., Chiffaudel, A., Daviaud, F., Fauve, S., Odier, P., and Pinton, J.-F., 2003, Phys Rev. Lett, 90, 174501
Rahbarnia, K., et al., 2010, Bul. APS, DPP, NP9-73, 218
Schlichting, H., 1960, Boundary-layer Theory. Mc Graw Hill, New York
Spence, E.J., Nornberg, M.D., Jacobson, C.M., Parada, C.A., Taylor, N.Z., Kendrick, R.D., and Forest, C.B., 2007, Phys. Rev. Lett. 98 164503.
Stieglitz, R., Müller, U.: 2001, Physics of Fluids 13, 561
Tobias, S.M., and Cattaneo, F., 2008, PRL 101, 125003
Willette, G. T., Stochastic excitation of the solar oscillations by convection, (Ph.D. 1994), Cal Tech
Zeldovich, Ya.B., Ruzmaikin, A.A., & Sokoloff, D.D. 1983, Magnetic Fields in Astrophysics. (New York: Gordon and Breach Science Publishers)
---
abstract: 'We propose a novel method for variable selection in functional linear concurrent regression. Our research is motivated by a fisheries footprint study where the goal is to identify important time-varying socio-structural drivers influencing patterns of seafood consumption, and hence fisheries footprint, over time, as well as to estimate their dynamic effects. We develop a variable selection method in functional linear concurrent regression extending the classically used scalar on scalar variable selection methods like LASSO, SCAD, and MCP. We show that in functional linear concurrent regression the variable selection problem can be addressed as a group LASSO problem, or its natural extensions, a group SCAD or a group MCP problem. Through simulations, we illustrate that our method, particularly with the group SCAD or group MCP penalty, can pick out the relevant variables with high accuracy and has minuscule false positive and false negative rates even when the data are observed sparsely, are contaminated with noise, and the error process is highly non-stationary. We also demonstrate two real data applications of our method, in studies of dietary calcium absorption and fisheries footprint, in the selection of influential time-varying covariates.'
address:
- '$^{1}$ North Carolina State University (Department of Statistics)'
- '$^{2}$ North Carolina State University (Department of Sociology and Anthropology)'
author:
- 'Rahul Ghosal$^1$, Arnab Maity$^1$, Timothy Clark$^2$ and Stefano B Longo$^2$'
bibliography:
- 'sel.bib'
title: |
Variable Selection in Functional Linear Concurrent\
Regression
---
Introduction {#sec:intro}
============
Function on function regression is an active area of research in functional data analysis, with new statistical methods frequently emerging to address data where both the response variable and the covariates are functions over some continuous index such as time. The functional concurrent regression model is a special case of function on function regression where the predictor variables influence the response variable only through their values at the current time point [@kim2016general]. The commonly used functional linear concurrent regression model assumes a linear relationship between the response and the predictors, where the value of the response at a particular time point is modelled as a linear combination of the covariates at that specific time point, and the coefficients of the functional covariates are univariate smooth functions over time [@Ramsay05functionaldata]. Multiple methods exist in the literature for estimating these regression functions in functional linear concurrent regression and the closely related varying coefficient model [@hastie1993varying], using kernel and local polynomial smoothing [@wu1998asymptotic; @hoover1998nonparametric; @fan1999statistical; @kauermann1999model], polynomial splines [@huang2002varying; @huang2004polynomial], and smoothing splines [@hastie1993varying; @hoover1998nonparametric; @chiang2001smoothing; @eubank2004smoothing], among many others. Similar to classical scalar regression, when a large number of covariates are present, the primary interest might be to select only the set of influential variables and estimate their effects. While significance testing and confidence bands can help in assessing the individual effect of a predictor, they are computationally infeasible when the number of covariates is large. Thus arises the need to perform variable selection in functional linear concurrent regression.
Our research in this article is motivated by a fisheries footprint study where the goal is to identify important time-varying socio-structural and economic drivers influencing fisheries footprint (the Global Footprint Network’s measure of the total marine area required to produce the amount of seafood products a nation consumes) and to estimate their time-varying effects. Although a number of variable selection methods have been developed for scalar on function regression [@gertheiss2013variable; @fan2015functional] and function on scalar regression [@chen2016variable], the literature on variable selection in functional linear concurrent regression is relatively sparse. Recently, [@goldsmith2017variable] developed a variable selection method for the functional linear concurrent model using a variational Bayes approach, with sparsity introduced through a spike and slab prior on the coefficients of the basis expansion of the regression functions. In this article, we propose a variable selection method in functional linear concurrent regression extending the classically used variable selection methods like LASSO [@tibshirani1996regression], SCAD [@fan2001variable], and MCP [@zhang2010nearly].
Our work is inspired by [@gertheiss2013variable], who show that the variable selection problem in the scalar on function regression scenario can be reduced to a group LASSO [@yuan2006model] problem. We show that in functional linear concurrent regression the variable selection problem can be addressed as a group LASSO problem, or its natural extensions, a group SCAD or group MCP problem. [@chen2016variable] also used group MCP for variable selection in function on scalar regression. Our model is fundamentally different from theirs in that the covariates we consider are time-varying functions, possibly observed with measurement error. Our method is similar to [@wang2008variable], who use a group SCAD penalty for variable selection in varying coefficient models, but we propose a different penalty on the coefficient functions, one that simultaneously penalizes departure from sparsity and roughness of the coefficient functions, and our results show there is much to be gained by using the group MCP penalty. We employ a pre-whitening procedure similar to [@chen2016variable] to take into account the possible temporal dependence present within functions. We also consider that the covariates might be contaminated with measurement error and therefore use functional principal component analysis (FPCA) to obtain denoised trajectories of the covariates, which improves the estimation accuracy of our approach. Through simulations, we illustrate that the proposed method, particularly with the group SCAD or group MCP penalty, can pick out the relevant variables with very high accuracy and has minuscule false positive and false negative rates even when the data are observed sparsely and are contaminated with measurement error. We demonstrate two real data applications of our method, in the study of dietary calcium absorption [@davis2002statistical] and in the fisheries footprint study, for selecting the influential time-varying covariates.
The rest of the article is organized as follows. In Section \[sec:method\], we present our modelling framework and illustrate our variable selection method. In Section \[sec:sim\_stud\], we conduct a simulation study to evaluate the performance of our method and summarize the results. In Section \[sec:real\_dat\], we return to the two real data examples, the calcium absorption study and the fisheries footprint study, apply our variable selection method to identify the influential covariates, and present our findings. We conclude in Section \[con\] with a discussion of some limitations and possible extensions of our work.
Methodology {#sec:method}
===========
Modelling Framework and Variable Selection Method {#sec:mf}
-------------------------------------------------
Suppose that the observed data for the $i$-th subject are $\{Y_i(t), Z_{i1}(t),Z_{i2}(t),\ldots,Z_{ip}(t)\}$ ($i=1,2,\ldots,n$), where $Y_i(\cdot)$ is a functional response and $Z_{i1}(\cdot),Z_{i2}(\cdot),\ldots,Z_{ip}(\cdot)$ are the corresponding functional covariates. We assume the covariates and the response are observed on a fine and regular grid of points $S= \{t_{1},t_{2},\ldots,t_{m} \} \subset [0,T]$ for some $T>0$, and that the covariates are measured without any error. We discuss later in this section how our model and the proposed method can be easily extended to accommodate more general scenarios where the covariates are contaminated with measurement error and observed sparsely. We consider a functional linear concurrent regression model of the form $$Y_i(t)=\sum_{j=1}^{p}Z_{ij}(t)\beta_{j}(t)+\epsilon_{i}(t),$$ where $\beta_{j}(t)$ ($j=1,2,\ldots,p$) are smooth functions (with finite second derivative) representing the functional regression parameters. We assume $Z_{ij}(\cdot)$ are independent and identically distributed (i.i.d.) copies of ${Z}_j(\cdot)$ ($j=1,2,\ldots,p$), where the ${Z}_{j}(\cdot)$ are underlying smooth stochastic processes. We further assume $\epsilon_{i}(\cdot)$ are i.i.d. copies of $\epsilon(\cdot)$, a mean zero stochastic process. The model (1) in stacked form can be rewritten as $Y(t)=Z(t)\beta(t)+\epsilon(t)$. Generally, in functional linear concurrent regression, estimation is done [@Ramsay05functionaldata] by minimizing the penalized residual sum of squares $ SSE(\beta)=\int r(t)^{T}r(t)dt + \sum_{j=1}^{p}\lambda_j\int (L_j\beta_j(t))^2dt$, where $r(t)=Y(t)-Z(t)\beta(t)$. For example, when $L_j=I$, we minimize $\int r(t)^{T}r(t)dt + \sum_{j=1}^{p}\lambda_j\int (\beta_j(t))^2dt$. Now suppose $\{\theta_{kj}(t), k=1,2,\ldots,k_j\}$ is a set of known basis functions for $j=1,2,\ldots,p$. We model the unknown coefficient functions using a basis function expansion as $\beta_{j}(t)=\sum_{k=1}^{k_{j}} b_{kj}\theta_{kj}(t)=\bm\theta_{j}(t)^{T}\*b_{j}$, where $\bm\theta_{j}(t)=[\theta_{1j}(t),\theta_{2j}(t),\ldots ,\theta_{k_j j}(t)]^T$ and $\*b_{j}=(b_{1j},b_{2j},\ldots ,b_{k_j j})^T$ is a vector of unknown coefficients. In this article, we use B-spline basis functions; however, other basis functions can be used as well. Then the minimization in the example mentioned above can be carried out by minimizing $\int \{Y(t)-Z(t)\bm\Theta(t)\*b\}^{T}\{Y(t)-Z(t)\bm\Theta(t)\*b\}dt +\*b^{T}\^R\*b$. Here $\*b$, $\bm\Theta(t)$ and the penalty matrix $\^R$ are defined in stacked form as $\*b= (\*b_1^T,\*b_2^T,\ldots,\*b_p^T)^T$, $\bm\Theta(t)=\mathrm{diag}\{\bm\theta_{1}(t)^{T},\bm\theta_{2}(t)^{T},\ldots,\bm\theta_{p}(t)^{T}\}$ (block diagonal) and $\^R=diag(\lambda_1\^R_1,\lambda_2\^R_2,\ldots,\lambda_p\^R_p)$, where $\^R_j= \int\bm \theta_{j}(t)\bm\theta_{j}(t)^{T}dt$, so that $\*b^{T}\^R\*b=\sum_{j=1}^{p}\lambda_j\*b_{j}^{T}\^R_j\*b_{j}$. For our variable selection method, we define the penalty on the regression functions $\beta_j(\cdot)$ as $P_{\lambda,\psi}\{\beta_j(\cdot)\}=\lambda[\int\beta_j(t)^2dt+\psi\int\{\beta_j^{''}(t)\}^2dt]^{1/2} =\lambda\left(\*b_j^{T}\^R_j\*b_j+\psi \*b_j^{T}\^Q_j\*b_j\right)^{1/2}=\lambda\left(\*b_j^{T}\^K_{\psi,j}\*b_j\right)^{1/2}$, where $\^K_{\psi,j}=\^R_j+\psi\^ Q_j$, $\^R_j=\{\int\bm\theta_{j}(t)\bm\theta_{j}(t)^{T}dt\}$ as above, and $\^Q_j=\{\int\bm\theta_{j}^{''}(t)\bm\theta_{j}^{''}(t)^{T}dt\}$. This penalty was originally proposed by [@meier2009high] and later used by [@gertheiss2013variable] for their variable selection method in scalar on function regression. The parameter $\psi\geq0$ controls the weight given to the roughness penalty. The proposed penalty simultaneously penalizes departure from sparsity and roughness of the coefficient functions, ensuring that the resulting coefficient functions are smooth and that small coefficient functions are shrunk to zero, introducing sparsity. Subsequently, we propose to minimize the following penalized mean sum of squares of the residuals for performing variable selection, $$L(\*b)=\frac{1}{n}\int \{Y(t)-Z(t)\bm\Theta(t)\*b\}^{T}\{Y(t)-Z(t)\bm\Theta(t)\*b\}dt +\lambda\sum_{j=1}^{p}(\*b_j^{T}\^K_{\psi,j}\*b_j)^{1/2}.$$ Since we assume the data are observed on a dense, equispaced grid, the variable selection in practice is carried out by minimizing the following equivalent criterion, $$\sum_{i=1}^{n}\sum_{l=1}^{m}[Y_i(t_{l})-\sum_{j=1}^{p}Z_{ij}(t_l)\{\sum_{k=1}^{k_{j}} b_{kj}\theta_{kj}(t_l)\}]^{2}+\lambda mn\sum_{j=1}^{p}(\*b_j^{T}\^K_{\psi,j}\*b_j)^{1/2}.$$ Now using the Cholesky decomposition $\^K_{\psi,j}=\^L_{\psi,j}\^L_{\psi,j}^T$ and denoting $\bm\gamma_j=\^L_{\psi,j}^T \*b_j$, the penalized sum of squares of residuals can be reformulated as $$\begin{aligned}
R(\bm{\gamma})&=\sum_{i=1}^{n}\sum_{l=1}^{m}[Y_i(t_{l})-\sum_{j=1}^{p}Z_{ij}(t_l)\{\sum_{k=1}^{k_{j}} b_{kj}\theta_{kj}(t_l)\}]^{2}+\lambda mn\sum_{j=1}^{p}(\*b_j^{T}\^K_{\psi,j}\*b_j)^{1/2}\nonumber\\
&=\sum_{i=1}^{n}\sum_{l=1}^{m}[Y_i(t_{l})-\sum_{j=1}^{p}\*Z_{ij}^*(t_l)^T\*b_{j}]^{2}+\lambda mn\sum_{j=1}^{p}(\*b_j^{T}\^K_{\psi,j}\*b_j)^{1/2}\hspace{2mm} \textit{where $\*Z_{ij}^*(t_l)^T=Z_{ij}(t_l)\times\bm\theta_j(t_l)^T$}\\
&=\sum_{i=1}^{n}\sum_{l=1}^{m}[Y_i(t_{l})-\sum_{j=1}^{p}\tilde{\*Z_{ij}}^*(t_l)^T \bm{\gamma}_{j}]^{2}+\lambda mn\sum_{j=1}^{p}(\bm\gamma_j^{T} \bm\gamma_j)^{1/2} \hspace{2mm} \textit{where $\tilde{\*Z_{ij}}^*(t_l)=\^L_{\psi,j}^{-1}\*Z_{ij}^*(t_l)$}\\
&=\sum_{i=1}^{n}||{\*Y_i}-\^Z_i^*\bm\gamma||_{2}^{2}+\lambda mn\sum_{j=1}^{p}(\bm\gamma_j^{T}\bm\gamma_j)^{1/2}
=\sum_{i=1}^{n}||{\*Y_i}-\sum_{j=1}^{p}\^Z_i^{*j}\bm\gamma_j||_{2}^{2}+\lambda mn\sum_{j=1}^{p}(\bm\gamma_j^{T}\bm\gamma_j)^{1/2},\end{aligned}$$ where ${\*Y_i}=(Y_i(t_{1}),Y_i(t_{2}),\ldots,Y_i(t_{m}))^{T}$, and $\bm\gamma= (\bm\gamma_1^T,\bm\gamma_2^T,\ldots,\bm\gamma_p^T)^T$ and $\^Z_i^*$ is defined as follows,
$$\begin{aligned}
\^Z_i^*&=
\begin{bmatrix}
\tilde{\*Z_{i1}}^*(t_1)^T & \tilde{\*Z_{i2}}^*(t_1)^T & \tilde{\*Z_{i3}}^*(t_1)^T & \dots & \tilde{\*Z_{ip}}^*(t_1)^T \\
\tilde{\*Z_{i1}}^*(t_2)^T & \tilde{\*Z_{i2}}^*(t_2)^T & \tilde{\*Z_{i3}}^*(t_2)^T & \dots &\tilde{\*Z_{ip}}^*(t_2)^T \\
\hdotsfor{5} \\
\tilde{\*Z_{i1}}^*(t_m)^T & \tilde{\*Z_{i2}}^*(t_m)^T & \tilde{\*Z_{i3}}^*(t_m)^T & \dots & \tilde{\*Z_{ip}}^*(t_m)^T
\end{bmatrix},\end{aligned}$$
where $\^Z_i^{*j}$ refers to the $j$th block column in this matrix. We recognize this minimization problem as a group LASSO [@yuan2006model] problem, where the grouping is introduced by the covariates. In particular, we obtain estimates of $\bm\gamma_j$ by minimizing a penalized least squares criterion analogous to that of the group LASSO, namely $$\begin{aligned}
\bm\hat{\bm\gamma}&=\underset{\bm\gamma_j,j=1,2,\ldots,p}{\text{argmin}} \hspace{2 mm} \sum_{i=1}^{n}||{\*Y_i}-\sum_{j=1}^{p}\^Z_i^{*j}\bm\gamma_j||_{2}^{2}+\lambda mn\sum_{j=1}^{p}(\bm\gamma_j^{T}\bm\gamma_j)^{1/2}\nonumber\\
&=\underset{\bm\gamma_j,j=1,2,\ldots,p}{\text{argmin}}\hspace{2 mm} \sum_{i=1}^{n}||{\*Y_i}-\sum_{j=1}^{p}\^Z_i^{*j}\bm\gamma_j||_{2}^{2}+\lambda mn\sum_{j=1}^{p}||\bm\gamma_j||_2\nonumber\\
&=\underset{\bm\gamma_j,j=1,2,\ldots,p}{\text{argmin}}\hspace{2 mm} \sum_{i=1}^{n}||{\*Y_i}-\sum_{j=1}^{p}\^Z_i^{*j}\bm\gamma_j||_{2}^{2}+mn\sum_{j=1}^{p}P_{LASSO,\lambda}(||\bm\gamma_j||_2).\end{aligned}$$ We extend this group LASSO formulation to non-convex penalties, which are known [@breheny2015group; @mazumder2011sparsenet] to produce sparser solutions, especially when there is a large number of variables. In particular, we propose to use two non-convex penalties: SCAD [@fan2001variable] and MCP [@zhang2010nearly]. These two penalties overcome the high bias problem of LASSO as they relax the rate of penalization as the magnitude of the coefficient gets large. SCAD and MCP have been shown to ensure selection and estimation consistency under standard assumptions in the scalar regression case. They also enjoy the so-called oracle property, behaving asymptotically like the oracle MLE. Unlike the adaptive LASSO, these methods do not require initial estimates of weights. These facts motivate us to use them in our functional variable selection context. The problem of variable selection then reduces to a group SCAD or group MCP problem in our modelling setup, as follows.
**Group SCAD Method**
In this method, we perform variable selection and obtain estimates of $\bm\gamma$ using a penalized least square criterion as in (4), where we now use a group SCAD penalty on the coefficients instead of group LASSO. In particular we estimate, $$\hat{\bm\gamma}=\underset{\bm\gamma_j,j=1,2,\ldots,p}{\text{argmin}}\hspace{2 mm} \sum_{i=1}^{n}||{\*Y_i}-\sum_{j=1}^{p}\^Z_i^{*j}\bm\gamma_j||_{2}^{2}+mn\sum_{j=1}^{p}P_{SCAD,\lambda,\phi}(||\bm\gamma_j||_2),$$ where $P_{SCAD,\lambda,\phi}(||\bm\gamma_j||_2)$ is defined in the following way:
$$P_{SCAD,\lambda,\phi}(||\bm\gamma_j||_2)=
\begin{cases}
\lambda||\bm\gamma_j||_2\hspace{4.15 cm} \text{if $||\bm\gamma_j||_2\leq \lambda$}.\\
\frac{\lambda\phi||\bm\gamma_j||_2-.5(||\bm\gamma_j||_2^{2}+\lambda^2)}{
\phi-1} \hspace{2.1 cm} \text{if $\lambda< ||\bm\gamma_j||_2\leq \lambda\phi$}.\\
.5\lambda^2(\phi+1) \hspace{3.45 cm} \text{if $||\bm\gamma_j||_2>\lambda\phi$}.\\
\end{cases}$$
**Group MCP Method**
For Group MCP method we estimate $\bm\gamma$ as $$\hat{\bm\gamma}=\underset{\bm\gamma_j,j=1,2,\ldots,p}{\text{argmin}}\hspace{1 mm}\sum_{i=1}^{n}||{\*Y_i}-\sum_{j=1}^{p}\^Z_i^{*j}\bm\gamma_j||_{2}^{2}+mn\sum_{j=1}^{p}P_{MCP,\lambda,\phi}(||\bm\gamma_j||_2),$$\
where $P_{MCP,\lambda,\phi}(||\bm\gamma_j||_2)$ is defined as : $$P_{MCP,\lambda,\phi}(||\bm\gamma_j||_2)=
\begin{cases}
\lambda||\bm\gamma_j||_2-\frac{||\bm\gamma_j||_2^2}{2\phi}\hspace{1.4 cm} \text{if $||\bm\gamma_j||_2\leq \lambda\phi$}.\\
.5\lambda^2\phi \hspace{3.05 cm} \text{if $||\bm\gamma_j||_2>\lambda\phi$}.\\
\end{cases}$$
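To make the reduction above concrete, the following R sketch (our own illustration, not the implementation used in this article; all object names and the placeholder data are ours) constructs the stacked design on which the group penalties act for densely observed data: each covariate is multiplied into a B-spline basis evaluated at the observation times, the penalty matrix $\^K_{\psi,j}=\^R_j+\psi\^Q_j$ is approximated by numerical integration, and its Cholesky factor transforms each block so that the criterion becomes a standard group LASSO/SCAD/MCP problem.

```r
library(splines)

set.seed(1)
n <- 50; m <- 81; p <- 5; K <- 8           # subjects, grid points, covariates, basis functions
tgrid <- seq(0, 100, length.out = m)       # common observation grid
psi   <- 1                                 # roughness weight in K_{psi,j}

# B-spline basis theta_j(t) (same K-dimensional basis for every j), on the observation grid
# and on a fine grid used for numerical integration
fine  <- seq(0, 100, length.out = 1001); dt <- diff(fine)[1]
Bfine <- bs(fine, df = K, intercept = TRUE)
Bobs  <- bs(tgrid, df = K, intercept = TRUE)

# R_j = int theta theta^T dt and Q_j = int theta'' theta''^T dt (simple quadrature;
# second derivatives approximated by finite differences of the basis columns)
Rmat <- t(Bfine) %*% Bfine * dt
d2B  <- apply(Bfine, 2, function(b) c(0, diff(b, differences = 2), 0) / dt^2)
Qmat <- t(d2B) %*% d2B * dt
Kpsi <- Rmat + psi * Qmat
U    <- chol(Kpsi)                         # K_{psi,j} = U^T U, so gamma_j = U %*% b_j

# Placeholder data: covariate curves Z_{ij}(t_l) and responses Y_i(t_l)
Z <- array(rnorm(n * m * p), dim = c(n, m, p))
Y <- matrix(rnorm(n * m), n, m)

# Stacked design: row (i, t_l) of block j is Z_{ij}(t_l) * theta(t_l)^T, post-multiplied by U^{-1}
Xlist <- lapply(1:p, function(j) {
  blk <- do.call(rbind, lapply(1:n, function(i) Z[i, , j] * Bobs))   # (n*m) x K
  blk %*% solve(U)
})
X     <- do.call(cbind, Xlist)             # (n*m) x (p*K) design matrix
y     <- as.vector(t(Y))                   # responses stacked in the same (i, t_l) order
group <- rep(1:p, each = K)                # one penalty group per functional covariate
```

With `X`, `y` and `group` in hand, any group-penalized least squares solver can be applied directly; the pre-whitening and fitting steps are sketched further below.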
Incorporating Covariance Structure into Variable Selection {#sec:prewhite}
----------------------------------------------------------
The variable selection method proposed in Section \[sec:mf\] does not account for possible correlation in the error process. In reality, however, temporal correlation is likely to be present within functions. While using an independent working correlation structure can yield consistent and unbiased estimates, incorporating the true covariance structure in the variable selection criterion (4), (5), or (6) may give definite gains in performance, as illustrated by [@chen2016variable]. We follow a pre-whitening procedure similar to that employed by [@chen2016variable; @kim2016general] to take the correct covariance structure into account. We assume the error process $\epsilon(t)$ has the form $\epsilon(t)=V(t) + w_t$, where $V(t)$ is a smooth mean zero stochastic process with covariance kernel $G(s,t)$ and $w_t$ is white noise with variance $\sigma^2$. The covariance function of the error process is then given by $\Sigma(s,t)=cov\{\epsilon(s),\epsilon(t)\}=G(s,t) + \sigma^2 I(s=t)$. For data observed on a dense and regular grid, the covariance matrix of the residual vector is then given by $\#\Sigma=\mathrm{diag}\{\#\Sigma_{m\times m},\#\Sigma_{m\times m},\ldots, \#\Sigma_{m\times m}\}$, where $\#\Sigma_{m\times m}$ denotes the covariance kernel $\Sigma(s,t)$ evaluated at $S= \{t_{1}, t_{2},\ldots, t_{m} \}$. Now if $\#\Sigma_{m\times m}$ is known, redefining $\*Y_i$ and $\^Z_i^{*j}$ as $\*Y_i=\{\#\Sigma_{m\times m}^{-1/2}\}\*Y_i$ and $\^Z_i^{*j}=\{\#\Sigma_{m\times m}^{-1/2}\}\^Z_i^{*j}$, the same penalized criterion (4), (5) or (6) can be used to perform variable selection.
In reality $\#\Sigma$ is unknown, and we need an estimator $\hat{\#\Sigma}$. In the context of functional data, we want to estimate $\Sigma(\cdot,\cdot)$ nonparametrically. If we had the original residuals $\epsilon_{i}(t_{j})$ available, we could use functional principal component analysis (FPCA), e.g., [@yao2005functional2] or [@zhang2007statistical], to estimate $\Sigma(s,t)$. If the covariance kernel $G(s,t)$ of the smooth part $V(t)$ is a Mercer kernel [@j1909xvi], by Mercer’s theorem $G(s,t)$ has a spectral decomposition $$G(s,t)=\sum_{k=1}^{\infty}\lambda_k\phi_k(s)\phi_k(t),$$ where $\lambda_1\geq\lambda_2\geq \cdots\geq 0$ are the ordered eigenvalues and the $\phi_k(\cdot)$ are the corresponding eigenfunctions. Thus we have the decomposition $\Sigma(s,t)=\sum_{k=1}^{\infty}\lambda_k\phi_k(s)\phi_k(t) + \sigma^2 I(s=t)$. Given $\epsilon_{i}(t_{j})=V(t_{j}) + w_{ij}$, one could employ FPCA based methods to get $\hat{\phi}_k(\cdot)$, $\hat{\lambda}_k$ and $\hat{\sigma}^2$. An estimator of $\Sigma(s,t)$ can then be formed as $$\hat{\Sigma}(s,t)=\sum_{k=1}^{K}\hat{\lambda}_k\hat{\phi}_k(s)\hat{\phi}_k(t) + \hat{\sigma}^2 I(s=t),$$ where $K$ is large enough for the convergence to hold and is typically chosen such that the percent of variance explained (PVE) by the selected eigencomponents exceeds some pre-specified value such as $99\%$ or $95\%$. In practice, we do not have the original residuals and instead use the full model (1) to obtain residuals $e_{ij}=Y_{i}(t_j)-\hat{Y}_{i}(t_j)$. Then, treating the $e_{ij}$ as the original residuals, we obtain $\hat{\Sigma}(s,t)$ using FPCA.
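A minimal sketch of this pre-whitening step, continuing the simulated objects above, is given below. To keep the sketch self-contained, the smoothed FPCA estimate of $\hat{\Sigma}(s,t)$ described above is replaced here by a simple truncated eigendecomposition of the raw residual covariance plus a white-noise nugget; this substitution is our assumption, not the estimator used in the paper.

```r
# Working residuals from a preliminary unpenalized least-squares fit of the full model
b0   <- qr.solve(X, y)
Emat <- matrix(y - X %*% b0, nrow = n, ncol = m, byrow = TRUE)   # one residual curve per row

# Estimate Sigma_{m x m}: leading eigencomponents (PVE >= 99%) plus a nugget for the rest
Ec     <- scale(Emat, center = TRUE, scale = FALSE)
S      <- crossprod(Ec) / n
eig    <- eigen(S, symmetric = TRUE)
Kc     <- which(cumsum(eig$values) / sum(eig$values) >= 0.99)[1]
sig2   <- max(mean(eig$values[-(1:Kc)]), 1e-6)                   # white-noise variance estimate
Sighat <- eig$vectors[, 1:Kc] %*% diag(eig$values[1:Kc], Kc) %*% t(eig$vectors[, 1:Kc]) +
          diag(sig2, m)

# Sigma^{-1/2} via the eigendecomposition of the (positive definite) estimate
es      <- eigen(Sighat, symmetric = TRUE)
SigInvH <- es$vectors %*% diag(1 / sqrt(es$values)) %*% t(es$vectors)

# Whiten each subject's block of m rows in the stacked design and response
Xw <- X; yw <- y
for (i in 1:n) {
  idx       <- ((i - 1) * m + 1):(i * m)
  Xw[idx, ] <- SigInvH %*% X[idx, ]
  yw[idx]   <- SigInvH %*% y[idx]
}
```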
*Remark 1:* We use a cubic B-spline basis with the same number of basis functions for all the regression functions $\beta_j(t)$, the number of basis functions being taken large enough that the basis is sufficiently rich. For selecting the tuning parameter $\psi$ (for smoothness) and the penalty parameter $\lambda$, we use the extended Bayesian information criterion (EBIC) [@chen2008extended] corresponding to the equivalent linear model of criterion (4), (5) or (6), which has shown good performance in our simulation study. [@chen2008extended] established consistency of EBIC under standard assumptions and illustrated its superiority over other methods like cross-validation, AIC, and BIC, which tend to over-select variables. For the tuning parameter $\phi$ we use the value $4$ for SCAD and $3$ for MCP, as proposed by the original authors. For model fitting we use the ‘grpreg’ package [@breheny2019package] in R.
*Remark 2:* In practice, we recommend standardizing the variables either using the Euclidean norm (performed automatically in ‘grpreg’) or using FPCA based methods ($Z^{*}_j(t)=\frac{Z_j(t)-\mu_j(t)}{\sigma_j(t)}$), the latter being especially useful for highly sparse data where some B-splines might not have observed data on their support. This can help the proposed method converge faster. We applied both standardization methods in our simulation studies and obtained very similar results.
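Continuing the sketch, the group-penalized fits themselves can be obtained with the ‘grpreg’ package mentioned in Remark 1. The call below is our own illustration of that route, not the authors' code, and it rests on assumptions about the package interface: that `grpreg()` accepts the stacked design, the group index and `penalty = "grLasso"`, `"grSCAD"` or `"grMCP"` (with `gamma` the second tuning parameter), and that `select()` can pick $\lambda$ along the computed path by an information criterion. The back-transformation follows from $\bm\gamma_j=\^L_{\psi,j}^T\*b_j$.

```r
library(grpreg)

# Group SCAD fit on the whitened data (swap penalty = "grLasso" or "grMCP" for the other methods)
fit <- grpreg(Xw, yw, group = group, penalty = "grSCAD", gamma = 4)

# Choose lambda by a BIC-type criterion along the path (assumed grpreg::select interface)
sel       <- select(fit, criterion = "EBIC")
gamma_hat <- sel$beta[-1]                          # drop the unpenalized intercept

# Back-transform gamma_j -> b_j -> beta_j(t); groups with gamma_j = 0 are dropped from the model
beta_hat <- sapply(1:p, function(j) {
  bj <- backsolve(U, gamma_hat[group == j])        # b_j = U^{-1} gamma_j (U upper triangular)
  as.vector(Bobs %*% bj)                           # beta_j(t) on the observation grid
})
selected <- which(colSums(abs(beta_hat)) > 0)
```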
Extension to Sparse data and Noisy Covariates {#sec:extension}
---------------------------------------------
More generally, we can consider the case where the data are observed sparsely and the covariates are observed with measurement error, which is most often the case for longitudinal data. Here the observed data are the response $\{(Y_i(t_{ij}),t_{ij}),\; j=1,2,\ldots,m_i\}$ and the observed covariates $\{(U_1(t_{1ij}),t_{1ij}),\; j=1,2,\ldots,m_{1i}\}$, $\{(U_2(t_{2ij}),t_{2ij}),\; j=1,2,\ldots,m_{2i}\}$, $\ldots$, $\{(U_p(t_{pij}),t_{pij}),\; j=1,2,\ldots,m_{pi}\}$. Let us denote $U_k(t_{kij})$ ($k=1,2,\ldots,p$) by $U_{ijk}$. The $U_{ijk}$ represent the covariates observed with measurement error, i.e., we have $U_{ijk}=Z_k(t_{kij})+e_{ijk}$ for $i=1,2,\ldots,n$, $j=1,2,\ldots,m_{ki}$ and $k=1,2,\ldots,p$. The measurement errors $e_{ijk}$ are assumed to be white noise with zero mean and variance $\sigma_k^2$. In the sparse data setup it is generally assumed [@kim2016general] that although the individual number of observations $m_{i}$ is small, $\bigcup_{i=1}^{n}\bigcup_{j=1}^{m_i}\{t_{ij}\}$ is dense in $[0,T]$. We then reconstruct the original curves from the observed sparse and noisy curves using FPCA methods [@yao2005functional2] by estimating the eigenvalues and eigenfunctions corresponding to the original curves. [@li2010uniform] proved uniform convergence of the mean, eigenvalues and eigenfunctions associated with the curves for both dense and, in particular, sparse designs under suitable regularity conditions. For prediction of the scores, we use the PACE method as in [@yao2005functional2]. These estimates are then put together using the Karhunen-Loève expansion (Karhunen, Loève, 1946) to get estimates $\hat{Z}_{ik}(\cdot)$ of the true curves $Z_{ik}(\cdot)$ as $\hat{Z}_{ik}(t)=\hat{\mu}_k(t)+\sum_{s=1}^{S}\hat{\zeta}_{isk}\hat{\psi}_{sk}(t)$, where the number of eigenfunctions $S$ is chosen using the percent of variance explained (PVE) criterion, i.e., the percentage of variance explained by the first few eigencomponents. Alternatively, one can use multivariate FPCA [@doi:10.1080/01621459.2016.1273115] instead of running FPCA on each predictor variable separately. Then, for sparse data observed on an irregular grid and with measurement error, we use $\{Y_i(t_{ij}),\hat{Z}_{i1}(t_{ij}),\hat{Z}_{i2}(t_{ij}),\ldots,\hat{Z}_{ip}(t_{ij}),\; j=1,2,\ldots,m_i\}_{i=1}^{n}$ as our data for performing variable selection.
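One way to carry out this reconstruction step in practice is with an FPCA routine such as the one in the `fdapace` package. The sketch below is a hypothetical illustration of that route, not the pipeline used in the paper: the option names and the fields `mu`, `phi`, `xiEst` and `workGrid` of the returned object are what we assume that package exposes, the input curves are simulated placeholders, and the last two lines assemble the truncated Karhunen-Loève expansion explicitly.

```r
library(fdapace)

# Sparse, noisy observations of one covariate: Lt[[i]] holds subject i's time points and
# Lu[[i]] the corresponding noisy values U_{ijk}; simulated placeholders are used here
set.seed(2)
Lt <- lapply(1:n, function(i) sort(sample(tgrid, sample(5:10, 1))))
Lu <- lapply(Lt, function(tt) sin(pi * tt / 100) + rnorm(length(tt), sd = 0.3))

# PACE-style FPCA with the number of components chosen by PVE (assumed option names)
fp <- FPCA(Lu, Lt, optns = list(dataType = "Sparse",
                                methodSelectK = "FVE", FVEthreshold = 0.95))

# Truncated Karhunen-Loeve reconstruction Zhat_i(t) = mu(t) + sum_s xi_is psi_s(t),
# evaluated on the FPCA working grid and interpolated back to the common grid
Zhat_work <- sweep(fp$xiEst %*% t(fp$phi), 2, fp$mu, `+`)
Zhat <- t(apply(Zhat_work, 1, function(z) approx(fp$workGrid, z, xout = tgrid, rule = 2)$y))
```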
Simulation Study {#sec:sim_stud}
================
Simulation Setup
----------------
In this section, we evaluate the performance of our variable selection method using a simulation study. To this end we generate data from the model $$Y_i(t)=\beta_0(t)+\sum_{j=1}^{20}Z_{ij}(t)\beta_{j}(t)+\epsilon_{i}(t),\hspace{2mm} i=1,2,\ldots,n, \hspace{2mm} t\in [0,100].$$ The regression functions are given by $\beta_{0}(t)= 8sin(\pi t/50)$, $\beta_{1}(t)= 5sin(\pi t/100)$, $\beta_{2}(t)= 4sin(\pi t/50)+4cos(\pi t/50)$, $\beta_{3}(t)=25e^{-t/20}$ and the rest of the $\beta_j(t)=0$ for $j=4,5,6,\ldots,20$, i.e., the last 17 covariates are not relevant. The original covariates $Z_{ij}(\cdot)\stackrel{iid}{\sim} Z_j(\cdot)$, where $Z_j(t)$ ($j=1,2,\ldots,20$) are given by $Z_j(t)=a_j\hspace{1mm}\sqrt[]{2}sin(\pi j t/400)+ b_j\hspace{1mm}\sqrt[]{2}cos(\pi j t/400)$, with $a_j\sim \mathcal{N}(50,2^2)$ and $b_j\sim \mathcal{N}(50, 2^2)$. We moreover assume that the $Z_{ij}(t)$ are observed with measurement error, i.e., we observe $U_{ij}(t)=Z_{ij}(t)+\delta_j$, where $\delta_j\sim \mathcal{N}(0, 0.6^2)$. The error process $\epsilon_{i}(t)$ is generated as $$\epsilon_i(t)=\xi_{i1} cos(t) +\xi_{i2}sin(t) + N(0,1),$$ where $\xi_{i1}\stackrel{iid}{\sim}\mathcal{N}(0, 0.5^2)$ and $\xi_{i2}\stackrel{iid}{\sim}\mathcal{N}(0,0.75^2)$. The response $Y_i(t)$ and the noisy covariates $U_{ij}(t)$ are observed sparsely at $m_i$ randomly chosen points in $S$, the set of $m = 81$ equidistant time points in $[0,100]$, with $m_{i}\stackrel{iid}{\sim} Unif\{30,31,\ldots,41\}$. Three sample sizes $n\in\{100,200,400\}$ are considered. For each sample size, we use 500 generated datasets to evaluate our method.
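For concreteness, the data-generating mechanism above can be coded directly. The sketch below produces one simulated replicate (true coefficient curves, covariate curves, measurement error, error process and the sparse observation pattern) following the formulas of this section; the object names are our own choices.

```r
set.seed(3)
n <- 200; m <- 81; p <- 20
tgrid <- seq(0, 100, length.out = m)

# True coefficient functions: beta_1, beta_2, beta_3 nonzero, the remaining 17 identically zero
beta0 <- 8 * sin(pi * tgrid / 50)
beta  <- matrix(0, m, p)
beta[, 1] <- 5 * sin(pi * tgrid / 100)
beta[, 2] <- 4 * sin(pi * tgrid / 50) + 4 * cos(pi * tgrid / 50)
beta[, 3] <- 25 * exp(-tgrid / 20)

# Covariate curves Z_{ij}(t) and noise-contaminated versions U_{ij}(t)
Z <- array(0, dim = c(n, m, p)); U <- Z
for (j in 1:p) {
  a <- rnorm(n, 50, 2); b <- rnorm(n, 50, 2)
  Z[, , j] <- outer(a, sqrt(2) * sin(pi * j * tgrid / 400)) +
              outer(b, sqrt(2) * cos(pi * j * tgrid / 400))
  U[, , j] <- Z[, , j] + rnorm(n * m, sd = 0.6)
}

# Error process and response curves
xi1 <- rnorm(n, 0, 0.5); xi2 <- rnorm(n, 0, 0.75)
eps <- outer(xi1, cos(tgrid)) + outer(xi2, sin(tgrid)) + matrix(rnorm(n * m), n, m)
Y   <- matrix(beta0, n, m, byrow = TRUE) +
       Reduce(`+`, lapply(1:p, function(j) Z[, , j] * matrix(beta[, j], n, m, byrow = TRUE))) +
       eps

# Sparse observation pattern: each subject retains m_i ~ Unif{30,...,41} of the 81 grid points
obs_idx <- lapply(1:n, function(i) sort(sample(m, sample(30:41, 1))))
```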
Simulation Results
------------------
Our primary interest is the selection (identification) of the relevant covariates $Z_1(\cdot), Z_2(\cdot), Z_3(\cdot)$ and accurate estimation of their effects $\beta_1(t), \beta_2(t), \beta_3(t)$. As the covariates are observed sparsely and with measurement error, we apply FPCA as discussed in Section \[sec:extension\] with PVE $= 99\%$ and obtain the denoised curves $\hat{Z}_{ij}(t)$ before applying our variable selection method. We apply the proposed variable selection method with and without the pre-whitening procedure mentioned in Section \[sec:prewhite\]. Table \[tab1\] and Table \[tab12\] display the selection percentage of each variable for each of the three selection methods discussed in Section \[sec:method\] and for the three sample sizes $n=100, 200, 400$, for the non pre-whitened and pre-whitened cases respectively. We use the acronyms FLASSO (functional LASSO), FSCAD (functional SCAD) and FMCP (functional MCP) for the proposed variable selection methods for the functional linear concurrent model (FLCM). We expect the group LASSO selection method to have a higher false positive rate and use it as a benchmark for comparison. It can be seen from Tables \[tab1\] and \[tab12\] that all three methods (group LASSO, group SCAD, group MCP) pick out the three true covariates $Z_1(\cdot), Z_2(\cdot), Z_3(\cdot)$ $100\%$ of the time. The group LASSO method has a high false positive selection percentage, as can be seen in both Table \[tab1\] and Table \[tab12\], with selection accuracy improving with increasing sample size. The group SCAD and group MCP methods, on the other hand, have a false selection percentage in the range $0.2\%-1\%$ in the non pre-whitened case and exactly $0\%$ in the pre-whitened case. In other words, the group SCAD and group MCP methods are able to identify the true model when the pre-whitening procedure is used. In scalar regression, SCAD and MCP are known to produce sparser solutions than LASSO due to their concave nature, and here also, in the context of variable selection in functional linear concurrent regression, we observe these two methods (their group extensions) outperforming LASSO. The average model sizes for each scenario are also given in Table \[tab1\]; the group SCAD and group MCP methods produce smaller values, closer to the true model size of 3 (exactly 3 with the pre-whitening procedure in Table \[tab12\]). These results also illustrate the benefit of pre-whitening, and henceforth we use pre-whitening as a preprocessing step when performing variable selection with the proposed methods.
Next, as an assessment of the accuracy of the estimates $\hat{\beta}_k(t)$ ($k=1,2,3$), we plot the true regression curves overlaid by their Monte Carlo (MC) mean estimates from the three methods. MC point-wise $95\%$ confidence intervals (corresponding to the point-wise 2.5 and 97.5 percentiles of the estimated curves over 500 replicates) for each of the three curves are also displayed to assess the variability of the estimates.
| Sample Size | Method | Var1 | Var2 | Var3 | Var4 | Var5 | Var6 | Var7 | Var8 | Var9 | Var10 | Var11 | Var12 | Var13 | Var14 | Var15 | Var16 | Var17 | Var18 | Var19 | Var20 | Avg Model Size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $n=100$ | FLASSO | 100 | 100 | 100 | 16.4 | 18.4 | 16.6 | 10 | 14.4 | 15 | 15.6 | 17.2 | 15.2 | 13 | 14.2 | 17.8 | 16.4 | 15.4 | 16 | 14.4 | 13 | 5.59 |
| $n=100$ | FSCAD | 100 | 100 | 100 | 0.6 | 0.4 | 0.2 | 1.0 | 1.2 | 0.4 | 0.4 | 0.8 | 0.4 | 0.2 | 0.4 | 0.8 | 0.2 | 0.2 | 0.6 | 0 | 1 | 3.088 |
| $n=100$ | FMCP | 100 | 100 | 100 | 0.6 | 0.2 | 0.2 | 1.0 | 1.0 | 0.4 | 0.4 | 0.6 | 0.4 | 0.2 | 0.2 | 0.4 | 0.2 | 0.2 | 0.6 | 0 | 1 | 3.076 |
| $n=200$ | FLASSO | 100 | 100 | 100 | 15.8 | 14.6 | 16.8 | 14 | 13.4 | 15.4 | 11.6 | 14.2 | 14.8 | 14.4 | 15 | 14.4 | 14 | 11.2 | 14.6 | 10.8 | 15 | 5.4 |
| $n=200$ | FSCAD | 100 | 100 | 100 | 0.2 | 0.6 | 1 | 0.6 | 0.4 | 0.8 | 0.8 | 1.2 | 0.2 | 0.2 | 0 | 0.4 | 0.2 | 0.4 | 0.8 | 0.4 | 1 | 3.092 |
| $n=200$ | FMCP | 100 | 100 | 100 | 0.2 | 0.6 | 0.8 | 0.4 | 0.4 | 0.8 | 0.6 | 1 | 0.2 | 0.2 | 0 | 0.4 | 0.2 | 0.4 | 0.8 | 0.2 | 0.8 | 3.08 |
| $n=400$ | FLASSO | 100 | 100 | 100 | 13.8 | 13.4 | 14.8 | 11.2 | 12.6 | 12.8 | 10.8 | 13.4 | 11 | 12.4 | 12.4 | 15.2 | 11.2 | 12.8 | 13.2 | 12 | 13.6 | 5.176 |
| $n=400$ | FSCAD | 100 | 100 | 100 | 0.4 | 0 | 0.2 | 0.4 | 0.4 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0.2 | 0.2 | 0.2 | 0.2 | 0 | 0 | 3.026 |
| $n=400$ | FMCP | 100 | 100 | 100 | 0.4 | 0 | 0.2 | 0.4 | 0.2 | 0 | 0 | 0 | 0 | 0.4 | 0 | 0.2 | 0.2 | 0.2 | 0.2 | 0 | 0 | 3.024 |

: \[tab1\] Comparison of selection percentages ($\%$) of different variables and average model size, without pre-whitening.
| Sample Size | Method | Var1 | Var2 | Var3 | Var4 | Var5 | Var6 | Var7 | Var8 | Var9 | Var10 | Var11 | Var12 | Var13 | Var14 | Var15 | Var16 | Var17 | Var18 | Var19 | Var20 | Avg Model Size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $n=100$ | FLASSO | 100 | 100 | 100 | 6.2 | 7.8 | 7.6 | 6 | 7.8 | 6.4 | 6.6 | 7.4 | 5.8 | 5.4 | 6.4 | 6.4 | 7.4 | 4.8 | 6 | 6 | 6.2 | 4.112 |
| $n=100$ | FSCAD | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| $n=100$ | FMCP | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| $n=200$ | FLASSO | 100 | 100 | 100 | 5.8 | 3 | 6 | 4.4 | 5.2 | 4.8 | 5.6 | 4.2 | 7.6 | 4.4 | 4.8 | 3.2 | 3.4 | 2.8 | 5.6 | 4.4 | 16 | 3.932 |
| $n=200$ | FSCAD | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| $n=200$ | FMCP | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| $n=400$ | FLASSO | 100 | 100 | 100 | 4.2 | 2.6 | 4.6 | 3.8 | 3.6 | 3 | 2.6 | 3.4 | 5 | 5.2 | 5.4 | 3.6 | 3.4 | 3.4 | 4.6 | 4.8 | 29 | 3.922 |
| $n=400$ | FSCAD | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| $n=400$ | FMCP | 100 | 100 | 100 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |

: \[tab12\] Comparison of selection percentages ($\%$) of different variables and average model size, with pre-whitening.
Figure \[fig:fig1\] displays this plot for $n=200$; the plots for $n=100$ and $n=400$ are similar, with more accuracy and less variability for larger sample sizes. The group LASSO estimates (dashed line) have a larger bias, which is again expected, as LASSO is known to have a relatively high bias when the magnitude of the regression coefficient is large. The group SCAD (dotted line) and group MCP (dashed-dotted line) estimates have almost identical accuracy and variability, as seen from Figure \[fig:fig1\]; they are superimposed on each other and on the true curves, represented by solid lines.
To further evaluate the performance of the estimates, we calculate the absolute bias and the MC mean square error of the estimates, averaged across $100$ equally spaced grid points in $[0,100]$, for all the selection methods and the three sample sizes. This is displayed in Table \[tab2\]. We again observe the group SCAD and group MCP methods outperforming the group LASSO method in terms of both absolute bias and mean square error, with the performance of the estimators improving with increasing sample size. We compared the mean square errors of the estimates arising from the pre-whitening procedure with those from the non pre-whitening procedure and found them to be only marginally higher, which is expected due to the uncertainty associated with estimating the covariance matrix. The mean square errors appear to converge to zero for all three methods as the sample size increases, indicating consistency of the estimators. The simulation results illustrate the superior performance of the proposed group SCAD (FSCAD) and group MCP (FMCP) based selection methods in the context of the functional linear concurrent model, and these are the recommended methods of this article.
| Sample Size | Method | Bias ($\hat{\beta}_1$) | MSE ($\hat{\beta}_1$) | Bias ($\hat{\beta}_2$) | MSE ($\hat{\beta}_2$) | Bias ($\hat{\beta}_3$) | MSE ($\hat{\beta}_3$) |
|---|---|---|---|---|---|---|---|
| $n=100$ | FLASSO | 0.083 | 0.033 | 0.092 | 0.048 | 0.112 | 0.191 |
| $n=100$ | FSCAD | 0.011 | 0.025 | 0.015 | 0.038 | 0.022 | 0.165 |
| $n=100$ | FMCP | 0.011 | 0.025 | 0.015 | 0.038 | 0.022 | 0.165 |
| $n=200$ | FLASSO | 0.061 | 0.017 | 0.069 | 0.024 | 0.092 | 0.109 |
| $n=200$ | FSCAD | 0.007 | 0.013 | 0.008 | 0.019 | 0.010 | 0.091 |
| $n=200$ | FMCP | 0.007 | 0.013 | 0.008 | 0.019 | 0.010 | 0.091 |
| $n=400$ | FLASSO | 0.047 | 0.009 | 0.051 | 0.013 | 0.070 | 0.063 |
| $n=400$ | FSCAD | 0.004 | 0.007 | 0.004 | 0.010 | 0.010 | 0.050 |
| $n=400$ | FMCP | 0.004 | 0.007 | 0.004 | 0.010 | 0.010 | 0.050 |

: \[tab2\] Comparison of MC absolute bias and mean square error.
Real Data Applications {#sec:real\_dat}
======================
In this section, we demonstrate the application of our variable selection method to the selection of influential time-varying predictors in two real data studies. For performing variable selection, we use only the FSCAD and FMCP methods along with the initial pre-whitening procedure, as the group LASSO method yields a significantly higher false positive rate, as illustrated by our simulations. We first consider a small dietary calcium absorption dataset (three time-varying covariates) with added pseudo covariates as an illustration of our method. The addition of pseudo covariates is a popular way [@wang2008variable; @wu2007controlling; @miller2002subset] of assessing the false selection rate in real datasets; pseudo variables can therefore be used effectively for tuning variable selection procedures. We show that our proposed method is able to select the relevant predictors and discard the pseudo variables successfully. Finally, we apply our variable selection method to the fisheries dataset to find the relevant socio-economic drivers influencing the fisheries footprint of nations over time.
Study of Dietary Calcium Absorption
-----------------------------------
We consider the study of dietary calcium absorption in [@davis2002statistical]. The subjects in this study are a group of 188 patients. We have data on calcium absorption ($Y(t)$), dietary calcium intake ($Z_1(t)$), BMI ($Z_2(t)$) and BSA (body surface area) ($Z_3(t)$) of these patients, at irregular time points between 35 and 64 years of age. At the beginning of the study the patients were aged between 35 and 45 years, and subsequent observations were taken approximately every 5 years. The number of repeated measurements per patient varies from 1 to 4. Figure \[fig:fig2\] displays the individual curves of the patients’ calcium absorption, calcium intake, BSA and BMI along their ages.
![Calcium absorption and covariate profiles of patients along their ages.[]{data-label="fig:fig2"}](calciumsel.eps){width=".9\linewidth" height=".8\linewidth"}
We are primarily interested in finding out which covariates influence the calcium absorption profile of the patients. [@kim2016general] also investigated the effect of calcium intake on calcium absorption using an additive nonlinear functional concurrent model, and found the effect to be more or less linear when compared to a functional linear concurrent model. We therefore use functional linear concurrent regression to model the dependence of calcium absorption on calcium intake, BSA and BMI. As the data are observed very sparsely and the original covariates might be contaminated with measurement error, we apply FPCA methods (PVE = $95\%$) as discussed in Section \[sec:extension\] and obtain the denoised trajectories $\hat{Z}_{j}(t)$ for $j=1,2,3$. We expect that, among the three covariates, calcium intake will be associated with calcium absorption. To illustrate the selection performance and false positive rate of our variable selection method, we add 15 pseudo covariates simulated from the following functional model. We generate $Z_{ij}(\cdot)\stackrel{iid}{\sim} Z_j(\cdot)$, where $Z_j(t)$ ($j=4,5,\ldots,18$) are given by $Z_j(t)=a_j\hspace{1mm}\sqrt[]{2}sin(\pi (j-3) t/200)+ b_j\hspace{1mm}\sqrt[]{2}cos(\pi (j-3) t/200)$, with $a_j\sim \mathcal{N}(0,2^2)$ and $b_j\sim \mathcal{N}(0,2^2)$. So in total we have 18 covariates, where the first 3 are the denoised original covariates and the rest are simulated predictors. We then apply our variable selection method to $Y(t)$ and $\hat{Z}_{1}(t),\hat{Z}_{2}(t),\hat{Z}_{3}(t),Z_4(t),Z_5(t),\ldots,Z_{18}(t)$. We repeat this a large number of times and record which variables are selected in each iteration. We expect our variable selection method to pick out the truly influential predictors and to ignore the randomly generated functional covariates the majority of the time. To illustrate the benefit of using our proposed variable selection method in a functional regression model for these particular data, we compare its performance to a backward selection method using model selection criteria like BIC or Mallows’ Cp under a linear model approach (with an independent working correlation structure), and to a penalized generalized estimating equations (PGEE) procedure [@wang2012penalized], which was developed to analyze longitudinal data with a large number of covariates. We use the ‘PGEE’ package in R [@inan2017pgee] to implement the penalized generalized estimating equations procedure under two different working correlation structures (independent and AR(1)).
Table \[tab3\] shows the selection percentage of each variable under the different methods. We notice that both the proposed FSCAD and FMCP methods identify calcium intake ($Z_1(t)$) as a significant predictor $100\% $ of the time. All other variables, including all the pseudo covariates, are ignored in $100\%$ of the iterations. On the other hand, BIC, Cp, and PGEE exhibit a high false selection percentage for the pseudo variables. The selection of BSA or BMI appears to be over-selection, as their individual effects were not found to be statistically significant. This demonstrates that when the underlying model is functional, naive variable selection methods based on scalar regression techniques can lead to wrong inference, as they do not account for the functional nature of the data.
| Method | Var 1 (Calcium Intake) | Var 2 (BSA) | Var 3 (BMI) | Max over Vars 4-18 |
|---|---|---|---|---|
| FSCAD | 100 | 0 | 0 | 0 |
| FMCP | 100 | 0 | 0 | 0 |
| BIC (LM) | 100 | 100 | 0 | 9 |
| Cp (LM) | 100 | 100 | 2 | 33 |
| PGEE (ind) | 100 | 0 | 100 | 31 |
| PGEE (AR-1) | 100 | 0 | 100 | 31 |

: \[tab3\] Selection percentages ($\%$) of variables in the calcium absorption study.
As calcium intake is the only significant variable selected by both proposed methods, we want to estimate its effect and obtain a measure of uncertainty for the estimate. For this purpose, we use a subject-level bootstrap on our original data (no pseudo covariates added), performing variable selection within each bootstrap sample, to come up with an estimated regression curve $\hat{\beta}_1(t)$ and a pointwise confidence interval for the effect of calcium intake. This is displayed in Figure \[fig:fig3\]. We notice that as calcium intake increases, calcium absorption should decrease, particularly until age $60$, since $\hat{\beta}_1(t)<0$ up to this age and the confidence interval lies strictly below zero; this might be due to dietary calcium saturation or to interaction with other elements in the body, although the overall magnitude of the effect seems to decrease with age. Above age $60$, the estimate appears to have high variability associated with it, primarily because we have very few observations ($5.62\%$) above this mark (illustrated in Figure \[fig:fig3\]). The uncertainty in estimating $\hat{\#\Sigma}$ using FPCA and the uncertainty due to the bootstrap are reflected in this variability. Hence, some care should be taken in interpreting the estimated regression curve beyond 60 years of age because of such high uncertainty.
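The subject-level bootstrap used here can be sketched generically as follows, reusing the simulated objects from Section \[sec:sim\_stud\] and the basis matrix from the sketch in Section \[sec:method\]. The function `fit_beta1` is a stand-in for the full pipeline (FPCA denoising, pre-whitening and the group SCAD/MCP fit); to keep the sketch self-contained and runnable it is replaced by a plain least-squares fit of a single-covariate concurrent model, so it illustrates only the resampling scheme, not the actual estimator.

```r
# Stand-in fitting routine: in the real analysis this would be the whole selection pipeline
fit_beta1 <- function(Yb, Zb, Bobs) {
  Xb <- do.call(rbind, lapply(seq_len(nrow(Yb)), function(i) Zb[i, ] * Bobs))
  bj <- qr.solve(Xb, as.vector(t(Yb)))
  as.vector(Bobs %*% bj)                    # estimated coefficient curve on the grid
}

# Subject-level bootstrap: resample whole subjects so each curve stays intact
Bboot <- 200
boot_curves <- replicate(Bboot, {
  ids <- sample(n, n, replace = TRUE)
  fit_beta1(Y[ids, ], Z[ids, , 1], Bobs)
})
beta1_hat <- fit_beta1(Y, Z[, , 1], Bobs)
ci <- apply(boot_curves, 1, quantile, probs = c(0.025, 0.975))   # pointwise 95% band
```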
Study of Fisheries Footprint {#ffp}
----------------------------
The production of fisheries is a source of protein as well as of economic livelihoods across the world. Along with the increasing global population, the importance of fish production and consumption has steadily increased through the modern era. Fisheries footprint is defined as the Global Footprint Network’s measure of the total marine area required to sustain consumption levels of aquatic production of fish, crustaceans (e.g., shrimp), shellfish, and seaweed from capture fisheries and aquaculture; the fisheries footprint thus represents the coastal and marine area required to sustain the amount of seafood products a nation consumes. As pointed out by [@longo2016ocean], the interaction between marine and social systems calls for further sociological analysis.
Over the last two decades, social scientists have accomplished much in advancing scholarly knowledge of the social drivers of ecological impact at a macro-scale. Such work is essential, as ecological problems are becoming increasingly interlinked and severe at a global or planetary scale [@steffen2011anthropocene]. For example, economic development, population structuring (e.g., urbanization or age structure), trade relations, and technological change have been shown to affect measures of environmental impact across nations over time [@doi:10.1177/0020715219869976; @jorgenson2010assessing; @york2003footprints]. This body of literature centers on the ecological effects of globalization and modernization under the socio-structural parameters of a capitalist economy. There is still much debate over the impacts of industrial and agricultural modernization on ecosystems. For example, the development and resource economics literature [@world2007] advocates the utilization of innovation and techno-improvements to improve marine system sustainability and food security [@valderrama2010market], while, on the other hand, environmental sociologists demonstrate that such innovation, chiefly aquaculture, does not displace the deleterious ecological impacts of capture fisheries [@longo2019aquaculture]. Nevertheless, despite such progress, fisheries footprint remains an understudied metric, and its drivers are less understood in social research [@clark2018socio; @jorgenson2005unpacking].
The goal of this study, therefore, is to identify the relevant socio-economic drivers, such as levels of economic development, population size, and transformations in food-system dynamics, that influence the fisheries footprint of nations over time, and also to capture their time-varying effects. Data for this study are collected from the World Bank, Fish StatJ of the UN FAO, and the Ecological Footprint Network for the years 1970-2009, across 136 nations. The main dependent variable of interest in this study is fisheries footprint. Figure \[fig:fish1\] displays the fisheries footprint of the nations over the study years on the log scale. The fisheries footprints of three representative nations are plotted using solid, dashed and dotted lines.
![Fisheries footprint of the nations over Year 1970-2009.[]{data-label="fig:fish1"}](fisheries.eps){width=".7\linewidth" height=".7\linewidth"}
To capture the trend over the years, we plot the mean fisheries footprint of the nations along with its pointwise $95\%$ confidence interval. This is displayed in Figure \[fig:fish2\]. We notice an overall upward trend as well as heterogeneity across years.
![Mean fisheries footprint along with their 95% confidence interval. []{data-label="fig:fish2"}](fishfootmean.eps){width=".7\linewidth" height=".7\linewidth"}
There are 20 independent time-varying covariates in the study, broadly covering various sectors: population dynamics (e.g., population density, urban population, total population, working-age population percentage), agriculture (e.g., tractors, agriculture value added), food consumption and other fisheries variables (e.g., meat consumption, aquaculture production in tons), international relations (e.g., food exports as a percentage of merchandise exports, FDI inflow), and the economy (e.g., GDP per capita in constant U.S. dollars, trade as a percentage of GDP). The full list of the variables is given in Table \[tab4\].
(Table \[tab4\]: the 20 time-varying covariates used in the study, grouped by the sectors listed above.)
The predictors here are also time-varying and can be expected to have dynamic effects on fisheries footprint. Panel data methods such as fixed effects or random effects models are generally used for analyzing such data [@torres2007panel; @doi:10.1177/0020715219869976], where the effects of the covariates are taken to be constant. Since we are interested in the dynamic effects of the covariates, the FLCM can be seen as a generalization of this approach with time-varying effects of the predictors. We therefore use the functional linear concurrent regression model (1) discussed in this article to model the dynamic effects of the socio-economic predictors on fisheries footprint. The predictors in their original scale are very large in magnitude and are therefore converted to the log scale; the covariates observed as percentages are used without any conversion. Before applying our variable selection method, all the covariates are preprocessed using FPCA methods (PVE = $95\%$) as discussed in Section \[sec:extension\].
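As a rough illustration of this preprocessing step, the sketch below smooths a single covariate via FPCA; it assumes the `fpca.face()` interface from the refund package and a hypothetical matrix `X_raw` of log-transformed covariate trajectories (nations by years) observed on a common grid.

```r
# Minimal sketch of the FPCA preprocessing (PVE = 95%) applied to each covariate.
# 'X_raw' is a hypothetical nations-by-years matrix for one covariate, on the log scale.
library(refund)

smooth_covariate <- function(X_raw, pve = 0.95) {
  fit <- fpca.face(Y = X_raw, pve = pve)  # retain enough eigenfunctions to reach the PVE
  fit$Yhat                                # smoothed covariate trajectories
}

# 'X_list' is a hypothetical list holding one raw matrix per covariate:
# X_smooth <- lapply(X_list, smooth_covariate)
```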
We use the pre-whitening procedure discussed in Section \[sec:prewhite\] and apply our proposed variable selection method. Out of the 20 covariates, the proposed FSCAD and FMCP methods both identify GDP per capita and urban population as the two significant predictors. Gross domestic product (GDP) is a measure of the market value of all the final goods and services produced in a specific time period, and GDP per capita is a measure of a country’s economic output adjusted for its number of people. As the major economic indicator, GDP per capita is associated with primary aspects of economic growth, consumer behaviour, and trade, and is therefore a key indicator of the fisheries footprint of nations over time. Furthermore, GDP per capita is a common metric in extant social science research to operationalize the extent to which a nation is successfully developing according to the standards of the world capitalist economy [@dietz2013structural]. Figure \[fig:fig5\] (left panel) shows the estimated regression curve for GDP per capita obtained by applying the FSCAD selection method. The estimate from the FMCP method is similar. We observe the net effect of GDP per capita on fisheries footprint to be positive and linear, although the magnitude of the effect has decreased over time.
(Figure \[fig:fig5\]: estimated regression coefficient curves for GDP per capita (left panel) and urban population (right panel).)
The urban population, being the key market, also plays a crucial role in the total seafood consumption of nations and therefore influences fisheries footprint. Change in urban population reflects urbanization, which has important effects on food security and farming [@satterthwaite2010urbanization; @cohen2010food]. Figure \[fig:fig5\] (right panel) shows the estimated regression curve illustrating the effect of urban population on fisheries footprint. Here also we notice the net effect of urban population on fisheries footprint to be positive, although the effect appears to be more or less constant with a very marginal decrease (on the log scale).
Here, it is important to note that fisheries footprint represents the metabolic potential of an ecosystem to reproduce itself ecologically. According to the Food and Agriculture Organization of the United Nations [@FAO2016], about 58 percent of global fish stocks are currently fully exploited, and about 55 percent of ocean territory (conservatively) was subjected to industrial fishing in the past year [@kroodsma2018tracking]. Thus, there is declining metabolic potential for the expansion of capture fisheries, which likely helps to account for why variable effects were stronger in earlier, more ecologically productive decades.
Both these variables are important in the sense that they represent primary indicators of the economy, food consumption, population dynamics, trade, and so on, which directly interact with a nation’s need for seafood and therefore should influence fisheries footprint. It is therefore not surprising that countries with a high GDP per capita and/or a large urban population (e.g., the United States, Australia, and Singapore) also have a high fisheries footprint. In Figure \[fig:fig7\] we display the fisheries footprint, GDP per capita, and urban population profiles of the three representative countries mentioned earlier. We notice that the overall trend in the fisheries footprint profile can be described well by the GDP per capita and urban population profiles, both of which were shown to have a positive effect on fisheries footprint.\
![Profile of fisheries footprint, GDP per capita and urban population of three representative countries.[]{data-label="fig:fig7"}](fisheries3c.eps){width=".6\linewidth" height=".6\linewidth"}
*Remark 3:* We have successfully applied our proposed variable selection method to find the relevant time-varying predictors and their time-varying effects on fisheries footprint. It is very plausible that there are country- or region-specific effects on fisheries footprint, as revealed in the study by [@doi:10.1177/0020715219869976], and one might be interested in estimating these effects. The proposed variable selection method for the FLCM can be extended in its existing form to handle such region-specific effects.
*Remark 4:* We have considered concurrent effects of the predictors, while some of the predictors might have lagged effects on fisheries footprint. For example, in an economic crisis or recession, the predictors could very likely have reverberating impacts on development for a few years. As invested capital takes time to flow through the economy, considering such lagged effects would be interesting. We applied our variable selection method with lagged predictors present along with the original predictors (with lag window = 1, 3). We found that for lag one, the proposed FMCP and FSCAD methods select almost identical models, with the FSCAD method selecting ‘services value growth pct’ as an additional variable. Considering a lag window of three years, the FSCAD method selects the urban population lag instead of urban population, while the FMCP method additionally selects ‘aquaculture production tons’ and ‘services value growth pct lag’ as influential covariates. These results indicate that some of the predictors could have reverberating impacts, and a more general framework like the historical functional regression model [@malfait2003historical] might be more suitable to model past effects of covariates on the response at the current time point.
Discussion {#con}
==========
In this article, we have proposed a variable selection method for functional linear concurrent regression, extending the classically used penalized variable selection methods LASSO, SCAD, and MCP. We have shown that the problem can be addressed as a group LASSO problem and its natural extensions, the group SCAD and group MCP problems. We have used a pre-whitening procedure to take into account the temporal dependence present within functions and, through numerical simulations, have illustrated that our proposed selection method with the group SCAD or group MCP penalty can select the true underlying variables with high accuracy and minuscule false positive and false negative rates, even when the data are observed sparsely, are contaminated with measurement error, and the error process is highly non-stationary. We have illustrated the usefulness of the proposed method by applying it to two real datasets, the dietary calcium absorption study data and the fisheries footprint data, to identify the relevant time-varying covariates. In this article we have used a subject-based resampling bootstrap method to measure the uncertainty of the regression function estimates; the theoretical properties of such a bootstrap are something we would like to explore more deeply in the future.
There are many interesting research directions this work can lead to. In real data, the dynamic effects of the predictors might not always be linear. In the future, we would like to extend our variable selection method to the nonparametric functional concurrent regression model [@maity2017nonparametric], which is a more general and flexible model for capturing complex relationships between the response and the covariates. As mentioned earlier, it would also be of interest to consider the lagged effects of covariates through a more general historical functional regression model [@malfait2003historical].
In developing our method we assumed the covariates to be independent and identically distributed. In many cases this might not be a reasonable assumption. For example, in the fisheries footprint data some countries could be very similar and form clusters; on the other hand, they might not even be independent, given the interplay of economies and other variables among nations. Even if the covariates are not independent over subjects, the variable selection criterion proposed in this article can still be used in practice as a penalized least squares method. The heterogeneity present among the subjects can be addressed using interaction effects of the covariates with regions, which can be clustered based on the level of affluence. This can be done similarly as in [@doi:10.1177/0020715219869976]. Alternatively, one can also use subject-specific functional random effects for the covariates, especially if one is interested in individual-specific trajectories. A functional linear mixed model [@liu2017estimating] might be an appropriate choice in such situations. Extending the proposed variable selection method to such general functional regression models remains an area for future research.
Software {#software .unnumbered}
========
All the methods discussed in this article have been implemented using the ‘grpreg’ package [@breheny2019package] in R. Illustrations of the implementation of our method in R are available with this article on Wiley Online Library and at GitHub (<https://github.com/rahulfrodo/FLCM_Selection>).
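As a rough sketch of how the group-penalized fit can be set up with grpreg, the snippet below assumes that the stacked design matrix `X` (columns holding the basis coefficients of the time-varying covariates), the response vector `y`, and the integer vector `grp` mapping each column to its originating covariate have already been constructed as described in the article; these object names are hypothetical.

```r
# Group SCAD / group MCP selection step (sketch); X, y and grp are assumed to
# have been built from the pre-whitened, basis-expanded functional data.
library(grpreg)

cvfit <- cv.grpreg(X, y, group = grp, penalty = "grSCAD")  # or penalty = "grMCP"
beta  <- coef(cvfit)                     # coefficients at the cross-validated lambda
selected <- unique(grp[beta[-1] != 0])   # covariates whose coefficient group is nonzero
```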
Acknowledgement {#acknowledgement .unnumbered}
===============
We would like to thank the editor, the associate editor and an anonymous referee for their valuable inputs and suggestions which have greatly helped in improving this article.
---
abstract: 'The EDGES Collaboration has reported an anomalously strong 21cm absorption feature corresponding to the era of first star formation, which may indirectly betray the influence of dark matter during this epoch. We demonstrate that, by virtue of the ability to mediate cooling processes whilst in the condensed phase, a small amount of axion dark matter can explain these observations within the context of standard models of axions and axion-like-particles. The EDGES best-fit result favours an axion-like-particle mass in the (10, 450) meV range, which narrows for the QCD axion to (100, 450) meV in the absence of fine-tuning. Future experiments and large scale surveys, particularly the International Axion Observatory (IAXO) and EUCLID, should have the capability to directly test this scenario.'
author:
- 'Nick Houston$\,{}^a$'
- 'Chuang Li$\,{}^{a,b}$'
- 'Tianjun Li$\,{}^{a,b}$'
- 'Qiaoli Yang$\,{}^{c}$'
- 'Xin Zhang$\,{}^{d,e}$'
title: 'Natural explanation for 21cm absorption signals via axion-induced cooling'
---
**Introduction.** After recombination, between the thermal decoupling of baryons and the CMB and the era of first star formation, the Universe entered a prolonged period of cooling known as the dark ages. Intriguingly, this epoch is both largely untested by observations, and in the standard $\Lambda$CDM cosmology, relatively predictable and easily understood. As such, it can serve as a precise probe of physics outside of the $\Lambda$CDM paradigm.
One key observable is related to the absorption of 21cm light at that time, arising from the neutral hydrogen then present filtering the background radiation and thereby imprinting a characteristic spectral distortion on wavelengths close to atomic transitions. This feature redshifts to the 80 MHz range today, and has recently been observed by the Experiment to Detect the Global Epoch of reionisation Signature (EDGES) Collaboration.
Their result is an anomalously strong 21cm absorption feature from $z\in (20,15)$, corresponding to the era of first star formation [@Bowman:2018yin]. The amplitude of this signal is $$T_{21}\simeq 35 \mathrm{mK}\left(1-\frac{T_{\gamma}}{T_s}\right)\sqrt{\frac{1+z}{18}}
\simeq -0.5^{+0.2}_{-0.5}\,\mathrm{K}\,,
\label{T21}$$ where $T_\gamma$ is the CMB temperature, $T_s$ the singlet/triplet spin temperature of the hydrogen gas present at that time, and the uncertainties quoted are at 99% confidence level. Once stellar emission of UV radiation begins at $z\sim 20$ we expect that $T_{\gamma}>>T_s\gtrsim T_{\mathrm{gas}}$, due to the decoupling of the CMB and hydrogen gas at $z \sim 200$, and the coupling of the spin temperature to the kinetic gas temperature. In the standard $\Lambda$CDM scenario $T_{\gamma}|_{z\sim17}\simeq 49$ K and $T_{\mathrm{gas}}|_{z\sim17}\simeq 6.8$ K, so we expect $T_{21}\gtrsim-0.2$ K. The resulting significance of this deviation from the $\Lambda$CDM prediction is estimated to be $3.8\,\sigma$.
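The quoted $\Lambda$CDM expectation is easy to reproduce from the expression above; a quick numerical check in R, using the stated values $T_\gamma\simeq 49$ K and $T_s\simeq T_{\mathrm{gas}}\simeq 6.8$ K at $z\sim 17$:

```r
# Lambda-CDM expectation for T_21 at z ~ 17, using the values quoted in the text.
T21 <- function(Tgamma, Ts, z) 0.035 * (1 - Tgamma / Ts) * sqrt((1 + z) / 18)  # in Kelvin
T21(Tgamma = 49, Ts = 6.8, z = 17)   # ~ -0.22 K, i.e. T_21 >~ -0.2 K as stated
```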
One approach to resolving this discrepancy relies upon interactions with cold dark matter (CDM) to lower the gas temperature. However, as demonstrated in Ref. [@Barkana:2018lgd], the interaction cross section required to achieve this is prohibitive for models of dark matter. Consistency with other experimental and observational constraints ultimately limits models capable of explaining the EDGES observation to being comprised of just $0.3-2$% millicharged dark matter, with masses and millicharges in the $(10, 80)$ MeV and $(10^{-4},10^{-6})$ ranges, respectively [@Munoz:2018pzp; @Berlin:2018sjs; @Barkana:2018qrx]. A number of other approaches have also been explored, including adding additional dark sector interactions, modifying the thermal history, and injecting additional soft photons during that epoch [@Fraser:2018acy; @Costa:2018aoy; @Li:2018kzs; @Hill:2018lfx; @Falkowski:2018qdj]. Several axion-theoretic explanations have also been recently proposed [@Lambiase:2018lhs; @Lawson:2018qkc; @Moroi:2018vci], but we emphasise for clarity that our approach differs in many essential respects from these.
More specifically, in the following we propose a dark-matter theoretic approach, which relies upon the speculated ability of axion dark matter to form a Bose Einstein Condensate (BEC) [@Sikivie:2009qn; @Erken:2011dz]. Whilst such a condensate behaves in many respects as ordinary CDM, a particularly interesting aspect of this phenomenon is the ability of the condensed state to induce transitions between momentum states of coupled particle species and thereby mediate cooling processes. This scenario was originally invoked in Ref. [@Erken:2011vv] to lower the photon temperature in the era of Big Bang Nucleosynthesis (BBN), in order to adjust the baryon-to-photon ratio and thus ease the discrepancy between the observed and predicted primordial ${}^7$Li abundance.
As we will see in the following, by analogously lowering the hydrogen temperature prior to the cosmic dawn this mechanism can explain the EDGES observations in the context of axion and axion-like-particle (ALP) models. The implied parameter range is close to existing experimental limits, and so could be tested at the next generation of axion experiments and via large scale surveys, particularly IAXO and EUCLID, respectively [@Armengaud:2014gea; @Laureijs:2011gra].
**Axion dark matter condensation.** The underlying conditions for BEC formation are that a system comprise a large number of identical bosons, conserved in number, which are sufficiently degenerate and in thermal equilibrium [@Chakrabarty:2017fkd]. As such, the formation of a BEC of CDM axions seems a reasonable possibility.
Nonetheless, there has been some controversy in the literature around this and the value of the resulting correlation length [@Saikawa:2012uk; @Davidson:2013aba; @Davidson:2014hfa; @Guth:2014hsa]. In particular it was ultimately concluded in [@Guth:2014hsa] that although a Bose-Einstein condensate can form, the claim of long-range correlation in the case of attractive interactions is unjustified. It has however been argued more recently that these findings may be overly reliant on the criterion of homogeneity, whilst a BEC can be inhomogeneous and nonetheless correlated over its whole extent, which can be arbitrarily large [@Chakrabarty:2017fkd]. Addressing these points is somewhat beyond the scope of this paper, and so we instead proceed under the assumption that the BEC cooling mechanism functions as advertised in [@Sikivie:2009qn].
It is nonetheless key to note that for generic ALPs with repulsive self-interactions, the thermalisation rate is $$\Gamma_a/H\sim4\pi \lambda n_a m_a^2/H\,,
\label{ALP thermalisation}$$ where $\lambda$, $n_a$ and $m_a$ are respectively the quartic coupling, and the cold axion number density and mass. Since this increases with time, long range order will eventually be established and condensation can be reasonably and uncontroversially expected [@Guth:2014hsa].
For the QCD axion, which provides both a compelling solution to the strong CP problem and a particularly attractive target for beyond the Standard Model physics searches [@Peccei:1977hh; @Weinberg:1977ma; @Wilczek:1977pj; @Kim:1979if; @Shifman:1979if; @Dine:1981rt; @Zhitnitsky:1980tq], all available interactions are however attractive. The thermalisation rate due to gravitational interactions is given by $$\Gamma_a/H\sim4\pi Gn_am_a^2l_a^2/H\,,$$ where $G$ is Newton’s constant, and $l_a$ is the correlation length [@Sikivie:2009qn]. This scales as $t/a$, where $a$ is the scale factor, and so by the logic of [@Chakrabarty:2017fkd] can also be relied upon to ensure long-range order and the condensed phase persists.
Once formed, the large-scale gravitational field of the condensate can reduce the momenta of particle species, with the cooling effects beginning once the characteristic relaxation timescale $\Gamma$ exceeds the Hubble rate, so that $$\Gamma/H\sim4\pi Gm_an_al_a\omega/\Delta pH\gtrsim 1\,,$$ where $\omega$ and $\Delta p$ are the energy and momentum dispersion of the particle species in question.
This phenomenon then offers the possibility of explaining the anomalous EDGES result, with condensed axion dark matter cooling the primordial hydrogen after it decouples from the CMB at $z\sim 200$. This latter point is essential, as if axion cooling begins whilst the CMB and hydrogen remain in thermal equilibrium, the effect on $T_{21}$ will be negligible. Of course the onset of cooling must also be prior to the cosmic dawn, and the effect in total must give the correct EDGES absorption magnitude. As we will see in the following, and perhaps surprisingly, these various requirements can be simultaneously accommodated by an ALP which may or may not also function as the QCD axion. In practice the EDGES observation uniquely selects a small range for $m_a$, which is compatible with present-day axion phenomenology and can conceivably be explored at the next generation of axion experiments.
**Condensate-induced hydrogen cooling.** Using the formulae of the previous section, our starting point is the baryon cooling rate at the time of matter-radiation equality, $$\frac{\Gamma_H}{H}\bigg |_{t_{eq}}\sim\sqrt{\frac{3m_H}{16 T_{eq}}}\frac{\Omega_ah^2}{\Omega_{DM}h^2}\,,
\label{hydrogen cooling rate}$$ where $\Omega_ah^2/\Omega_{DM}h^2$ is the fraction of the cooling-induced ALP density over the dark matter relic density, where we have used the Friedmann equation at this time to identify $3H^2\simeq16\pi G\rho_{DM}$, neglecting the contributions of visible matter and dark energy, and, assuming that we are in the condensed phase, identified $l_a\sim 1/H$. By virtue of the Maxwell-Boltzmann distribution $\Delta p\simeq\sqrt{3m_HT_H}$, and at this temperature we can identify $\omega\sim m_H$.
As $m_H>>T_{eq}$ we evidently need a small $\left(\Omega_a/\Omega_{DM}\right)$ ratio to ensure cooling only begins when $z \in(200, 20)$. To be more precise we can note that since $a\propto t^{2/3}$ during matter domination, $\Gamma_H/H\propto 1/\sqrt{T}$. This then implies that after matter-radiation equality, $$\frac{\Gamma_H}{H}=\frac{\Gamma_H}{H}\bigg |_{t_{eq}}\left(\frac{T_{eq}}{T_H}\right)^{1/2}\,.$$ Since $T_{eq}\sim 0.75$ eV $\simeq 8.7\times 10^3$ K, and we require axion-induced cooling to occur between $T_H^{z=200}\sim 475$ K and $T_H^{z=20}\sim 10$ K, we can first establish that we require $$\Omega_a h^2/\Omega_{DM}h^2\in(0.22,1.5)\times10^{-5}\,.
\label{CDM relic density}$$
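This band follows directly from the two relations above: setting $\Gamma_H/H=1$ at the hydrogen temperature $T_H$ where cooling starts gives $\Omega_a/\Omega_{DM}=\sqrt{16T_H/(3m_H)}$. A short numerical sketch, assuming $m_H\simeq 939$ MeV (the hydrogen mass, which is not quoted explicitly in the text), reproduces the quoted endpoints:

```r
# Fraction Omega_a h^2 / Omega_DM h^2 needed for cooling to switch on at a given
# hydrogen temperature T_H, from Gamma_H/H = sqrt(3 m_H / (16 T_H)) * (Omega_a/Omega_DM) = 1.
kB  <- 8.617e-5            # eV per Kelvin
m_H <- 938.8e6             # hydrogen mass in eV (assumed value, not quoted in the text)
frac <- function(T_H_in_K) sqrt(16 * kB * T_H_in_K / (3 * m_H))
frac(475)   # ~1.5e-5  : cooling starting just after decoupling at z ~ 200
frac(10)    # ~0.22e-5 : cooling starting just before the cosmic dawn at z ~ 20
```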
It is important to note that once the BEC forms, we will have two distinct populations of cold axions: those that are in the condensed state, and a remnant thermal population. Hydrogen can in principle interact with both; however, there exists a key distinction: scattering from the cold thermal axions will simply raise their temperature, whilst scattering from the condensed axions will, given the energies involved, typically liberate them from the BEC into the thermal population. However, in Ref. [@Davidson:2014hfa] the rate at which the BEC occupation number can change by scattering with external particles is calculated, finding that this number-changing process should be vanishingly rare.
Energy conservation then dictates that $$\rho_H\left(T_i\right)\simeq\rho_H\left(T_f\right)+\rho_a\left(T_f\right)\,,
\label{energy conservation}$$ since the energy lost from the hydrogen must be transferred to the thermal axions [^1]. In the case of cold hydrogen gas $\rho_H\simeq n_H(m_H+3T/2)$ to lowest order, where $n_H$ is the relic abundance. Since hydrogen comprises the majority of baryonic matter at this epoch we can use the baryon-to-photon ratio to estimate $n_H\simeq 6\times 10^{-10}\,n_\gamma$, where $n_\gamma=2\zeta(3)T_\gamma^3/\pi^2$ is the photon number density. Inserting a Maxwell-Boltzmann distribution for the thermal axions we have $$\rho_a=\frac{T^4}{2\pi^2}\int_{0}^\infty\frac{\xi^2\sqrt{\xi^2+(m_a/T)^2}}{\exp\left(\sqrt{\xi^2+(m_a/T)^2}\right)- 1}d\xi\,,$$ and we can solve numerically for the cooling ratio $T_i/T_f$.
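As a sanity check on the thermal-axion energy density entering this balance, the dimensionless combination $\rho_a/T^4$ can be evaluated directly with R's `integrate()`; in the massless limit it must reduce to the familiar $\pi^2/30$ for a single bosonic degree of freedom, and it is strongly suppressed once $m_a\gg T$. This is only a sketch of the numerical step described above, not the full solution of the energy budget.

```r
# rho_a / T^4 as a function of m_a / T, from the energy-density integral in the text.
rho_over_T4 <- function(m_over_T) {
  integrand <- function(xi) {
    E <- sqrt(xi^2 + m_over_T^2)
    xi^2 * E / (exp(E) - 1)
  }
  integrate(integrand, lower = 0, upper = Inf)$value / (2 * pi^2)
}
rho_over_T4(1e-4)   # ~0.3290 ~ pi^2/30, the massless-boson limit
rho_over_T4(5)      # Boltzmann-suppressed once m_a is well above the temperature
```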
Assuming for simplicity that the change in $z$ is negligible during the cooling process, we have $$T_H^{z=17}\simeq T_H^{z=200}\left(\frac{z_{c}+1}{200+1}\right)^2\left(\frac{T_f}{T_i}\right)\left(\frac{17+1}{z_{c}+1}\right)^2\,,$$ where $z_{c}$ is the redshift at which cooling begins. Since dependence on this quantity cancels, we find $$T_{21}=35 \,\mathrm{mK}\left(1-\frac{T_i}{T_f}\frac{T_{\gamma}}{T_H}\right)\sqrt{\frac{1+z}{18}}\,,
\label{Cooled T21}$$ where $T_\gamma$ and $T_H$ take their usual $\Lambda$CDM values. In practice additional care is needed since basic redshift relations do not accurately capture the evolution of $T_H$ in this region, so we use RECFAST to compute $T_H$ and $T_\gamma$ [@Seager:1999bc]. However, the resulting dependence in the expression above is nonetheless correct, and so we can use it to find the resulting 21cm absorption feature. This is given in Fig. 1, where we see the EDGES best-fit value favours an ALP with mass $m_a\in\left(10,450\right)$ meV.
(Figure 1: 21cm brightness temperature $T_{21}$ at $z\sim17$ from axion-induced cooling as a function of the ALP mass $m_a$; lines of constant $X^{6/7}$ indicate the corresponding QCD axion values.)
Since in the generic ALP case the relation between $\Omega_ah^2$ and $m_a$ is unfixed, we cannot directly connect them to coupling constraints and thus standard axion phenomenology. However, for the QCD axion the corresponding $f_a$ is given via $$\Omega_a h^2=0.15 X \left(\frac{f_a}{10^{12}\,\mathrm{GeV}}\right)^{7/6}\,,
\label{QCD axion relic density}$$ where for Peccei-Quinn (PQ) symmetry breaking before inflation, $X\sim \sin^2\theta_{mis}/2$, whilst for PQ symmetry breaking after inflation $X\in\left(2,10\right)$ depending on the relative contributions of topological defect decays and vacuum misalignment [@Erken:2011dz]. This yields $f_a\in\left(1.2,6.1\right)\times X^{-6/7}\times 10^7$ GeV, which is in turn related through chiral perturbation theory to $m_a$ via $$m_a\simeq 6\,\mathrm{eV}\left(\frac{10^6\,\mathrm{GeV}}{f_a}\right)\,,
\label{axion mass range}$$ yielding $m_a\in\left(0.1,0.5\right) \times X^{6/7}$ eV.
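These windows can be reproduced numerically from the relic-density relation above; the sketch below assumes the standard value $\Omega_{DM}h^2\simeq 0.12$ (not quoted in the text) and the benchmark $X=1$.

```r
# f_a and m_a windows for the QCD axion implied by the allowed relic fraction,
# assuming Omega_DM h^2 ~ 0.12 (standard value, not quoted in the text) and X = 1.
Omega_DM_h2 <- 0.12
X <- 1
f_a <- function(frac) 1e12 * (frac * Omega_DM_h2 / (0.15 * X))^(6 / 7)  # in GeV
m_a <- function(fa)   6 * 1e6 / fa                                      # in eV
f_a(c(0.22e-5, 1.5e-5))        # ~ (1.2, 6.1) x 10^7 GeV, as quoted
m_a(f_a(c(0.22e-5, 1.5e-5)))   # ~ (0.51, 0.10) eV, i.e. the (0.1, 0.5) x X^(6/7) eV window
```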
Note however that $m_a$ is not freely varied in this case; each value is associated to a specific $\Omega_ah^2$, and thus the specific $z$ and $T_H$ at which cooling begins. Taking care to accommodate this, we find a one-to-one mapping between $m_a$ and $T_{21}$. We also note for clarity that in this mass range we can expect both hot and cold axion dark matter, due, for example, to thermal production and vacuum misalignment respectively.
This being the case, we also represent the QCD axion in Fig. 1 via lines of constant $X^{6/7}$. Since $X\in(2,10)$ for PQ symmetry breaking after inflation, the minimum value for this quantity is realised for pre-inflationary symmetry breaking. In this case we have $X^{6/7}\sim 0.5$ in the absence of fine-tuning, assuming the initial misalignment angle is randomly drawn from a uniform distribution on $[-\pi,\pi]$, giving $\langle\theta_{\mathrm{mis}}^2\rangle=\pi^2/3$.
Varying $X$ we find a preferred natural range of $m_a\in(100,450)$ meV for the QCD axion by virtue of the EDGES best-fit result, where each value gives $T_{21}\simeq -0.5$ K at $z\sim17$ by solution of the cooled $T_{21}$ relation above. Fixing $X=1$ as a benchmark case we find $T_{21}\in( -1.75,-0.21)$ K at $z\sim17$, where we remind the reader that $T_{21}\simeq-0.21$ K is the standard $\Lambda$CDM result, which we reach in the limit of this mechanism being inoperative. Working backwards, the 99% confidence limits on the EDGES measurement then imply in this case the range $m_a\in(120,180)$ meV, with the best fit value corresponding to $m_a\simeq 150$ meV.
**QCD axion constraints.** Since the ALP case does not immediately translate to ordinary axion constraints, we can specialise to the QCD axion to gain some phenomenological insight and delineate the parameter values implied by the EDGES observation in this scenario, along with the various experimental and observational constraints which may apply. In Fig. 2 we reproduce constraints on the axion parameter space in our region of interest from [@Graham:2015ouw] colour coded with the resultant value of $T_{21}$ at $z=17$ for the benchmark case of $X=1$. As can be seen, the EDGES observations can be straightforwardly accommodated within the ordinary QCD axion band. Furthermore much of the resulting preferred parameter space will be covered by the IAXO experiment, allowing the possibility of a direct confirmation of these findings.
![The portion of the axion parameter space relevant for our purposes, reproduced from [@Graham:2015ouw], with the 21cm brightness temperature at $z\sim17$ overlaid from axion-induced cooling processes in the benchmark case of $X=1$. The yellow band denotes QCD axion models with varying electromagnetic/colour anomaly coefficients, whilst the black curves indicate possible sensitivities for the proposed IAXO experiment. The best fit axion mass value preferred by the EDGES observations in this case is 150 meV, whilst beyond $\sim 300$ meV the ordinary $\Lambda$CDM result for $T_{21}$ prevails. Variations in $X$ enlarge the preferred range to $m_a\in(100,450)$ meV in the absence of fine-tuning, effectively shifting the colour-coded region within the QCD axion band.](cooling_figure_journalv5.pdf){width="\linewidth"}
We can also note from Ref. [@Archidiacono:2013cha] that although our mass range of interest evades hot dark matter constraints at present, future large scale surveys such as the EUCLID mission are in conjunction with Planck CMB data projected to probe $m_a\gtrsim150$ meV for the QCD axion at high significance, allowing this scenario to be definitively tested in the near future [@Archidiacono:2015mda].
It is of course important to note that the full possible mass range favoured by these results is for DFSZ type axions strongly disfavoured due to stellar energy-loss arguments [@Zhitnitsky:1980tq; @Dine:1981rt; @Graham:2015ouw]. As such we are implicitly considering KSVZ type models [@Shifman:1979if; @Kim:1979if], although the ratio $E/N$ of the electromagnetic to colour anomaly is however allowed to vary within the usual range to accommodate variant models of the QCD axion [@DiLuzio:2016sbl; @DiLuzio:2017pfr].
Strictly speaking even then there is tension between our preferred mass range and the observed burst duration of SN1987A, which favours $f_a\gtrsim 4\times 10^8$ GeV for standard QCD axions [@Raffelt:1996wa]. This arises from an inference of the supernova cooling timescale, and thus energy loss to axions, from the time interval between the first and last neutrino observation. However, given that these limits are derived from a single observation, and not to mention our limited knowledge available about axion emission in this extreme environment (the resulting exclusion being ‘fraught with uncertainties’ in the words of Ref. [@Raffelt:1996wa]), we can follow the example of others (e.g. Ref. [@Archidiacono:2013cha]) and exercise a measure of caution in applying this constraint.
So-called ‘astrophobic’ axion models are also of note here, where $\mathcal{O}$(100) meV axion masses are allowed at the cost of introducing some flavour-violating couplings [@DiLuzio:2017ogq; @Hindmarsh:1997ac]. Furthermore, we can also recapitulate at this point that ultimately the axion cooling mechanism leveraged here is gravitationally mediated, and so could be achieved with no Standard Model couplings whatsoever, and thus no issues in this regard. By extension, the use of the QCD axion is in this context non-essential, and our primary results for generic axion-like-particles can still apply nonetheless.
**Discussion and conclusions.** The EDGES collaboration have recently presented an anomalously strong 21cm absorption profile, which may be the result of dark matter interactions around the time of the cosmic dawn. Despite a flurry of interest there is as of yet no clear consensus on the provenance of this effect, and indeed whether it is a signature of dark matter at all, however these results nonetheless provide an exciting first window into a previously unexplored epoch.
We have in this letter explored the potential of condensed-phase axion dark matter, previously employed in the service of photon cooling, to explain these anomalous observations via reduction of the hydrogen spin temperature during this epoch. By fixing the axion CDM relic density so that cooling begins within the appropriate epoch, we find that the resulting cooling effects are both capable of explaining the EDGES observations and compatible with present day axion phenomenology.
More specifically, we find that the EDGES best-fit result of $T_{21}\simeq-0.5$ K and the requirement that hydrogen cooling occur when $z\in (200,20)$ are consistent with the cooling induced by an axion-like-particle of mass $m_a\in\left(10,450\right)$ meV. Specialising further to the QCD axion case, we find a preferred range $m_a\in(100,450)$ meV, in the absence of fine-tuning.
Furthermore, future experiments and large scale surveys such as IAXO and EUCLID should have the capability to directly probe the relevant parameter region and thereby test this scenario. Indeed, as a dedicated direct-detection experiment sensitive in this mass range IAXO offers particular promise with regards to this scenario. That said, as the underlying cooling mechanism relies only upon gravitational couplings it is not limited strictly to the context of models of the QCD axion, and so can also be arranged to occur in the primary scenario of axion-like-particles with no Standard Model couplings whatsoever, which could then evade these bounds.
We also note Ref. [@Sikivie:2018tml], which appeared shortly after this letter appeared online and deals with exactly the same scenario of axion BEC-induced cooling and 21cm cosmology. A key point raised therein, which we have not previously addressed, is that this mechanism may have a damping effect on Baryon Acoustic Oscillations (BAO). Although it is ultimately argued there that the net effect on BAO should be consistent with observations, it may be worthwhile to more deeply explore the consequences of this scenario for this and other cosmological observables.
This research was supported by a CAS President’s International Fellowship, the Projects 11475238, 11647601, 11875062 and 11875148 supported by the National Natural Science Foundation of China, and by the Key Research Program of Frontier Science, CAS. We also thank our anonymous referees for their very helpful comments and suggestions.
[99]{}
J. D. Bowman, A. E. E. Rogers, R. A. Monsalve, T. J. Mozdzen and N. Mahesh, Nature [**555**]{} (2018) no.7694, 67. R. Barkana, Nature [**555**]{} (2018) no.7694, 71 \[arXiv:1803.06698 \[astro-ph.CO\]\]. J. B. Munoz and A. Loeb, arXiv:1802.10094 \[astro-ph.CO\]. A. Berlin, D. Hooper, G. Krnjaic and S. D. McDermott, arXiv:1803.02804 \[hep-ph\]. R. Barkana, N. J. Outmezguine, D. Redigolo and T. Volansky, arXiv:1803.03091 \[hep-ph\]. S. Fraser [*et al.*]{}, arXiv:1803.03245 \[hep-ph\].
A. A. Costa, R. C. G. Landim, B. Wang and E. Abdalla, arXiv:1803.06944 \[astro-ph.CO\]. C. Li and Y. F. Cai, arXiv:1804.04816 \[astro-ph.CO\]. J. C. Hill and E. J. Baxter, arXiv:1803.07555 \[astro-ph.CO\].
A. Falkowski and K. Petraki, arXiv:1803.10096 \[hep-ph\]. G. Lambiase and S. Mohanty, arXiv:1804.05318 \[hep-ph\]. K. Lawson and A. R. Zhitnitsky, arXiv:1804.07340 \[hep-ph\]. T. Moroi, K. Nakayama and Y. Tang, arXiv:1804.10378 \[hep-ph\]. P. Sikivie and Q. Yang, Phys. Rev. Lett. [**103**]{} (2009) 111301 \[arXiv:0901.1106 \[hep-ph\]\]. O. Erken, P. Sikivie, H. Tam and Q. Yang, Phys. Rev. D [**85**]{} (2012) 063520 \[arXiv:1111.1157 \[astro-ph.CO\]\]. O. Erken, P. Sikivie, H. Tam and Q. Yang, Phys. Rev. Lett. [**108**]{} (2012) 061304 \[arXiv:1104.4507 \[astro-ph.CO\]\]. E. Armengaud [*et al.*]{}, JINST [**9**]{} (2014) T05002 \[arXiv:1401.3233 \[physics.ins-det\]\]. R. Laureijs [*et al.*]{} \[EUCLID Collaboration\], arXiv:1110.3193 \[astro-ph.CO\]. S. S. Chakrabarty, S. Enomoto, Y. Han, P. Sikivie and E. M. Todarello, Phys. Rev. D [**97**]{} (2018) no.4, 043531 \[arXiv:1710.02195 \[hep-ph\]\].
K. Saikawa and M. Yamaguchi, Phys. Rev. D [**87**]{} (2013) no.8, 085010 \[arXiv:1210.7080 \[hep-ph\]\]. S. Davidson and M. Elmer, JCAP [**1312**]{} (2013) 034 \[arXiv:1307.8024 \[hep-ph\]\]. S. Davidson, Astropart. Phys. [**65**]{} (2015) 101 \[arXiv:1405.1139 \[hep-ph\]\]. A. H. Guth, M. P. Hertzberg and C. Prescod-Weinstein, Phys. Rev. D [**92**]{} (2015) no.10, 103513 \[arXiv:1412.5930 \[astro-ph.CO\]\].
R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. [**38**]{} (1977) 1440. doi:10.1103/PhysRevLett.38.1440 S. Weinberg, Phys. Rev. Lett. [**40**]{} (1978) 223. doi:10.1103/PhysRevLett.40.223 F. Wilczek, Phys. Rev. Lett. [**40**]{} (1978) 279. doi:10.1103/PhysRevLett.40.279 M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B [**166**]{} (1980) 493. doi:10.1016/0550-3213(80)90209-6 J. E. Kim, Phys. Rev. Lett. [**43**]{} (1979) 103. doi:10.1103/PhysRevLett.43.103 A. R. Zhitnitsky, Sov. J. Nucl. Phys. [**31**]{} (1980) 260 \[Yad. Fiz. [**31**]{} (1980) 497\]. M. Dine, W. Fischler and M. Srednicki, Phys. Lett. [**104B**]{} (1981) 199. doi:10.1016/0370-2693(81)90590-6
S. Seager, D. D. Sasselov and D. Scott, Astrophys. J. [**523**]{} (1999) L1 \[astro-ph/9909275\]. P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner and K. A. van Bibber, Ann. Rev. Nucl. Part. Sci. [**65**]{} (2015) 485 \[arXiv:1602.00039 \[hep-ex\]\]. M. Archidiacono, S. Hannestad, A. Mirizzi, G. Raffelt and Y. Y. Y. Wong, JCAP [**1310**]{} (2013) 020 \[arXiv:1307.0615 \[astro-ph.CO\]\]. L. Di Luzio, F. Mescia and E. Nardi, Phys. Rev. Lett. [**118**]{} (2017) no.3, 031801 \[arXiv:1610.07593 \[hep-ph\]\]. L. Di Luzio, F. Mescia and E. Nardi, Phys. Rev. D [**96**]{} (2017) no.7, 075003 \[arXiv:1705.05370 \[hep-ph\]\]. G. G. Raffelt, Chicago, USA: Univ. Pr. (1996) 664 p L. Di Luzio, F. Mescia, E. Nardi, P. Panci and R. Ziegler, Phys. Rev. Lett. [**120**]{} (2018) no.26, 261803 \[arXiv:1712.04940 \[hep-ph\]\]. M. Hindmarsh and P. Moulatsiotis, Phys. Rev. D [**56**]{} (1997) 8074 \[hep-ph/9708281\]. M. Archidiacono, T. Basse, J. Hamann, S. Hannestad, G. Raffelt and Y. Y. Y. Wong, JCAP [**1505**]{} (2015) no.05, 050 \[arXiv:1502.03325 \[astro-ph.CO\]\]. P. Sikivie, arXiv:1805.05577 \[astro-ph.CO\].
[^1]: For our parameter range of interest, photon cooling can be neglected. As $n_H$ remains constant during axion cooling, to explain the observed absorption we require $\rho_H(T_i)/\rho_H(T_f)\sim \mathcal{O}(1)$, implying $\rho_H$ and $\rho _a$ must be of the same order. Since $\rho_\gamma>> \rho_H$, from the known baryon-photon ratio, the resulting $\rho _a$ is too small to affect $\rho_\gamma$ and hence $T_{\gamma}$. Thermal axion heating by photons is also strongly suppressed, as there is no large $\sqrt{m_H/T}$ factor in the corresponding equivalent of the hydrogen cooling rate above. We also note that the principal constraint in the axion-induced cooling ${}^7$Li scenario was a large resulting $N_{\mathrm{eff}}$ at recombination. For us this is not a cause for concern, as we are operating at a much later epoch and the thermal axions excited will be non-relativistic.
---
abstract: 'Let $\theta_3(\tau)=1+2\sum_{\nu=1}^{\infty} q^{\nu^2}$ with $q=e^{i\pi \tau}$ denote the Thetanullwert of the Jacobi theta function $$\theta(z|\tau) \,=\,\sum_{\nu=-\infty}^{\infty} e^{\pi i\nu^2\tau + 2\pi i\nu z} \,.$$ Moreover, let $\theta_2(\tau)=2\sum_{\nu=0}^{\infty} q^{{(\nu+1/2)}^2}$ and $\theta_4(\tau)=1+2\sum_{\nu=1}^{\infty} {(-1)}^{\nu}q^{\nu^2}$. For algebraic numbers $q$ with $0<|q|<1$ and for any $j\in \{ 2,3,4\}$ we prove the algebraic independence over ${\Q}$ of the numbers $\theta_j(n\tau)$ and $\theta_j(\tau)$ for all odd integers $n\geq 3$. Assuming the same conditions on $q$ and $\tau$ as above, we obtain sufficient conditions by use of a criterion involving resultants in order to decide on the algebraic independence over ${\Q}$ of $\theta_j(2m\tau)$ and $\theta_j(\tau)$ $(j=2,3,4)$ and of $\theta_3(4m\tau)$ and $\theta_3(\tau)$ with odd positive integers $m$. In particular, we prove the algebraic independence of $\theta_3(n\tau)$ and $\theta_3(\tau)$ for even integers $n$ with $2\leq n\leq 22$. The paper continues the work of the first-mentioned author, who already proved the algebraic independence of $\theta_3(2^m\tau)$ and $\theta_3(\tau)$ for $m=1,2,\dots$.'
author:
- '***Carsten Elsner[^1], Yohei Tachiya[^2]***'
title: 'Algebraic independence results for values of theta-constants, II'
---
Dedicated to Professor Iekata Shiokawa on the occasion of his 75th birthday
**Keywords:** Algebraic independence, Theta-constants, Nesterenko’s theorem, Independence criterion, Modular equations\
**AMS Subject Classification:** 11J85, 11J91, 11F27.\
Introduction and statement of results {#Sec1}
=====================================
Let $\tau $ be a complex variable in the complex upper half-plane $\Im (\tau) >0 $. The series $$\theta_2(\tau) = 2\sum_{\nu=0}^{\infty} q^{{(\nu+1/2)}^2} \,,\qquad \theta_3(\tau) = 1+2\sum_{\nu=1}^{\infty}
q^{\nu^2} \,,\qquad \theta_4(\tau) = 1+2\sum_{\nu=1}^{\infty} {(-1)}^\nu q^{\nu^2}$$ are known as theta-constants or Thetanullwerte, where $q=e^{\pi i\tau} $. In particular, $\theta_3(\tau)$ is the Thetanullwert of the Jacobi theta function $\theta(z|\tau) \,=\,\sum_{\nu=-\infty}^{\infty} e^{\pi i\nu^2\tau + 2\pi i\nu z}$. For an extensive discussion of theta-functions and theta-constants we refer the reader to [@Hurwitz], [@Lang], and [@Lawden]. Recently, the first-named author has proven the following result.\
[**Theorem A.**]{} [@Elsner3 Theorem 1.1] [*Let $q$ be an algebraic number with $q=e^{\pi i \tau} $ and $\Im (\tau) >0 $. Let $m\geq1$ be an integer. Then, the two numbers $\theta_3(2^m\tau)$ and $\theta_3(\tau)$ are algebraically independent over ${\Q}$ as well as the two numbers $\theta_3(n\tau)$ and $\theta_3(\tau)$ for $n=3,5,6,7,9,10,11,12$*]{}.\
The first basic tool in proving such algebraic independence results are integer polynomials in two variables $X,Y$, which vanish at certain points $X=X_0$ and $Y=Y_0$ given by values of rational functions of theta-constants. For instance, for $n=2^m$ $(m\geq3)$ we consider the polynomial $$P_n(X,Y) \,=\, {\big( nX - {(1+Y)}^2 \big)}^{2^{m-2}} + YU_n\big( X,{(1+Y)}^2,Y \big)$$ where $U_n(t_1,t_2,t_3)\in {\Q}[t_1,t_2,t_3]$ is a polynomial satisfying $$U_n\Big( \,\frac{1}{n},1,0\,\Big) \,=\, -2^{2^{m-1}-1} \,.$$ Moreover, we have $$P_n\Big( \,\frac{\theta_3^2(n\tau)}{\theta_3^2(\tau)},\,\frac{\theta_4(\tau)}{\theta_3(\tau)} \,\Big) \,=\, 0\,.$$ The second tool is an algebraic independence criterion (see Lemma \[Lem1\] below), from which the algebraic independence of $\theta_3(n\tau)$ and $\theta_3(\tau)$ over ${\Q}$ can be obtained by proving that the resultant $$\mbox{Res}_X\,\Big( \,P_n(X,Y),\,\frac{\partial}{\partial Y}P_n(X,Y)\,\Big) \,\in \, {\Z}[Y]$$ does not vanish identically (see [@Elsner3 Theorem 4.1]). This can be seen as follows. We have the identities $$\begin{aligned}
P_n(X,0) &=& {\big( 2^mX-1 \big)}^{2^{m-2}} \,,\\
\frac{\partial P_n}{\partial Y} (X,0) &=& -2^{m-1}{\big( 2^mX-1 \big)}^{2^{m-2}-1} + U_n(X,1,0) \,,\end{aligned}$$ from which on the one hand we deduce that $P_n(X,0)$ has a $2^{m-2}$-fold root at $X_1=1/2^m$. On the other hand one has $$\frac{\partial P_n}{\partial Y} (X_1,0) \,=\, U_n\Big( \,\frac{1}{n},1,0\,\Big) \,=\, -2^{2^{m-1}-1} \,\not= \, 0\,.$$ Hence, for $Y=0$ the polynomials $P_n(X,Y)$ and $\partial P_n(X,Y)/\partial Y$ have no common root. Therefore, the above resultant with respect to $X$ does not vanish identically, which gives the desired result. We state the polynomials $P_{2^m}(X,Y)$ for $m=1,2,3,4$ explicitly. The algorithm to compute these polynomials recursively is given by Lemma 3.1 in [@Elsner3]. $$\begin{aligned}
P_2 &=& 2X-Y^2-1 \,,\\
P_4 &=& 4X-{(1+Y)}^2 \,,\\
P_8 &=& 64X^2 - 16{(1+Y)}^2X + {(1-Y)}^4 \,,\\
P_{16} &=& 65536X^4 - 16384{(1+Y)}^2X^3 + 512(3Y^4+4Y^3+18Y^2+4Y+3)X^2 \\
&& -\,64{(1+Y)}^2(Y^4+28Y^3+6Y^2+28Y+1)X + {(1-Y)}^8 \,.\end{aligned}$$
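The structure of the argument above can be spot-checked numerically for $n=8$ (i.e. $m=3$) using the explicit $P_8$ just listed: $P_8(X,0)=(8X-1)^2$ has its double root at $X_1=1/8$, while $\partial P_8/\partial Y$ does not vanish there. A short R sketch:

```r
# Spot check for n = 8 (m = 3) with P_8(X,Y) = 64 X^2 - 16 (1+Y)^2 X + (1-Y)^4.
P8    <- function(X, Y) 64 * X^2 - 16 * (1 + Y)^2 * X + (1 - Y)^4
dP8dY <- function(X, Y) -32 * (1 + Y) * X - 4 * (1 - Y)^3   # partial derivative with respect to Y
P8(1/8, 0)      # 0 : X = 1/8 is the double root of P_8(X,0) = (8X - 1)^2
dP8dY(1/8, 0)   # -8 = -2^(2^{m-1}-1), nonzero, so the two polynomials share no root at Y = 0
```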
Let $n\geq 3 $ denote an odd positive integer. Set $$h_j(\tau) := n^2\frac{\theta_j^4(n\tau)}{\theta_j^4(\tau)} \quad (j=2,3,4)\,,\quad
\lambda = \lambda(\tau) := \frac{\theta_2^4(\tau)}{\theta_3^4(\tau)} \,,
\quad \psi (n) := n\prod_{p|n} \Big( 1+\frac{1}{p} \Big) \,,$$ where $p$ runs through all primes dividing $n$. Yu.V. Nesterenko [@Nes3] proved the existence of integer polynomials $P_n(X,Y)\in
{\Z}[X,Y]$ such that $P_n\big(h_j(\tau),R_j(\lambda(\tau))\big)=0$ holds for $j=2,3,4$, odd integers $n\geq 3$, and a suitable rational function $R_2,R_3$, or $R_4$, respectively.\
[**Theorem B.**]{} [@Nes3 Theorem 1, Corollary 3] [*For any odd integer $n\geq 3 $ there exists a polynomial $P_n(X,Y) \in {\Z}[X,Y] $, $\deg_X P_n = \psi (n) $, such that*]{} $$\begin{aligned}
P_n\Big( \,h_2(\tau),16\frac{\lambda(\tau)-1}{\lambda(\tau)} \,\Big) &\,=\,& 0\,, \label{eq:101}\\
P_n\big( h_3(\tau),16\lambda(\tau) \big) &\,=\,& 0 \,,
\label{eq:102}\\
P_n\Big( \,h_4(\tau),16\frac{\lambda(\tau)}{\lambda(\tau)-1} \,\Big) &\,=\,& 0 \,.\label{eq:103}\end{aligned}$$ The polynomials $P_3,P_5,P_7,P_9$, and $P_{11}$ are listed in the appendix of [@Elsner3]. $P_3$ and $P_5$ are already given in [@Nes3], $P_7,P_9$, and $P_{11}$ are the results of computer-assisted computations of the first-named author.\
In this paper we focus on the problem to decide on the algebraic independence of $\theta_j(n\tau)$ and $\theta_j(\tau)$ $(j=2,3,4)$ over ${\Q}$ for algebraic numbers $q$, where $n\geq 3$ is an odd integer or $n=2m,4m$ with odd positive integers $m$. The above Theorem B will be used in Section \[Sec2\].\
In the following theorems, the number $q=e^{\pi i \tau} $ is an algebraic number with $\Im (\tau) >0 $.
\[Thm1\] Let $n\geq3$ be an odd integer. Then, the numbers in each of the sets $$\{\theta_2(n\tau),\theta_2(\tau)\},\quad\{\theta_3(n\tau),\theta_3(\tau)\},\quad
\{\theta_4(n\tau),\theta_4(\tau)\}$$ are algebraically independent over $\Bbb{Q}$.
In order to prove this theorem we shall first show that for an algebraic number $q$ with $0<|q|<1$ the numbers $h_2(\tau)$, $h_3(\tau)$, and $h_4(\tau)$ are transcendental (Lemma \[Lem7\]). This interim result already shows that the two numbers $\theta_j(n\tau)$ and $\theta_j(\tau)$ $(j=2,3,4)$ are homogeneously algebraically independent over ${\Q}$.
On the other hand, it has not been shown that Theorem \[Thm1\] holds for arbitrary even integers $n$. However we can prove it for small even integers $n$ by checking the non-vanishing of a Jacobian determinant (Lemma \[Lem1\]), which is hard to decide when the involved polynomials are not given explicitly.
For $n=2,4,6$, the numbers $\theta_2(n\tau)$ and $\theta_2(\tau)$ are algebraically independent over ${\Q}$. \[Thm2\]
For $n=2,4,6,8,10$, the numbers $\theta_4(n\tau)$ and $\theta_4(\tau)$ are algebraically independent over ${\Q}$. \[Thm4\]
Let $2\leq n\leq 22$ be an even integer. Then, the numbers $\theta_3(n\tau)$ and $\theta_3(\tau)$ are algebraically independent over ${\Q}$. \[Thm3\]
Auxiliary results {#Sec2}
=================
In this section, we prepare some lemmas to prove theorems.
[@Elsner2 Lemma 4] Let $q$ be an algebraic number with $q=e^{\pi i \tau} $ and $\Im (\tau) >0 $. Then, any two numbers in the set $$\big\{ \theta_2(\tau),\theta_3(\tau),\theta_4(\tau) \big\}$$ are algebraically independent over ${\Q} $. \[Lem2\]
This result can be derived from Yu.V. Nesterenko’s theorem [@Nes1] on the algebraic independence of the values $P(q),Q(q),R(q)$ of the Ramanujan functions $P,Q,R$ at a nonvanishing algebraic point $q$. It should be noticed that the three numbers $\theta_2(\tau)$, $\theta_3(\tau)$, and $\theta_4(\tau)$ are algebraically dependent over $\Bbb{Q}$, since the identity $$\label{eq:405}
\theta_3^4(\tau)=\theta_2^4(\tau)+\theta_4^4(\tau)$$ holds for any $\tau\in\Bbb{C}$ with $\Im (\tau) >0 $.\
In what follows, we distinguish two cases based on the parity of $n$.
The case where $n$ is odd
-------------------------
The following subsequent Lemmas \[Lem6\], \[Lem7\], and \[Lem71\] are needed to prove Theorem \[Thm1\]. Let $n\geq3$ be a fixed odd integer and $\tau \in {\C}$ with $\Im (\tau)>0$. From Theorem B we know that there exists a nonzero polynomial $P_n(X,Y)\in {\Z}[X,Y]$ with $\deg_XP_n=\psi(n)$ such that $P_n(X_0,Y_0)$ vanishes for $$X_0 \,:=\, h_3(\tau) \,=\, n^2\frac{\theta_3^4(n\tau)}{\theta_3^4(\tau)}\,,\qquad Y_0 \,:=\, 16\lambda(\tau) \,=\, 16\frac{\theta_2^4(\tau)}{\theta_3^4(\tau)} \,.$$ Let $N:=\deg_Y P_n(X,Y)$. The polynomials $Q_j(X)\in {\Z}[X]$ $(j=0,1,\dots ,N)$ are given by $$P_n(X,Y) \,=\, \sum_{j=0}^N Q_j(X)Y^j \,.
\label{2.10}$$
For any complex number $\alpha$, there exists a subscript $j=j(\alpha)$ such that $Q_j(\alpha)\not= 0$. \[Lem6\]
[*Proof of Lemma \[Lem6\].*]{} Suppose on the contrary that there exists an $\alpha\in\Bbb{C}$ such that $Q_j(\alpha)=0$ for all $j=0,1,\dots ,N$. It follows from (\[2.10\]) that there exists a polynomial $R_n(X,Y)\in {\C}[X,Y]$ satisfying $$P_n(X,Y) \,=\, (X-\alpha) R_n(X,Y) \,.
\label{H120}$$ In accordance with formula (5) in [@Nes3] we define for any $\tau \in {\C}$ with $\Im (\tau)>0$ the numbers $$x_{\nu}(\tau) \,:=\, u^2\frac{\theta_3^4\big( \frac{u\tau +2v}{w}\big)}{\theta_3^4(\tau)} \qquad \big( \nu =1,2,\dots,\psi(n)\big) \,,$$ where the nonnegative integers $u,v,w$ are given by [@Nes3 Lemma 1]. These integers depend on $n$ and $\nu$ and satisfy the three conditions $$(u,v,w) \,=\, 1 \,,\quad uw \,=\, n \,,\quad 0\leq v<w \,.
\label{H130}$$ Substituting $Y=16\lambda (\tau)$ into (\[H120\]), we have by [@Nes3 Corollary 1] $$\prod_{\nu =1}^{\psi(n)} \big( X - x_{\nu}(\tau)\big) \,=\, P_n\big( X,16\lambda(\tau)\big) \,=\, (X-\alpha)R_n\big( X,16\lambda(\tau) \big) \,.
\label{H140}$$ Next, by substituting $X=\alpha$ into (\[H140\]), we obtain $$\prod_{\nu =1}^{\psi(n)} \big( \alpha - x_{\nu}(\tau)\big) \,=\, 0
\label{H150}$$ for any $\tau \in {\C}$ with $\Im (\tau)>0$. Let $a_k:=nki$ $(k=1,2,\dots)$ be a sequence of complex numbers on the imaginary axis. Then we get by (\[H150\]) $$\prod_{\nu =1}^{\psi(n)} \big( \alpha - x_{\nu}(a_k)\big) \,=\, 0 \qquad (k=1,2,\dots) \,.$$ Hence, by the pigeonhole principle, there is a subscript $\nu_0$ with $1\leq \nu_0 \leq \psi(n)$ such that $$u^2\frac{\theta_3^4\big( \frac{ub_k +2v}{w}\big)}{\theta_3^4(b_k)} \,=\, x_{\nu_0}(b_k) \,=\, \alpha
\label{H160}$$ holds for some subsequence ${\{ b_k\}}_{k\geq 1}$ of ${\{ a_k\}}_{k\geq 1}$. The integers $u,v,w$ in (\[H160\]) depend on $n,\nu_0$ and satisfy the condions in (\[H130\]). Since $\sqrt[4]{\alpha/u^2}$ takes four complex values, we see in the same way by applying the pigeonhole principle that there exists a complex number $\beta$ with $\beta^4 =\alpha/u^2$ such that $$\frac{\theta_3\big( \frac{uc_k +2v}{w}\big)}{\theta_3(c_k)} \,=\, \beta
\label{H170}$$ holds for some subsequence ${\{ c_k\}}_{k\geq 1}$ of ${\{ b_k\}}_{k\geq 1}$. Let $c_k := nt_k i$, where $t_k$ $(k\geq1)$ are positive integers with $t_1<t_2<\dots$. Then, by substituting $\tau =c_k$ into $q=e^{\pi i\tau}=e^{-\pi t_kn}$, we obtain $$\theta_3(c_k) \,=\, 1+2\sum_{m=1}^{\infty} {\big( e^{-\pi t_k} \big)}^{nm^2} \,.
\label{H180}$$ For $\tau = \frac{uc_k +2v}{w}$ it follows with $\xi_w :=e^{\frac{2\pi i}{w}}$ that $$q \,=\, e^{\pi i\frac{unt_ki+2v}{w}} \,=\, {\big( e^{\frac{2\pi i}{w}}\big)}^v \cdot e^{-\pi u^2t_k} \,=\, \xi_w^v \cdot e^{-\pi u^2t_k} \,,$$ which yields $$\theta_3\Big( \,\frac{uc_k+2v}{w}\,\Big) \,=\, 1+2\sum_{m=1}^{\infty} {\xi}_w^{vm^2} {\big( e^{-\pi t_k} \big)}^{u^2m^2} \,.
\label{H190}$$ Next, two complex functions $f(z)$ and $g(z)$ are defined by their power series, namely $$\begin{aligned}
f(z) &:=& 1+2\sum_{m=1}^{\infty} z^{nm^2} \,,\\
g(z) &:=& 1+2\sum_{m=1}^{\infty} {\xi}_w^{vm^2}z^{u^2m^2} \,.\end{aligned}$$ Setting $\eta_k:=e^{-\pi t_k}$ $(k=1,2,\dots)$, it follows from (\[H170\]) to (\[H190\]) that $$\beta f(\eta_k) \,=\, g(\eta_k) \qquad (k=1,2,\dots) \,.$$ The sequence ${\{ \eta_k\}}_{k\geq 1}$ tends to zero, such that we may apply the identity theorem for power series. We obtain $$\beta f(z) \,=\, g(z) \qquad (|z|<1) \,.$$ Comparing the first and second nonvanishing coefficients of the series, it follows that $\beta =1$, and, by applying $uw=n$ in (\[H130\]), $u^2=n$, $u=w=\sqrt{n}>1$, $\xi_w^v=1$. Since $\xi_w \not=1$, we conclude from $\xi_w^v=1$ and $0\leq v<w$ in (\[H130\]) that $v=0$. Finally, we deduce that $(u,v,w)=(u,0,u)=u>1$, which contradicts the arithmetic condition $(u,v,w)=1$ in (\[H130\]). This completes the proof of Lemma \[Lem6\]. $\Box$
Let $q$ be an algebraic number with $q=e^{\pi i \tau} $ and $\Im (\tau) >0 $. Then the three numbers $h_2(\tau)$, $h_3(\tau)$, and $h_4(\tau)$ are transcendental. \[Lem7\]
[*Proof of Lemma \[Lem7\].*]{} We suppose on the contrary that $h_3(\tau)$ is an algebraic number. Then it follows from (\[2.10\]) and from Lemma \[Lem6\] that $$F(Y) \,:=\, P_n\big( h_3(\tau),Y \big) \,=\, \sum_{j=0}^N Q_j\big( h_3(\tau) \big) Y^j$$ is a nonzero polynomial with algebraic coefficients. Hence, the identity (\[eq:102\]) yields $F\big( 16\lambda(\tau) \big)=0$, thus showing that $\lambda(\tau)=\frac{\theta_2^4(\tau)}{\theta_3^4(\tau)}$ is an algebraic number. But, by Lemma \[Lem2\], the numbers $\theta_2(\tau)$ and $\theta_3(\tau)$ are algebraically independent over ${\Q}$, a contradiction. Thus, $h_3(\tau)$ is transcendental. Similarly, the transcendence of $h_2(\tau)$ and $h_4(\tau)$ follows from the identities (\[eq:101\]) and (\[eq:103\]). $\Box$
\[Lem71\] Let $\alpha_1,\alpha_2\in\Bbb{C}$ be algebraically independent over $\Bbb{Q}$ and let $\beta_1,\beta_2\in\Bbb{C}$ $(\beta_2\neq0)$ such that $\beta_1/\beta_2$ is transcendental. Suppose that there exist nonzero polynomials $P(X,Y),Q(X,Y)\in\Bbb{Q}[X,Y]$ such that $$\label{eq:10}
P(\alpha_1/\alpha_2,\beta_1/\beta_2)=0.$$ and $$\label{eq:20}
Q(\beta_1,\beta_2)=\alpha_2.$$ Then $\beta_1$ and $\beta_2$ are algebraically independent over $\Bbb{Q}$.
[*Proof of Lemma \[Lem71\].*]{} Define the fields $F:=\mathbb{Q}(\beta_1,\beta_2)$ and $E:=F(\alpha_1,\alpha_2)$. We first prove that the field extension $E/F$ is algebraic. Since $\alpha_2\in F$ by (\[eq:20\]), we only have to show that $\alpha_1$ is algebraic over $F$. Let $$P(X,Y):=\sum_{j=0}^{\ell}R_j(Y)X^j,\qquad R_j(Y)\in\mathbb{Q}[Y],\qquad R_\ell(Y)\not\equiv0,$$ and define $$f(X):=\sum_{j=0}^{\ell}R_j(\beta_1/\beta_2)(X/\alpha_2)^j\in F[X],$$ where $f(X)$ is a nonzero polynomial, since $R_\ell(\beta_1/\beta_2)\neq0$ follows from the transcendence of $\beta_1/\beta_2$. By (\[eq:10\]), we have $f(\alpha_1)=P(\alpha_1/\alpha_2,\beta_1/\beta_2)=0$, which implies that $\alpha_1$ is algebraic over $F$.
Thus we get $$\begin{array}{ll}
{\rm trans.}\deg F/\mathbb{Q}={\rm trans.}\deg E/F+
{\rm trans.}\deg F/{\mathbb{Q}}=
{\rm trans.}\deg E/{\mathbb{Q}}\geq2,
\end{array}$$ where we used the algebraic independence hypothesis on $\alpha_1$ and $\alpha_2$. On the other hand, ${\rm trans.}\deg F/\mathbb{Q}\leq2$ is trivial. Therefore we obtain $${\rm trans.}\deg F/\mathbb{Q}=2,$$ which gives the desired result.
The case where $n$ is even
--------------------------
We need the expressions of $\theta_j(2^\ell\tau)$ $(j=2,3,4,\ell=1,2,3)$ in terms of $\theta_j:=\theta_j(\tau)$ $(j=2,3,4)$, which will be used to prove Theorems \[Thm2\], \[Thm4\], and \[Thm3\]. Recall that the following identities hold for any $\tau\in\mathbb{C}$ with $\Im(\tau)>0$: $$\begin{aligned}
2\theta_2^2(2\tau) &=&\theta_3^2 - \theta_4^2
\,,
\label{Formula1} \\
2\theta_3^2(2\tau) &=& \theta_3^2 + \theta_4^2
\,,
\label{Formula2}\\
\theta_4^2(2\tau) &=& \theta_3
\theta_4
\,,
\label{Formula3}\end{aligned}$$ and $$\begin{aligned}
2\theta_2(4\tau) &=& \theta_3 - \theta_4\,,
\label{Formula4} \\
2\theta_3(4\tau) &=& \theta_3+ \theta_4 \,,
\label{Formula5}\\
2\theta_4^4(4\tau) &=&{\big( \theta_3^2+ \theta_4^2\big)}
\theta_3\theta_4\,.
\label{Formula6}\end{aligned}$$ The most important tool to transfer the algebraic independence of a set of $m$ numbers to another set of $m$ numbers, which all satisfy a system of algebraic identities, is given by the following lemma. We call it an [*algebraic independence criterion*]{} (AIC).
[@Elsner Lemma 3.1] Let $x_1,\dots,x_m\in{\C}$ be algebraically independent over ${\Q} $ and let $y_1,\dots,y_m\in{\C} $ satisfy the system of equations $$f_j(x_1,\dots,x_m,y_1,\dots,y_m) \,=\, 0 \qquad (1\leq j\leq m) \,,$$ where $f_j(t_1,\dots,t_m,u_1,\dots,u_m)\in{\Q}[t_1,\dots,t_m,u_1,\dots,u_m] $ $(1\leq j\leq m)$. Assume that $$\det \left( \frac{\partial f_j}{\partial t_i}(x_1,\dots,x_m,y_1,\dots,y_m) \right) \,\not= \,0 \,.$$ Then the numbers $y_1,\dots,y_m$ are algebraically independent over ${\Q} $. \[Lem1\]
We shall apply the AIC to the sets $\{ x_1,x_2\}$ with $x_1,x_2\in\Bbb{Z}[\theta_2,\theta_3,\theta_4]$.
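As an aside, the doubling and quadrupling formulas recalled at the start of this subsection can be verified numerically to high precision from the defining $q$-series; a short R sketch at the sample point $\tau=i$ (so $q=e^{-\pi}$, and $\tau\mapsto 2\tau$ corresponds to $q\mapsto q^2$):

```r
# Numerical spot check of the doubling/quadrupling identities at tau = i, q = exp(-pi).
theta2 <- function(q) 2 * sum(q^((0:50 + 0.5)^2))
theta3 <- function(q) 1 + 2 * sum(q^((1:50)^2))
theta4 <- function(q) 1 + 2 * sum((-1)^(1:50) * q^((1:50)^2))
q <- exp(-pi)
c(2 * theta2(q^2)^2 - (theta3(q)^2 - theta4(q)^2),
  2 * theta3(q^2)^2 - (theta3(q)^2 + theta4(q)^2),
      theta4(q^2)^2 -  theta3(q)   * theta4(q),
  2 * theta2(q^4)   - (theta3(q)   - theta4(q)),
  2 * theta3(q^4)   - (theta3(q)   + theta4(q)),
  2 * theta4(q^4)^4 - (theta3(q)^2 + theta4(q)^2) * theta3(q) * theta4(q))
# all six residuals vanish to machine precision
```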
### The case $n=2m$ with odd integer $m$
In this subsection, we put $n=2m$ with an odd integer $m>1$. In Lemmas \[Lem3.1\], \[Lem4\], and \[Lem3.2\] below, we give sufficient conditions for the numbers in each of the sets $\{\theta_j(n\tau),\theta_j(\tau)\}$ $(j=2,3,4)$ to be algebraically independent over ${\Q}$. Replacing $\tau$ by $2\tau$ in (\[eq:101\]), (\[eq:102\]), and (\[eq:103\]), we have $$P_m(X_0,Y_0)=0$$ for $$\begin{aligned}
X_0=h_2(2\tau)=\displaystyle m^2\frac{\theta_2^4(n\tau)}{\theta_2^4(2\tau)}
&\quad \mbox{and}\quad &
Y_0=
16\frac{\lambda(2\tau)-1}{\lambda(2\tau)}=
16\frac{\theta_2^4(2\tau)-\theta_3^4(2\tau)}{\theta_2^4(2\tau)},
\label{eq:160218} \\
X_0=h_3(2\tau)=m^2\frac{\theta_3^4(n\tau)}{\theta_3^4(2\tau)}
&\mbox{and}
&
Y_0=
16\lambda(2\tau)=16\frac{\theta_2^4(2\tau)}{\theta_3^4(2\tau)},\label{eq:160219}\\
X_0=h_4(2\tau)=\, m^2\frac{\theta_4^4(n\tau)}{\theta_4^4(2\tau)}
&\mbox{and}
&
Y_0=16\frac{\lambda(2\tau)}{\lambda(2\tau)-1}=16\frac{\theta_2^4(2\tau)}{\theta_2^4(2\tau)-\theta_3^4(2\tau)},\label{eq:160220}\end{aligned}$$ respectively.\
Let $q$ be an algebraic number with $q=e^{\pi i \tau} $ and $\Im (\tau) >0 $. The total degree of $P_m(X,Y)$ is denoted by $M$.
If the polynomial $$\mbox{\rm Res\,}_X \left( P_m(X,Y),\,X\frac{\partial}{\partial X} P_m\big( X,Y\big) +
2\big( Y-16 \big)\frac{\partial}{\partial Y} P_m\big( X,Y\big)
\right)$$ does not vanish identically, then the numbers $\theta_2(n\tau)$ and $\theta_2(\tau)$ are algebraically independent over ${\Q}$. \[Lem3.1\]
[*Proof of Lemma \[Lem3.1\].*]{} Let $$\begin{array}{lcllcl}
x_1 &:=& (\theta_3^4-\theta_4^4)^2\,,\quad & x_2&:=& (\theta_3^2+\theta_4^2)^2 \,,\\
y_1 &:=& 4m^2\theta_2^4(n\tau)\,,\quad & y_2&:=& \theta_2^4 \,.
\end{array}$$ Then the numbers $x_1$ and $x_2$ are algebraically independent over ${\Q}$. Indeed, the numbers $\theta_3$ and $\theta_4$ are roots of the polynomial $$T^8-\frac{1}{2}\left(\frac{x_1}{x_2}+x_2\right)T^4+\frac{1}{16}
\left(\frac{x_1}{x_2}-x_2\right)^2,$$ so that the field $E:=\mathbb{Q}(\theta_3,\theta_4)$ is an algebraic extension of $F:=\mathbb{Q}(x_1,x_2)$, and hence by Lemma \[Lem2\] $$\begin{array}{ll}
{\rm trans.}\deg F/\mathbb{Q}={\rm trans.}\deg E/F+
{\rm trans.}\deg F/{\mathbb{Q}}=
{\rm trans.}\deg E/{\mathbb{Q}}=2.
\end{array}$$ By the identities (\[Formula1\]) and (\[Formula2\]), the numbers $X_0$ and $Y_0$ in (\[eq:160218\]) are expressed as $$X_0 \,=\, \frac{x_2y_1}{{x_1}} \qquad \mbox{and} \qquad
Y_0 \,=\, 16\frac{x_1-x_2^2}{{x_1}}
\,.$$ Define $$\begin{aligned}
g_1(t_1,t_2,u_1,u_2)&:=&\frac{t_2u_1}{{t_1}},\nonumber \\\nonumber \\
g_2(t_1,t_2,u_1,u_2)&:=&16\frac{t_1-t_2^2}{{t_1}},\nonumber \\\nonumber \\
f_1(t_1,t_2,u_1,u_2) &:=&t_1^{M} P_m
(g_1,g_2) \label{H400} \\\nonumber \\
f_2(t_1,t_2,u_1,u_2) &:=& u_2^2 - t_1 \nonumber \,.\end{aligned}$$ Since $g_1(x_1,x_2,y_1,y_2)=X_0$ and $g_2(x_1,x_2,y_1,y_2)=Y_0$, $$f_1(x_1,x_2,y_1,y_2)=x_1^{M}P_m(X_0,Y_0)=0$$ and by the identity (2.1) $$f_2(x_1,x_2,y_1,y_2)=y_2^2-x_1=0.$$ Using the algebraic independence criterion we have to show the nonvanishing of $$\Delta \,:=\, \det \left( \begin{array}{cc}
\displaystyle \frac{\partial f_1}{\partial t_1} & \displaystyle \frac{\partial f_1}{\partial t_2} \\ \\
\displaystyle \frac{\partial f_2}{\partial t_1} & \displaystyle \frac{\partial f_2}{\partial t_2}
\end{array} \right) \,=\, \frac{\partial f_1}{\partial t_2}$$ at $(x_1,x_2,y_1,y_2)$; namely by (\[H400\]) $$\begin{aligned}
\frac{\partial f_1}{\partial t_2} \big( x_1,x_2,y_1,y_2 \big)&=& x_1^{M} \frac{\partial P_m}{\partial t_2}(X_0,Y_0)\neq0.\end{aligned}$$ Applying the chain rule, we obtain $$\begin{aligned}
\frac{\partial P_m}{\partial t_2}(X_0,Y_0)&=&\frac{\partial P_m}{\partial X}(X_0,Y_0)\cdot\frac{\partial g_1}{\partial t_2}(x_1,x_2,y_1,y_2)+
\frac{\partial P_m}{\partial Y}(X_0,Y_0)\cdot\frac{\partial g_2}{\partial t_2}(x_1,x_2,y_1,y_2)\\\\
&=& \frac{1}{x_2}\Big( \,X_0\frac{\partial P_m}{\partial X}(X_0,Y_0) + 2\big( Y_0-16 \,\big)
\frac{\partial P_m}{\partial Y}(X_0,Y_0) \,\Big) \,.\end{aligned}$$ Therefore, in order to prove the lemma by the algebraic independence criterion, it suffices to show that $$X_0\frac{\partial P_m}{\partial X}(X_0,Y_0) + 2\big( Y_0 -16 \,\big) \frac{\partial P_m}{\partial Y}(X_0,Y_0) \,\not= \, 0\,.
\label{H500}$$
By the hypothesis of Lemma \[Lem3.1\] the polynomial $$R(Y):=\mbox{Res\,}_X \left( P_m(X,Y),\,X\frac{\partial}{\partial X} P_m\big( X,Y\big) +
2\big( Y-16 \big)\frac{\partial}{\partial Y} P_m\big( X,Y\big)
\right)\in{\mathbb{Z}}[Y]$$ does not vanish identically. For fixed $Y=Y_0:=16(x_1-x_2^2)/x_1$ we have $R(Y_0)\in {\Q}(x_1,x_2)$, so that the algebraic independence of $x_1,x_2$ proves $R(Y_0)\not= 0$. In particular, $P_m(X,Y_0)$ and $$X\frac{\partial}{\partial X} P_m\big( X,Y_0\big) +
2\big( Y_0-16 \big)\frac{\partial}{\partial Y} P_m\big( X,Y_0\big)$$ (which both are polynomials in $X$) have no common root. Since $P_m(X,Y_0)$ vanishes for $X=X_0:=x_2y_1/x_1$, we obtain (\[H500\]). The proof of Lemma \[Lem3.1\] is completed. $\Box$
If the polynomial $$\mbox{\rm Res\,}_X \left( P_m(X^2,Y^2),\,X^2\frac{\partial}{\partial X} P_m\big( X^2,Y^2\big) + \big( Y^2+4Y \big)\frac{\partial}{\partial Y} P_m\big( X^2,Y^2\big)
\right)$$ does not vanish identically, then the numbers $\theta_3(n\tau)$ and $\theta_3(\tau)$ are algebraically independent over ${\Q}$. \[Lem4\]
[*Proof of Lemma \[Lem4\].*]{} Let $$\begin{array}{lcllcl}
x_1 &:=& 2\theta_3^2\,,\quad & x_2 &:=& \theta_3^2+\theta_4^2 \,,\\
y_1 &:=& 2m\theta_3^2(n\tau)\,,\quad & y_2 &:=&\, \theta_3^2 \,.
\end{array}$$ Then the numbers $x_1,x_2$ are algebraically independent over ${\Q}$ and we see by (\[Formula1\]) and (\[Formula2\]) that the numbers $X_0$ and $Y_0$ in (\[eq:160219\]) are given by $$X_0 \,=\, \frac{y_1^2}{x_2^2} \qquad \mbox{and} \qquad
Y_0=(\sqrt{Y_0})^2 \,:=\, \left(\frac{4{(x_1 - x_2)}}{x_2}\right)^2 \,.
\label{H300}$$ Define $$\begin{aligned}
f_1(t_1,t_2,u_1,u_2) &:=&{t_2}^{2M} P_m
\left(\frac{u_1^2}{t_2^2},\frac{16{(t_1 - t_2)}^2}{t_2^2}\right) \nonumber \\\nonumber \\
f_2(t_1,t_2,u_1,u_2) &:=& 2u_2 - t_1. \nonumber \\\nonumber \,\end{aligned}$$ Similarly to the proof of Lemma \[Lem3.1\], applying the algebraic independence criterion, we have to show that $$\frac{\partial P_m}{\partial t_2}(X_0,Y_0)
=-\frac{2}{x_2}\Big( \,X_0\frac{\partial P_m}{\partial X}(X_0,Y_0) + \big( Y_0 + 4\sqrt{Y_0} \,\big)
\frac{\partial P_m}{\partial Y}(X_0,Y_0) \,\Big)\neq0.
\label{eq:160221}$$ By the hypothesis of Lemma \[Lem4\] the polynomial $$R(Y) \,:=\, \mbox{Res\,}_X \left( P_m(X^2,Y^2),\,X^2\frac{\partial P_m}{\partial X}\big( X^2,Y^2\big) + \big( Y^2+4Y \big)\frac{\partial P_m}{\partial Y}
\big( X^2,Y^2\big) \right) \,\in \, {\Z}[Y]$$ does not vanish identically. Since the numbers $x_1$ and $x_2$ are algebraically independent over ${\Q}$, we have $R(Y_1)\not= 0$ for $Y_1:=4(x_1-x_2)/x_2$, and hence the polynomials $P_m(X^2,Y_1^2)$ and $$X^2\frac{\partial P_m}{\partial X}\big( X^2,Y_1^2\big) + \big( Y_1^2+4Y_1 \big)\frac{\partial P_m}{\partial Y} \big( X^2,Y_1^2\big)$$ have no common root. Noting that $P_m(X_1^2,Y_1^2)=0$ holds for $X_1:=y_1/x_2$, we obtain $$X_1^2\frac{\partial P_m}{\partial X}\big( X_1^2,Y_1^2\big) + \big( Y_1^2+4Y_1 \big)\frac{\partial P_m}{\partial Y} \big( X_1^2,Y_1^2\big) \,\not=\, 0\,.$$ Finally, using $X_0=X_1^2$ and $Y_0=Y_1^2$ by (\[H300\]), we obtain (\[eq:160221\]), which completes the proof of Lemma \[Lem4\]. $\Box$
If the polynomial $$\mbox{\rm Res\,}_X
\left( P_m(X,Y),\,
X^2\left(
\frac{\partial P_m}{\partial X}(X,Y)\right)^2-
Y\big( Y -16 \,\big)
\left(\frac{\partial P_m}{\partial Y}(X,Y)\right)^2
\right)$$ does not vanish identically, then the numbers $\theta_4(n\tau)$ and $\theta_4(\tau)$ are algebraically independent over ${\Q}$. \[Lem3.2\]
[*Proof of Lemma \[Lem3.2\].*]{} By (\[Formula1\]), (\[Formula2\]), and (\[Formula3\]), the numbers $X_0$ and $Y_0$ in (\[eq:160220\]) are expressed as $$X_0 \,=\, \frac{y_1}{{x_2}y_2} \qquad \mbox{and} \qquad
Y_0 \,=\, -4\frac{(x_2-y_2)^2}{{x_2}y_2}$$ with $$Y_0(Y_0-16)=(\sqrt{Y_0(Y_0-16)})^2:=
\left(
\frac{4(x_2^2-y_2^2)}{{x_2}y_2}
\right)^2
\,,$$ where $$\begin{array}{lcllcl}
x_1 &:=&\theta_4^2 \,,\quad & x_2&:=&\theta_3^2 \,,\\
y_1 &:=& m^2\theta_4^4(n\tau)\,,\quad & y_2&:=& \theta_4^2\,.
\end{array}$$ Define $$\begin{aligned}
f_1(t_1,t_2,u_1,u_2) &:=&(t_2u_2)^{M} P_m
\left(\frac{u_1}{{t_2}u_2},-4\frac{(t_2-u_2)^2}{{t_2}u_2}\right) \nonumber \\\nonumber \\
f_2(t_1,t_2,u_1,u_2) &:=& u_2 - t_1 \nonumber \,.\end{aligned}$$ Then, similarly to the proofs of the previous lemmas, we have only to prove $$\frac{\partial P_m}{\partial t_2}(X_0,Y_0)
=-\frac{1}{x_2}\Big( \,X_0\frac{\partial P_m}{\partial X}(X_0,Y_0) -
\sqrt{Y_0\big( Y_0-16 \,\big)}
\frac{\partial P_m}{\partial Y}(X_0,Y_0) \,\Big)\neq0 \,,$$ which follows immediately from the hypothesis of Lemma \[Lem3.2\]. $\Box$
### The case $n=4m$ with odd integer $m$
Let $n=4m$, where $m>1$ is an odd integer. Let $q$ be an algebraic number with $q=e^{\pi i\tau}$ and $\Im (\tau)>0$. If the polynomial $$\mbox{Res\,}_X \left( P_m(X^4,Y^4),\,X^4\frac{\partial}{\partial X} P_m\big( X^4,Y^4\big) + \big( Y^4+2Y^3 \big)\frac{\partial}{\partial Y} P_m\big( X^4,Y^4\big)
\right)$$ does not vanish identically, then the numbers $\theta_3(n\tau)$ and $\theta_3(\tau)$ are algebraically independent over ${\Q}$. \[Lem5\]
[*Proof of Lemma \[Lem5\].*]{} Let $$\begin{array}{lcllcl}
x_1 &:=&2\theta_3 \,,\quad & x_2&:=&\theta_3+\theta_4 \,,\\
y_1 &:=& 16m^2\theta_3^4(n\tau)\,,\quad & y_2&:=& \theta_3 \,.
\end{array}$$ Then, by the identities (\[Formula4\]) and (\[Formula5\]), the polynomial $P_m(X,Y)$ vanishes at $$X_0 \,=\, \frac{y_1}{x_2^4} \qquad \mbox{and} \qquad
Y_0=(\sqrt[4]{Y_0})^4 \,:=\, \left(\frac{2{(x_1 - x_2)}}{x_2}\right)^4\quad \mbox{with}\quad
\sqrt[4]{Y_0^3}:=(\sqrt[4]{Y_0})^3.$$ Again we have $2y_2-x_1=0$, and $x_1,x_2$ are algebraically independent over ${\Q}$ for any algebraic number $q=e^{\pi i\tau}$ with $\Im (\tau)>0$. We introduce the polynomials $$\begin{aligned}
f_1(t_1,t_2,u_1,u_2) &:=&t_2^{4M} P_m\Big( \,\frac{u_1}{t_2^4},
\frac{16{(t_1-t_2)}^4}{t_2^4}\,\Big) \,,
\label{H100} \\
f_2(t_1,t_2,u_1,u_2) &:=& 2u_2 - t_1 \nonumber \,.\end{aligned}$$ Using the algebraic independence criterion, we have to show that $$\frac{\partial f_1}{\partial t_2} \big( x_1,x_2,y_1,y_2 \big)=-4x_2^{4M-1} \Big( \,X_0\frac{\partial P_m}{\partial X}(X_0,Y_0) + \big( Y_0 + 2\sqrt[4]{Y_0^3} \,\big)
\frac{\partial P_m}{\partial Y}(X_0,Y_0) \,\Big)\neq0 \,,$$ which follows from the hypothesis of Lemma \[Lem5\]. $\Box$
Proof of Theorems {#Sec4}
=================
[*Proof of Theorem \[Thm1\]*]{}. We consider the case of $\theta_3(\tau)$. Let $$\begin{array}{ll}
\alpha_1:=16\theta_2^4,&\quad \alpha_2:=\theta_3^4,\\\\
\beta_1:=n^2\theta_3^4(n\tau),&\quad \beta_2:=\theta_3^4,
\end{array}$$ where, by Lemmas \[Lem2\] and \[Lem7\], the numbers $\alpha_1$ and $\alpha_2$ are algebraically independent over $\Bbb{Q}$ and the number $\beta_1/\beta_2=h_3(\tau)$ is transcendental. Define $P(X,Y):=P_n(Y,X)$ and $Q(X,Y):=Y$. By (\[eq:102\]) $$P(\alpha_1/\alpha_2,\beta_1/\beta_2)=P_n(h_3(\tau),16\lambda(\tau))=0$$ and $$Q(\beta_1,\beta_2)=\beta_2=\alpha_2.$$ Hence, applying Lemma \[Lem71\], we obtain the algebraic independence over $\mathbb{Q}$ of the numbers $\beta_1$ and $\beta_2$. This implies that the numbers $\theta_3(n\tau)$ and $\theta_3(\tau)$ are algebraically independent over $\mathbb{Q}$.\
The same holds for the sets $\{\theta_2(n\tau),\theta_2(\tau)\}$ and $\{\theta_4(n\tau),\theta_4(\tau)\}$. In these cases, we use the identities (\[eq:101\]), (\[eq:103\]) and Lemma \[Lem71\] with $$\begin{array}{ll}
\alpha_1:=16(\theta_2^4-\theta_3^4),&\quad \alpha_2:=\theta_2^4,\\\\
\beta_1:=n^2\theta_2^4(n\tau),&\quad \beta_2:=\theta_2^4,\\\\
P(X,Y):=P_n(Y,X),&\quad Q(X,Y):=Y,
\end{array}$$ and $$\begin{array}{ll}
\alpha_1:=16\theta_2^4,&\quad \alpha_2:=\theta_2^4-\theta_3^4,\\\\
\beta_1:=n^2\theta_4^4(n\tau),&\quad \beta_2:=\theta_4^4,\\\\
P(X,Y):=P_n(Y,X),&\quad Q(X,Y)=-Y,
\end{array}$$ respectively. In the latter case, we note that the equality $Q(\beta_1,\beta_2)=\alpha_2$ holds from the identity (\[eq:405\]). Thus, the proof of Theorem \[Thm1\] is completed.\
[*Proof of Theorem \[Thm2\].*]{} We first consider the case $n=2$. Let $F:=\mathbb{Q}(x_1,x_2)$, where $x_1:=2\theta_2^2(2\tau)$ and $x_2:=\theta_2^4$. Then by the identity (\[Formula1\]) together with the relation $\theta_2^4=\theta_3^4-\theta_4^4$, we have $F\subset E:=\mathbb{Q}(\theta_3,\theta_4)$ and $$2x_1\theta_3^2-x_1^2-x_2=2x_1\theta_4^2+x_1^2-x_2=0.$$ This implies that the field extension $E/F$ is algebraic, so that $${\rm trans}\deg F/\mathbb{Q}=
{\rm trans}\deg E/F+ {\rm trans}\deg F/\mathbb{Q}
={\rm trans}\deg E/\mathbb{Q}=2,$$ which implies that the numbers $\theta_2(2\tau)$ and $\theta_2(\tau)$ are algebraically independent over $\mathbb{Q}$. For $n=4$, putting $F:=\mathbb{Q}(2\theta_2(4\tau),\theta_2^4)$ and using (\[Formula4\]), we can proceed with the same argument as above. For $x_1:=2\theta_2(4\tau)$ and $x_2:=\theta_2^4$ we use the identities $$\theta_3^4-{\big( x_1-\theta_3 \big)}^4 - x_2 \,=\, \theta_4^4-{\big( x_1+\theta_4 \big)}^4 + x_2 \,=\, 0\,.$$ In the case of $n=6$, we use Lemma \[Lem3.1\]. From [@Nes3] we know that $$P_3(X,Y) \,=\, 9-\big( Y^2-16Y+28\big) X+30X^2-12X^3+X^4 \,.$$ Hence, we have $$\begin{aligned}
&& \mbox{\rm Res\,}_X \left( P_3(X,Y),\,X\frac{\partial}{\partial X} P_3\big( X,Y\big) +
2\big( Y-16 \big)\frac{\partial}{\partial Y} P_3\big( X,Y\big) \right) \\
&=& 9Y\big( 5Y-512\big) {\big( Y-16\big)}^2{\big( Y-8\big)}^4
\,\not\equiv \,0 \,.\end{aligned}$$ Lemma \[Lem3.1\] gives the desired result for $n=6$.\
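The resultant displayed above for $n=6$ can be reproduced with a computer algebra system. The sketch below (assuming SymPy is available) transcribes $P_3(X,Y)$ from the formula above and prints the factored resultant of Lemma \[Lem3.1\]; the text reports the value $9Y(5Y-512)(Y-16)^2(Y-8)^4$.

```python
# Reproducing the resultant used for n = 6 in the proof of Theorem 2 (SymPy assumed).
from sympy import symbols, resultant, diff, factor

X, Y = symbols('X Y')
P3 = 9 - (Y**2 - 16*Y + 28)*X + 30*X**2 - 12*X**3 + X**4
A = X*diff(P3, X) + 2*(Y - 16)*diff(P3, Y)
print(factor(resultant(P3, A, X)))   # compare with 9*Y*(5*Y-512)*(Y-16)**2*(Y-8)**4
```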
[*Proof of Theorem \[Thm4\].*]{} Similarly to the proof of Theorem \[Thm2\], we can deduce the conclusion for the cases $n=2,4,8$ by using the identities (\[Formula3\]), (\[Formula6\]), and $$32\theta_4^8(8\tau)=
\left(\theta_3+\theta_4\right)^4
\left(\theta_3^2+\theta_4^2\right)\theta_3\theta_4,$$ which follows from (\[Formula2\]), (\[Formula3\]), and (\[Formula6\]). In the cases of $n=6,10$, we use Lemma \[Lem3.2\]. For $n=6$ we compute the resultant from the lemma explicitly. $$\begin{aligned}
&& \mbox{\rm Res\,}_X \left( P_3(X,Y),\,X^2\left( \frac{\partial P_3}{\partial X}(X,Y)\right)^2-Y\big( Y -16 \,\big)
\left(\frac{\partial P_3}{\partial Y}(X,Y)\right)^2 \right) \\
&=& -81Y^3\big( 375Y^2-6000Y+262144\big) {\big( Y-16\big)}^3{\big( Y-8\big)}^8 \,\not\equiv \,0 \,.\end{aligned}$$ Next, let $n=10$. In [@Nes3] the polynomial $P_5(X,Y)$ is given as well. $$\begin{aligned}
P_5(X,Y) &\,=\,& 25 - (126 - 832Y + 308Y^2 - 32Y^3 + Y^4)X + (255 + 1920Y - 120Y^2)X^2 \\
&& +\,(-260 + 320Y - 20Y^2)X^3 + 135X^4 - 30X^5 + X^6 \,.\end{aligned}$$ Hence, by setting $$T_{10}(Y) \,:=\, \mbox{\rm Res\,}_X \left( P_5(X,Y),\,X^2\left( \frac{\partial P_5}{\partial X}(X,Y)\right)^2-Y\big( Y -16 \,\big)
\left(\frac{\partial P_5}{\partial Y}(X,Y)\right)^2 \right) \,,$$ we obtain $T_{10}(1) \equiv 1 \pmod 2$, so that $T_{10}(Y)$ does not vanish identically. This completes the proof of Theorem \[Thm4\].\
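The parity check $T_{10}(1)\equiv 1 \pmod 2$ can likewise be reproduced symbolically. In the sketch below (SymPy assumed) the substitution $Y=1$ is made before taking the resultant, which is legitimate here because the leading $X$-coefficients of both arguments (namely $1$ and $36$) do not involve $Y$; this only illustrates the computation and is not part of the proof.

```python
# Parity check for T_10(1) in the proof of Theorem 4 (SymPy assumed).
from sympy import symbols, resultant, diff

X, Y = symbols('X Y')
P5 = (25 - (126 - 832*Y + 308*Y**2 - 32*Y**3 + Y**4)*X
      + (255 + 1920*Y - 120*Y**2)*X**2
      + (-260 + 320*Y - 20*Y**2)*X**3
      + 135*X**4 - 30*X**5 + X**6)
# Resultant from Lemma 3.2, specialized at Y = 1 (leading X-coefficients are constants).
Q = (X**2*diff(P5, X)**2 - Y*(Y - 16)*diff(P5, Y)**2).subs(Y, 1)
print(resultant(P5.subs(Y, 1), Q, X) % 2)   # the proof asserts this residue is 1
```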
[*Proof of Theorem \[Thm3\].*]{} Taking the results from Theorem A and Theorem \[Thm1\] into account, for Theorem \[Thm3\] it suffices to consider $n\in \{ 14,18,20,22\}$. Here, we compute the resultants from Lemma \[Lem4\] (for $n\in \{ 14,18,22\}$) and from Lemma \[Lem5\] (for $n=20$) explicitly by using a computer algebra system. In order to show that the resultants do not vanish we again consider the values at $Y=1$. For $n=14$ we use $$\begin{aligned}
P_7(X,Y) &\,=\,& 49 - (344 - 17568Y + 20554Y^2 - 6528Y^3 + 844Y^4 - 48Y^5 + Y^6)X \\
&& +\,(1036 + 156800Y + 88760Y^2 - 12320Y^3 + 385Y^4)X^2 \\
&& -\,(1736 - 185024Y + 18732Y^2 - 896Y^3 + 28Y^4)X^3 \\
&& +\,(1750 + 31360Y - 1960Y^2)X^4 - (1064 - 2464Y + 154Y^2)X^5 \\
&& +\,364X^6 - 56X^7 + X^8\end{aligned}$$ (cf. [@Elsner3]) to obtain the resultant from Lemma \[Lem4\], $$T_{14}(Y) \,:=\, \mbox{Res\,}_X \left( P_7(X^2,Y^2),\,X^2\frac{\partial}{\partial X} P_7\big( X^2,Y^2\big) + \big( Y^2+4Y \big)\frac{\partial}{\partial Y}
P_7\big( X^2,Y^2\big) \right) \,.$$ It follows that $T_{14}(1) \equiv 1 \pmod 2$. This shows that $T_{14}(1)\not= 0$. For $n=18$ we apply Lemma \[Lem4\] with $$\begin{aligned}
P_9(X,Y) &\,=\,& 6561 - (60588-18652032Y+56033208Y^2-40036032Y^3+11743542Y^4 \\
&& -\,1715904Y^5+132516Y^6-5184Y^7+81Y^8)X \\
&& +\,(250146+427613184Y+2083563072Y^2+86274432Y^3-57982860Y^4 \\
&& +\,4249728Y^5-99288Y^6+576Y^7-9Y^8)X^2 \\
&& -\,(607420-1418904064Y+2511615520Y^2-353755456Y^3+19071754Y^4 \\
&& -\,612736Y^5+13960Y^6-64Y^7+Y^8)X^3 \\
&& +\,(959535+856286208Y+8468928Y^2-2145024Y^3-808488Y^4 \\
&& +\,65664Y^5-1368Y^6)X^4 \\
&& -\,(1028952+22899456Y+1430352Y^2-505152Y^3+38826Y^4 \\
&& -\,1728Y^5+36Y^6)X^5 \\
&& +\,(757596-13138944Y+4160448Y^2-417408Y^3+13044Y^4)X^6 \\
&& -\,(378072+1138176Y+16416Y^2-10944Y^3+342Y^4)X^7 \\
&& +\,(122895+64512Y-4032Y^2)X^8 - (24060-11136Y+696Y^2)X^9 \\
&& +\,2466X^{10} - 108X^{11} + X^{12} \,.\end{aligned}$$ We have $$T_{18}(Y) \,:=\, \mbox{Res\,}_X \left( P_9(X^2,Y^2),\,X^2\frac{\partial}{\partial X} P_9\big( X^2,Y^2\big) + \big( Y^2+4Y \big)\frac{\partial}{\partial Y}
P_9\big( X^2,Y^2\big) \right) \,,$$ and thus $T_{18}(1) \equiv 1 \pmod 2$. Hence, $T_{18}(1)\not= 0$. For $n=20$ we need the polynomial $P_5(X,Y)$, which was already used in the proof of Theorem \[Thm4\]. The resultant from Lemma \[Lem5\], $$T_{20}(Y) \,:=\, \mbox{Res\,}_X \left( P_5(X^4,Y^4),\,X^4\frac{\partial}{\partial X} P_5\big( X^4,Y^4\big) + \big( Y^4+2Y^3 \big)\frac{\partial}{\partial Y}
P_5\big( X^4,Y^4\big) \right) \,,$$ satisfies $T_{20}(1) \equiv 1 \pmod 2$. Finally, for $n=22$ we again apply Lemma \[Lem4\] with $$\begin{aligned}
P_{11}(X,Y) &\,=\,& 121 - (1332-2214576Y+15234219Y^2-21424896Y^3+11848792Y^4 \\
&& -\,3309152Y^5+522914Y^6-48896Y^7+2684Y^8-80Y^9+Y^{10})X \\
&& +\,(6666+111458688Y+2532888424Y^2+2367855776Y^3-327773413Y^4 \\
&& -\,9982720Y^5+3230480Y^6-161920Y^7+2530Y^8)X^2 \\
&& -\,(20020-864654912Y+12880909668Y^2-5289254784Y^3+744094076Y^4 \\
&& -\,43914992Y^5+967461Y^6-2816Y^7+44Y^8 )X^3 \\\end{aligned}$$ $$\begin{aligned}
&& +\,(40095+1748954240Y-175142088Y^2+372281536Y^3-68516998Y^4 \\
&& +\,4266240Y^5-88880Y^6)X^4 \\
&& -\,(56232-1061669664Y+132688050Y^2-10724736Y^3+715308Y^4 \\
&& -\,28512Y^5+594Y^6)X^5 \\
&& +\,(56364+211953280Y-7454568Y^2-724064Y^3+22627Y^4)X^6 \\
&& -\,(40392-24140864Y+2162116Y^2-81664Y^3+2552Y^4)X^7 \\
&& +\,(20295+1448832Y-90552Y^2)X^8 - (6820-36784Y+2299Y^2)X^9 \\
&& +\,1386X^{10} - 132X^{11} + X^{12} \,.\end{aligned}$$ Here, we obtain $$T_{22}(Y) \,:=\, \mbox{Res\,}_X \left( P_{11}(X^2,Y^2),\,X^2\frac{\partial}{\partial X} P_{11}\big( X^2,Y^2\big) + \big( Y^2+4Y \big)\frac{\partial}{\partial Y}
P_{11}\big( X^2,Y^2\big) \right) \,,$$ where $T_{22}(1) \equiv 3 \pmod {13}$, so that $T_{22}(1) \not= 0$. The proof of Theorem \[Thm3\] is complete.\
[**Acknowledgements.**]{} The second author was supported by Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B), 15K17504.
[99]{} Elsner, C., Shimomura, Sh., Shiokawa, I., Algebraic independence results for reciprocal sums of Fibonacci numbers, [*Acta Arith.*]{} [**148.3**]{} (2011), 205–223. Elsner, C., Shiokawa, I., On algebraic relations for Ramanujan’s functions, [*Ramanujan J.*]{} [**29**]{} (2012), 273–294. Elsner, C., Algebraic independence results for values of theta-constants, [*Funct. Approx. Comment. Math.*]{} [**52.1**]{} (2015), 7–27. Hurwitz, A., Courant, R., [*Vorlesungen über allgemeine Funktionentheorie und elliptische Funktionen*]{}, Die Grundlehren der Mathematischen Wissenschaften 3, Springer-Verlag, Berlin, 1922 (fourth edition, 1964). Lang, S., [*Elliptic functions*]{}, Second Edition, Graduate Texts in Mathematics [**112**]{}, Springer-Verlag, New York, 1987. Lawden, D. F., [*Elliptic functions and applications*]{}, Applied Mathematical Sciences, vol. 80, Springer-Verlag, New York, 1989. Nesterenko, Yu. V., Modular functions and transcendence questions, [*Mat. Sb.*]{} [**187**]{} (1996), 65–96; English transl. [*Sb. Math.*]{} [**187**]{}, 1319–1348. Nesterenko, Yu. V., On some identities for theta-constants, Seminar on Mathematical Sciences: Diophantine Analysis and Related Fields, ed. by M. Katsurada, T. Komatsu, and H. Nakada, Keio University, Yokohama, no. 35 (2006), 151–160.
[^1]: Fachhochschule f[ü]{}r die Wirtschaft, University of Applied Sciences, Freundallee 15, D-30173 Hannover, Germany e-mail: [email protected]
[^2]: Hirosaki University, Graduate School of Science and Technology, Hirosaki 036-8561, Japan e-mail: [email protected]
|
[**AFSAR ABBAS**]{}\
Institute of Physics, Bhubaneswar-751005, India\
(e-mail : [email protected])
[**Abstract** ]{}
It is shown here that the Standard Model (SM) of particle physics supports the view that gravity need not be quantized. It is also shown that the SM gives a consistent description of the origin of the universe. It is suggested that the universe came into existence when the SM symmetry was broken spontaneously. This brings out a complete and consistent model of the physical universe in the framework of a “semiclassical quantum gravity” theory.
To quantize or not to quantize gravity is one of the outstanding questions facing physics today. Though there is still no compelling empirical evidence supporting quantization, the current theoretical prejudice is strongly in favour of quantizing. Here it shall however be shown that the most successful model of particle physics, the so-called Standard Model (SM), supports the opposite view and in fact gives a consistent and complete basis for the “semiclassical quantum gravity” idea of Rosenfeld \[1\].
As the universe expands, it is predicted that it undergoes a series of phase transitions \[2\] during which the appropriate symmetry breaks down in various stages until it reaches the stage of the SM symmetry $SU(3)_c$ $\otimes$ $SU(2)_{L}$ $\otimes$ $U(1)_{Y}$. After $t \sim 10^{-10}$ seconds (at $T\sim 10^{2}$ GeV) the SM phase transition to $SU(3)_c$ $\otimes$ $U(1)_{em}$ through the Higgs mechanism takes place.
The SM has been well studied in the laboratory. It is the best tested model of particle physics \[3\]. It was believed earlier that a weakness of the SM was that the electric charge was not quantized in it. However, it was as late as 1990 that this was shown to be wrong. Against all expectations, the electric charge was shown to be fully and consistently quantized in the SM \[4\].
It was shown by the author \[4,5\] that in the SM $SU(N_{C}) \otimes SU(2)_{L} \otimes U(1)_{Y}$ with $N_{C} = 3$, under spontaneous symmetry breaking by a Higgs isospin doublet of weak hypercharge $Y_{\phi}$, whose isospin $-1/2$ component develops the vacuum expectation value $< \phi >_{0}$, the parameter ‘b’ in the electric charge definition $Q = T_{3} + b Y$ is fixed to give
$$Q = T_{ 3 } + \frac { 1 } { 2 Y_{ \phi } } Y$$
where $Y$ is the hypercharge of the doublets and singlets of a single generation. For each generation, imposing renormalizability through triangular anomaly cancellation and requiring the identity of the L- and R-handed charges in $U(1)_{em}$, one finds that all unknown hypercharges are proportional to $Y_{\phi}$. Hence the correct charges (for $N_{C} = 3$) follow as below \[4-8\]
$$\begin{aligned}
Q(u) = \frac{1}{2} ( 1 + \frac{1}{N_{C}} ) \\
Q(d) = \frac{1}{2} ( -1 + \frac{1}{N_{C}} ) \\
Q(e) = -1 \\
Q(\nu) = 0\end{aligned}$$
Hence the electric charge is quantized in the SM. The complete structure of the SM, as it is, is required to obtain this result on very general grounds.
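As a purely illustrative piece of arithmetic, the charge formulas above can be evaluated for an arbitrary number of colors; $N_{C}=3$ reproduces the observed charges.

```python
# Evaluating the charge formulas above for a given number of colors N_C
# (illustrative arithmetic only).
from fractions import Fraction

def quark_charges(N_C):
    Qu = Fraction(1, 2) * (1 + Fraction(1, N_C))
    Qd = Fraction(1, 2) * (-1 + Fraction(1, N_C))
    return Qu, Qd

Qu, Qd = quark_charges(3)
print(f"Q(u) = {Qu}, Q(d) = {Qd}, Q(e) = -1, Q(nu) = 0")   # Q(u) = 2/3, Q(d) = -1/3
```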
Clearly the $U(1)_{em}$ symmetry, which arose from spontaneous symmetry breaking by a Higgs doublet of the SM symmetry, will be lost above $T_{c}^{SM}$, whence the $SU(2)_{L}$ $\otimes$ $U(1)_{Y}$ symmetry would be restored. As is obvious, above $T_{c}^{SM}$ all the fermions and gauge bosons become massless. Here I point out a new phenomenon arising from the restoration of the full SM symmetry.
Note that, to start with, the parameters b and Y in the definition of the electric charge were completely unknown. We could get a handle on ’b’ entirely on the basis of the presence of spontaneous symmetry breaking and of ensuring that the photon was massless: $b = \frac{1}{2 Y_{\phi}}$. Above $T_{c}^{SM}$, where the SM symmetry is restored, there is no spontaneous symmetry breaking and hence the parameter b is completely undetermined. Together, ’bY’ could be any arbitrary number whatsoever, even an irrational number. Within the framework of this model, above $T_{c}^{SM}$ we just cannot define electric charges at all. Hence the electric charge loses any physical meaning altogether. There is no such thing as charge anymore. The photon (which was a linear combination of $W^{0}$ and $B_{\mu}$ after spontaneous symmetry breaking), with its defining vector characteristic, does not exist either. So the conclusion is that there is no electrodynamics above $T_{c}^{SM}$. Maxwell’s equations of electrodynamics show that light is an electromagnetic phenomenon. Hence above the SM phase transition there was no light and no Maxwell’s equations. And surprisingly we are led to conclude that there was no velocity of light c as well. One knows that the velocity of light c is given as
$$c^2 = { 1 \over { \epsilon_0 \mu_0} }$$
where $\epsilon_0$ and $\mu_0$ are the permittivity and permeability of free space. These electromagnetic properties disappear along with the electric charge, and hence the velocity of light also disappears above the SM phase transition.
The premise on which the theory of relativity is based is that c, the velocity of light, is always the same, no matter from which frame of reference it is measured. Of fundamental significance is the invariant interval
$$s^2 = { ( c t ) }^2 - x^2 - y^2 - z^2$$
Here the constant c provides us with a means of defining time in terms of spatial separation and vice versa through $l = c t$. This enables one to visualize time as the fourth dimension. Hence time gets defined due to the constant c. Therefore, when there was no c, as in the early universe, there was no time as well. As the special theory of relativity depends upon c and time, above the SM breaking scale there was no special theory of relativity. As the general theory of relativity also requires the concept of a physical space-time, it too collapses above the SM breaking scale. Hence the whole physical universe collapses above this scale.
Therefore it was hypothesized that the Universe came into existence when the SM symmetry was broken spontaneously \[11\]. Before it there was no ’time’, no ’light’, no maximum velocity like ’c’, no gravity or space-time structure on which objective physical laws as we know them could exist \[11\].
In short we find that the SM of particle physics is not only good to explain all known facts, it is also capable of giving a natural and consistent description of the origin of the Universe \[11\].
So matter (which is quantized) and space-time (and so gravity) are bound together holistically. In this picture one cannot talk of matter without space-time, and neither can we talk of space-time without matter. They are two sides of the same coin, and hence Einstein’s equation as per the SM should read as follows:
$$G_{\mu \nu} = 8 \pi G \langle \phi | T _{\mu \nu} | \phi \rangle$$
This is the “semiclassical quantum gravity” approach of Rosenfeld \[1\]. However, here $\phi$ represents the complete matter degrees of freedom incorporated in the SM in terms of the three generations of matter particles plus the gauge particles of the SM (no other spurious model is needed \[9,10\]). Gravity, a property of space-time, arises as a result of the quantum mechanical SM phase transition and is not quantized. Hence, as per the SM origin of the universe scenario presented here, we get a complete and consistent picture of the origin and the existence of space-time and matter. The two are inseparably connected.
It seems that the reason that, in spite of rigorous efforts, we have not seen the graviton so far is that, as shown here, it does not exist. This need not shock modern adherents of the quantization of gravity, as many scientists have been warning against the urge to quantize gravity. Long ago Rosenfeld \[1\] pointed out that empirical evidence, and not logic, forces us to quantize fields. He said that in the absence of such evidence the temptation to quantize fields must be resisted \[1\]. Indeed this is what the SM of particle physics is telling us. And this is within the framework of the most successful model of particle physics. Not only can the SM of particle physics provide a good description of all known particles, it also has the capacity to give a consistent picture of the origin of the universe and to settle the issue of whether or not to quantize gravity.
[**REFERENCES** ]{}
1\. L. Rosenfeld, [*Nucl. Phys.* ]{}, [**40**]{} (1963) 353
2\. E. W. Kolb and M. S. Turner, “The Early Universe”, Addison-Wesley, New York (1990)
3\. T. P. Cheng and L. F. Li, “Gauge Theory of Elementary Particle Physics”, Clarendon Press, Oxford (1988)
4\. A. Abbas, [*Phys. Lett.* ]{}, [**B 238**]{} ( 1990) 344
5\. A. Abbas, [*J. Phys. G.* ]{}, [**16** ]{} ( 1990 ) L163
6\. A. Abbas, [*Nuovo Cimento* ]{}, [**A 106**]{} ( 1993 ) 985
7\. A. Abbas, [*Hadronic J.*]{}, [**15**]{} ( 1992 ) 475
8\. A. Abbas, [*Ind. J. Phys.*]{}, [**A 67**]{} ( 1993 ) 541
9\. A. Abbas, [*Physics Today* ]{} ( July 1999 ) p.81-82
10\. A. Abbas, [*‘On standard model Higgs and superstring theories’*]{} “Particles, Strings and Cosmology PASCOS99”, Ed K Cheung, J F Gunion and S Mrenna, World Scientific, Singapore ( 2000) 123
11\. A. Abbas, “ On the Origin of the Universe ”, [*Spacetime & Substance, Int Phys Journal*]{}, [**2 (7)**]{} ( 2001 ) 49 ; Website: spacetime.narod.ru/2-7-2001.html
|
---
abstract: 'We study quantitatively the formation and radiation acceleration of electron-positron pair plasmoids produced by photon-photon collisions near Galactic black holes (GBHs). The terminal ejecta velocity is found to be completely determined by the total disk luminosity, proton loading factor and disk size, with no dependence on the initial velocity. We discuss potential applications to the recently discovered Galactic superluminal sources GRS1915, GROJ1655 and possibly other GBHs.'
author:
- 'Hui Li and Edison P. Liang'
title: |
Formation and Radiation Acceleration of Pair Plasmoids\
Near Galactic Black Holes
---
INTRODUCTION
============
The origin of relativistic jets from Active Galactic Nuclei (AGNs) remains one of the great challenges in astrophysics. Earlier Phinney (1982, 1987) concluded that radiation from the accretion disk is probably not responsible for accelerating the superluminal AGN jets with bulk Lorentz factors $\Gamma \sim 10$ ([@por87]). When $\Gamma \gg 1$, the aberrated photons in the jet’s rest frame come from the forward direction against the jet motion, thus limiting the maximum $\Gamma$ achievable to only a few, much smaller than the observed values.
Recently, observations of several Galactic black hole (GBH) candidates have revealed a causal relation between X-ray and radio flaring of the central object. In some cases, radio plasmoids were ejected relativistically away from the central sources following the x-ray flares ([@mi92; @mr94; @hjr95]). In two cases, GRS1915 ([@mr94]) and GROJ1655 ([@hjr95]), the ejecta motions appear to be superluminal, resembling the superluminal AGN sources.
Motivated by the correlation of the ejection with the x-ray flux, energetics of the ejecta and the relatively small $\Gamma \sim 2.5$, we proposed in a recent paper ([@ll95]) a scenario for the origin of the episodic ejections based on the standard inverse-Compton accretion disk paradigm ([@sle76]). The idea is that episodic quenching of the (external and internal synchrotron) soft photon source to a level less than the internal bremsstrahlung flux superheats the innermost disk to gamma ray temperatures ([@ld88; @wl91; @pk95]). However, these gamma rays cannot all escape, if their source compactness ($\propto$ gamma ray luminosity/source size, [@sv84; @zz84]) is too high. Instead most gamma rays are converted into pairs via photon-photon pair creation which will be further accelerated axially by the radiation pressure of the disk x-ray flux and attain significant terminal Lorentz factors in a bipolar outflow if the disk luminosity is sufficiently high. When the ejecta encounters the interstellar medium and develops shocks and turbulence it will convert the bulk kinetic energy into high energy particles, emitting the observed radio synchrotron radiation.
In this paper, we summarize some key results for pair plasma formation and their radiation acceleration by the disk photons for GBHs.
COMPACTNESS AND FORMATION OF PAIR PLASMOIDS
===========================================
To study the formation of the pairs produced by $\gamma\gamma$ and $\gamma$x collisions [*outside*]{} the disk, we assume a 2-zone Keplerian disk model depicted in Fig.1. We define $r_{\gamma}$ as the boundary, interior (exterior) to which the local bremsstrahlung flux is higher (lower) than the soft photon flux. Gamma rays are emitted by a superheated ion torus within $r_{\gamma}$ ([@rees82; @liang90]). The spectrum from this region is assumed to be a Comptonized bremsstrahlung + annihilation spectrum with $T_{\gamma} \sim mc^2$ (Fig.1a, e.g. [@zz84; @ld88; @ku88]). Exterior to $r_{\gamma}$, we assume that the disk x-ray flux has the standard inverse-Compton spectral shape (Fig.1b, [@sle76; @st80]). Using the standard photon-photon pair-production cross section, we compute the (angle-averaged) gamma ray pair-production depth $\tau_{\gamma\gamma}$ as a function of photon energy due to collisions among gamma rays and between gamma rays and disk x-rays: $$\tau_{\gamma\gamma} \simeq r_{\gamma}~
\int d\epsilon^{\prime} n(\epsilon^{\prime})
\sigma_{\gamma\gamma}(\epsilon,\epsilon^{\prime}) f$$ where $\sigma_{\gamma\gamma}(\epsilon,\epsilon^{\prime})$ is the angle-averaged pair production cross-section ([@gs67]), and $\epsilon\equiv h\nu/511$keV. $n(\epsilon^{\prime})=n_{\gamma}(\epsilon^{\prime})
+n_x(\epsilon^{\prime})$ is the normalized photon spectral density evaluated at $r_{\gamma}$, and $f$ is a fudge factor of order unity to incorporate various complex geometry and angle-dependent effects (e.g. photon intensity anisotropy and variation with path length along the line of sight, angular dependence of $\sigma_{\gamma\gamma}$, etc.). [*We calibrate $f$ using sample numerical simulations*]{} (details to be published elsewhere). For given geometry and spectral shapes (cf. Fig.1), $\tau_{\gamma\gamma}$ basically depends only on the gamma ray source global compactness $l_{\gamma} = L_{\gamma}/r_{\gamma}/(3.7\times 10^{28}$ erg/cm/s) ([@sv84; @zz84]). In Fig.2a we plot $\tau_{\gamma\gamma}$ as functions of photon energy for various $l_{\gamma}$ and $T_{\gamma} \sim mc^2$. In Fig.2b we plot the fraction of gamma rays that are absorbed into pairs as a function of $l_{\gamma}$. We have assumed that all pairs produced escape without re-annihilation in situ due to their high kinetic energy ($\sim$ few hundred keV). This is a crude approximation but we estimate the error to be less than a factor of 2. The critical compactness at which $\gte 80\%$ of the gamma rays are converted into escaping pairs is around $l_{\gamma} \sim 100$ (with $\gte 2$ uncertainty), consistent with similar results found by others (Zdziarski 1995, private communications). To achieve $l_{\gamma} \geq 100$, Liang & Li (1995) show that the overall accretional luminosity $L_0$ must be $\geq 0.1-0.2$ $L_{edd}$.
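For orientation, the elementary (un-averaged) two-photon pair-production cross section underlying $\sigma_{\gamma\gamma}(\epsilon,\epsilon^{\prime})$ ([@gs67]) can be sketched as follows; this is only the bare Breit–Wheeler cross section, not the angle-averaged kernel actually used in the optical-depth integral above, and the energy grid is an arbitrary illustrative choice.

```python
# Breit-Wheeler gamma-gamma -> e+ e- total cross-section (see Gould & Schreder 1967).
# w = eps1*eps2*(1 - cos theta)/2 with photon energies in units of m_e c^2; w >= 1.
import numpy as np

SIGMA_T = 6.6524587e-25            # Thomson cross-section in cm^2

def sigma_gg(w):
    w = np.asarray(w, dtype=float)
    beta = np.sqrt(np.clip(1.0 - 1.0/w, 0.0, None))   # lepton speed in the CM frame
    bracket = (3.0 - beta**4)*np.log((1.0 + beta)/(1.0 - beta)) - 2.0*beta*(2.0 - beta**2)
    return (3.0/16.0)*SIGMA_T*(1.0 - beta**2)*bracket

w = np.linspace(1.01, 10.0, 500)
s = sigma_gg(w)
print("peak sigma/sigma_T ~ %.2f at w ~ %.2f" % (s.max()/SIGMA_T, w[np.argmax(s)]))
```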
RADIATION ACCELERATION
======================
Consider a plasma of electrons, positrons and some entrained protons directly above the black hole and along the axis of the disk, interacting with the disk radiation via only Compton scattering (cf. Fig.1). Assuming that protons, electrons and positrons are always co-moving (a cold jet), $\gamma_{p} \equiv \gamma_{e} = \Gamma$, and local charge neutrality is maintained ($n_{e^{-}}=n_{p}+n_{e^{+}}$), we can obtain the equation of motion for the jet (at height Z) as: $$\label{motion}
-{d\Gamma\over dz_*}~=~({m_{e}\over m_{eq}})~\left({r_g\sigma_{T}\over
\beta c}\right)
\int d\epsilon \int d\Omega~
I_{ph}(\epsilon,\Omega) (1-\beta\mu)
[\Gamma^{2}(1-\beta\mu) - 1]
{}~+~{1\over z_*^2}$$ where $z_* = Z/r_g$, $r_g = GM / c^2$, and $\mu = \cos\theta$, the angle between the photon momentum and the z-axis, and $I_{ph}(\epsilon,\Omega)$ is the spectral intensity seen by the jet (cf Fig.1, see [@od81; @ph87] for previous results). $m_{eq}$ is the equivalent mass of the particles $= m_{e}+{\zeta_p\over 2-\zeta_p}m_{p}$ with $\zeta_p=n_{p}/n_{e^{-}}$, the fraction of proton loading. For $\zeta_p=0$, $m_{eq}=m_e$ for a pure pair jet and for $\zeta_p=1$, $m_{eq}=m_{e}+m_{p}$ for an electron-proton jet. In deriving equation (\[motion\]), we have: (1) kept only the parallel (z) component due to the axial-symmetry of the system (though there will be a random orthogonal component); (2) used the angle-dependent differential Compton scattering cross section (in Thomson limit) and ignored the energy change (second order in scattered photon energy), but kept the momentum change (first order in scattered photon energy) for acceleration; (3) included the gravity pull on protons and leptons but neglected the radiation force on protons.
The (isotropic) spectral intensity at the disk surface can be approximated as: $$\label{inten}
I_{ph}(\epsilon) = {1.68\times 10^{32}\over \epsilon ~
{\rm exp}\left({511\over 80}\epsilon\right)}
\left({L_0\over L_{edd}}\right)
{1\over M_{10}}~{1\over {\cal G}}~
r_*^{-3} \left(1-\sqrt{6\over r_*}\right)
\left({1\over {\rm cm^2\cdot s \cdot sr}}\right)$$
where $r_* = r/r_g$, $M_{10} = M/10M_{\odot}$ and ${\cal G} = 18\left({1\over r^*_{min}} - {1\over r^*_{max}}\right)
+ 2\left[ \left({6\over r^*_{max}}\right)^{3/2} -
\left({6\over r^*_{min}}\right)^{3/2} \right]$. Here, $r^*_{min}$ and $r^*_{max}$ define the size of the disk from which the total luminosity is $L_0$, and we assume $r^*_{min} = r_{\gamma}/r_g$. We have used the observed spectrum of GRS1915 ([@har94]), but results are insensitive to detailed spectral shape since it will be integrated away when putting into equation (\[motion\]) and only the total flux from each radius matters since we are working in the Thomson limit (as emphasized by the referee). The $\gamma$-ray component from $r < r_{\gamma}$ (Fig.1a) produces negligible additional effects on the axial acceleration due to the strong $\gamma-\gamma$ self-absorption at high $l_{\gamma}$. Substituting equation (\[inten\]) into equation (\[motion\]) the integral over $\Omega$ becomes an integral over $r_*$ (cf. Fig.1).
Equation (\[motion\]) can be evaluated numerically for the terminal $\Gamma_{\infty}$. A key term in this equation is $\Gamma^{2}(1-\beta\mu)~-~1$, which signifies the accelerating (or decelerating) force from a “ring” of radiation at radius $r_*$, depending on whether it is negative (or positive) for particles at the height $z_*$, where $\mu = z_*/\sqrt{z_*^2+r_*^2}$. One can find a disk radius $r_{sep}$, interior to which disk radiation accelerates the ejecta, and exterior to which radiation decelerates the ejecta. Whether an ejecta can get net acceleration depends on the sum of these two contributions.
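A toy numerical integration of equation (\[motion\]) illustrates this competition. The sketch below is not our production calculation: it uses the fiducial geometry adopted below ($r^*_{min}=12$, $r^*_{max}=10^4$, $z^*_{min}=10$), reduces the disk to flat rings with flux $\propto r_*^{-3}(1-\sqrt{6/r_*})$, absorbs all physical prefactors into a single dimensionless constant $K$ whose value is chosen only for illustration, and neglects the small gravitational pull on the leptons. With these choices, different injection Lorentz factors should be driven toward a common terminal value, the “thermostat” behavior discussed below.

```python
# Toy integration of the axial radiation-acceleration equation (illustrative only).
# Lengths are in units of r_g; the constant K stands in for all physical prefactors.
import numpy as np

r = np.logspace(np.log10(12.0), 4.0, 800)      # disk radii, r*_min = 12 to r*_max = 1e4
dr = np.gradient(r)
S = r**-3 * (1.0 - np.sqrt(6.0/r))             # ring flux profile (arbitrary units)

def dGamma_dz(z, Gamma, K=1.0e4):
    beta = np.sqrt(1.0 - 1.0/Gamma**2)
    d = np.sqrt(z**2 + r**2)
    mu = z/d                                   # photon incidence cosine on the axis
    dOmega = 2.0*np.pi*r*z/d**3                # solid angle per unit radial width of a ring
    integrand = S*dOmega*(1.0 - beta*mu)*(Gamma**2*(1.0 - beta*mu) - 1.0)
    return -K*np.sum(integrand*dr)             # radiation term only (gravity neglected)

def terminal_gamma(Gamma0, z0=10.0, z1=1.0e4, n=15000):
    z = np.logspace(np.log10(z0), np.log10(z1), n)
    Gamma = Gamma0
    for i in range(n - 1):
        Gamma = max(1.0 + 1e-9, Gamma + dGamma_dz(z[i], Gamma)*(z[i+1] - z[i]))
    return Gamma

for G0 in (1.01, 2.0, 5.0):
    print(f"Gamma_initial = {G0:4.2f}  ->  Gamma_terminal ~ {terminal_gamma(G0):.2f}")
```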
The dependence of $\Gamma_{\infty}$ on the initial conditions ($\Gamma_{initial}~{\rm and}~z^*_{min}$) and the geometry ($r^*_{min}~{\rm and}~r^*_{max}$) can be obtained by solving equation (\[motion\]). We will focus on cases with $\zeta_p = 0$ first. The results are shown in Figure 3 and summarized here.
(A). For fixed disk geometry and total disk luminosity, [*regardless of their initial velocities $\Gamma_{initial}$*]{}, leptons will be accelerated to a fixed terminal $\Gamma_{\infty}$, even though particles with high $\Gamma_{initial}$ get decelerated first. The radiation field acts like a “thermostat” that regulates the particle’s final velocity along the z-axis. In Fig.3a, we show the ejecta’s $\Gamma$ as a function of height $z_*$ with different initial velocities, while fixing $r^*_{min} = 12,~r^*_{max} = 10^4,~z^*_{min} = 10~
{}~{\rm and}~L_0/L_{edd} = 0.2$.
(B). Fig.3b, 3c, and 3d show the ejecta’s $\Gamma$ as a function of height $z_*$ with different $r^*_{min}$, different $r^*_{max}$ and different initial $z^*_{min}$, respectively. Again, $L_0/L_{edd} = 0.2$. It is evident that $\Gamma_{\infty}$ depends somewhat sensitively on $r^*_{min}$ since it is mostly the inner disk radiation that accelerates the ejecta; on the other hand, the dependence on $r^*_{max}$ is weak as long as it is $\gg 100$, because the radiation intensity drops drastically as $r^*$ increases. Note from equation (\[motion\]) that the rate of acceleration is proportional to $d\Omega I_{ph}(\epsilon,\Omega)$, which decreases as $1/z_*^2$. So efficient acceleration only occurs within a height of $100-1000~r_g$ of the black hole if the conditions are favorable. $\Gamma_{\infty}$ is not a sensitive function of $z^*_{min}$ as long as $z^*_{min} \leq 50$. Ejecta starting at very high $z_*$ are not of interest here.
Our main results are summarized in Figure 4, where we show the terminal $\Gamma_{\infty}$ as a function of $L_0/L_{edd}$ for different proton loading $\zeta_p = n_p/n_{e^-}$ with $r^*_{min} = 12$, $r^*_{max} = 10^4$ and $z^*_{min} = 10$. It is evident that for $\zeta_p > 10^{-3}$, gravitational attraction on the protons hinders the acceleration. This has important implications for the composition of the ejecta. If the jets are composed of pure pairs as suggested in previous sections, then radiation acceleration can indeed account for the observed $\Gamma$ in both GRS1915 and GROJ1655 ([@mr94; @hjr95]). The observed $\Gamma \sim 2.55$ of GRS1915 implies $L_0/L_{edd} \sim 0.2$. This constrains the mass of black hole in GRS1915 to $\sim 12 M_{\odot}$ when using the observed luminosity ([@har94]).
The jet could be suffocated if there is an additional isotropic radiation field $I_{iso}$ and it reaches $\sim 10\%$ of $I_{disk}$. $I_{iso}$ could be caused by the companion wind or circumstellar medium.
DISCUSSIONS
===========
The pairs are born with average kinetic energy of several hundred keV. However, we do not expect the “Compton rocket” effect ([@od81]) to be important to affect the ejecta’s terminal velocity because the disk radiation field will regulate the particle’s final velocity, no matter what the initial velocity is, as shown in Fig.3a.
We predict that the terminal $\Gamma_{\infty}$ should increase as $L_0/L_{edd}$ increases. This can be tested using the ejections from different episodes with the corresponding radiation luminosities.
To summarize, in the present approach we have adopted the [*thermal*]{} accretion disk model for the hard x-ray emissions (as suggested by the spectra of GRS1915, [@har94]), and used it as the basis for the pair production and bulk acceleration. Hence the energetics of the jet is constrained by the pre-ejection accretional X-ray power output ([@ll95]). We find that, for a jet composed mostly of leptons, disk radiation can accelerate it to a terminal Lorentz factor $\Gamma_{\infty}$ of a few, depending on the total disk luminosity and geometry. However, [*non-thermal*]{} processes may be needed to explain the power-law hard X-ray spectra of some x-ray novae (including GROJ1655, [@har94; @kro95]). Yet both classes of GBHs show relativistic outflows. Can GBH jets also be generated by, say, purely electrodynamical processes unrelated to the conventional thermal hard x-ray disk models? We will explore alternative models in future publications.
We thank an anonymous referee for constructive criticisms. This work is supported by NASA grant NAG 5-1547 and NRL contract No. N00014-94-P-2020.
Gould, R. J. & Schreder, G. P. 1967, Phys. Rev., 155, 1404 Harmon, B. A. et al. in The Second Compton Symposium, ed. C. Fichtel, N. Gehrels & J. Norris (New York: AIP Conf. Proc. No. 304), 210 Hjellming, R. M. & Rupen, M. P. 1995, Nature, 375, 464 Kroeger, R. A. et al. 1995, Nature, [*submitted*]{} Kusunose, M. & Takahara, F. 1988, PASJ, 40, 435 Liang, E. P. 1990, A&A, 227, 447 Liang, E. P. & Dermer, C. D. 1988, , 325, L39 Liang, E. P. & Li, H. 1995, A&A, 298, L45 Mirabel, I. F. et al. 1992, Nature, 358, 215 Mirabel, I. F. & Rodriguez, L. F. 1994, Nature, 371, 46 O’Dell, S. L. 1981, , 243, L147 Phinney, E. S. 1982, , 198, 1109 Phinney, E. S. 1987, in Superluminal Radio Sources, ed. J. A. Zensus & T. J. Pearson (Cambridge: Cambridge Univ. Press), 301 Pietrini, P. & Krolik, J. H. 1995, preprint Porcas, R. W. 1987, in Superluminal Radio Sources, ed. J. A. Zensus & T. J. Pearson (Cambridge: Cambridge Univ. Press), 12 Rees, M. et al. 1982, Nature, 295, 17 Shapiro, S. L., Lightman, A. P. & Eardley, D. M. 1976, , 204, 187 Sunyaev, R. & Titarchuk, L. 1980, A&A, 86, 121 Svensson, R. 1984, , 209, 175 Wandel, A. & Liang, E. P. 1991, , 380, 84 Zdziarski, A. A. 1984, , 283, 842
|
---
abstract: 'In two-dimensional electron systems confined to GaAs quantum wells, as a function of either tilting the sample in magnetic field or increasing density, we observe multiple transitions of the fractional quantum Hall states (FQHSs) near filling factors $\nu=3/4$ and 5/4. The data reveal that these are spin-polarization transitions of interacting two-flux composite Fermions, which form their own FQHSs at these fillings. The fact that the reentrant integer quantum Hall effect near $\nu=4/5$ always develops following the transition to full spin polarization of the $\nu=4/5$ FQHS strongly links the reentrant phase to a pinned *ferromagnetic* Wigner crystal of two-flux composite Fermions.'
author:
- Yang Liu
- 'D. Kamburov'
- 'S. Hasdemir'
- 'M. Shayegan'
- 'L.N. Pfeiffer'
- 'K.W. West'
- 'K.W. Baldwin'
bibliography:
- '../bib\_full.bib'
date: today
title: 'Fractional Quantum Hall Effect and Wigner Crystal of Two-Flux Composite Fermions'
---
Fractional quantum Hall states (FQHSs) are among the most fundamental hallmarks of ultra-clean interacting two-dimensional electron systems (2DESs) at a large perpendicular magnetic field ($B_{\perp}$) [@Tsui.PRL.1982]. These incompressible quantum liquid phases, signaled by the vanishing of the longitudinal resistance ($R_{xx}$) and the quantization of the Hall resistance ($R_{xy}$), can be explained by mapping the interacting electrons to a system of essentially non-interacting, 2$p$-flux composite Fermions ($^{2p}$CFs), each formed by attaching $2p$ magnetic flux quanta to an electron ($p$ is an integer). The $^{2p}$CFs have discrete energy levels, the so-called $\Lambda$-levels, and the FQHSs of electrons seen around Landau level (LL) filling factor $\nu=1/2$ (1/4) would correspond to the integer quantum Hall states of $^2$CFs ($^4$CFs) at integral $\nu^{CF}$ [@Jain.CF.2007]. In state-of-the-art, high-mobility 2DESs, FQHSs also develop around $\nu=3/4$, and are usually understood as the particle-hole counterparts of the FQHSs near $\nu=1/4$ through the relation $\nu\leftrightarrow (1-\nu$) [^1] [@Yeh.PRL.1999]. Alternatively, these states might also be the FQHSs of *interacting* $^{2}$CFs at $\nu^{CF}=\nu/(1-2\nu)$. For example, the $\nu=4/5$ state is the $\nu^{CF}=-4/3$ FQHS of $^{2p}$CFs, and has the same origin as the unconventional FQHS seen at $\nu=4/11$ ($\nu^{CF}=4/3$) [@Pan.PRL.2003; @Chang.PRL.2004].
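For reference, the mapping $\nu^{CF}=\nu/(1-2\nu)$ quoted above can be tabulated directly; the short sketch below is illustrative arithmetic only.

```python
# Mapping electron filling factors to two-flux composite-fermion fillings
# via nu_CF = nu/(1 - 2*nu); illustrative arithmetic only.
from fractions import Fraction

def cf_filling(nu):
    return nu / (1 - 2*nu)

for nu in (Fraction(4, 5), Fraction(5, 7), Fraction(4, 11), Fraction(5, 13)):
    print(f"nu = {nu}  ->  nu_CF = {cf_filling(nu)}")
# e.g. nu = 4/5 maps to nu_CF = -4/3 and nu = 5/7 maps to nu_CF = -5/3.
```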
{width=".45\textwidth"}
Another hallmark of clean 2DESs is an insulating phase that terminates the series of FQHSs at low fillings, near $\nu=1/5$, [@Jiang.PRL.1990; @Goldman.PRL.1990]. This insulating phase is generally believed to be an electron Wigner crystal, pinned by the small but ubiquitous disorder potential [^2]. Recently, an insulating phase was observed near $\nu=4/5$ in clean 2DESs [@Liu.PRL.2012; @Hatke.Nat.Comm.2014]. This phase, which is signaled by a reentrant integer quantum Hall state (RIQHS) near $\nu=1$, was interpreted as the particle-hole symmetric state of the Wigner crystal seen at very small $\nu$ [@Liu.PRL.2012; @Hatke.Nat.Comm.2014]. In this picture, the holes, unoccupied states in the lowest LL, have filling factor $\nu^h\sim 1/5$ ($=1-4/5$) and form a liquid phase when the short-range interaction is strong; see the left panel of Fig. 1(a). They turn into a solid phase when the thickness of the 2DES increases and the long-range interaction dominates (right panel of Fig. 1(a)). This interpretation is plausible, since the RIQHS only appears when the well-width ($W$) is more than five times larger than the magnetic length. However, it does not predict or allow for any transitions of the $\nu=4/5$ FQHS, which is always seen just before the RIQHS develops [@Liu.PRL.2012].
Here we report our extensive study of the FQHSs near $\nu=3/4$ (at 4/5 and 5/7) and their particle-hole counterparts near $\nu=5/4$ (at 6/5 and 9/7) [@Note1]. Via either increasing the 2DES density or the tilt angle between the magnetic field and the sample normal, we increase the ratio of the Zeeman energy ($E_Z=|g|\mu_BB$ where $B$ is the total magnetic field) to Coulomb energy ($V_C=e^2/4\pi\epsilon
l_B$ where $l_B=\sqrt{\hbar/e B_{\perp}}$ is the magnetic length), and demonstrate that these FQHSs undergo multiple transitions as they become spin polarized. The number of observed transitions, one for the FQHSs at $\nu=4/5$ and 6/5, and two for the states at $\nu=5/7$ and 9/7, is consistent with what is expected for polarizing the spins of *interacting* $^2$CFs. Note that these interacting $^2$CFs form FQHSs at *fractional* CF fillings $\nu^{CF}=-4/3$ ($\nu=4/5$ and 6/5) and -5/3 ($\nu=5/7$ and 9/7). Even more revealing is the observation that, whenever the RIQHS near $\nu=4/5$ develops, it is preceded by a transition of the FQHS at $\nu=4/5$ to a fully spin-polarized $^2$CF state. This provides evidence that the RIQHS is the manifestation of a *ferromagnetic $^2$CF Wigner crystal* (see Fig. 1(b)).
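To make the tuning knob explicit, the ratio $E_Z/V_C$ can be estimated as in the sketch below for a GaAs 2DES; the material parameters ($|g|=0.44$, $\epsilon_r=12.9$) and the sample density used there are typical GaAs values assumed for illustration rather than taken from the measurements reported here. Tilting the sample raises $E_Z$, which is set by the total field, while leaving $V_C$, which is set by $B_{\perp}$, unchanged.

```python
# Estimating E_Z/V_C for a GaAs 2DES at filling nu under a tilt angle theta.
# |g| = 0.44 and eps_r = 12.9 are typical GaAs values assumed for illustration.
import numpy as np
from scipy import constants as c

def zeeman_over_coulomb(n_cm2, nu, theta_deg, g=0.44, eps_r=12.9):
    n = n_cm2 * 1e4                            # areal density in m^-2
    B_perp = n * c.h / (c.e * nu)              # perpendicular field fixing the filling
    B_tot = B_perp / np.cos(np.radians(theta_deg))
    l_B = np.sqrt(c.hbar / (c.e * B_perp))     # magnetic length, set by B_perp only
    E_Z = g * c.value('Bohr magneton') * B_tot
    V_C = c.e**2 / (4*np.pi*c.epsilon_0*eps_r*l_B)
    return E_Z / V_C

for theta in (0, 30, 60):                      # ratio is of order 10^-2 and grows with tilt
    print(theta, zeeman_over_coulomb(1.0e11, 4/5, theta))
```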
We studied 2DESs confined to wide GaAs quantum wells (QWs) bounded on each side by undoped Al$_{0.24}$Ga$_{0.76}$As spacer layers and Si $\delta$-doped layers, grown by molecular beam epitaxy. We report here data for two samples, with QW widths $W=$ 65 and 60 nm, and as-grown densities of $n\simeq$ 1.4 and 0.4, in units of $10^{11}$ cm$^{-2}$ which we use throughout this report. The samples have a van der Pauw geometry with InSn contacts at their corners, and each is fitted with an evaporated Ti/Au front-gate and an In back-gate. We carefully control $n$ while keeping the charge distribution symmetric. The measurements were carried out in dilution refrigerators with a base temperature of $T \simeq$ 25 mK and superconducting magnets. We used low-frequency ($\lesssim$ 40 Hz) lock-in techniques to measure the transport coefficients.
{width=".45\textwidth"}
{width=".8\textwidth"}
{width=".45\textwidth"}
Figure 2(a) shows $R_{xx}$ and $R_{xy}$ magnetoresistance traces near $\nu=3/4$ measured in a symmetric 65-nm-wide QW, at densities ranging from 1.00 to 1.54. The deep $R_{xx}$ minimum seen at $\nu=4/5$ in the lowest density ($n=1.00$) trace disappears at $n=1.13$ and reappears at higher densities. With increasing $n$, an $R_{xx}$ minimum also develops to the left of $\nu=4/5$, and merges with the $\nu=1$ $R_{xx}=0$ plateau at the highest density $n=1.54$ (see down arrows in Fig. 2(a)). Meanwhile, two minima appear in $R_{xy}$ on the sides of $\nu=4/5$ when the $\nu=4/5$ FQHS reappears (up-arrows in Fig. 2(a)). These two $R_{xy}$ minima become deeper at higher densities and, at $n\simeq 1.54$, the $R_{xy}$ minimum on the left side of $\nu=4/5$ merges into the $\nu=1$ $R_{xy}=h/e^2$ plateau [^3].
The $R_{xx}$ and $R_{xy}$ data of Fig. 2(a) provide evidence for the development of a RIQHS between $\nu=4/5$ and 1, as reported recently and attributed to the formation of a pinned Wigner crystal state [@Liu.PRL.2012; @Hatke.Nat.Comm.2014]. Note that at the onset of this development, the $\nu=4/5$ FQHS shows a transition manifested by a weakening and strengthening of its $R_{xx}$ minimum. The central questions we address here are: What is the source of this transition, and what does that imply for the origin of the $\nu=4/5$ FQHS and the nearby RIQHS?
As Fig. 2(b) illustrates, the $R_{xx}$ and $R_{xy}$ traces measured in the same QW at a fixed density $n = 1.00$ and different tilting angles $\theta$ reveal an evolution very similar to the one seen in Fig. 2(a) ($\theta$ denotes the angle between the magnetic field and normal to the 2D plane). At $\theta=0^{\circ}$, a strong FQHS is seen at $\nu=4/5$. The $R_{xx}$ minimum at $\nu=4/5$ disappears at $\theta\simeq 30^{\circ}$ and reappears at higher $\theta$, signaling the destruction and resurrection of the FQHS. Two minima in $R_{xy}$ on the sides of $\nu=4/5$, marked by the up-arrows, develop at $\theta>30^{\circ}$. As $\theta$ is further increased, the $R_{xy}$ minimum to the left of $\nu=4/5$ deepens and an $R_{xx}$ minimum starts to appear at the same $\nu$ (see down-arrows in Fig. 2(b)). At the highest tilting angles $\theta>40^{\circ}$, these minima merge into the $\nu=1$ $R_{xy}$ Hall plateau quantized at $h/e^2$, and the $R_{xx}=0$ minimum near $\nu=1$ plateau, respectively. This evolution with increasing $\theta$ is clearly very similar to what is seen as a function of increasing electron density. Moreover, it suggests that the $\nu=4/5$ FQHS transition is induced by the enhancement of $E_Z$ and is therefore spin related, similar to the spin-polarization transitions observed for other FQHSs [@Clark.PRL.1989; @Eisenstein.PRL.1989; @Du.PRL.1995; @Liu.cond.mat.2014].
Observing a spin-polarization transition for the $\nu=4/5$ FQHS, however, is surprising as this state is usually interpreted as the particle-hole counterpart of the $\nu=1/5$ FQHS, which is formed by *non-interacting* four-flux $^4$CFs. Such a state should be always fully spin-polarized and no spin-polarization transition is expected [@Yeh.PRL.1999]. On the other hand, if the $\nu=4/5$ FQHS is interpreted as the FQHS of *interacting* two-flux CFs ($^2$CFs), then it corresponds to the $\nu^{CF}=-4/3$ FQHS of $^2$CFs, and has two possible spin configurations as shown in Fig. 3(c). The system has one fully-occupied, spin-up, $\Lambda$-level and one 1/3-occupied $\Lambda$-level. Depending on whether $E_Z$ is smaller or larger than the $\Lambda$-level separation of the $^2$CFs, the 1/3-filled $\Lambda$-level may be either spin-down or spin-up (see Fig. 3(c)) [^4].
To further test the validity of the above interpretation, we measured $R_{xx}$ in a 60-nm-wide QW at a very low density $n=0.44$ and different $\theta$ in Fig. 3(a). The $\nu=4/5$ FQHS exhibits a clear transition at $\theta=60^{\circ}$, manifested by a weakening of the $R_{xx}$ minimum. Note that the transition of the $\nu=4/5$ FQHS appears in Figs. 2(a), 2(b) and 3(a) when the ratio of the Zeeman to Coulomb energies ($E_Z/V_C$) is about 0.0145, 0.0157 and 0.0177, respectively. The electron layer-thicknesses at these three transitions, parameterized by the standard deviation ($\lambda$) of the charge distribution in units of $l_B$, are 1.66, 1.52 and 0.75, respectively. The softening of the Coulomb interaction due to the finite-layer-thickness effect is less and the spin-polarization energy should be higher for smaller $\lambda/l_B$ (see [@Liu.cond.mat.2014] for the dependence of spin-polarization energy on the finite-layer-thickness). Therefore, these values are consistent with each other.
In Fig. 3(a), we also observe two transitions for the $\nu=5/7$ FQHS at $\theta=37.5^{\circ}$ and $50^{\circ}$, suggesting three different phases. This observation is consistent with the $\nu=5/7$ FQHS being formed by interacting $^2$CFs. In such a picture, the $\nu=5/7$ FQHS, which is the $\nu^{CF}=-5/3$ FQHS of the $^2$CFs, has three different possible spin configurations, as shown in Fig. 3(d). Similar to Fig. 3(c), the lowest spin-up $\Lambda$-level is always fully occupied. The second $\Lambda$-level is 2/3-occupied spin-up (spin-down), if $E_Z$ is larger (smaller) than the $\Lambda$-level separation. If $E_Z$ equals the $\Lambda$-level separation, the $^2$CFs form a novel spin-singlet state when the spin-up and spin-down $\Lambda$-levels are both 1/3-occupied; see the middle panel of Fig. 3(d).
Data near $\nu=5/4$ measured in the 65-nm-wide QW at different densities, shown in Fig. 3(b), further confirm our picture. The $\nu=6/5$ and 9/7 FQHSs exhibit transitions similar to their particle-hole conjugate states at $\nu=4/5$ and 5/7, respectively. The $\nu=6/5$ FQHS shows a transition at $n=1.78$. At this transition, $E_Z/V_C\simeq 0.0149$ and $\lambda/l_B\simeq 1.86$, very similar to the corresponding values (0.0145 and 1.66) at $\nu=4/5$ in Fig. 2(a), suggesting that the particle-hole symmetry $\nu\leftrightarrow
(2-\nu)$ is conserved in this case [^5]. Furthermore, the $\nu=9/7$ FQHS becomes weak twice, at $n=1.17$ and 1.55, also consistent with the $\nu=5/7$ FQHS transitions.
It is instructive to compare the transitions we observe at fractional $\nu^{CF}$ with the spin-polarization transitions of other FQHSs at integer $\nu^{CF}$. In Fig. 4, we summarize the critical $E_Z/V_C$ above which the FQHSs between $\nu=1/2$ and 3/2 become fully spin-polarized. The measurements were all made on the 60-nm-wide QW. The $x$-axis is $1/\nu^{CF}$, and we mark the electron LL filling factor $\nu$ in the top axis. The dotted lines, drawn as guides to the eye, represent the phase boundary between fully spin-polarized (above) and partially spin-polarized (below) $^2$CFs. Note that the system is always fully spin-polarized at $\nu^{CF}=-1$ ($\nu=1$) [@Kukushkin.PRL.1999; @Tiemann.Science.2012]. The critical $E_Z/V_C$ of FQHSs with integral $\nu^{CF}$ increases with $\nu^{CF}$ and reaches maxima at $\nu^{CF}=-\infty$ ($\nu=1/2$ and 3/2). Secondary maxima in the boundaries appear at $\nu^{CF}=-3/2$ ($\nu=3/4$ and 5/4), and seem to have approximately the same height as at $\nu=1/2$ and 3/2.
While Fig. 3 data strongly suggest that we are observing spin transitions of various FQHSs, there is also some theoretical justification. It has been proposed that the enigmatic FQHSs observed at $\nu=4/11$ and 5/13 in the highest quality samples can be interpreted as the FQHSs of interacting $^2$CFs at $\nu^{CF}=+4/3$ and +5/3 [@Pan.PRL.2003; @Chang.PRL.2004]. A recent theoretical study predicts a transition of the $\nu=4/11$ FQHS to full spin polarization when $E_Z/V_C$ is about 0.025 [@Mukherjee.PRL.2014]. Our observed transition of the $\nu=4/5$ ($\nu^{CF}=-4/3$) FQHS appears at $E_Z/V_C
\simeq 0.015$ to 0.024, in different QWs with well width ranging from 65 to 31 nm and corresponding $\lambda/l_B \simeq 1.7$ to 1.1 [@Liu.PRL.2012], consistent with this theoretically predicted value.
Other useful quantities for characterizing the origin of the $\nu=4/5$ FQHS and its transition are the energy gaps on the two sides of the transition. We show in Fig. 2(c) the measured excitation gaps for this state at different densities in the 65-nm-wide QW. The $\nu=4/5$ FQHS transition for this sample occurs at a density $n \simeq 1.1$; see Fig. 2(a). Before and after the transition, e.g. at $n=0.86$ and 1.64, the $\nu=4/5$ FQHS has very similar energy gaps ($\sim 0.35$ K) even though the densities differ by nearly a factor of two. Since the FQHS energy gaps at a given filling are ordinarily expected to scale with $V_C \sim \sqrt{n}$, which for these two densities would imply gaps differing by a factor of $\sqrt{1.64/0.86}\approx 1.4$, this observation suggests that the excitation gap at $\nu=4/5$ is reduced when the FQHS becomes fully spin polarized.
Finally we revisit the RIQHSs we observe near $\nu=4/5$ (see, e.g., Figs. 1(a) and 1(b)). These RIQHSs were interpreted as pinned Wigner crystal states [@Liu.PRL.2012], and the recent microwave resonance experiments confirm this interpretation [@Hatke.Nat.Comm.2014]. Moreover, the data in Ref. [@Liu.PRL.2012] as well as the data we have presented here all indicate that, whenever a transition to a RIQHS occurs, it is initiated by a transition of the $\nu=4/5$ FQHS. As we have shown here, the transition we see for the $\nu=4/5$ FQHS is a transition to a *fully spin polarized state of interacting $^2$CFs*. Combining these observations leads to a tantalizing conclusion: The fact that the RIQHS near $\nu=4/5$ always develops following a spin polarization transition of $^2$CFs strongly links the RIQHS to a pinned, *ferromagnetic Wigner crystal of $^2$CFs*, as schematically illustrated in Fig. 1(b).
We acknowledge support through the NSF (DMR-1305691) for measurements, and the Gordon and Betty Moore Foundation (Grant GBMF2719), Keck Foundation, the NSF MRSEC (DMR-0819860), and the DOE BES (DE-FG02-00-ER45841) for sample fabrication. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by the NSF (Cooperative Agreement No. DMR-1157490), State of Florida, and DOE. We thank S. Hannahs, G. E. Jones, T. P. Murphy, E. Palm, and J. H. Park for technical assistance, and J. K. Jain for illuminating discussions.
[^1]: In spinless or fully spin-polarized systems, the interacting electrons at $\nu$ are equivalent to interacting holes at $(1-\nu)$. In spinful systems, each LL is 2-fold degenerate and the particle-hole symmetry relates $\nu$ to $(2-\nu)$.
[^2]: See articles by H. A. Fertig and by M. Shayegan, in *Perspectives in Quantum Hall Effects*, Edited by S. Das Sarma and A. Pinczuk (Wiley, New York, 1997).
[^3]: As demonstrated in Ref. [@Liu.PRL.2012] (see, e.g., Fig. 3(c) of Ref. \[10\]), at the lowest temperatures, the $R_{xy}$ minimum to the right of $\nu=4/5$ also approaches the $\nu=1$ plateau ($=h/e^2$), and $R_{xy}$ at 4/5 becomes quantized at $(5/4)(h/e^2)$.
[^4]: When $E_Z/V_C$ is extremely small, the $\nu=4/5$ FQHS can possibly have a third, spin-singlet configuration where the spin-up and -down $\Lambda$-levels are both 2/3-occupied.
[^5]: In Ref. [@Liu.cond.mat.2014], the FQHSs with integer $\nu^{CF}$ near $\nu=1/2$ and 3/2 have very different spin polarization energies at similar $\lambda/l_B$, suggesting that the particle-hole symmetry is broken.
|
---
abstract: |
Let $F(\b{z})=\sum_\b{r} a_\b{r}\b{z^r}$ be a multivariate generating function which is meromorphic in some neighborhood of the origin of $\mathbb{C}^d$, and let $\sing$ be its set of singularities. Effective asymptotic expansions for the coefficients can be obtained by complex contour integration near points of $\sing$.
In the first article in this series, we treated the case of smooth points of $\sing$. In this article we deal with multiple points of $\sing$. Our results show that the central limit (Ornstein-Zernike) behavior typical of the smooth case does not hold in the multiple point case. For example, when $\sing$ has a multiple point singularity at $(1 , \ldots , 1)$, rather than $a_\b{r}$ decaying as $|\b{r}|^{-1/2}$ as $|\b{r}| \to \infty$, $a_\b{r}$ is very nearly polynomial in a cone of directions.
address:
- 'Department of Mathematics, Ohio State University, Columbus OH 43210, USA'
- 'Department of Computer Science, University of Auckland, Private Bag 92019 Auckland, New Zealand'
author:
- Robin Pemantle
- 'Mark C. Wilson'
bibliography:
- 'mvGF.bib'
title: |
Asymptotics of multivariate sequences [II]{}.\
Multiple points of the singular variety.
---
[^1]
Introduction
============
In [@pemantle-wilson-multivariate1], we began a series of articles addressing the general problem of computing asymptotic expansions for a multivariate sequence whose generating function is known. Such problems are encountered frequently in combinatorics and probability; see for instance Examples 2 – 8 in Section 1 of [@pemantle-lecnotes], which collects examples from various sources including [@larsen-lyons-coalescing; @flatto-mckean-queues-parallel; @wilf-GFology; @comtet-advanced]. Our aim is to present methods which are as general as possible, and lead to effective computation. Our apparatus may be applied to any function whose dominant singularities are poles. Among other things, we showed in [@pemantle-wilson-multivariate1] that for all nonnegative bivariate sequences of this type, our method is applicable (further work may be required in one degenerate case).
The article [@pemantle-wilson-multivariate1] handled the case when the pole variety was smooth. The present work is motivated by a collection of applications where the pole variety is composed of intersecting branches. Several examples are as follows.
1. $$F(x,y) = \frac{2}{(1-2x)(1-2y)} \left[ 2+ \frac{xy-1}{1-xy(1+x+y+2xy)}\right]$$ arises in Markov modeling [@karloff];
2. $$F(\b{z}) = \frac{G(\b{z})}{\prod_{j=1}^k L_j (\b{z})}$$ where the $L_j$ are affine, arises in queueing theory [@bertozzi-mckenna-queueing];
3. $$F(x,y,z) = \frac {x^a y^b z^c}{(1-x)(1-y)(1-z)(1-xz)(1-yz)}$$ is a typical generating function arising in enumeration of integer solutions to unimodular linear equations, and is the running example used in [@deloera-sturmfels];
4. $$F(x,y) = \frac{1}{(1 - (1/3) x - (2/3) y)(1 - (2/3) x - (1/3) y)}$$ counts winning plays in a dice game (see Example \[eg:comb\]).
5. $$F(x,y,z) = \frac{z/2}{(1-yz) P(x,y,z)}$$ arises in the analysis of random lattice tilings [@cohn-elkies-propp-aztec]. In this case, in addition to the factorization of the denominator, there is an isolated singularity in the variety defined by the polynomial $P$, for which reason the analysis is not carried out in the present paper but in [@cohn-pemantle-fixation].
These examples serve as our motivation to undertake a categorical examination of generating functions with self-intersections of the pole variety. As in the above cases, one may imagine that this comes about from a factorization of the generating function, although we show in Example \[eg:figure 8\] that the method works as well for irreducible functions with the same “multiple point” local geometry.
Our approach is analytic. For simplicity, we restrict to the two-variable case in this introduction, though our methods work for any number of variables. Given a sequence $a_{rs}$ indexed by the $2$-dimensional nonnegative integer lattice, we seek asymptotics as $r,s \to \infty$. Form the generating function $F(z,w)=\sum_{r,s}a_{rs}z^rw^s$ of the sequence; we assume that $F$ is analytic in some neighborhood of the origin.
The iterated Cauchy integral formula yields $$a_{rs}=\frac{1}{(2\pi i)^2} \int_{\mathcal{C}'} \int_{\mathcal{C}}
\frac{F(z,w)}{z^{r+1}w^{s+1}}\,dw \,dz,$$ where $\mathcal{C}$ and $\mathcal{C'}$ are circles centered at $0$ and $F$ is analytic on a polydisk containing the torus $\mathcal{C}\times
\mathcal{C'}$. Expand the torus, by expanding (say) $\mathcal{C}$, slightly beyond a [*minimal singularity*]{} of $F$ (that is, a point $(z_0 , w_0)$ at which the expanding torus first touches the singular set $\sing$ of $F$). The difference between the corresponding inner integrals is then computed via residue theory. Thus $a_{rs}$ is represented as a sum of a residue and an integral on a large torus. One hopes that the residue term is dominant, and gives a good approximation to $a_{rs}$. The residue term is itself an integral (since the residue is taken in the inner integral only). A stationary phase analysis of the residue integral shows when our hope is realized, and yields an asymptotic expansion for $a_{rs}$.
In [@pemantle-wilson-multivariate1] we considered the case when the minimal singularity in question is a smooth point of $\sing$. The present article deals with the case where the minimal singularity is a [*multiple point*]{}: locally, the singular set is a union of finitely many graphs of analytic functions. This case includes the smooth point analysis of [@pemantle-wilson-multivariate1] (Theorem \[thm:transverse\] with $n$ set to $0$ essentially generalizes the analysis of that article). In the two-variable case, we know from Lemma 6.1 of [@pemantle-wilson-multivariate1] that every minimal singularity must either have this form or be a cusp, though more complicated singularities may arise in higher dimensions.
The rest of this article is organized as follows. In the remainder of this section we describe in more detail the program begun in [@pemantle-wilson-multivariate1] and continued here and in future articles in this series. Section \[sec:prelim\] deals with notation and preliminaries required for the statement of our main results. Those results, along with illustrative examples, are listed in Section \[sec:results\]. Proofs are given in Section \[sec:proofs\]. We discuss some further details and outline future work in Section \[sec:future\].
Details of the Program {#details-of-the-program .unnumbered}
----------------------
Our notation is similar to that in [@pemantle-wilson-multivariate1]. For clarity we shall reserve the names of several objects throughout. We use [**boldface**]{} to indicate a (row or column) vector. The number of variables will be denoted $d+1$. The usual multi-index notation is in use: $\b{z}$ denotes a vector $(z_1, \dots ,z_{d+1})^T \in
\mathbb{C}^{d+1}$, and if $\b{r}$ is an integer vector then $\b{z}^{\b{r}}=\prod_j z_j^{r_j}$. We also use the convention that a function, ostensibly of $1$ variable, applied to an element of $\mathbb{C}^{d+1}$ acts on each coordinate separately — for example $e^{\b{x}}=(e^{x_1}, \dots , e^{x_{d+1}}).$ Throughout, $\numer$ and $\denom$ denote functions analytic in some polydisk about $\b{0}$ and $F=G/H=\sum_{\b{r}} a_{\b{r}}\b{z}^\b{r}$. The set where $H$ vanishes will be called the [*singular variety*]{} of $F$ and denoted by $\sing$. Let $\disk (\b{z}), \torus (\b{z})$ denote respectively the polydisk and torus (both centered at the origin) on which $\b{z}$ lies.
A crude preliminary step in approximating $a_\b{r}$ is to determine its exponential rate; in other words, to estimate $\log |a_\b{r}|$ up to a factor of $1 + o(1)$. Let $\domain$ denote the (open) domain of convergence of $F$ and let $\logdom$ denote the logarithmic domain in $\R^{d+1}$, that is, the set of $\b{x} \in \R^{d+1}$ such that $e^{\b{x}} \in \domain$. If $\b{z^*} \in \domain$ then Cauchy’s integral formula
$$\tlabel{eq:cauchy-multi} a_\b{r} = \left (
\frac{1}{2 \pi i} \right )^{d+1} \int_{\torus (\b{z^*})}
\frac{F(\b{z})}{\b{z}^{\b{r}+\b{1}}} \, d\b{z}$$
shows that $a_\b{r} = O(|\b{z^*}|^{-\b{r}})$. Letting $\b{z^*} \to \partial
\domain$ gives $$\log | a_\b{r} | \leq - \b{r} \cdot \log |\b{z^*}| +
o(|\b{r}|),$$ and optimizing in $\b{z^*}$ gives $\log |a_\b{r} | \leq \rate
(\b{r}) + o(|\b{r}|)$ where
$$\tlabel{eq:rate}
\rate (\b{r}) := - \sup_{\b{x} \in \logdom} \b{r} \cdot \b{x} \, .$$
The cases in which the most is known about $a_\b{r}$ are those in which this upper bound is correct, that is, $\log |a_\b{r}| = \rate (\b{r}) +
o(|\b{r}|)$. To explain this, note first that the supremum in (\[eq:rate\]) is equal to $\b{r} \cdot \b{x}$ for some $\b{x} \in
\partial \logdom$. The torus $\torus (\b{e^x})$ must contain some minimal singularity $\b{z^*} \in \sing \cap \partial \domain$. Asking that $\log
|a_\b{r}| \sim - \b{r} \cdot \log |\b{z^*}|$ is then precisely the same as requiring the Cauchy integral (\[eq:cauchy-multi\]), or the residue integral mentioned above, to be of roughly the same order as its integrand. This is the situation in which it is easiest to estimate the integral.
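As a concrete illustration (our example, not one drawn from the references above), take $F(z,w)=1/(1-z-w)$, so that $a_{rs}=\binom{r+s}{r}$ and $\logdom=\{(x,y): e^x+e^y<1\}$. The supremum in (\[eq:rate\]) is attained at $e^x=r/(r+s)$, $e^y=s/(r+s)$, and a few lines of code confirm that the upper bound is attained up to lower-order terms:
```python
import math

def rate(r, s):
    """-sup of r*x + s*y over {e^x + e^y < 1}, i.e. (eq:rate) for F = 1/(1-z-w)."""
    t = r + s
    return -(r * math.log(r / t) + s * math.log(s / t))

for (r, s) in [(10, 10), (50, 20), (200, 300)]:
    print(r, s, round(math.log(math.comb(r + s, r)), 2), round(rate(r, s), 2))
# log a_{rs} agrees with rate(r, s) up to the o(|r|) (here O(log|r|)) correction
```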
Our program may now be summarized as follows. Associated to each minimal singularity $\b{z^*}$ is a cone $\operatorname{\b{K}}(\b{z^*}) \subseteq (\R^+)^{d+1}$. Given $\b{r}$, we find one or more $\b{z^*} = \b{z^*} (\b{r}) \in \sing \cap
\partial \domain$ where the upper bound is least. We then attempt to compute a residue integral there. This works only if $\b{r} \in \operatorname{\b{K}}(\b{z^*})$ and if the residue computation is of a type we can handle. Our program is guaranteed to succeed in some cases, and conjectured to succeed in others. It is known to fail only in some cases where the $a_\b{r}$ are not nonnegative reals (not the most important case in combinatorial or probabilistic applications) and even then a variant seems to work.
To amplify on this, define a point $\b{z^*} \in \sing$ to be [*minimal*]{} if $\b{z^*} \in \partial \domain$ and each coordinate of $\b{z^*}$ is nonzero. There are only three possible types of minimal singularities, namely smooth points of $\sing$, multiple points and cone points (all defined below). It is conjectured that for all three types of points, and any $\b{r} \in \operatorname{\b{K}}(\b{z^*})$, we indeed have
$$\log |a_\b{r}| = \rate (\b{r}) + o(|\b{r}|) = - \b{r} \cdot \log
|\b{z^*}| + o(|\b{r}|) \, .$$ This is proved for smooth points in [@pemantle-wilson-multivariate1] via residue integration, and the complete asymptotic series obtained. It is proved in the present work for multiple points under various assumptions; the fact that these do not cover all cases seems due more to taxonomical problems rather than the inapplicability of the method. The problem remains open for cone points, along with the problem of computing asymptotics.
It is also shown in [@pemantle-wilson-multivariate1] that when $a_\b{r}$ are all nonnegative, then $\b{r} \in \operatorname{\b{K}}(\b{z^*} (\b{r}))$, and therefore that a resolution of the above problem yields a complete analysis of nonnegative sequences with pole singularities.
When the hypothesis that $a_\b{r}$ be real and nonnegative is removed, it is not always true that $\b{r}\in \operatorname{\b{K}}(\b{z^*} (\b{r}))$. In all the examples we have worked out, there have been points $\b{z^*} \notin
\overline{\domain}$ for which $\b{r} \in \operatorname{\b{K}}(\b{z^*})$ and a residue integral near $\b{z^*}$ may be proved a good approximation to $a_\b{r}$; the method is to contract the torus of integration to the origin in some way other than simply iterating the contraction of each coordinate circle to a point. Thus a second open question is to settle whether there is always such a point $\b{z^*}$ in the case of mixed signs (see the discussion in [@pemantle-wilson-multivariate1 Section 7]).
The scope of the present article is as follows. We define multiple points and carry out the residue integral arising near a strictly minimal, multiple point. Unlike the case for smooth points, the integral is not readily recognizable as a standard multivariate Fourier-Laplace integral, and a key result of this article is a more manageable representation of the residue to be integrated (Corollary \[cor:residue sum\] and Lemma \[lem:tooscint\]). The Fourier-Laplace integrals arising fall just outside the scope of the standard references, and we have been led to develop generalizations of known results. These results, which would take too much space here, are included in [@pemantle-wilson-analysis].
The asymptotics arising from multiple point singularities differ substantially from asymptotics in the smooth case. In the remainder of this introduction, we give examples to illustrate this.
Let $F(z,w)$ be a two-variable generating function and suppose that the point $(1,1)$ is a double pole of $F$ (thus $F = \numer / \denom$ with $\numer (z,w)
\neq 0$ and $\denom (z,w)$ vanishing to order $2$ at $(1,1)$). If $F$ has no other poles $(z,w)$ with $|z| , |w| \leq 1$, and if the two branches of the singular variety $\sing$ meet transversely at $(1,1)$ as in Figure \[fig:simple\], then for some positive constants $c$ and $C$,
$$\tlabel{eq:plateau 1}
a_{rs} = C + O(e^{-c |(r,s)|})$$
for all $(r,s)$ in a certain cone, $\operatorname{\b{K}}$, in the positive integer quadrant. This is proved in Theorem \[thm:double-point-generic\] below, and the constant $C$ computed. Exact statement of the transversality hypothesis requires some discussion of the geometry of $\sing$. The constant $C$ is computed in terms of some algebraic quantities derived from $F$. The cone $\operatorname{\b{K}}$ in which this holds is easily described in terms of the tangents to the branches of the double pole. The need for some preliminary algebraic and geometric analysis to define transversality and to compute $C$ and $\operatorname{\b{K}}$ motivates our somewhat lengthy Section \[sec:prelim\].
Example \[eg:figure 8\] below shows the details of this computation for a particular $F$. The exponential bound on the error follows from results in [@pemantle-finite-dimension] which are cited in Section \[sec:proofs\]. If the multiple pole is moved to a point $(z^*,w^*)$ other than $(1,1)$, a factor of $(z^*)^{-r} (w^*)^{-s}$ is introduced.
Compare this with the case where $\sing$ is smooth, intersecting the positive real quadrant as shown in Figure \[fig:smooth\]. In this case one has Ornstein-Zernike (central limit) behavior. Suppose, as above, that $(1,1)\in \sing$ and $F$ has no other poles $(z,w)$ with $|z| , |w| \leq 1$. Then as $(r,s) \to \infty$ with $r/s$ fixed, $a_{rs}$ is rapidly decreasing for all but one value of $r/s$; for that distinguished direction, $a_{rs} \sim C |(r,s)|^{-1/2}$ for some constant, $C$, and the error terms may be developed in a series of decreasing powers of $|(r,s)|$. The two main differences between the double and single pole cases are thus the existence of a plateau in the double pole case, and the flatness up to an exponentially small correction versus a correction of order $|(r,s)|^{-3/2}$ for a single pole.
Alter the previous example so that $\sing$ has a pole of some order $n+1$ at $(1,1)$, as in Figure \[fig:tent\]. Then the formula (\[eq:plateau 1\]) becomes instead
$$a_{rs} = P(r,s) + O(e^{-c |(r,s)|})$$ where $P$ is a piecewise polynomial of degree $n-1$. In Example \[eg:distinct-tangents\] below, $P$ is explicitly computed in the case $n+1 = 3$. Again, we see the chief differences from the smooth case being a cone of non-exponential decay, and an exponentially small correction to the polynomial $P(r,s)$ in the interior of the cone.
In higher dimensions there are more possible behaviors, but the same sorts of results hold. There is a cone on which the exponential rate has a plateau; under some conditions the correction terms within this cone are exponentially small. The following table summarizes the results proved in this paper; transversality assumptions have been omitted.\
[**Theorem number**]{} [**Hypotheses**]{} [**Asymptotic behavior**]{}
-------------------------------------- --------------------------------- ------------------------------------------
Theorem \[thm:double-point-generic\] two curves in 2-space $C + \expsmall$
Theorem \[thm:compnondeg\] $d+1$ sheets in $(d+1)$-space $C + \expsmall$
Theorem \[thm:nondeg\] more sheets than the dimension $\poly + \expsmall$
Theorem \[thm:transverse\] fewer sheets than the dimension asymptotics start with $|\rr|^{n/2-d/2}$
Theorem \[thm:degenerate\] all sheets tangent asymptotics start with $|\rr|^{n-d/2}$
\
We conclude the introduction by giving a combinatorial application of Example \[eg:simple\].
An independent sequence of random numbers uniform on $[0,1]$ is used to generate biased coin-flips: if $p$ is the probability of heads then a number $x \leq p$ means heads and $x > p$ means tails. The coins will be biased so that $p=2/3$ for the first $n$ flips, and $p=1/3$ thereafter. A player desires to get $r$ heads and $s$ tails and is allowed to choose $n$. On average, how many choices of $n \leq r + s$ will be winning choices?
The probability that $n$ is a winning choice for the player is precisely $$\sum_{a+b=n}\binom{n} {a} (2/3)^a (1/3)^b \binom{r+s-n} {r-a} (1/3)^{r-a}
(2/3)^{s-b} \, .$$ Let $a_{rs}$ be this expression summed over $n$. The array $\{ a_{rs} \}_{r , s \geq 0}$ is just the convolution of the arrays $\binom{r+s} {r} (2/3)^r (1/3)^s$ and $\binom{r+s} {r} (1/3)^r
(2/3)^s$, so the generating function $F(z,w) := \sum a_{rs} z^r w^s$ is the product $$F (z,w) = \frac{1} {(1 - \frac{1} {3} z - \frac{2} {3} w)
(1 - \frac{2} {3} z - \frac{1} {3} w)} \, .$$ Applying Theorem \[thm:double-point-generic\] with $G \equiv 1$ and $\det \operatorname{{\b{H}}}= -1/9$, we see that $a_{rs} = 3$ plus a correction which is exponentially small as $r,s \to \infty$ with $r/(r+s)$ staying in any subinterval of $(1/3,2/3)$. A purely combinatorial analysis of the sum may be carried out to yield the leading term, 3, but says nothing about the correction terms. The diagonal extraction method of [@hautus-klarner-diagonal] yields very precise information for $r=s$ but nothing more general in the region $1/3< r/(r+s) < 2/3$.
Preliminary definitions and notation
====================================
For each $\b{z}\in\mathbb{C}^{d+1}$, the truncation $(z_1, \dots ,z_d)$ will be denoted $\w{\b{z}}$, and the last coordinate $z_{d+1}$ simply by $z$. We do not specify the size for constant vectors — for example $\b{1}$ denotes the vector $(1, \dots ,1)^T$ of whatever size is appropriate. Thus we write $\w{\b{1}}=\b{1}$. A minimal singularity $\b{z^*}$ is [*strictly minimal*]{} if $\sing \cap \disk (\b{z^*}) = \{ \b{z^*} \}$. When a minimal point is not strictly minimal, one must add (or integrate) contributions from all points of $\sing \cap \torus (\b{z^*})$. This step is routine and will be carried out in a future article; we streamline the exposition here by assuming strict minimality.
This article deals entirely with minimal points $\b{z^*}$ of $\sing$ near which $\sing$ decomposes as a union of sheets $\sing_j$, each of which is a graph of an analytic function $z=u_j(\w{\b{z}})$. The algebraic description of this situation is as follows. By the Weierstrass Preparation Theorem [@griffiths-harris-principles] there is a neighborhood of $\b{z^*}$ in which we may write $H(\b{z}) = \chi (\b{z})
W(\b{z})$ where $\chi$ is analytic and nonvanishing and $W$ is a [*Weierstrass polynomial*]{}. This means that $$W(\b{z}) = z^{n+1} +
\sum_{j=0}^n \chi_j (\zh) z^j$$ where the multiplicity $n+1$ is at least 2 if $\b{z^*}$ is not a smooth point, and the analytic functions $\chi_j$ vanish at $\zsh$.
Now suppose that $\b{z^*}$ has all coordinates nonzero. Recalling from Lemma 6.1 of [@pemantle-wilson-multivariate1] that any such minimal point of $\sing$ is locally homogeneous, we see in fact that $\chi_j$ vanishes to homogeneous degree $n+1-j$ at $\zsh$ and that $\chi_0$ has nonvanishing pure $z_i^{n+1}$ terms for each $1 \leq i \leq d$. Then $\sing$ is locally the union of smooth sheets if the degree $n+1$ homogeneous part of $W$ (the leading term) factors completely into [*distinct*]{} linear factors, while if $\sing$ is locally the union of smooth sheets then we have a factorization
$$\tlabel{eq:local fac} H(\b{z})= \chi (\b{z}) \prod_{j=0}^n [z - u_j
(\zh)]$$
for analytic ([*not necessarily distinct*]{}) functions $u_j$ mapping a neighborhood of $\zsh$ to a neighborhood of $z^*$.
If the expansion of $H$ near $\b{z^*}$ vanishes to order $n+1$ in $z$, then there are always $n+1$ solutions (counting multiplicity) to $H(\zh , z) = 0$ for $\zh$ near $\zsh$. These vary analytically, but may be parametrized by $n+1$ analytic functions $u_j$ only if there is no monodromy, that is, if one stays on the same branch moving around a cycle in the complement of the singular set. Thus a multiple point is a singularity whose monodromy group is trivial. We remark also that in two variables, local homogeneity of the minimal point $\b{z^*}$ nearly implies it is a multiple point. The only other possibility is a cusp whose tangents are all equal; such a singularity turns out not to affect the leading order asymptotics and to affect the lower order asymptotics only in a single direction.
It turns out to be more convenient to deal with the reciprocals $v_j=1/u_j$. The basic setup throughout the rest of this article is as follows.
The point $\b{z^*}$ of $\sing$ is a [**multiple point**]{} of degree $n+1$ if there are analytic functions $v_0, \dots
,v_n, \phi$ and a local factorization
$$\tlabel{eq:weier} F(\b{z})=\frac{\phi(\b{z})}{\prod_{j=0}^n
\left(1-zv_j(\w{\b{z}})\right)} \, ,$$
which we call the [**Weierstrass form**]{} of $F$, such that
- $(z^*)_{d+1}v_j (\w{\b{z^*}})=1$ for all $j$;
- each $\displaystyle \frac{\partial v_j}{\partial z_k} (\w{\b{z^*}}) \neq 0$;
- $z_{d+1}v_j (\w{\b{z}})=1$ for some $j$ if and only if $\b{z} \in \sing$.
Let $\sing_j$ denote the local hypersurface parametrized by $z = u_j
(\zh)$. Then the first of the conditions says that each $\sing_j$ passes through $\b{z^*}$: there are no extraneous factors in the denominator representing surface elements not passing through $\b{z^*}$. The last condition says that the zeros of the denominator are exactly the poles of $F$: there are no extraneous factors in the denominator vanishing at $\b{z^*}$ and cancelling a similar divisor in the numerator.
It may appear that some generality has been lost in imposing the second condition, since we are assuming that each sheet of $\sing$ projects diffeomorphically onto any coordinate hyperplane. This latter property is in fact guaranteed by the non-vanishing of the pure $z_i^{n+1}$ terms of $W$.
To compute $\phi$ directly from $\numer$ and $\denom$, differentiate (\[eq:local fac\]) $n+1$ times in the $z := z_{d+1}$ coordinate at the point $\b{z^*}$ to write $$\left ( \frac{\partial} {\partial z} \right )^{n+1}
\denom (\b{z^*}) = (n+1)! \, \chi (\b{z^*}) \, .$$ We may then write $$\frac{\phi} {\prod_{j=0}^n (1 - z v_j (\b{\w{z}}))} = \frac{\numer}
{\denom} = \frac{\numer} {\chi \prod_{j=0}^n (-u_j) \prod_{j=0}^n
(1 - z v_j (\b{\w{z}}))}$$ and solve for $\phi$ at $\b{z} = \b{z^*}$ to obtain $$\tlabel{eq:comp phi}
\phi (\b{z^*}) = \frac{(n+1)!} {(-(z^*)_{d+1})^{n+1}} \, \frac{G(\b{z^*})}
{\left ( \frac{\partial} {\partial z_{d+1}} \right )^{n+1}
\denom (\b{z^*})} \, .$$
For the remaining definitions, fix a strictly minimal element $\b{z^*}\in\mathcal{V}$ which is a multiple point, and let $v_0, \dots ,v_n$ be as above.
For each sheet $\sing_j$ and multiple point $\b{z^*}$, let $\operatorname{\b{dir}}_j(\b{z^*})$ be the vector defined by $$\operatorname{\b{dir}}_j(\b{z^*}) := \left. \left(\frac{z_1}{v_j} \frac{\partial v_j}{\partial z_1}, \dots , \frac{z_d}{v_j}\frac{\partial v_j}{\partial z_d}, 1\right) \right|_{\b{z} = \b{z^*}}.$$ We denote by $\operatorname{\b{K}}(\b{z^*})$ the positive hull of all the $\operatorname{\b{dir}}_j(\b{z^*})$ and by $\operatorname{\b{K}}_0(\b{z^*})$ their convex hull, in other words the intersection of $\operatorname{\b{K}}(\b{z^*})$ with the hyperplane $z_{d+1}=1$. Geometrically, $\operatorname{\b{K}}$ is precisely the collection of outward normal vectors to support hyperplanes of the logarithmic domain of convergence of $F$ at the point $(\log |z^*_1| , \ldots , \log |z^*_{d+1}| )$; see [@pemantle-wilson-multivariate1] for details.
Let $\cmat(\b{z^*})$ be the matrix whose $j$th row is $\operatorname{\b{dir}}_j(\b{z^*})$. We say that $\b{z^*}$ is [**nondegenerate**]{} if the rank of $\cmat$ is $d+1$, [**transverse**]{} if the rank is $n+1$, and [**completely nondegenerate**]{} if it is both transverse and nondegenerate; in this case necessarily $n=d$ and the multiplicity of each sheet is $1$.
If $K$ is a subset of $\operatorname{\b{K}}$ or $\operatorname{\b{K}}_0$ consisting of vectors whose directions are bounded away from the walls (equivalently the image of $K$ under the natural map $\bar{}: \mathbb{R}^{d+1}
\setminus \{ \b{0} \} \to \mathbb{RP}^d$ is a compact subset of the interior of $\overline{\operatorname{\b{K}}}$) then we shall say that $K$ is [**interior**]{} to $\operatorname{\b{K}}$ (or $\operatorname{\b{K}}_0$).
The importance of $\operatorname{\b{K}}$ is that analysis near $\b{z^*}$ will yield asymptotics for $\b{r}$ in precisely the directions in $\operatorname{\b{K}}(\b{z^*})$. In the smooth case, $n = 0$ and so $\operatorname{\b{K}}(\b{z^*})$ reduces to a single ray. The point $\b{z^*}$ is transverse if and only if the normals there to the surfaces $\sing_j$ span a space of dimension $n+1$. Alternatively, the $n+1$ tangent hyperplanes intersect transversally. When $n>d$, transversality must be violated; nondegeneracy means that there is as little violation as possible.
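To make these definitions concrete, the following small sympy computation (an added illustration, using the dice-game function of Example \[eg:comb\] with $w$ as the last coordinate) finds the vectors $\operatorname{\b{dir}}_j$, the matrix $\cmat$ and the cone $\operatorname{\b{K}}$ at the double point $(1,1)$:
```python
import sympy as sp

z, w = sp.symbols('z w')
# Sheets of V for H = (1 - z/3 - 2w/3)(1 - 2z/3 - w/3): solve each factor as w = u_j(z)
u = [sp.solve(1 - z/3 - 2*w/3, w)[0], sp.solve(1 - 2*z/3 - w/3, w)[0]]
v = [1/uj for uj in u]                                 # reciprocals v_j = 1/u_j

dirs = [sp.Matrix([[(z/vj * sp.diff(vj, z)).subs(z, 1), 1]]) for vj in v]
Gamma = sp.Matrix.vstack(*dirs)                        # rows are dir_j(1, 1)
print(Gamma)                          # Matrix([[1/2, 1], [2, 1]]): cone K is 1/2 <= r/s <= 2
print(Gamma.rank(), Gamma.det())      # rank 2 = d + 1 and det -3/2: completely nondegenerate
```
The two slopes $1/2$ and $2$ are exactly the boundary directions found for this function in Example \[eg:comb\].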
Main theorems and illustrative examples
=======================================
All of the results in this section are ultimately proved by reducing the problem to the computation of asymptotics for a Fourier-Laplace integral (the proofs are presented in later sections). Owing to the large number of possibilities arising in this analysis, constructing a complete taxonomy of cases is rather challenging, and the number of potential theorems is enormous.
We have chosen to present a series of theorems of varying complexity and generality. Taken together, they completely describe asymptotics associated with nondegenerate multiple points. Our analysis of other types of multiple points requires additional (mild) hypotheses, which will almost always be satisfied in applications. The most important case is that of transversal points, but we also treat various types of tangencies and degeneracies. However, it is always possible that a practical problem involving a meromorphic generating function may not fit neatly into our classification scheme. We hope to convince the reader that in such a situation our basic method will yield Fourier-Laplace integrals from which asymptotics can almost certainly be extracted in a systematic way.
In two variables, our results specialize to the following cases. If $\sing$ is locally the union of $n+1$ analytic graphs, then either at least two tangents are distinct, in which case Theorem \[thm:nondeg\] applies, or all tangents coincide. This latter case is more complicated and we require some extra hypotheses; see Theorem \[thm:degenerate\].
We begin with the simplest case, in which the result admits a relatively self-contained statement, with as little extra notation as possible.
Let $F$ be a meromorphic function of two variables, not singular at the origin, with $F(z,w)=\numer(z,w)/\denom(z,w)=\sum_{r,s} a_{rs}z^rw^s$.
Suppose that $(z^*,w^*)$ is a strictly minimal, double point of $\sing$. Let $\operatorname{{\b{H}}}(z^*,w^*)$ denote the Hessian of $H$ at $(z^*,w^*)$ and suppose that $\det \operatorname{{\b{H}}}(z^*,w^*)\neq 0$.
Then
1. for each subset $K$ of $\operatorname{\b{K}}(z^*,w^*)$, there is $c > 0$ such that
$$a_{rs} = (z^*)^{-r}(w^*)^{-s} \left ( \frac{G(z^*,w^*)}{\sqrt{-(z^*)^2(w^*)^2
\det \operatorname{{\b{H}}}(z^*,w^*)}} + O(e^{-c |(r,s)|}) \right ) \qquad
\text{uniformly for $(r,s)\in K$}.$$
2. if $\delta = r/s$ lies on the boundary of $\overline{\operatorname{\b{K}}(z^*,w^*)}$, then in direction $\delta$ there is a complete asymptotic expansion $$a_{rs} \sim (z^*)^{-r}(w^*)^{-s} \sum_{k\geq 0} b_k s^{-(k+1)/2}$$ where $b_0 = \frac{G(z^*,w^*)}{2\sqrt{-(z^*)^2(w^*)^2 \det \operatorname{{\b{H}}}(z^*,w^*)}}.$
The geometric significance of the nonsingularity of $\operatorname{{\b{H}}}(z^*,w^*)$ is that this is equivalent to the curves $\sing_0, \sing_1$ intersecting transversally. If, on the other hand, $\det \operatorname{{\b{H}}}(z^*,w^*)=0$ (equivalently the curves are tangent), then higher order information is required in order to compute the relevant asymptotics. We treat the latter situation in Theorem \[thm:degenerate\].
Note that in the situation of the above theorem, the exponential rate of $a_{rs}$ is the same throughout the interior of the cone $\operatorname{\b{K}}(z^*,w^*)$: $\log a_{rs} = - (r,s) \cdot \vv + O(1)$ with $\vv = (\log z^* , \log w^*)$ fixed. This differs considerably from the asymptotics previously derived for smooth points [@pemantle-wilson-multivariate1].
Consider $F=1/\denom$, where $\denom(z,w)=19-20z-20w+5z^2+14zw+5w^2-2z^2w-2zw^2+z^2w^2$. The real points of $\sing$ are shown in Figure \[fig:eight\].
The only common zero of $\denom$ and $\operatorname{\nabla}\denom$ is $(1,1)$, and so this is the only candidate for a double point. All other strictly minimal elements of $\sing$ must be smooth. The second order part of the Taylor expansion of $H$ near $(1,1)$ is $4(w-1)^2+10(z-1)(w-1)+4(z-1)^2=4[w-(3-2z)][w-(3/2-z/2)]$. The Hessian determinant at $(1,1)$ is therefore $-36$. The two tangent lines have slopes $-1/2$ and $-2$, so that asymptotics in directions $r/s=\kappa\in [1/2,2]$ cannot be obtained by smooth points. By Theorem \[thm:double-point-generic\], for slopes in the interior of this interval, the asymptotic $a_{rs}\sim 1/6$ holds, whereas on the boundary of the cone (slopes $1/2$ or $2$), $a_{rs}\sim 1/12$. Directions corresponding to slopes outside the interval $[1/2, 2]$ may be treated by the methods of [@pemantle-wilson-multivariate1]. The exponential order is nonzero in these directions.
This example demonstrates that the local behavior of $H$ at the strictly minimal point $(1,1)$ determines asymptotics. In fact $H(1+z, 1+w)$ is homogeneous and irreducible in $\mathbb{C}[z,w]$ and hence does not factor globally (in the power series ring $\mathbb{C}[[z,w]]$).
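For a numerical sanity check, the coefficients may be generated from the linear recurrence implied by $\denom \cdot F = 1$ and compared with the limiting value $1/6$ along the diagonal:
```python
import numpy as np

# Recurrence for the coefficients of F = 1/H with
# H = 19 - 20z - 20w + 5z^2 + 14zw + 5w^2 - 2z^2 w - 2z w^2 + z^2 w^2
N = 120
a = np.zeros((N + 1, N + 1))
def get(r, s):
    return a[r, s] if r >= 0 and s >= 0 else 0.0

for r in range(N + 1):
    for s in range(N + 1):
        rhs = 1.0 if (r, s) == (0, 0) else 0.0
        rhs += (20*get(r-1, s) + 20*get(r, s-1) - 5*get(r-2, s) - 14*get(r-1, s-1)
                - 5*get(r, s-2) + 2*get(r-2, s-1) + 2*get(r-1, s-2) - get(r-2, s-2))
        a[r, s] = rhs / 19.0

print([round(a[k, k], 4) for k in (20, 60, 120)])   # should approach 1/6 ~ 0.1667
```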
The previous theorem treated the simplest case, when $d=n=1$, and our subsequent results will be labelled in a similar way. We begin with some theorems involving nondegenerate points. The simplest case is when the point is completely nondegenerate (a generalization of the hypothesis of Theorem \[thm:double-point-generic\]).
Let $F$ be a function of $d+1$ variables with $F(\b{z}) =
\numer(\b{z})/\denom(\b{z}) = \sum_{\b{r}}a_{\b{r}}\b{z}^\b{r}$. Suppose that $\b{z^*}$ is a strictly minimal, completely nondegenerate multiple point of $\sing$ and that $F$ is meromorphic in a neighborhood of $\torus (\b{z^*})$.
Then there is $C$ such that for each subset $K$ of $\operatorname{\b{K}}(\b{z^*})$, there is $c > 0$ satisfying $$a_{\b{r}}\sim (\b{z^*})^{-\b{r}} \left ( C + O(e^{- c |\rr|}) \right )
\text{ uniformly for $\rr \in K$}.$$ The constant $C$ is given by $$C = \frac{d! \, \phi(\b{z^*})}{|\det \cmat (\b{z^*})|}.$$
The picture to keep in mind here is of $d+1$ complex hypersurfaces in $\CC^{d+1}$ intersecting at a single point. Notice that we again obtain asymptotic constancy on the interior of the cone of allowable directions, provided $\numer(\b{z^*})\neq 0$. If $\b{r}$ belongs to the boundary of $\operatorname{\b{K}}$, many different asymptotics are possible.
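As a cross-check of the formula for $C$, the constant can be evaluated from (\[eq:comp phi\]) for the dice-game function of Example \[eg:comb\] ($d=n=1$, $\b{z^*}=(1,1)$, with $w$ as the last coordinate); it reproduces the value $3$ obtained earlier from Theorem \[thm:double-point-generic\]:
```python
import sympy as sp

z, w = sp.symbols('z w')
G = sp.Integer(1)
H = (1 - z/3 - 2*w/3) * (1 - 2*z/3 - w/3)
n, zstar, wstar = 1, 1, 1                             # two sheets (n + 1 = 2), z* = (1, 1)

# phi(z*) via eq. (comp phi), with w playing the role of z_{d+1}
phi_star = (sp.factorial(n + 1) / (-wstar)**(n + 1)
            * G / sp.diff(H, w, n + 1)).subs({z: zstar, w: wstar})

Gamma = sp.Matrix([[sp.Rational(1, 2), 1], [2, 1]])   # rows dir_j(1,1), as computed above
C = sp.factorial(1) * phi_star / abs(Gamma.det())     # C = d! * phi(z*) / |det Gamma|
print(phi_star, C)                                    # 9/2 and 3
```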
In case $d<n$, a modification of the last result is required, involving some linear algebra which we now introduce. Let $\simp=\simp_n$ denote the standard $n$-simplex in $\mathbb{R}^{n+1}$, $$\simp := \left \{ \bb{\alpha} \in \mathbb{R}^{n+1} \left |
\sum_{j=0}^{n}\alpha_j = 1 \right. \text{ and } \alpha_j \geq 0
\text{ for all } j \right \}.$$ The subspace $\b{1}^\perp$ parallel to $\simp$ will be denoted by $\hyper$. Assuming nondegeneracy of a minimal point $\b{z^*}$, the rows of $\cmat(\b{z^*})$ are $n+1$ vectors spanning $\R^{d+1}$, whence for any $\direc\in
\mathbb{R}^{d+1}$ the space of solutions $\mathcal{A}:= \mathcal{A}(\direc) := \{
\bb{\alpha} \mid \bb{\alpha} \cmat (\b{z^*}) = \direc \}$ will always be $(n-d)$-dimensional. The set $\mathcal{A} \cap \simp$ is just the set of coefficients of convex combinations of rows of $\cmat$ that yield the given vector $\direc$. If $\direc \in \operatorname{\b{K}}_0$ then $\mathcal{A}$ is a subset of the translate of $\hyper$ containing $\simp$, thus if $\direc$ is in the interior of $\operatorname{\b{K}}_0$ then $\mathcal{A} \cap \simp$ is $(n-d)$-dimensional. Let $\mathcal{A}^\perp$ denote the orthogonal complement in $\hyper$ of $\mathcal{A}_0$, the set $\mathcal{A}$ translated to the origin. The dimension (later denoted $\rho$) of $\mathcal{A}^\perp$ is $d$. The space $\mathcal{A}^\perp$ does not depend on $\direc$, since changing $\direc$ only translates $\mathcal{A}$.
The role that $\mathcal{A}$ will play is this: there will be an integral over $\simp$ whose stationary points are essentially the points of $\mathcal{A}$, each contributing the same amount; thus the total contribution will be the measure of $\mathcal{A}$ in the [*complementary measure*]{} which we denote by $\sigma$. The proper normalization in the definition of $\sigma$ is $$\sigma (S) := \sigma (S \cap \mathcal{A} \cap \simp)
:= \frac{\mu_{n-d} (S \cap \mathcal{A} \cap \simp)} {\mu_n (\simp)}$$ where $\mu_k$ denotes $k$-dimensional volume in $\mathbb{R}^{n+1}$. One may think of this as: $\sigma \times \mu_d =$ the normalized volume measure, $\mu$, on $\simp$ ([*Warning*]{}: when $\mathcal{A}$ is a single point, it is tempting to assume that $\sigma (\mathcal{A}) = 1$, but in fact in this case $\sigma (\mathcal{A}) = \mu_n(\simp)^{-1}$.)
Assume that the columns of $\cmat$ are linearly independent. Then the projection of the column space of $\cmat$ onto $\mathcal{A}^\perp$ has dimension $d$.
The matrix $\cmat$ is the sum of three columnwise projections, namely one onto $\mathcal{A}_0$, one onto $\mathcal{A}^\perp$ and one onto the span of $\b{1}$. The sum of the first two projections annihilates the last column of $\cmat$, mapping the column space to a space of dimension $d$. By definition of $\mathcal{A}$, the space $\mathcal{A} \cmat$ is a single point, hence $\mathcal{A}_0 \cmat = 0$, meaning that the first projection is null. Therefore, the second projection has rank $d$.
When the columns of $\cmat$ are linearly independent, let $\overline{\cmat}$ denote the matrix representing the linear transformation $\vv \mapsto \vv \cmat$ on $\mathcal{A}^\perp$ with respect to some basis of $\mathcal{A}^\perp$. Note that $\overline{\cmat}$ is independent of $\direc$ since $\mathcal{A}^\perp$ is, and that $\det \overline{\cmat}$ is independent of the choice of basis of $\mathcal{A}^\perp$. For general $\cmat$, we extend this definition so that $\overline{\cmat}$ is the projection of the column space of $\cmat$ onto the space $\mathcal{A}^\perp$, then represented in a (fixed but arbitrary) orthonormal basis of $\mathcal{A}^\perp$.
In stating subsequent results, we shall rely increasingly on derived data such as $\overline{\cmat}$. It is possible in principle to give formulae for these asymptotics in terms of the original data $\numer$ and $\denom$, but such expressions rapidly become too cumbersome to be useful.
Let $F$ be a function of $d+1$ variables, with $F(\b{z})=\numer(\b{z})/\denom(\b{z})=\sum_{\b{r}}a_{\b{r}}\b{z}^\b{r}$. Suppose that $\b{z^*}$ is a strictly minimal, nondegenerate multiple point of degree $n+1$ of $\sing$ and that $F$ is meromorphic in a neighborhood of $\torus (\b{z^*})$.
Then $\operatorname{\b{K}}(\b{z^*})$ is a finite union of cones $\operatorname{\b{K}}_j$ such that for each subset $K$ of each $\operatorname{\b{K}}_j$, there are $c>0$ and a polynomial $P$ of degree at most $n-d$, such that $$a_{\b{r}} = (\b{z^*})^{-\b{r}} \left ( P(\rr) + O(e^{-c |\rr|}) \right )
\text{\qquad uniformly for $\rr \in K$.}$$ Here $$P(\rr) = \frac{\phi(\b{z^*})\sigma(\mathcal{A}(\direc) \cap \simp)}{|\det
\overline{\cmat(\b{z^*})}|} \, (r_{d+1})^{n-d} + O((r_{d+1})^{n-d-1})\, ,$$ where $\mathcal{A}(\direc)$ is the solution set to $\bb{\alpha} \cmat(\b{z^*})
= \direc$ with $\direc = \rr / r_{d+1} \in \operatorname{\b{K}}_0(\b{z^*})$, and $\sigma$ is the complementary measure.
The approximation to $a_\rr$ is actually piecewise polynomial and is asymptotically valid throughout the interior of $\operatorname{\b{K}}$. The correction term may, however, fail to be exponentially small on some lower-dimensional surfaces in the interior of $\operatorname{\b{K}}$ where the piecewise polynomial is pieced together.
We have broken the coordinate symmetry in the formula for the leading term by parametrizing $\rr$ in terms of $r_{d+1}$ and $\direc$. See section \[sec:future\] for more comments.
The simplest interesting case to which Theorem \[thm:nondeg\] applies is that where $\sing$ is the union of $3$ curves in $2$-space which intersect at a strictly minimal point. Suppose that the strictly minimal singularity in question is at $(1,1)$. In this case since $d=1$ and $n=2$, Theorem \[thm:nondeg\] shows that $$a_{rs}\sim P(r,s)$$ for some piecewise polynomial $P$ of degree at most $1$, whenever $\delta:= r/s$ lies in the interior of the interval formed by the slopes $c_j=v_j'(1)$. We shall obtain a more explicit expression for $P$.
Without loss of generality, we suppose that $0 < c_0 \leq c_1 \leq c_2$, at least one of the two inequalities being strict. Assume for now that both inequalities are strict: $c_0 < c_1 < c_2$. Let $\mathcal{A}_\delta=\{(\alpha_0,\alpha_1,\alpha_2)\in \simp \mid \alpha_0
c_0 + \alpha_1 c_1 + \alpha_2 c_2 = \delta\}$. When $\delta$ belongs to the convex hull of $c_0, c_1$ and $c_2$, the affine set $\mathcal{A}_\delta$ is a line segment whose endpoints are on the boundary of the simplex $\simp$. Since $\mathcal{A}$ is orthogonal to $\b{c}$, the set $\mathcal{A}^\perp$ is parallel to the projection $\bar{\b{c}}$ of $\b{c}$ onto $\hyper$. Letting $\mu$ denote the mean of the $c_j$ and $\Sigma/\sqrt{3}$ their standard deviation, we see that the projection is $\b{c} - \mu \b{1}$ and its euclidean length is $\Sigma$.
If $c_0<\delta<c_1$, then one endpoint of the line segment is on the face $\alpha_2=0$, with $\alpha_0=\frac{c_1-\delta}{c_1-c_0},\alpha_1=\frac{\delta-c_0}{c_1-c_0}$. The other endpoint is when $\alpha_1=0$, with $\alpha_0=\frac{c_2-\delta}{c_2-c_0},
\alpha_2=\frac{\delta-c_0}{c_2-c_0}$. The squared euclidean length of this line segment simplifies to $$3\frac{(\delta-c_0)^2 \Sigma^2}{(c_1-c_0)^2 (c_2-c_0)^2}.$$ A similar argument gives the answer when $c_1<\delta <c_2$. The squared euclidean length of the line segment is then $$3\frac{(\delta-c_2)^2 \Sigma^2}{(c_2-c_1)^2 (c_2-c_0)^2} .$$ Both answers agree, and by continuity give the correct answer, when $\delta=c_1$. Since the area of $\simp_2$ is $\sqrt{3}/2$, the complementary measure $\sigma$ of $\mathcal{A}_\delta$ is $2/\sqrt{3}$ times its euclidean length. Thus we obtain $P(r,s) = sf(r/s)$, where $$f(\delta) =
\begin{cases}
2 \frac{\delta - c_0}{(c_1 - c_0) (c_2 - c_0)}, \qquad \text{if
$c_0\leq \delta\leq c_1$};\\
2 \frac{c_2 - \delta}{(c_2 - c_1) (c_2 - c_0)}, \qquad \text{if
$c_1\leq \delta\leq c_2$}.
\end{cases}$$ Thus $$\label{eq:P}
P(r,s) =
\begin{cases}
2 \frac{r - c_0 s}{(c_1 - c_0) (c_2 - c_0)}, \qquad
\text{if $c_0\leq r/s \leq c_1$};\\
2 \frac{c_2 s - r}{(c_2 - c_1) (c_2 - c_0)}, \qquad
\text{if $c_1\leq r/s \leq c_2$}.
\end{cases}$$ Note that $f=0$ on the boundary of the cone, namely when $\delta=c_0$ or $\delta=c_2$. The above formula extends to the case $c_0=c_1$ or $c_1=c_2$ in the obvious way. This example illustrates the piecewise polynomial nature of the asymptotics in $\operatorname{\b{K}}$. The approximation $a_{rs} \sim P(r,s)$ is exponentially close for $\delta$ in compact subintervals of $(c_0 , c_1) \cup (c_1 , c_2)$, but requires polynomial correction when $\delta \to c_1$ as well as at the endpoints $c_0$ and $c_2$.
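The slightly fiddly length computation above can be confirmed symbolically; for instance, the following sympy fragment checks the first of the two squared-length formulas (the case $c_1<\delta<c_2$ is entirely analogous):
```python
import sympy as sp

c0, c1, c2, d = sp.symbols('c0 c1 c2 delta', positive=True)

# Endpoints of A_delta for c0 < delta < c1 (on the faces alpha_2 = 0 and alpha_1 = 0)
p = sp.Matrix([(c1 - d)/(c1 - c0), (d - c0)/(c1 - c0), 0])
q = sp.Matrix([(c2 - d)/(c2 - c0), 0, (d - c0)/(c2 - c0)])

mu = (c0 + c1 + c2) / 3
Sigma2 = (c0 - mu)**2 + (c1 - mu)**2 + (c2 - mu)**2    # Sigma^2, as in the text

claimed = 3 * (d - c0)**2 * Sigma2 / ((c1 - c0)**2 * (c2 - c0)**2)
print(sp.cancel((p - q).dot(p - q) - claimed))         # 0: the two expressions agree
```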
More general formulae for $P$ in this case are described in the forthcoming work [@baryshnikov-pemantle].
We move on to investigate situations which are degenerate according to our terminology. We begin with the simplest case, namely when the minimal point is transverse. Note that in this case, the set $\mathcal{A}$ of linear combinations of the rows of $\cmat$ that yield $\direc$ is always a single point since the rows of $\cmat$ are linearly independent.
If $\b{z^*}$ is a strictly minimal multiple point and $\bb{\alpha}$ is an element of $\simp$, we define the matrix $Q = Q(\b{z^*} , \bb{\alpha})$ to be the Hessian matrix of the function $$\bb{\w{\theta}} \mapsto - \log \bb{\alpha} \b{v} (\b{\w{z^*}} e^{i \bb{\w{\theta}}} )$$ at the point $\bb{\w{\theta}} = \b{0}$. We define the matrix $M$ by $$\tlabel{eq:big hessian}
M := M(\b{z^*} , \bb{\alpha}) := \left(\begin{matrix}0&-i \overline{\cmat}(\b{z^*}) \\
-i \overline{\cmat}^T(\b{z^*})&Q(\b{z^*}, \bb{\alpha}) \end{matrix}\right).$$
Let $F$ be a function of $d+1$ variables, with $F(\b{z})=\numer(\b{z}) / \denom(\b{z})=\sum_{\b{r}}a_{\b{r}}\b{z}^\b{r}$. Suppose that $\b{z^*}$ is a strictly minimal, transverse multiple point of degree $n+1$ of $\sing$ and that $F$ is meromorphic in a neighborhood of $\torus(\b{z^*})$.
Let $K$ be a subset of $\operatorname{\b{K}}_0 (\b{z^*})$ such that $\det M(\b{z^*} , \bb{\alpha (\direc)})\neq 0$ for $\direc \in K$, where $\bb{\alpha} (\direc)$ is the unique point in $\mathcal{A} (\direc)$. Then there is a complete asymptotic expansion $$a_{\b{r}}\sim (\b{z^*})^{-\b{r}} \sum_{k\geq 0} b_k
(r_{d+1})^{\frac{n-d}{2}-k} \; \; \text{ uniformly for $\direc \in K$}.$$
If $\numer(\b{z^*})\neq 0$, the leading coefficient is given by $$b_0 = \frac{n!} {\sqrt{n+1}} \, \frac{(2\pi)^{\frac{n-d}{2}} \phi(\b{z^*})}
{\det M(\b{z^*}, \bb{\alpha (\direc)})^{1/2}}$$ where the square root is the product of principal square roots of the eigenvalues.
Consider the trivariate sequence $(a_{r,s,t})$ whose terms are zero if any index is negative, and is otherwise given by the boundary condition $a_{0,0,0} = 1$ and the recurrence $$\begin{aligned}
16 a_{r,s,t} & = & 12 a_{r-1,s,t} + 12 a_{r,s-1,t} + 8 a_{r,s,t-1}
-5 a_{r-1,s-1,t} - 3 a_{r-1,s,t-1} -3 a_{r,s-1,t-1} \\
&& -2 a_{r-2,s,t} - 2 a_{r,s-2,t} - a_{r,s,t-2}. \end{aligned}$$
Let $F(x,y,z)=\sum_{r,s,t} a_{r,s,t} x^r y^s z^t$ be the associated generating function. Then we obtain directly the explicit form $$F(x,y,z)=\frac{16}{(4-2x-y-z)(4-x-2y-z)}.$$ The point $(1,1,1)$ is a multiple point of $\sing$ which is readily seen to be strictly minimal. In fact this point is on the line segment of multiple points $\{ (1,1,1) + \lambda (-1,-1,3) : -1/3 <
\lambda < 1 \}$. We focus here on the point $(1,1,1)$ in particular because it is the only point giving asymptotics which do not decay exponentially. Near $(1,1,1)$, the set $\sing$ is parametrized by $z = 4 - 2 x - y$ and $z = 4 - x - 2 y$. The cone $\operatorname{\b{K}}$ is the positive hull of $(2,1,1)$ and $(1,2,1)$, with $\operatorname{\b{K}}_0$ being the line segment between these. Other multiple points on the line segment govern other two dimensional cones. For example, the point $(1/3, 1/3, 3)$ corresponding to $\lambda = 2/3$ governs the cone $\operatorname{\b{K}}_{2/3}$ which is positive hull of $(2/3 , 1/3 , 3)$ and $(1/3 , 2/3 , 3)$, or in other words, $\operatorname{\b{K}}_{2/3}$ is the set of $(r,s,t)$ with $r+s = t/3$ and $r , s \geq t/9$. These other cones sweep out, as $\lambda$ varies, a polyhedral cone with non-empty interior. There are still directions outside the cone, namely directions $(r,s,t)$ with $\min \{ r , s \} \leq (r+s)/3$; asymptotics in these directions are governed by smooth points on one of the two hyperplanes. We return to the examination of the non-exponentially decaying asymptotic directions.
The multiple point $(1,1,1)$ yields asymptotics for $a_{rst}$ with $r+s = 3t$ and $r,s \geq t$. Given $\direc = (2 - \alpha , 1 + \alpha ,
1) \in \operatorname{\b{K}}_0$, the set $\mathcal{A}$ is the single point $(1 - \alpha
, \alpha)$. The complement $\mathcal{A}^\perp$ is always all of $\hyper$, which has an orthonormal basis $\{\xx\}$, $\xx := (\sqrt{1/2}
, -\sqrt{1/2})$.
The matrix $\overline{\cmat}$ is equal to $((2,1) \cdot \xx , (1,2) \cdot
\xx) = \xx$. This leads to $\det M = (A+B+C+D)/2$, where $A,B,C$ and $D$ are the entries of the restricted Hessian, $Q$. A routine computation shows that $A+B+C+D = 12$, leading to $\det M = 6$ (as is always the case for transverse intersections of the pole set, the determinant of $M$ does not vary with direction inside the cone of a fixed multiple point). Computing $\phi (\b{1})$ from (\[eq:comp phi\]) then gives $$\phi (\b{1}) = \frac{2!} {(-1)^2} \, \frac{16} {2} = 16 \, .$$ Plugging this into Theorem \[thm:transverse\] we find (to first order, as $t\to \infty$) that $$a_{r,s,t} \sim \frac{16}{\sqrt{24\pi t}} \qquad \text{ if $r+s=3t$ and
$r/s\in (1/2, 2)$}.$$ The above first order approximation differs from the true value of $a_{rst}$ by less than $0.3\%$ when $(r,s,t) = (90,90,60)$.
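The recurrence makes this easy to reproduce; a minimal numerical sketch (it takes a few seconds to fill the array) is:
```python
import numpy as np

R = S = 90; T = 60
a = np.zeros((R + 1, S + 1, T + 1))
def get(r, s, t):
    return a[r, s, t] if min(r, s, t) >= 0 else 0.0

for r in range(R + 1):
    for s in range(S + 1):
        for t in range(T + 1):
            rhs = 16.0 if (r, s, t) == (0, 0, 0) else 0.0
            rhs += (12*get(r-1, s, t) + 12*get(r, s-1, t) + 8*get(r, s, t-1)
                    - 5*get(r-1, s-1, t) - 3*get(r-1, s, t-1) - 3*get(r, s-1, t-1)
                    - 2*get(r-2, s, t) - 2*get(r, s-2, t) - get(r, s, t-2))
            a[r, s, t] = rhs / 16.0

print(a[R, S, T], 16 / np.sqrt(24 * np.pi * T))   # agreement to better than 0.3%, as stated
```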
As mentioned in the introduction, instead of producing enough variants of these theorems for a complete taxonomy, we will stop with just one more result. The case we describe is the most degenerate, namely where all the sheets $\sing_j$ are tangent at $\b{z^*}$, so that $\operatorname{\b{K}}(\b{z^*})$ is a single ray. In this case $\det M$ varies with $\bb{\alpha}$ and the resulting formula is an integral over $\bb{\alpha}$. The methods used here may be adapted to prove results when the degeneracy is somewhere between this extreme and the nondegenerate cases.
Let $F$ be a function of $d+1$ variables, not singular at the origin, with $F(\b{z})=\numer(\b{z})/\denom(\b{z})=\sum_\b{r} a_\b{r}\b{z}^{\b{r}}$. Suppose that $\b{z^*}$ is a strictly minimal, multiple point of degree $n+1$ of $\sing$ and that $F$ is meromorphic in a neighborhood of $\torus (\b{z^*})$. Further suppose that $\operatorname{\b{K}}(\b{z^*})$ is a single ray.
If $\det Q(\b{z^*}, \bb{\alpha})\neq 0$ on $\simp$, then for $\b{r}\in \operatorname{\b{K}}(\b{z^*})$ we have
$$a_\b{r} = \b{z^*}^{-\b{r}} [b_0 (r_{d+1})^{n-d/2} +
O((r_{d+1})^{n - d/2 - 1})] \, .$$
The value of $b_0$ is given by $$b_0 = \frac{\phi(\b{z^*})}{(2\pi)^{d/2}}\int_\simp \det
Q(\b{z^*}, \bb{\alpha})^{-1/2}\,d\meas(\bb{\alpha}).$$
Theorem \[thm:degenerate\] covers the situation where all the sheets coincide, as opposed to merely being tangent. That situation can also be analyzed in a different way by a slight modification of the proof of the smooth case, as discussed in [@pemantle-wilson-multivariate1]. The methods of [@pemantle-wilson-multivariate1] yield a complete asymptotic expansion.
The simplest case illustrating Theorem \[thm:degenerate\] is that of two curves $w = u_j(z)$ in $\mathbb{C}^2$, intersecting tangentially at the strictly minimal point $\b{z}=(1,1)$. Suppose that $\operatorname{Re}\log
v_j(e^{i\theta}) = -d_j \theta^2 + \cdots$ with $d_j >0$. The simplex in question is one-dimensional and we identify it with the interval $0\leq
t\leq 1$. Then for $0\leq t\leq 1$ the matrix $Q(\b{z}, t)$ appearing in Theorem \[thm:degenerate\] is just the $1\times 1$ matrix with entry $d_t=(1-t)d_0+td_1$. Hence when $d_0\neq d_1$ we obtain asymptotics in the unique direction $\direc$ of the theorem:
$$\begin{aligned}
a_{rs}&\sim \frac{\phi(1,1)\sqrt{s}}{\sqrt{2\pi}} \int_0^1 (d_t)^{-1/2}\, dt\\
&= \frac{\phi(1,1)\sqrt{s}}{\sqrt{2\pi}} \frac{1}{d_1- d_0}\int_{d_0}^{d_1}
y^{-1/2} \,dy\\
&= \frac{2\phi(1,1)\sqrt{s}}{\sqrt{2\pi}} \frac{\sqrt{d_1}-\sqrt{d_0}}{d_1-
d_0}\\
&= \frac{\phi(1,1)\sqrt{s}}{\sqrt{2\pi}} \frac{2}{\sqrt{d_0}+\sqrt{d_1}}.\end{aligned}$$
This final formula is easily seen to hold also in the case $d_0=d_1$, in which case the formula $\phi (1,1)\sqrt{s} / \sqrt{2 \pi d_0}$ agrees with the formula in [@pemantle-wilson-multivariate1].
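The elementary integral evaluated above can be checked symbolically, for instance:
```python
import sympy as sp

t, d0, d1 = sp.symbols('t d0 d1', positive=True)
d_t = (1 - t)*d0 + t*d1
antider = 2*sp.sqrt(d_t)/(d1 - d0)                 # antiderivative of d_t**(-1/2) when d0 != d1
print(sp.simplify(sp.diff(antider, t) - d_t**sp.Rational(-1, 2)))   # 0
val = antider.subs(t, 1) - antider.subs(t, 0)      # the definite integral over [0, 1]
print(sp.simplify(val * (sp.sqrt(d0) + sp.sqrt(d1))))               # 2, so val = 2/(sqrt(d0)+sqrt(d1))
```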
Since analysis in $1$ variable is considerably easier than in general, it is possible to derive a result for plane curves even when $d_j$ vanishes. We omit the details here, but note that such a derivation is possible because (by minimality) we must have $\log wv_j(
z e^{i\theta})=ic_j\theta -d_j \theta^{m} + \cdots$, where $c_j\geq 0$, $m$ is even and $\operatorname{Re}d_j > 0$.
If all tangents are equal at $(z^*,w^*)$, then $\operatorname{\b{K}}_0 (z^*,w^*)$ is a singleton. If all $\operatorname{Re}\log w v_j (z e^{i\theta}) $ vanish to the same exact order $m$ and $(r,s) \in \operatorname{\b{K}}(z^*,w^*)$, then $$a_{rs} = (z^*)^{-r}(w^*)^{-s} [b_0 s^{n-1/m} + O(s^{n - 2/m})]$$ with $b_0$ given by $$b_0=\frac{\phi(z^*,w^*) \Gamma(1/m)}{2\pi(1-1/m)}
\frac{d_1^{1-1/m}-d_0^{1-1/m}}{d_1-d_0}.$$ The above formula for $b_0$ also holds in the limit $d_0 = d_1$, where it becomes $\displaystyle{b_0=\frac{\phi(z^*,w^*)\Gamma(1/m)}{2\pi(d_0)^{1/m}}}$.
Proofs
======
Throughout, we assume that $\b{z^*}$ is a fixed strictly minimal multiple point of $\sing$, of multiplicity $n+1$, and $F(\b{z}) =
\phi(\b{z}) / \prod_{j=0}^{n} (1 - z v_j(\w{\b{z}}))$ near $\b{z^*}$ as in Definition \[defn:preparation\]. Recall our notational conventions $s := r_{d+1}, z := z_{d+1}$. To ease notation we shall later assume that $\b{z^*} = \b{1}$.
The proofs follow a $4$-step process. First we use the residue theorem in one variable to reduce the dimension by $1$ and to restrict attention to a neighbourhood of $\b{\w{z^*}}$. Second, the residue sum resulting is rewritten in a form more amenable to analysis. Third, the Cauchy-type integral is recast in the Fourier-Laplace framework by the usual substitution $\b{\w{z}} = \b{\w{z^*}} e^{i\bb{\theta}}$. Finally, detailed analysis of these Fourier-Laplace integrals (including some generalizations of known results we have included separately in [@pemantle-wilson-analysis] to save space here) yields our desired results.
Restriction to a neighborhood of $\b{z^*}$ {#restriction-to-a-neighborhood-of-bz .unnumbered}
------------------------------------------
We first apply the residue theorem in one variable: the proof of the following proposition is entirely analogous to that of [@pemantle-wilson-multivariate1 Lemma 4.1].
Let $R(s,\b{\w{z}},\varepsilon)$ denote the (finite) sum of the residues of $g: z\mapsto z^{-s-1} F(\b{\w{z}}, z)$ at $z = u_j(\b{\w{z}})$ inside the ball $|z - z^*| < \varepsilon$.
Let $\nbd'$ be a neighborhood of $\w{\b{z^*}}$ in $\torus(\w{\b{z^*}})$. Then for sufficiently small $\varepsilon > 0$, $$\tlabel{eq:tores}
|(\b{z^*})^{\rr}| \left|a_\rr - (2\pi i)^{-d} \int_{\nbd'} -
\b{ \w{z} }^{-\b{ \w{r} } - \b{1} } R(s,\b{\w{z}},\varepsilon)
\, d\b{ \w{z} } \right |
\text{ is exponentially decreasing in $s$ as $s \to \infty$.}$$ This is uniform in $\b{r}$ as $\b{r}/s$ varies over some neighborhood of $\operatorname{\b{K}}_0(\b{z^*})$.
Rewriting the residue sum {#rewriting-the-residue-sum .unnumbered}
-------------------------
After using the local residue formula we must perform a $d$-dimensional integration of the residue sum $R$. The following lemma, whose proof may be found in [@devore-lorentz-constructive p. 121, Eqs 7.7 & 7.12], will yield a more tractable form for $R$. Recall from Section \[sec:results\] the relevant facts about simplices. In the following, the notation $\bb{\alpha} \b{v}$ denotes the scalar product $\sum_{j=0}^n \alpha_j v_j$, and $h^{(n)}$ the $n$th derivative of $h$.
Let $h$ be a function of one complex variable, analytic at $0$, and let $\meas$ be the normalized volume measure on $\simp_n$. Then $$\sum_{j=0}^{n} \frac{h(v_j)}{\prod_{r \neq j} (v_j - v_r)} =
\int_{\simp_n} h^{(n)} (\bb{\alpha}\b{v}) \, d\meas (\bb{\alpha})$$ both as formal power series in $n+1$ variables $v_0, \dots ,v_{n}$ and in a neighborhood of the origin in $\mathbb{C}^{n+1}$.
\[Residue sum formula\] Let $R(s,\w{\b{z}},\varepsilon)$ be the sum of the residues of the function $g: z\mapsto z^{-s-1}F(\b{z})$ inside the ball $|z - z^*| < \varepsilon$. Define $h_{s,\w{\b{z}}} (y) =
y^{s+n} \phi(\w{\b{z}}, 1/y)$.
Then for sufficiently small $\varepsilon$, there is $\delta > 0$ such that for $|\w{\b{z}} - \w{\b{z^*}}| < \delta$, $$\tlabel{eq:residue} R(s,\w{\b{z}},\varepsilon) = \int_{\simp}
h_{s,\w{\b{z}}}^{(n)} (\bb{\alpha}\b{v})\,d\meas(\bb{\alpha}).$$
First suppose that the functions $v_0, \dots, v_{n}$ are distinct. Choose $\varepsilon$ sufficiently small and $\delta > 0$ such that $g$ has exactly $n+1$ simple poles in $ | z - z^* | < \varepsilon$ whenever $|\w{\b{z}} - \w{\b{z^*}}| < \delta$. Then there are $n+1$ residues, the $j$th one being $$\frac{v_j (\w{\b{z}})^{s+n} \phi (\w{\b{z}} , 1 / v_j
(\w{\b{z}}))}{\prod_{r \neq j}(v_r (\w{\b{z}}) - v_j (\w{\b{z}}))} =
\frac{h_{s,\w{\b{z}}} (v_j(\w{\b{z}}))}{\prod_{r \neq j} (v_r
(\w{\b{z}}) - v_j (\w{\b{z}}))}.$$ The result in this case now follows by summing over $j$ and applying Lemma \[lem:simplex\].
In the case when $v_0, \dots , v_{n}$ are not distinct, let $v_j^t$ be functions approaching $v_j$ as $t \rightarrow 0$, such that $v_j^t$ are distinct for $t$ in a punctured neighborhood of $0$, and let $g^t(z) =
z^{-s-1}\phi (\b{z}) / \prod_{j=0}^{n} (z - 1 / v_j^t (\w{\b{z}}))$, so that $g^t \rightarrow g$ as well. The sum of the residues of $g^t$ may be computed by integrating $g^t$ around the circle $|z-z^*|=\varepsilon$, and since $g^t \rightarrow g$, this sum approaches the sum of the residues of $g$. Since the expression in (\[eq:residue\]) is continuous in the variables $v_j$, this proves the general case.
The residue sum formula and Proposition \[prop:localize\] combine to show that for some neighborhood $\nbd'$ of $\b{\w{z^*}}$ in $\torus(\b{z^*})$, $$\tlabel{eq:iterint}
|\b{(z^*)^r}| \left| a_{\b{r}} - (2\pi i)^{-d} \int_{\nbd'}
- \w{\b{z}}^{-\w{\b{r}} - \b{1}} \int_{\simp}h_{s,\w{\b{z}}}^{(n)}(\bb{\alpha}\b{v})
\,d\meas(\bb{\alpha})\,d\w{\b{z}}\right| \quad \text{ is exponentially
decreasing in $s$},$$ where, as usual, we have let $s$ denote $r_{d+1}$.
Recasting the problem in the Fourier-Laplace framework {#recasting-the-problem-in-the-fourier-laplace-framework .unnumbered}
------------------------------------------------------
In this subsection we assume throughout (purely in order to ease notational complexity) that $\b{z^*} = \b{1}$.
\[Reduction to Fourier-Laplace integral\] For $0 \leq k \leq n$, $e^{i\w{\bb{\theta}}} \in \nbd'$ and $\bb{\alpha} \in \mathbb{R}^{n+1}$, define $$\begin{aligned}
p_k (s) & := & \frac{n! (s+n)!} {k! (n-k)! (s+k)!} \\[1ex]
f (\w{\bb{\theta}} , \bb{\alpha}) & := & \frac{i \w{\b{r}} \cdot \w{\bb{\theta}}} {s}
- \log (\bb{\alpha} \vv (e^{i \w{\bb{\theta}}}) ) \\[1ex]
\mult_k (\w{\bb{\theta}} , \bb{\alpha}) & := & \left. \left ( \frac{d} {dy} \right )^k
\phi (e^{i\w{\bb{\theta}}} , 1/y) \right|_{y = \vv (e^{i \bb{\w{\theta}}})} \\[1ex]
I(s; f, \mult_k) & := & \int_\nice e^{-s f}\mult_k \, d\mu_{n+d}.\end{aligned}$$ Then there is a neighborhood $\nbd$ of $\b{0}$ in $\mathbb{R}^d$, a product of compact intervals, such that $$\tlabel{eq:multint}
\left| a_{\b{r}} - (2\pi)^{-d} \frac{n!}{\sqrt{n+1}}
\sum_{k=0}^{n}p_k(s) I(s; f, \mult_k) \right|
\quad\text{is exponentially decreasing in $s$}\, ,$$ where $\nice := \nbd \times \simp_n$.
Note for later that each $\psi_k$ is independent of $\bb{\alpha}$, as is $f(\b{0}, \bb{\alpha})$.
Recall that $\sigma (\mathcal{A}) = n ! / \sqrt{ n + 1 } = V_n(\simp_n)^{-1}$. Applying Leibniz’ rule for differentiating $h$ yields
$$\begin{aligned}
h_{s, \w{\b{z}}}^{(n)}(y)&=\sum_{k=0}^n \binom{n}{k} \left(
\frac{d} {dy} \right )^{n-k} y^{s+n}
\left( \frac{d} {dy} \right )^{k} \phi(\w{\b{z}},1/y)\\
&=\sum_{k=0}^n \frac{n!(s+n)!}{k!(n-k)!(s+k)!} y^{s+k}
\left ( \frac{d} {dy} \right )^{k} \phi(\w{\b{z}},1/y)\\
&=y^s\sum_{k=0}^n p_k(s) y^k \left ( \frac{d} {dy} \right )^k
\phi (\b{\w{z}} , 1/y) \, .\end{aligned}$$
Substituting this into (\[eq:iterint\]) with $\w{\b{z}}
= e^{i\w{\bb{\theta}}}$, shrinking $\nbd'$ if necessary and letting $\nbd$ be the inverse image of $\nbd'$ under $\w{\bb{\theta}} \mapsto
e^{i\w{\bb{\theta}}}$ completes the proof, and with it the third step.
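The Leibniz expansion above is easy to confirm symbolically for small $n$; the following sketch is illustrative only (it takes $n=2$, keeps $s$ symbolic, and uses the concrete test function $e^{1/y}$ in place of $\phi(\w{\b{z}},1/y)$).

``` python
# Verify h^{(n)}(y) = y^s sum_k p_k(s) y^k (d/dy)^k phi(1/y) for n = 2.
import sympy as sp

y, s = sp.symbols('y s', positive=True)
n = 2
phi = sp.exp(1 / y)                                   # stands in for phi(z; 1/y)

lhs = sp.diff(y**(s + n) * phi, y, n)

def p(k):
    # p_k(s) = n!(s+n)!/(k!(n-k)!(s+k)!) = binom(n,k) * (s+k+1)...(s+n)
    return sp.binomial(n, k) * sp.Mul(*[s + j for j in range(k + 1, n + 1)])

rhs = y**s * sum(p(k) * y**k * sp.diff(phi, y, k) for k in range(n + 1))

print(sp.simplify(lhs - rhs))                         # 0
```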
Analysis of the Fourier-Laplace integral {#analysis-of-the-fourier-laplace-integral .unnumbered}
----------------------------------------
The proofs of all the theorems in Section \[sec:results\] proceed in parallel while we compute the asymptotics for (\[eq:multint\]) with the aid of the complex Fourier-Laplace expansion. To that end, observe that $\nice$ is a compact product of simplices and intervals of dimension $n+d$ inside $\R^{n+d+1}$, $f$ and $\psi_k$ are complex analytic functions on a neighbourhood of $\nice$ in $\hyper \times \R^d$ (this requires a sufficiently small choice of $\varepsilon$ in Proposition \[prop:localize\]), and $\operatorname{Re}f\geq 0$ on $\nice$.
To apply the standard theory, we must locate the critical points of $f$ on $\nice$. Recall from Definition \[defn:cone\] the matrix $\cmat = \cmat(\b{z^*})$ whose rows are extreme rays in the cone of outward normals to support hyperplanes to the logarithmic domain of convergence of $F$.
For $\direc := \b{\w{r}} / s \in \operatorname{\b{K}}_0$, let $\mathcal{A}(\direc)$ denote the solution set $\{ \bb{\alpha}\in
\mathbb{R}^{n+1} \mid \bb{\alpha} \cmat = \direc \}$. Let $\statset$ denote the set of critical points of $f$ on $\nice$. Then $\statset$ consists precisely of those points $(\b{0}, \bb{\alpha})$ with $\bb{\alpha}\in \mathcal{A}(\direc) \cap \simp$.
By strict minimality of $\b{z^*}$, for each $j$ the modulus of $v_j
(\b{\w{z}})$ achieves its maximum only when $\b{\w{z}} = \b{\w{z^*}}$. Thus any convex combination of $v_j (\b{\w{z}})$ with $\b{\w{z}} \neq
\b{\w{z^*}}$ has modulus less than $|v_j (\b{\w{z^*}})|$. In other words, when $\bb{\w{\theta}} \neq \b{0}$, the real part of $f$ is strictly positive, from which it follows that all critical points of $f$ are of the form $(\b{0} , \bb{\alpha})$ for $\bb{\alpha} \in \simp$.
In fact $f$ is somewhat degenerate: $f (\b{0} , \bb{\alpha}) = 0$ for all $\bb{\alpha} \in \simp$, so not only does the real part of $f$ vanish when $\bb{\w{\theta}} = \b{0}$, but also the $\bb{\alpha}$-gradient of $f$ vanishes there. We compute the $\bb{\w{\theta}}$-derivatives at $\bb{\w{\theta}} = \b{0}$ as follows. For $1 \leq j \leq d$, $$\frac{\partial f}{\partial \theta_j} = i
\left.
\left ( \frac{r_j} {s} - \frac{z_j}{z_{d+1}} \bb{\alpha}
\left ( \frac{\partial}{\partial z_j} \right )
\b{v}(\w{\bb{z}})\right )
\right |_{\b{\w{z}} = \b{\w{z^*}}} \, .$$ Recalling the definition of $\cmat$, we see that these vanish simultaneously if and only if $\direc = \bb{\alpha} \cmat$, which finishes the proof.
The set of critical points of $f$ on $\nice$ is thus the intersection of $\nice$ with an affine subspace of $\mathbb{R}^{n+d+1}$. We now wish to use known results on asymptotics of Fourier-Laplace integrals. Note that in the situation of Theorems \[thm:double-point-generic\], \[thm:compnondeg\] and \[thm:transverse\], $f$ has a unique critical point on $\nice$, whereas in Theorem \[thm:nondeg\], $\statset$ is essentially an affine subspace of $\simp$, and in Theorem \[thm:degenerate\] $\statset$ is essentially just $\simp$. Though the existing literature on asymptotics of integrals is extensive, we have been unable to find in it results applicable to all cases of interest to us here. We have therefore derived the extra asymptotics we need in [@pemantle-wilson-analysis] and cite the relevant ones below in Lemmas \[lem:smallcrit\] and \[lem:bigcrit\].
Recall that $\mathcal{A}^\perp$ is the orthogonal complement to (any and every) $\mathcal{A}(\direc)$ in $\hyper$; for each point $y=(\b{0}, \bb{\alpha})$ of $\statset$, let $B_y$ be the product of $\nbd$ with the affine subspace of $\simp$ containing $\bb{\alpha}$ and parallel to $\mathcal{A}^\perp$.
Then the integrals in Lemma \[lem:tooscint\] may be decomposed as $$\tlabel{eq:decompose}
\int_\nice e^{-s f} \psi_k \, d\mu_{n+d}
= \int_\statset \left(\int_{B_y} e^{-s f} \psi_k \, d\mu_{\rho +d} \right)
\, d\sigma(y) \, ,$$ where $\rho := \dim \mathcal{A}^\perp$.
We require a more explicit description of the Hessian matrix of $f$.
Let $y= (\b{0}, \bb{\alpha}) \in \statset$. At the point $y$, the Hessian of the restriction of $f$ to $B_y$ has the block form $$\tlabel{eq:hessian}
M(\b{z^*}, \bb{\alpha})=\left(\begin{matrix}0&-i \overline{\cmat(\b{z^*})}\\-i
\overline{\cmat(\b{z^*})}^T&Q(\b{z^*}, \bb{\alpha}) \end{matrix}\right).$$
In this decomposition:
- the zero block has dimensions $\rho\times \rho$, where $\rho=\dim \mathcal{A}^\perp=\operatorname{rank}\overline{\cmat}(\b{z^*})$
- the $j$th column of $\overline{\cmat}(\b{z^*})$ is the projection of the $j$th column of $\cmat(\b{z^*})$ onto $\mathcal{A}^\perp$ (expressed in some orthonormal basis)
- $Q(\b{z^*}, \bb{\alpha})$ is the Hessian at $y$ of the restriction of $f$ to the $\w{\bb{\theta}}$-directions, as defined in Definition \[def:M\].
Constancy of $f$ in the $\simp$ directions at $\bb{\w{\theta}} = \b{0}$ shows that the second partials in those directions vanish, giving the upper left block of zeros. Computing $(\partial / \partial \theta_j)
f$ up to a constant gives $$- \frac{i} {\bb{\alpha} \vv (\b{\w{z^*}})} \bb{\alpha} z_j \frac{\partial}
{\partial z_j} \vv (\b{\w{z^*}}) \,$$ and since $\bb{\alpha} \vv (\b{\w{z^*}})$ is constant when $\bb{\w{\theta}}
= \b{0}$, differentiating in the $\bb{\alpha}$ directions recovers the blocks $-i \overline{\cmat}$ and $-i \overline{\cmat}^T$. The second partials in the $\bb{\w{\theta}}$ directions are of course unchanged. To see that the dimension of $\mathcal{A}^\perp$ is the rank of $\overline{\cmat}$, observe that no nonzero element of $\mathcal{A}^\perp$ can be orthogonal to the column space of $\cmat$, since then it would have been in $\mathcal{A}$; thus the projection of the columns of $\cmat$ onto $\mathcal{A}^\perp$ spans $\mathcal{A}^\perp$.
We may now prove Theorems \[thm:double-point-generic\], \[thm:compnondeg\] and \[thm:transverse\], in reverse order from most general to most special. In these cases the function $f$ in the representation (\[eq:multint\]) has only one critical point, so the decomposition (\[eq:decompose\]) is not really necessary.
As mentioned above, we have had to derive asymptotic expansions for some cases. We start with one of these, applicable to the situations of Theorems \[thm:double-point-generic\], \[thm:compnondeg\], and \[thm:transverse\].
Suppose that $f$ has a unique critical point $\b{0} \in \nice$, at which the Hessian $\operatorname{{\b{H}}}$ is nonsingular.
1. If $\b{0}$ has a neighborhood in $\nice$ diffeomorphic to $\R^d$ then as $s \to \infty$ there is an explicitly computable asymptotic expansion $$I(s; f , \mult_k) \sim \sum_{j=0}^\infty a_j s^{-d/2-j}.$$ In particular $$a_0 = \mult_k (\b{0}) \frac{(2 \pi)^{d/2}} {\sqrt{\det(\operatorname{{\b{H}}})}}$$ where the square root is defined to be the product of the principal square roots of the eigenvalues of $\operatorname{{\b{H}}}$.
2. If $\b{0}$ has a neighborhood in $\nice$ diffeomorphic to a $d$-dimensional half-space then there is an explicitly computable asymptotic expansion $$I(s ; f , \mult_k) \sim \sum_{j = 0}^\infty a_j s^{- d/2 - j/2}.$$ Here the value of $a_0$ is half that of the previous case.
This is a straightforward corollary of [@hormander-partial-differential1 Theorems 7.7.1, 7.7.5, 7.7.17 (iii)]. Details are given in [@pemantle-wilson-analysis].
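For orientation, the simplest instance of part 1 can be checked numerically. In the sketch below (illustrative only), $d=1$, $f(x)=x^2$ and the amplitude is $\cos x$ on $[-1,1]$, so the unique critical point is $0$, the Hessian is $2$, and the predicted leading term is $\sqrt{\pi/s}$.

``` python
# Numerical check of the leading term psi(0) * sqrt(2*pi/s) / sqrt(det H)
# for f(x) = x^2, psi(x) = cos(x) on [-1, 1]  (so the prediction is sqrt(pi/s)).
import numpy as np
from scipy.integrate import quad

def I(s):
    val, _ = quad(lambda x: np.exp(-s * x**2) * np.cos(x), -1.0, 1.0)
    return val

for s in (10, 100, 1000):
    print(s, I(s) / np.sqrt(np.pi / s))   # ratios approach 1 as s grows
```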
Proof of Theorem \[thm:transverse\] {#proof-of-theoremthmtransverse .unnumbered}
-----------------------------------
We prove the result in the case $\b{z^*} = \b{1}$. The result in the general case follows directly via the change of variable $\b{z} \to \b{z}/ \b{z^*}$.
The rank of $\cmat$ is $n+1$, so $\mathcal{A}(\direc)$ is a single point and $f$ has a unique critical point, $(\b{0},\bb{\alpha})$. Hence $\mathcal{A}^\perp
= \hyper$ and has dimension $n$. The critical point is in the interior of $\nice$ as long as $\direc$ is in the interior of $\operatorname{\b{K}}_0$, which occurs if and only if $\b{r}$ is in the interior of $\operatorname{\b{K}}$.
Since $\mathcal{A}$ is a singleton, we may use the formula (\[eq:multint\]) and Lemma \[lem:smallcrit\]. The Fourier-Laplace expansion of $\int_\nice e^{-s f} \psi_k$ will have terms $s^{-(n + d)/2 - j}$ for all $j$, and when multiplied by the degree $n - k$ polynomial $p_k$, will contribute to the $s^{(n - d)/2 - k - j}$ terms for all $j$. Each of these terms is explicitly computable from the partial derivatives of $\numer$ and $\denom$ via $\psi_k$, but these rapidly become messy. The leading term of $s^{(n - d)/2}$ comes only from $k = 0$: $$a_\rr \sim (2 \pi)^{-d} \frac{n!} {\sqrt{n + 1}}
s^n \frac{(2 \pi)^{(n + d)/2} s^{-(n + d)/2} \phi
(\b{z})} {\sqrt{\det M(\b{z^*}; \bb{\alpha})}}$$ which proves the theorem in the case $\b{z^*} = \b{1}$.
Proof of Theorem \[thm:compnondeg\] {#proof-of-theoremthmcompnondeg .unnumbered}
-----------------------------------
Here $\rank (\cmat) = d+1 = n+1$ and so the proof of Theorem \[thm:transverse\] applies. We can obtain a more explicit result in this case. The block decomposition of $M(y)$ in (\[eq:hessian\]) is into four blocks of dimensions $d \times d$, implying that $M(y)$ is nonsingular with determinant $(\det
\overline{\cmat})^2$ (in performing column operations to bring $M(y)$ to block diagonal form, we pick up a factor of $(-1)^{d^2}$ which cancels the factor $i^{2d}$). The last column of $\cmat$ is $\b{1}$, whose projection onto $\mathcal{A}^\perp$ is null; its projection onto the space spanned by $\b{1}$ has length $\sqrt{d+1}$, whence $|\det{\cmat}| = \sqrt{d+1} \,|\det
\overline{\cmat}|$.
Thus we obtain $$a_\rr = C + O(s^{-1})$$
where $$C = \frac{d! \, \phi (\b{z^*}) } {|\det \cmat(\b{z^*})|} \, .$$ To finish the proof, we quote [@pemantle-finite-dimension Corollary 3.2] to see that in fact $a_\rr - C$ is exponentially small on subcones of the interior of $\operatorname{\b{K}}$.
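The determinant computation used in this proof — that the block matrix in (\[eq:hessian\]) has determinant $(\det\overline{\cmat})^2$, independently of the block $Q$ — is also easy to confirm numerically; the snippet below is illustrative only and uses randomly generated matrices.

``` python
# det of the block matrix in (eq:hessian) equals (det Cbar)^2, whatever Q is.
import numpy as np

rng = np.random.default_rng(0)
d = 3
Cbar = rng.standard_normal((d, d))      # plays the role of the projected matrix \overline{\cmat}
Q = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

M = np.block([[np.zeros((d, d)), -1j * Cbar],
              [-1j * Cbar.T,      Q]])

print(np.linalg.det(M), np.linalg.det(Cbar) ** 2)   # the two values agree (up to rounding)
```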
If $\direc$ is on the boundary of $\operatorname{\b{K}}$, it may happen that the point $(\b{0} , \bb{\alpha})$ has $\bb{\alpha}$ lying on a face of $\simp$ of dimension $d - 1$. In that case, using the second formula from Lemma \[lem:smallcrit\] instead of the first gives a leading term with exactly half the previous magnitude. Here we do not claim exponential decay of $a_\rr - C$. In Example \[eg:figure 8\], this occurs when $r/s \in \{ 1/2 , 2 \}$.
Proof of Theorem \[thm:double-point-generic\] {#proof-of-theoremthmdouble-point-generic .unnumbered}
---------------------------------------------
The second case is covered by the previous remark.
Details for the first case follow. The analysis of Theorem \[thm:compnondeg\] applies. We derive a more explicit result, first supposing that $(z^*, w^*) = (1,1)$.
For $j\in \{0,1\}$, write $v_j(e^{i\theta}) = 1 + i c_j\theta
+ O(\theta^2)$, where $c_j\geq 0$. The matrix $\cmat$ in this case is simply $(\begin{smallmatrix} c_0 & 1 \\ c_1 & 1 \end{smallmatrix})$. It will be seen below that $\det \cmat = 0$ if and only if $\det \b{H} = 0$, so for the purposes of this theorem we may and shall suppose that $c_0 \neq c_1$.
We have $$a_{rs} \sim \frac{\phi(1,1)} {|c_0 - c_1|} \qquad
\text{uniformly for $r/s$ in the interior of $[c_0, c_1]$}.$$ This expression for the constant is quite handy, but to recover the statement of the theorem, we also express it in terms of the partial derivatives of $\denom$. Recall that $\denom = \chi Q$ where $Q(z,w) =
(1 - w v_0(z))(1 - w v_1(z))$ and $\chi \neq 0$ near $(1,1)$. Then at $(1,1)$, $Q = Q_z = Q_w = 0$ and so $\denom_{zz} = \chi Q_{zz} = 2 \chi c_0
c_1$. Similarly, $\denom_{wz} = \chi(c_0 + c_1)$ and $\denom_{ww} =
2\chi$. Thus $c_0, c_1$ are the roots of the quadratic $(\denom_{ww}) x^2
- (2 \denom_{wz}) x + \denom_{zz} = 0$. Solving via the quadratic formula gives $|c_0 - c_1| = 2 \sqrt{- \det \operatorname{{\b{H}}}} / \denom_{ww}$. This shows that $\det \b{H} = 0$ if and only if $c_0 = c_1$, as asserted above.
Formula (\[eq:comp phi\]) then yields $$a_{rs} \sim \frac{\phi (1,1)} {|c_0 - c_1|} =
\frac{2 \numer (1,1)} {\denom_{ww} (1,1) |c_0 - c_1|} =
\frac{\numer (1,1)} {\sqrt{ - \det \operatorname{{\b{H}}}(1,1)}} \, .$$
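The local computations at the double point are easy to confirm symbolically. In the sketch below (illustrative only) the two sheets are modelled by $v_j(z)=1+c_j(z-1)$ and $\chi$ is taken to be a constant; this already reproduces the second-order data used above.

``` python
# H = chi*(1 - w*v0(z))*(1 - w*v1(z)) near (1,1):  c0, c1 are the roots of
# H_ww x^2 - 2 H_wz x + H_zz, and (c0 - c1)^2 = 4 (-det Hess) / H_ww^2.
import sympy as sp

z, w, x = sp.symbols('z w x')
c0, c1, chi = sp.symbols('c0 c1 chi', positive=True)

v0, v1 = 1 + c0 * (z - 1), 1 + c1 * (z - 1)
H = chi * (1 - w * v0) * (1 - w * v1)

at = {z: 1, w: 1}
Hzz = sp.diff(H, z, 2).subs(at)
Hwz = sp.diff(H, w, 1, z, 1).subs(at)
Hww = sp.diff(H, w, 2).subs(at)

print(sp.factor(Hww * x**2 - 2 * Hwz * x + Hzz))              # 2*chi*(x - c0)*(x - c1)

det_hess = Hzz * Hww - Hwz**2
print(sp.simplify(4 * (-det_hess) / Hww**2 - (c0 - c1)**2))   # 0
```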
The general result follows by a change of scale mapping the multiple point $(z^* , w^*)$ to $(1,1)$. Details of the computation are as follows. Make the change of variables $\widetilde{z} = z/z^*,
\widetilde{w} = w/w^*$, and write $\widetilde{G}$ for $G$ considered as a function of $\widetilde{z}, \widetilde{w}$. In an obvious notation, $\widetilde{a}_{rs} = (z^*)^r (w^*)^s a_{rs}$. Then, evaluating at $(\widetilde{z},\widetilde{w}) = (1,1)$, we obtain $H_{\widetilde{z}} =
z^*H_z$, $H_{\widetilde{w}} = w^*H_w$. Repeating this yields $H_{\widetilde{z} \widetilde{z}} = (z^*)^2 H_{zz}$, $H_{\widetilde{w}{\widetilde{z}}} = z^*w^*H_{wz}$ and $H_{\widetilde{w}{\widetilde{w}}} = (w^*)^2H_{ww}$.
The special case above now yields $$\widetilde{a}_{rs} \sim
\frac{\widetilde{G}(1,1)} {\sqrt{ - \widetilde{D}(1,1)}} =
\frac{G(z^*,w^*)} {\sqrt{ - (z^*)^2 (w^*)^2 \det \operatorname{{\b{H}}}(z^*,w^*)}}$$ from which the result follows immediately.
To prove Theorems \[thm:nondeg\] and \[thm:degenerate\] we require a result on asymptotics of integrals that allows for non-isolated critical points. The lemma below says that, as seems intuitively plausible, we may compute asymptotic expansions in directions transverse to $\statset$ and then integrate their leading terms along $\statset$ to get the leading term of the original expansion.
Let $y\in \statset$ and let $M(y)$ be the Hessian at $y$ of the restriction of $f$ to $B_y$. If $\mult_k(y) \neq 0$ and $\det M(y)$ is nonzero on $\statset$ then $$\int_\nice e^{-s f} \mult_k \, d\bb{\w{\theta}} \times d\mu
\sim \left(\frac{2\pi}{s}\right)^{(\rho +d)/2} \int_\statset
\frac{\mult_k (y)}{\sqrt{\det M(y)}} \,d\sigma(y).$$
This is a specialization of [@pemantle-wilson-analysis Thm 2] (the proof of that result is considerably more involved than in the isolated critical point case).
Lemma \[lem:bigcrit\] yields the following expansion: $$\tlabel{eq:integrated}
a_\rr \sim (2 \pi)^{-d} p_0 (s) \left(\frac{2\pi}{s}\right)^{(\rho +d)/2}
\int_\statset \frac{\psi_0 (y)} {\sqrt{\det M(y)} } \, d\sigma (y).$$
We now proceed to complete the proofs by applying (\[eq:integrated\]) in each case. We give full details only for the case $\b{z^*} = \b{1}$; the general case again follows from the change of variable $\b{z} \mapsto \b{z}/\b{z^*}$, and we leave the details to the reader.
Proof of Theorem \[thm:nondeg\] {#proof-of-theoremthmnondeg .unnumbered}
-------------------------------
By the nondegeneracy assumption, $\rank (\cmat) = d+1$, hence $\rho := \rank (\overline{\cmat}) = d$ and $\dim (B_y) = 2d$. The set $\statset$ has dimension $n - d$; we assume this to be strictly positive since the case $n = d$ has been dealt with already in Theorem \[thm:compnondeg\]. At each stationary point $(\b{0} , \bb{\alpha}) \in \statset$ we find the Hessian determinant of $f$ restricted to $B_y$ to be the constant value $(\det \overline{\cmat})^2$. Now (\[eq:integrated\]) gives $$\begin{aligned}
a_\rr & \sim & (2\pi)^{-d} p_0 (s) s^{-d} (2 \pi)^d \phi (\b{1})
\int_\statset \det M(y)^{-1/2} \, d\sigma (y) \\[1ex]
& \sim & s^{n-d} \frac{ \phi (\b{1}) \sigma (\statset)} {|\det
\overline{\cmat}| } \, ,\end{aligned}$$ which establishes the formula for the leading term. That the expansion is a polynomial plus an exponentially smaller correction again follows from [@pemantle-finite-dimension Theorem 3.1].
The expansion via Leibniz’ formula is not generally integrable past the leading term, and therefore does not give a method of computing the lower terms of the polynomial $P$.
Proof of Theorem \[thm:degenerate\] {#proof-of-theoremthmdegenerate .unnumbered}
-----------------------------------
Here $\operatorname{\b{K}}_0 = \{ \direc \}$, and so $\mathcal{A}(\direc) = \simp$. Thus $\dim \mathcal{A}^\perp = 0$, so that for $y= (\b{0}, \bb{\alpha}) \in \statset$, $M(y) = Q(\b{z^*}, \bb{\alpha})$. Use (\[eq:integrated\]) once more with $\dim (B_y) = d$ to get $$a_\rr \sim (2\pi)^{-d} p_0 (s) s^{-d/2} (2 \pi)^{d/2}
\int_\simp \frac{\phi (\b{z})}{\sqrt{\det Q(\b{z^*}, \bb{\alpha})} } \,
d\mu (\bb{\alpha})$$ which simplifies to the conclusion of the theorem.
Further discussion
==================
[**Asymptotics in the gaps.**]{} The problem of determining asymptotics when $\b{\w{r}} / s$ converges to the boundary of $\operatorname{\b{K}}_0$ is dual to the problem raised in [@pemantle-wilson-multivariate1] of letting $\b{\w{r}} / s$ converge to $\partial \operatorname{\b{K}}_0$ from the outside. Solutions to both of these problems are necessary before we understand asymptotics “in the gaps”, that is, in any region asymptotic to and containing a direction in the boundary of $\operatorname{\b{K}}_0$. For example, in the case of Example \[eg:tangent2D\] with $\direc = (1,1)$, what are the asymptotics for $a_{r , r + \sqrt{r}}$ as $r \to \infty$?
In addition to the obvious discontinuity in our asymptotic expansions above at the boundary of $\operatorname{\b{K}}_0$ caused by the change from multiple to smooth point regime, in the nondegenerate case other discontinuities arise. The leading term may change degree around the boundary of $\operatorname{\b{K}}_0$. This reflects the behavior of the corresponding Fourier-Laplace integrals, as the affine subspace $\mathcal{A}$ of positive dimension moves from intersecting the interior of $\simp$ to only intersecting a lower-dimensional face. Subtler effects occur as $\mathcal{A}$ passes a corner in the interior of $\operatorname{\b{K}}$, when the leading term is continuous but lower terms may not be.
A proper analysis of all these issues in the Fourier-Laplace integral framework requires the consideration of integrals whose phase can vary with $\lambda$. This is the subject of work in progress by M. Lladser [@lladser-thesis].
[**Effective computation.**]{} The greatest obstacle to making all these computations completely effective lies in the location of the minimal point $\b{z^*}$ given $\rr$. Assuming the existence of a $\b{z^*} (\rr)$ with $\rr \in \operatorname{\b{K}}(\b{z^*})$, how may we compute $\b{z^*} (\rr)$ and test whether it is a minimal point? Since the moduli of the coordinates of $\b{z^*}$ are involved in the definition of minimality, this is a problem in real rather than complex computational geometry and does not appear easy. For example, how easily can one prove that the double point in Example \[eg:figure 8\] is minimal?
[**Minimal points.**]{} Many of our theorems rule out analysis of a minimal point $\b{z^*}$ if one of its coordinates $(z^*)_j$ is zero. The directions in $\operatorname{\b{K}}(\b{z^*})$ will always have $r_j = 0$, in which case the analysis of coefficient asymptotics reduces to a case with one fewer variable. Thus it appears no generality is lost. If we are, however, able to solve the previous problem, wherein $\b{r} / s$ converges to $\partial \operatorname{\b{K}}_0$, then we may choose to let $\b{r} / s$ converge to something with a zero component. The problem of asymptotics when some $r_j = o(r_k)$ now makes sense and is not reducible to a previous case. Presumably these asymptotics are governed by the minimal point $\b{z^*}$ still, but it must be sorted out which of our results persist when $(z^*)_j = 0$. Certainly the geometry near $\b{z^*}$ has more possibilities, since it is easier to be a minimal point (it is easier to maintain $|z_j| \geq |(z^*)_j|$ for $\b{z}$ near $\b{z^*}$ when $(z^*)_j = 0$).
In this article and its predecessor we have not treated toral minimal points (minimal points $\b{z^*}$ that are not strictly minimal, and not isolated in $\torus(\b{z^*})$). Nor have we analysed strictly minimal points that are cusps. The extension of our results here to those cases is in fact rather routine, and will be carried out in future work.
[**Local geometry of critical points.**]{} Many of our results fail when the Hessian of $f$ unexpectedly vanishes. Surprisingly, we do not know whether this ever happens in cases of interest. Specifically, we do not know of a generating function with nonnegative coefficients, for which the Hessian of $f$ on $B_y$ (notation of Section \[sec:proofs\]) ever vanishes. It goes without saying, therefore, that we do not know whether cases ever arise of degeneracies of mixed orders, such as may arise in the case of the remark following Example \[eg:tangent2D\].
[**Homological methods.**]{} The methods of this paper may be described as reasonably elementary though somewhat involved. The necessary complex contour and stationary phase integrals are well known, except perhaps for the “resolving lemma”, Lemma 4.2. While writing down these results, we found a more sophisticated approach involving some algebraic topology (Stratified Morse Theory) and singularity theory. This approach delivers similar results in a more general framework. Among other things, it clarifies and generalizes formulae such as the expression (\[eq:P\]) derived by hand above. Still, the present approach has an advantage other than its chronological priority and its independence from theory not well known by practitioners of analytic combinatorics. Homological methods, as far as we can tell, are not capable of determining asymptotics in “boundary” directions, such as are given in part (ii) of Theorem 3.1. It seems that one needs to delve into the fine details of the integral, as we do in the present work, in order to handle these cases. Much of the interest in asymptotic enumeration centers on phase boundaries, so the weight of these cases is far from negligible. We hope some day to integrate the two approaches, but at present each does something the other cannot.
Acknowledgement {#acknowledgement .unnumbered}
---------------
We thank Andreas Seeger for giving a reference to an easier proof of Lemma \[lem:simplex\].
[^1]: Research supported in part by NSF grants DMS-9996406 and DMS-0103635
|
---
abstract: 'Scientists from the OPERA experiment have measured neutrinos supposedly travelling at a velocity faster than light, contrary to the theory of relativity. Even if the measurements are precise, the interpretation of this problem is being misunderstood. Here it is very easily solved and explained within the theory of relativity itself, proving that neutrinos are not travelling faster than the speed of light and that the early arrival time is due to the presence of a gravitational field.'
author:
- |
\
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas\
Universidad Nacional Autónoma de México, UNAM\
A. Postal 20-726, 01000, México DF, México\
title: '[**A very simple solution to the OPERA neutrino velocity problem**]{}'
---
Introduction
============
Scientists from the OPERA experiment [@opera] have measured neutrinos supposedly travelling at a velocity faster than light, thereby possibly contradicting the theory of relativity. The claim comes from (as cited in their abstract) “An early arrival time of CNGS muon neutrinos with respect to the one computed assuming the speed of light in vacuum was measured.”
Since the appearance of this claim, many papers have proposed possible solutions: some theoretical [@cg], [@c], [@h], and at least one experimental, which refutes the superluminal velocity [@icarus]. Here we give a theoretical solution based completely on the theory of relativity and show that this early arrival time is not related to neutrinos travelling faster than the speed of light, but to the time computed by solving the problem in the presence of a gravitational field. This simply means that the problem is solved by taking into account the general theory of relativity.
Particles travelling in the gravitational field.
================================================
The OPERA experiment was situated 1,400 metres below the Earth’s surface and neutrinos were travelling from CERN 731 kilometres away. The neutrinos were travelling inside the Earth from one city to another. Two clocks at the emission and reception points were synchronised from a satellite. Without loss of generality, the problem of the OPERA experiment can be approximated and stated theoretically in general relativity in the following way:
*Suppose that on Earth a massive particle is travelling at velocity $v$ in a circular orbit (or just in an arc $\Delta \phi$) at a fixed radial distance $r=R$. Calculate the proper time of travel measured by an observer which is fixed at the same radial distance.*
Even though the Earth rotates, is not spherical, and the neutrino particles do not travel in a circular orbit, the problem can be well approximated as stated, since this is enough for the proof. The important thing is the calculation itself and explaining it theoretically as simply as possible.
Let us solve the problem. The general relativistic solution around any spherically symmetric object (a good approximation around the Earth as well) is given by the Schwarzschild metric.
$$ds^2 = - \bigg( 1- \frac{2M}{r} \bigg) dt^2 + \bigg( 1- \frac{2M}{r} \bigg)^{-1} dr^2 + r^2 (d \theta^2
+ \sin^2 \theta d \phi^2 )$$
We use units in which $c =1$ and also $G=1$. A massive particle in Schwarzschild which travels in a circular orbit $r=R$ at any velocity $v$ has worldline given by
$$x^{\mu} (\tau) = \bigg( \ \gamma \tau \ , \ R \ , \ \frac{\pi}{2} \ , \
\frac{\gamma v}{R} \tau \ \bigg)$$
where
$$\gamma= \frac{1}{\sqrt{1-v^2 - \frac{2M}{R}}}$$
and $\tau$ is a parameter. This parametrisation is such that the 4-velocity vector satisfies
$$g_{\mu \nu} \frac{dx^{\mu}}{d\tau} \frac{dx^{\nu}}{d\tau}= -1$$
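This normalization is easily verified; the following symbolic sketch (illustrative only, not part of the derivation) checks that the worldline and $\gamma$ defined above satisfy the condition.

``` python
# Check g_{mu nu} dx^mu/dtau dx^nu/dtau = -1 for the circular worldline.
import sympy as sp

tau, M, R, v = sp.symbols('tau M R v', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 - 2 * M / R)

t, r, th, ph = gamma * tau, R, sp.pi / 2, gamma * v * tau / R   # x^mu(tau)
dx = [sp.diff(c, tau) for c in (t, r, th, ph)]

f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)              # Schwarzschild, c = G = 1

norm = sum(g[i, i] * dx[i] * dx[i] for i in range(4))
print(sp.simplify(norm))                                        # -1
```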
In order for the particle to maintain its motion on the circular orbit, a radial acceleration is needed. However, we do not need it for our calculation.
From the worldline we have that
$$\frac{d \phi}{d \tau} = \frac{\gamma v}{R}$$
An observer at infinity (satellite) will measure
$$\frac{d \phi}{dt} = \frac{d \phi}{d \tau} \frac{d \tau}{dt}= \frac{v}{R}$$
For a stationary observer (fixed at the radial distance $R$), the proper time $dt^{'}$ is related to the time interval $dt$ by
$$dt^{'} = \sqrt{\bigg( 1- \frac{2M}{R} \bigg)} \ dt$$
and his proper angle displacement $d\phi^{'}$ is related to $d\phi$ by
$$d\phi^{'} = R \ d\phi$$
Therefore the angular velocity of the particle in orbital motion as measured by the stationary observer at $R$ is
$$\frac{d \phi^{'}}{dt^{'}} = \frac{R}{ \sqrt{\bigg( 1- \frac{2M}{R} \bigg)}} \ \frac{d \phi}{dt}=
\frac{v}{ \sqrt{\bigg( 1- \frac{2M}{R} \bigg)}}$$
The time measured by the stationary observer at $R$ for the particle to travel an angle displacement $\Delta \phi^{'}$ on Earth is then given by
$$\Delta t^{'} = \frac{\Delta \phi^{'}}{v} \sqrt{\bigg( 1- \frac{2M}{R} \bigg)}$$
The experiment was performed below the Earth's surface, but the trajectory of each neutrino can be approximately thought of as a circular arc $\Delta \phi^{'}$, which as a numerical value is 731 kilometres. If the problem of the measured travel time of the particles were considered only in a special relativistic way, without taking into account the general theory of relativity, the time would be $\Delta t^{'} = \Delta \phi^{'} / v$. However, formula $(10)$ gives the correct measured time for a stationary observer when these particles travel an angular distance $\Delta \phi^{'}$. It can be seen that the time given by formula $(10)$ is in fact shorter than the time the particles would take if they were travelling in flat space.
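For orientation only, formula $(10)$ can be evaluated numerically once SI units are restored; the sketch below assumes standard values for the Earth's mass and radius, the $731$ km baseline, and $v\simeq c$.

``` python
# Delta t' = (Delta phi'/v) * sqrt(1 - 2GM/(R c^2)), evaluated with SI units.
G = 6.674e-11      # m^3 kg^-1 s^-2
M = 5.972e24       # kg (Earth)
R = 6.371e6        # m  (Earth radius)
c = 2.998e8        # m/s
L = 7.31e5         # m, the CERN--Gran Sasso arc length (Delta phi')

flat   = L / c                                          # flat-space time, v ~ c
curved = flat * (1 - 2 * G * M / (R * c**2)) ** 0.5     # formula (10)

print(flat, curved, flat - curved)                      # difference ~ 1.7e-12 s
```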
Therefore the OPERA early-arrival-time results for neutrinos are understood not because the particles are travelling faster than the speed of light (in flat space) but because of formula $(10)$ in curved space.
[99]{}
T.Adam et al. Measurement of the neutrino velocity with the OPERA detector in the CNGS beam, arXiv:1109.4897v1 \[hep-ex\]
Andrew G. Cohen, Sheldon L. Glashow, New Constraints on Neutrino Velocities, arXiv:1109.6562v1 \[hep-ph\]
Carlo R. Contaldi, The OPERA neutrino velocity result and the synchronisation of clocks, arXiv:1109.6160v2 \[hep-ph\]
Gilles Henri, A simple explanation of OPERA results without strange physics, arXiv:1110.0239v1 \[hep-ph\]
ICARUS Collaboration, A search for the analogue to Cherenkov radiation by high energy neutrinos at superluminal speeds in ICARUS, arXiv:1110.3763v2 \[hep-ex\]
|
---
abstract: |
A non-empty subset $A$ of $X=X_1\times\cdots \times X_d$ is a (proper) box if $A=A_1\times \cdots\times A_d$ and $A_i\subset X_i$ for each $i$. Suppose that for each pair of boxes $A$, $B$ and each $i$, one can only know which of the three states takes place: $A_i=B_i$, $A_i=X_i\setminus B_i$, $A_i\not\in\{B_i, X_i\setminus B_i\}$. Let $\ka F$ and $\ka G$ be two systems of disjoint boxes. Can one decide whether $\bigcup \ka F=\bigcup \ka G$? In general, the answer is ‘no’, but as is shown in the paper, it is ‘yes’ if both systems consist of pairwise dichotomous boxes. Several criteria that enable to compare such systems are collected. The paper includes also rigidity results, which say what assumptions have to be imposed on $\ka F$ to ensure that $\bigcup \ka F=\bigcup \ka G$ implies $\ka F=\ka G$. As an application, the rigidity conjecture for $2$-extremal cube tilings of Lagarias and Shor is verified.
*Key words:* box, dichotomous boxes, polybox, additive mapping, index, binary code, word, genome, cube tiling, rigidity.
author:
- |
Andrzej P. Kisielewicz and Krzysztof Przes[ł]{}awski\
\
[Wydzia[ł]{} Matematyki, Informatyki i Ekonometrii, Uniwersytet Zielonogórski]{}\
[ul. Z. Szafrana 4a, 65-516 Zielona Góra, Poland]{}\
[[email protected]]{}\
[[email protected]]{}
title: 'POLYBOXES, CUBE TILINGS AND RIGIDITY'
---
Introduction
============
Let $\te^d=\{(x_1,\ldots, x_d)(\operatorname{mod}2):(x_1,\ldots ,x_d)\in \er^d\}$ be a flat torus. Suppose that $[0,1)^d+\lambda$, $\lambda\in \Lambda$, is a cube tiling of $\te^d$. The cube tiling $[0,1)^d+\Lambda$ is 2-*extremal* if for each $\lambda\in \Lambda$ there is a unique $\lambda'\in \Lambda$ such that $ (|\lambda_1-\lambda'_1|,\ldots , |\lambda_d-\lambda'_d|)\in \zet^d$. Let $\Lambda_+$, $\Lambda_-$ be any decomposition of $\Lambda$ such that each of the component does not contain any of the pairs $\{\lambda, \lambda'\}$, $\lambda\in \Lambda$. In an important paper [@LS2], where cube tilings of $\er^d$ contradicting Keller’s celebrated conjecture (see [@K1; @K2; @P; @LS1; @Ma; @SSz]) in a certain strong sense are constructed, Lagarias and Shor conjectured that $\Lambda_+$ and $\Lambda_-$ determine each other; that is, if $[0,1)^d+\Gamma$ is another 2-extremal cube tiling of $\te^d$, and $\Gamma_+$, $\Gamma_-$ is a corresponding decomposition of $\Gamma$, then the equality $\Lambda_+=\Gamma_+$ implies $\Lambda_-=\Gamma_-$. (Actually, their assertion, called in [@LS2] *the rigidity conjecture for 2-extremal cube-tilings*, is stated in the language of 2$\zet^d$-periodic cube tilings of $\er^d$.) In this paper we show that a far reaching generalization of the rigidity conjecture remains valid (Theorem \[krewmleko\]). In a sense, we could say that the latter result is a by-product of the present investigations. We arrive at this problem working with slightly different structures: partitions of the Cartesian products of the finite sets, called here boxes, into boxes. Our interest in these structures comes from a certain minimization problem of Kearnes and Kiss [@KK], which has been solved by Alon, Bohman, Holzman and Kleitman [@ABHK]. Minimal partitions that are involved in their solution have been characterized in [@GKP]. In Section \[minimal\] we extend these investigations to what we call polyboxes.
A non-empty subset $A$ of the Cartesian product $X:=X_1\times\cdots\times X_d$ of finite sets $X_i$, $i\in[d]$, is called a *box* if $A=A_1\times\cdots \times A_d$ and $A_i\subseteq X_i$ for each $i\in [d]$. We say that $A$ is a $k$-dimensional box if $| \{i\in [d]: |A_i|>1\}|=k$. We call $A$ *proper* if $A_i\neq X_i$ for each $i\in [d]$. The family of all boxes contained in $X$ is denoted by $\Pud X$, while $\pud X$ stands for the family of all proper boxes in $X$.
Two boxes $A$ and $B$ in $X$ are said to be *dichotomous* if there is an $i\in [d]$ such that $A_i=X_i\setminus B_i$. Any collection of pairwise dichotomous boxes is called a *suit*. A suit is *proper* if it consists of proper boxes. A non-empty set $F\subseteq X$ is said to be a *polybox* if there is a suit $\ka F$ for $F$, that is, $\bigcup \ka F=F$.
Since polyboxes are defined by means of partitions into boxes, it is not surprising that characteristics of polyboxes will be expressed in terms of partitions as well. Therefore, such characteristics should not depend on the choice of a partition. In Section \[additive functions\], we define the class of additive functions on $\pud X$. All characteristics of polyboxes that appear in the paper are defined with the use of additive functions. Theorem \[add1\] plays a crucial role in this respect.
An interesting class of invariants is described in Sections \[dilabel\] and \[binary codes\].
In Section \[indices\] we discuss important numerical characteristics of a polybox, the indices.
In the following section we show, among other things, that certain mild assumptions on the symmetry of a polybox imply that all indices of the polybox are even numbers.
In Section \[rikitiki\] we give a sufficient condition which guarantees that a polybox is rigid in the sense that it has a unique proper suit (Theorem \[sztywnypal\]). This result appears again in greater generality, applicable to the already mentioned case of the rigidity of cube tilings, in Section \[words\] (Theorems \[niejednoznacznosc\] and \[rikitikigenom\]).
One of the basic questions is whether two given suits define the same polybox. We address this question in several places. An important procedure, which enables us to answer it, is described in Section \[modules\] (Remark \[kryt2\]). This procedure is based on a certain decomposition of the free $\zet$-module generated by boxes (Theorem \[rzut\]).
The results of the paper are summarized in an abstract setting of words in Section \[words\]. There is also defined and investigated an interesting cover relation.
Minimal partitions {#minimal}
==================
Let $F$ be a subset of a $d$-dimensional box $X$. A partition of $F$ into proper boxes is *minimal* if it is of minimal cardinality among all such partitions. It is observed in [@GKP] that if $F=X$, then the minimal partitions of $F$ coincide with the proper suits for $F$. This result extends to polyboxes:
\[minor\] If $F$ is a polybox in a $d$-dimensional box $X$ and $\ka F\subseteq \pud X$ is a partition of $F$, then $\ka F$ is minimal if and only if $\ka F$ is a suit.
The proof is a refinement of an argument given in [@ABHK], and is much the same as in [@GKP]; however, we have added to it a geometric flavour.
Before going into the proof, we collect several indispensable definitions and lemmas.
Let ${\ka O}(X_i)$ be the family of all sets of odd size which are contained in $X_i$. Let $B$ be a subset of $X$. We define $\widehat {B}$ to be the subset of ${\ka O}(X_1)\times \cdots \times{\ka O}(X_d)$ that consists of all $d$-tuples $(A_1,\ldots , A_d)$ for which the set $B\cap (A_1\times\cdots \times A_d)$ is of odd size.
Suppose that $B$ is a box. Let $\ka O B_i$ be the set of all sets of odd size that are contained in $X_i$ such that their intersections with $B_i$ are of odd size as well. One can easily observe that $\widehat{B}=
\ka O B_1\times\cdots \times \ka O B_d$. In particular, $\widehat{B}$ is a box. Moreover, since $$\label{os}
|\ka O {B_i}|=\frac{|\ka O (X_i)|}{2}= \frac{2^{|X_i|}}{4}$$ for each $i\in [d]$, we obtain $$\label{kostkan}
|\widehat{B}|=2^{|X|_1-2d},$$ where $|X|_1$ is defined by the equation $|X|_1=|X_1|+\cdots +|X_d|$.
\[comp\] The following conditions are equivalent:
(i)
: boxes $B, C \in \Pud X$ are dichotomous,
(ii)
: $\widehat{B}$ and $\widehat{C}$ are dichotomous,
(iii)
: $\widehat{B}$ and $\widehat{C}$ are disjoint.
The equivalence ‘(i)$\Leftrightarrow $(ii)’ is deduced easily from the observation that $X_i\setminus B_i=C_i$ if and only if $\ka O (X_i)\setminus \ka O B_i=\ka O C_i$. Concerning ‘(ii)$\Leftrightarrow $(iii)’, only the implication ‘(ii)$\Leftarrow $(iii)’ is non-trivial. Thus, if $\widehat{B}$ and $\widehat{C}$ are disjoint, then there is an $i\in [d]$ such that $\ka OB_i$ and $\ka OC_i$ are disjoint. By (\[os\]), each of these sets contains half of the elements of $\ka O(X_i)$. Therefore, they are complementary. Consequently, $\widehat{B}$ and $\widehat{C}$ are dichotomous.$\square$
The next lemma is rather obvious.
\[symmdiff\] If $B$ and $C$ are subsets of $X$, then $$\widehat{B\ominus C}=\widehat{B}\ominus\widehat{C},$$ where $\ominus $ denotes the symmetric difference.
*Proof of the theorem.* Let us enumerate all elements of $\ka F$, that is, $\ka F =\{B^1,\ldots ,B^m\}$. Let $\widehat{\ka F}=\{\widehat{B^1}, \ldots ,
\widehat{B^m}\}$. By the preceding lemma, we have $$\widehat{F}=\widehat{B^1}\ominus\cdots\ominus \widehat{B^m}.$$ This equality implies $$|\widehat{F}|\leq |\widehat{B^1}|+\cdots + |\widehat{B^m}|,$$ where the equality holds if and only if the elements of $\widehat{\ka F}$ are mutually disjoint. If we divide the above inequality by $2^{|X|_1-2d}$, then, by (\[kostkan\]), we obtain $$\label{nier1}
\frac{|\widehat{F}|}{2^{|X|_1-2d}}\leq m,$$ which means that the size of any partition of $F$ into proper boxes is bounded from below by the left side of the above inequality. Moreover, this bound is tight and, according to Lemma \[comp\], is attained if and only if $\ka F$ is a suit.$\square$
Observe that, as a by-product, we have shown that the minimal partitions of a polybox $F$ have their size equal to the number standing on the left side of (\[nier1\]). This suggests the following definition: Let $G\subseteq X$. The number $|G|_0$, given by the formula $$\label{vol0}
|G|_0=\frac{|\widehat{G}|}{2^{|X|_1-2d}}=2^d\frac{|\widehat{G}|}{|\widehat{X}|},$$ is called the *box number* of $G$.
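A small computational sketch (illustrative only, not needed for the results) may help fix these definitions: for $d=2$ and $X_1=X_2=\{0,1,2\}$ it computes $\widehat{G}$ and the box number $|G|_0$ directly from the definitions.

``` python
# Hat sets and box numbers for d = 2, X_1 = X_2 = {0, 1, 2}.
from itertools import combinations, product

X = [{0, 1, 2}, {0, 1, 2}]
d = len(X)
size1 = sum(len(Xi) for Xi in X)                       # |X|_1

def odd_subsets(S):
    return [frozenset(c) for r in range(1, len(S) + 1, 2)
            for c in combinations(S, r)]

def hat(G):
    """All pairs (A_1, A_2) of odd-size sets with |G ∩ (A_1 x A_2)| odd."""
    return {(A1, A2)
            for A1 in odd_subsets(X[0]) for A2 in odd_subsets(X[1])
            if len(G & set(product(A1, A2))) % 2 == 1}

def box_number(G):
    return len(hat(G)) / 2 ** (size1 - 2 * d)

B = set(product({0}, {0, 1}))                          # a proper box
C = set(product({1, 2}, {2}))                          # dichotomous with B
print(box_number(B), box_number(C))                    # 1.0 1.0
print(box_number(B | C))                               # 2.0: a polybox with a two-box suit
print(box_number(set(product(*X))))                    # 4.0 = 2^d, the whole box X
```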
By much the same method as applied above, one can obtain the following characterization of polyboxes.
\[vol00\] Let $G$ be a non-empty subset of a $d$-box $X:=X_1\times \cdots \times X_d$. The size of a minimal partition of $G$ into proper boxes is at least $|G|_0$. Moreover, $G$ is a polybox if and only if there is a partition of $G$ into proper boxes of size $|G|_0$.
By the definition of the box number, $$|G|_0 \le |X|_0= 2^d.$$ Moreover, equality occurs if and only if $G=X$. Indeed, for $x\in X$, let us define $\hat{x}=(\{x_1\},\ldots, \{x_d\})\in \widehat{X}$. It is easily seen that $x\in X\setminus G$ if and only if $\hat{x}\in\widehat{X}\setminus\widehat{G}$. Thus, we have
\[krakow\] Let $F$ be a polybox contained in a $d$-box $X$. If there is a proper suit $\ka F$ for $F$ of size $2^d$, then $F=X$.
The following interesting question arises:
[Question.]{} Suppose that $F$ and $G$ are polyboxes contained in a $d$-box $X$. Is it true that if $F\subseteq G$, then $|F|_0\le |G|_0?$
\[suma\] If $F$ and $G$ are disjoint polyboxes in $X$ and there are proper suits $\ka F$ for $F$, and $\ka G$ for $G$ such that $\ka F\cup\ka G$ is a suit for $ F\cup G$, then the same holds true for every pair of proper suits for $F$ and $G$.
Let $\ka F^*$ and $\ka G^*$ be proper suits for $F$ and $G$, respectively. By Theorem \[minor\], proper suits for the same polybox are of the same size. Consequently, $$|\ka F^*\cup \ka G^*|=|\ka F^*|+|\ka G^*|=|\ka F|+|\ka G|= |\ka F\cup\ka G|.$$ Again by Theorem \[minor\], the fact that $\ka F\cup \ka G$ is a suit, and the preceding equality, we conclude that the partition $\ka F^*\cup\ka G^*$ is a suit.$\square$
If sets $F$ and $G$ are as described in Proposition \[suma\], then we call them *strongly disjoint*.
Additive functions {#additive functions}
==================
Let a box $A=A_1\times \cdots \times A_d$ and a set $I\subseteq [d]$ be given. Let $i_1,\ldots, i_k$ be the elements of $I$ written in increasing order. We define $A_I:=A_{i_1}\times\cdots \times A_{i_k}$. We have the natural projection $a\mapsto a_I$ from $A$ onto $A_I$, where if $a=(a_1,\ldots , a_d)$, then $a_I=(a_{i_1},\ldots , a_{i_k})$. If $B\subseteq A$, then we put $B_I=\{b_I:b\in B\}$, and if $\ka F\subseteq 2^A$, then $\ka F_I=\{B_I: B\in\ka F\}$. To simplify our notation, we shall write $i'$ rather than $[d]\setminus \{i\}$. We say that two boxes $A$ and $B$ contained in a $d$-box $X$ form a *twin pair* if there is an $i\in [d]$ such that $A_{i'}=B_{i'}$ and $A_i=X_i\setminus B_i$. If $p(x)$ is a sentential function, then, as proposed by Iverson, $\iver {p(a)}=1$ if $p(a)$ is true for $x=a$, and $\iver{p(a)}=0$ if $p(a)$ is false for $x=a$.
Let us extend the notation introduced in Section \[minimal\] letting $\ka O(X)\subset \Pud X$ be the family of all boxes of odd size contained in $\Pud X$.
Let $X$ be a $d$-box and let $M$ be a module over a commutative ring $R$. A function $f\colon \pud X\to M$ is *additive* if for any two twin pairs $A$, $B$ and $C$, $D$, the equation $A\cup B=C\cup D$ implies $$f(A)+f(B)=f(C)+f(D).$$ The module of all $M$-valued additive functions is denoted $\add{X,M}$ . In this paper we shall be concerned with real valued additive functions.
For each $B\in\ka O (X)$, let us define $\eta_B\colon 2^{X}\to \er$ by $$\eta_B(A)=\iver{ |A\cap B|\equiv 1 \mod{2}}.$$ It is clear that the restriction of $\eta_B$ to $\pud X$ is additive.
Now, let us confine ourselves to the case $d=1$, that is, we shall assume that $X$ is simply a finite set that contains at least two elements. Then $\pud X$ coincides with $2^X\setminus\{X, \emptyset\}$.
For each $C\in \pud X$, let $\varphi_C \colon\pud X\to \er$ be defined by $$\label{fione}
\varphi_C (A)=\iver{A=C}-\iver{A=X\setminus C}.$$ Moreover, let $\varphi_X \colon \pud X\to \er$ be the constant function equal to 1. It is clear that the functions $\varphi_C$, $C\in \Pud X$, are additive. It is also clear that $\varphi_C\perp\varphi_D$, whenever $C\not\in\{D, X\setminus D\}$, where the orthogonality is related to the scalar product defined by $$\label{skalar}
\langle f, g\rangle = \sum_{C\in \pud X} f(C)g(C).$$ Let us note for future reference that $$\label{sym}
\varphi_{X\setminus C}=-\varphi_{C},$$ whenever $C\in \pud X$.
Straightforward calculations lead to the following
\[eta\] Let $X$ be a one dimensional box. Let $A$ and $C$ be elements of $\pud X$. Then $$\sum_{B\in \ka O (X)} \eta_B(C)\eta_B (A)= 2^{|X|-2}\iver{A=C}+2^{|X|-3}
\iver{\{A, X\setminus A\} \neq \{C, X\setminus C\}}.$$
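The identity is also easy to confirm by brute force; the sketch below (illustrative only) checks it for all pairs $A$, $C$ of proper subsets of a four-element set $X$.

``` python
# Brute-force check of the lemma for X = {0, 1, 2, 3}.
from itertools import combinations

X = frozenset({0, 1, 2, 3})
proper = [frozenset(c) for r in range(1, len(X)) for c in combinations(X, r)]
odd    = [frozenset(c) for r in range(1, len(X) + 1, 2) for c in combinations(X, r)]

eta = lambda B, A: len(A & B) % 2

ok = all(
    sum(eta(B, C) * eta(B, A) for B in odd)
    == 2 ** (len(X) - 2) * (A == C)
       + 2 ** (len(X) - 3) * ({A, X - A} != {C, X - C})
    for A in proper for C in proper
)
print(ok)   # True
```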
This lemma implies that for each $C\in \Pud X$, $$\label{zwiazek}
\varphi_C= 2^{-|X|+2}\sum_{B\in \ka O(X)}(\eta_B(C)-\eta_B(X\setminus C))\eta_B.$$ (Let us emphasize that the functions $\eta_B$ are restricted here to $\pud X$.)
\[baza1\] Let $X$ be a one dimensional box. Let $\ka B\subset \Pud X$ be defined so that for every $A\in \Pud X$, it contains exactly one of the two elements $A$ and $X\setminus A$. Then the set $H:= \{ \varphi_C: C\in \ka B\}$ is an orthogonal basis of $\add {X,\er}$.
Since we have already learned that the elements of $H$ are mutually orthogonal, it remains to show that each $g\in \add {X, \er}$ is a linear combination of them. As we know from the definition of an additive mapping, there is a number $s$ such that $s=g(A)+g(X\setminus A)$ for every $A\in \pud X$. Let $$h= \frac{1}{2}(s \varphi_X +\sum_{C\in \ka B\setminus\{X\}} (g(C)-g(X\setminus C))\varphi_ C).$$ Fix any $A\in \pud X$. If $A\in \ka B$, then we get $$h(A)=\frac{s+ g(A)-g(X\setminus A)}{2}=g(A).$$ If $A\not\in \ka B$, then by (\[sym\]), we get $$h(A)=\frac{s+ (g(X\setminus A)-g(A))\varphi_{X\setminus A}(A)}{2}=g(A).$$ Thus, $g$ coincides with $h$ and consequently $g$ is a linear combination of elements of $H$. $\square$
\[baza2\] Let $X$ be a one dimensional box. Then the set $K:=\{\eta_B\colon \pud X\to \er : B\in \ka O(X)\}$ is a basis of $\add {X,\er}$.
Let $\ka B$ and $H$ be as defined in Lemma \[baza1\]. Observe that $$|H|=|K|=2^{|X|-1}.$$ To complete the proof, it suffices to notice that according to (\[zwiazek\]), each element of $H$ is a linear combination of elements of $K$. $\square$
Now, we go back to the general case. Let $\bigotimes_{i=1}^d \add{X_i,\er}$ be the tensor product of the spaces $\add{X_i, \er}$, where $X_i$, $i\in [d]$, are the direct factors of $X$. We can and we do identify $\bigotimes_{i=1}^d \add {X_i,\er}$ with the subspace of $\er^{\pud X}$ spanned by the functions $f_1\otimes\cdots\otimes f_d$ defined by $$f_1\otimes\cdots\otimes f_d (C)=f_1(C_1)\cdots f_d (C_d),$$ where $f_i\in \add{X_i, \er}$.
\[struktura\] Let $X$ be a $d$-box. Then $\add{X,\er}= \bigotimes_{i=1}^d \add{X_i,\er}$.
Let $f\in \add{X,\er}$. Fix $G\in \pud {X_d}$. Let $f_G\colon \pud {X_{d'}}\to \er$ be defined by $f_G(U)=f(U\times G)$. Clearly, $f_G\in \add{X_{d'}, \er}$. Let $g_1,\ldots , g_m$ be any basis of $\add{X_{d'}, \er}$. We can write $f_G$ as a linear combination of the elements of this basis $$\label{nae}
f_G=\sum_{i=1}^m \alpha_i(G)g_i.$$ Let $H\in \pud {X_d}$. Since $f$ is additive, for each $U$, we have $$f_G(U)+f_{X_d\setminus G}(U)=f_H(U)+f_{X_d\setminus H}(U).$$ Consequently, the mapping $\alpha_i\colon \pud{X_d}\to \er$ is additive for each $i\in [m]$. As $$f(U\times G)=\sum_{i=1}^{m} \alpha_i(G)g_i(U)=\sum_{i=1}^{m} g_i\otimes\alpha_i(U\times G),$$ we obtain $f\in \add{X_{d'},\er}\otimes\add{X_d,\er}$. Therefore, $ \add{X,\er}\subseteq\add{X_{d'},\er}\otimes\add{X_d,\er} $. The opposite inclusion is obvious. Our result follows now by induction with respect to the dimension $d$. $\square$
Let us observe that if $X$ is a $d$-box, then for each $B\in \ka O(X)$, we have $$\label{eta2}
\eta_B=\eta_{B_1}\otimes\cdots\otimes\eta_{B_d},$$ where we interpret $\eta_B$ as defined on $\pud X$. By Lemma \[baza2\] and the preceding theorem, we obtain immediately
\[bazad2\] Let $X$ be a $d$-box. Then the set $K:=\{\eta_B: B\in \ka O(X)\}$ is a basis of $\add {X, \er}$.
Let $f\in\add{X,\er}$. It follows from our theorem that there are real numbers $\alpha_B$, $B\in \ka O(X)$, such that $$f=\sum_{B\in \ka O(X)} \alpha_B\eta_B.$$ Since each $\eta_B$ is naturally defined on $2^X$, the above equation determines the extension $\tilde{f}$ of $f$ to $2^X\setminus\{\emptyset\}$, and the extension $\bar{f}$ of $f$ to the family of all polyboxes.
Let us observe that by Lemmas \[comp\] and \[symmdiff\], for every polybox $F\subset X$ and every proper suit $\ka F$ for $F$, we have $$\eta_B(F)=\sum_{A\in \ka F}\eta_B(A),$$ which implies $$\label{ekst}
\bar{f}(F)=\sum_{A\in \ka F}f(A).$$ In particular, we have the following basic results.
\[add1\] Given a polybox $F$ contained in a $d$-box $X$. For any two proper suits $\ka F$ and $\ka G$ for $F$, and any $f\in \add {X, \er}$, $$\label{rowno1}
\sum_{A\in \ka F} f(A)=\sum_{A\in \ka G}f(A).$$
\[sdisjoint\] If $F_1,\ldots, F_m$ are pairwise strongly disjoint polyboxes in a $d$-box $X$, then $$\bar{f}(F_1\cup \cdots\cup F_m)=\bar{f}(F_1)+\cdots +\bar{f}(F_m).$$
\[ekst1\] Let $X$ be a $d$-box. Let $\mathbf{A}_P( X,\er)$ be the space of all real-valued functions $g$ defined on the family of all polyboxes contained in $X$ such that for any polybox $F$, and any proper suit $\ka F$ for $F$ $$g(F)=\sum_{A\in \ka F}g(A).$$ Then the mapping $\add{X, \er}\ni f \mapsto \bar{f}\in \mathbf{A}_P( X,\er)$ is a linear isomorphism.
Let us note for completeness that we have a similar theorem for the other extension.
\[ekst2\] Let $X$ be a $d$-box. Let $\mathbf{A}_S(X,\er)$ be the space of all functions $g: 2^{X}\setminus\{\emptyset\} \to \er$ such that for any $F$ and $G\in 2^X\setminus\{\emptyset\}$, if $\widehat{F}$ and $\widehat{G}$ are disjoint, then $$g(F\cup G)=g(F)+g(G).$$ The mapping $\add{X, \er}\ni f \mapsto \tilde{f}\in \mathbf{A}_S(X,\er)$ is a linear isomorphism.
In order to formulate an analogue of Lemma \[baza1\] for arbitrary dimensions, we have to introduce some extra terminology.
For a box $C\in \Pud X$ and $\varepsilon \in \zet_2^d$, let $C^\varepsilon=C^{\varepsilon_1}_1\times\cdots\times C^{\varepsilon_d}_d$ be defined by $$C^{\varepsilon_i}_i=\left\{
\begin{array}{cl}
C_i & \text{if $\varepsilon_i=0$,} \\
X_i\setminus C_i & \text{if $\varepsilon_i=1$,}
\end{array}
\right.$$ where $i\in [d]$. If $C$ is a proper box, then $C^\varepsilon$ is a proper box. If $C$ is not proper, then it can happen $C^{\varepsilon}$ is empty. Now, let $
\varphi_C=\varphi_{C_1}\otimes\cdots\otimes \varphi_{C_d}.
$ Similarly as in the case $d=1$, for any $C$ and $D\in \Pud X$, we have $\varphi_C \perp \varphi_D$, whenever $C\not\in \{D^\varepsilon : \varepsilon\in \zet_2^d\}$.
For $\varepsilon\in \zet_2^d$, let $|\varepsilon |=|\{i: \varepsilon_i=1\}|$. The $d$-dimensional counterpart of (\[sym\]) reads as follows: If $C\in \Pud X$ and $C^\varepsilon$ is non-empty, then $$\label{symd}
\varphi_{C^\varepsilon}=(-1)^{|\varepsilon|}\varphi_C.$$
\[bazad1\] Let $X$ be a $d$-box. Let $\ka B\subset \Pud X$ be defined so that for every $A\in \Pud X$, $\ka B$ contains exactly one element of the set $\{A^\varepsilon :\varepsilon \in \zet_2^d\}$. Then the set $H:= \{ \varphi_C: C\in \ka B\}$ is an orthogonal basis of $\add {X,\er}$.
For each $i\in [d]$, let us fix $\ka B_i\subset \Pud {X_i}$ so that for $X=X_i$ and $\ka B=\ka B_i$, the assumptions of Lemma \[baza1\] are satisfied. Then, by Theorem \[struktura\], $$G:=\{\varphi_{C_1}\otimes\cdots \otimes \varphi_{C_d}: C_1\in \ka B_1,\ldots , C_d \in\ka B_d\}$$ is a basis of $\add {X, \er}$. On the other hand, one observes that by (\[symd\]) and the definitions of $H$ and $G$ we have $\varphi_C\in G$ if and only if exactly one of the two elements $-\varphi_C$, $\varphi_C$ belongs to $H$. Thus, $H$ is a basis. The orthogonality of $H$ is clear. $\square$
Equation (\[zwiazek\]) expresses $\varphi_C$ in terms of functions $\eta_B$ in dimension one. We apply it to obtain an analogous result which is valid in arbitrary dimensions. As before, let $X$ be a $d$-box and $C\in\Pud X$. By (\[zwiazek\]), (\[eta2\]) and the definition of $|X|_1$ we have $$\begin{aligned}
\varphi_C & = & 2^{-|X|_1+2d}\sum_{B_1\in\ka O(X_1)}\!\!\cdots\!\!\sum_{B_d\in \ka O(X_d)}
\left(\prod_{k=1}^d(\eta_{B_k}(C_k)-\eta_{B_k}(X\setminus C_k))\right)\eta_B\\
& = & 2^{-|X|_1+2d}\sum_{B\in\ka O(X)}\sum_{\varepsilon\in\zet_2^d}
(-1)^{|\varepsilon|}\eta_B(C^\varepsilon )\eta_B.\end{aligned}$$ Hence $$\begin{aligned}
\varphi_C = 2^{-|X|_1+2d}\sum_{B\in\ka O(X)}
\sum_{\varepsilon\in\zet_2^d} (-1)^{|\varepsilon|}\eta_B(C^\varepsilon )\eta_B.\end{aligned}$$
For further use, we introduce a class of bases of $\add {X,\er}$ containing the bases described in Theorem \[bazad1\].
For each $i$, let $\chi_i\in\add {X_i, \er}$ be the characteristic function of a non-empty subfamily $\ka F_i\subseteq \pud {X_i}$. Observe that if $\ka F_i=\pud{X_i}$, then $\chi_i=1= \varphi_{X_i}$, and that $\chi_i\neq 1$ is equivalent to saying that, as in Lemma \[baza1\], for each $A\in \pud {X_i}$, exactly one of the two elements $A$, $X_i\setminus A$ belongs to $\ka F_i$. Now, for every $D\in \Pud {X}$, let $\tau_D=\tau_{D_1}\otimes\cdots \otimes\tau_{D_d}$, where $\tau_{D_i}=\chi_i$, if $D_i=X_i$, and $\tau_{D_i}=\varphi_{D_i}$, otherwise. Clearly, we have
\[bazad3\] Let $\ka B \subset\Pud {X}$ be defined so that for every $D\in \Pud{X}$, $\ka B$ has only one element in common with $\{D^\varepsilon: \varepsilon\in\zet_2^d\}$. Then the set $L:=\{\tau_D: D\in \ka B\}$ is a basis of $\add {X, \er}$.
Dyadic labellings {#dilabel}
=================
Let $X$ be a $d$-box and let $E$ be a non-empty set. A mapping $\lambda \colon \pud X \to E$ is said to be a *dyadic labelling* if it is ‘onto’, and for every $A$, $B$, $C$ and $D$ belonging to $\pud X$, if $A$, $B$ and $C$, $D$ are twin pairs, and $A\cup
B=C\cup D$, then $\{\lambda(A), \lambda(B)\}=\{
\lambda(C), \lambda(D) \}$.
\[ind3\] Let $X$ be a $d$-box, and $E$ be a set. If $\lambda \colon \pud X\to E$ is a dyadic labelling, then for each polybox $F\subseteq X$ and every two proper suits $\ka F$, $\ka G$ for $F$, the following equation is satisfied $$\{\lambda(A) \colon A\in \ka F\}
=
\{\lambda(A) \colon A\in \ka G\}.$$
For each $e\in E$, let us define $f_e\colon \pud X\to\er$ as follows $$f_e(A)=[e=\lambda(A) ].$$ It is straightforward from the definition of dyadic labellings that $f_e$ is additive. Therefore, by Theorem \[add1\], $$\sum_{A\in \ka F} f_e(A)=
\sum_ {A\in \ka G}f_e(A),$$ which readily implies our thesis. $\square$
\[sekw\] Let $X$ be a $d$-box, and $A$, $B\in\pud X$. If $\lambda:\pud X\to E$ is a dyadic labelling, then there is $D\in \ka C_A$ such that $\lambda (D)=\lambda (B)$ and for each $i\in [d]$, $D_i=B_i$, whenever $B_i\in \{ A_i, X_i\setminus A_i\}$.
Let $\lambda (B)=e$. Define a sequence $B^0=B, B^1,\ldots , B^{d}$ by induction: Suppose that $B^k$ is already defined and $\lambda(B^k)=e$. If $B_{k+1}\in \{A_{k+1}, X_{k+1}\setminus A_{k+1}\}$, then set $B^{k+1}=B^{k}$. If it is not the case, then choose three proper boxes $R$, $S^1$ and $S^2$ so that the following equations are satisfied $$B^k_{(k+1)'}=R_{(k+1)'}=S^j_{(k+1)'},\qquad j=1,2,$$ $$R_{k+1}=X_{k+1}\setminus B^{k}_{k+1},\quad
S^1_{k+1}=A_{k+1},\quad
S^2_{k+1}=X_{k+1}
\setminus A_{k+1}.$$ (Therefore, $B^k$, $R$ and $S^1$, $S^2$ are twin pairs such that $B^k\cup R=S^1\cup S^2$. By the definition of dyadic labellings $
e\in\{\lambda(S^1), \lambda(S^2) \}.
$) Set $$B^{k+1}=\left\{
\begin{array}{ccc}
S^1 & \text{if} & \lambda(S^1)=e, \\
S^2 & \text{if} & \lambda(S^1)\neq e.
\end{array}
\right.$$ It is clear by the construction that $D=B^d$ has the desired properties.$\square$
In the sequel, we shall identify each box $A\in \pud X$ with its coordinates $(A_1,A_2,\ldots,A_d)$. Consequently, $\pud X$ is identified with $\prod_{i=1}^{d}\pud {X_i}$, and as such can be considered as a $d$-box. Thus, it makes sense to define $\pudd X:= \pud{\pud X}$. A box $\ka B\in\pudd X$ is *equicomplementary* if for every $A\in \pud X$ the intersection $\ka B\cap \ka C_A$ consists of exactly one element. Equivalently, one can say that for each $i\in [d]$ and each $C\in\pud {X_i}$ exactly one of the two sets $C$, $X_i\setminus
C$ belongs to ${\ka B}_i$. It is clear that $|\ka B|=(1/2^d)|\pud X|$. Now, let $\ka F$ be a partition of the $d$-box $\pud X$ that consists of equicomplementary boxes. $\ka F$ has exactly $2^d$ elements, which means that this partition is minimal. For each $A\in \pud X$, there is a unique element $\lambda_{\ka F}(A)\in\ka F$ such that $A\in \lambda_{\ka F}(A)$. Thus, we have defined the mapping $\lambda_{\ka F}\colon \pud X\to {\ka F}$.
\[diada\] The mapping $\lambda_{\ka F}$ is a dyadic labelling.
Suppose that $A$, $B\in\pud X$ are dichotomous. Hence there is $i\in [d]$ for which $A_i=X_i\setminus B_i$. Since $A_i\in \lambda_{\ka F}(A)_i$, the equicomplementarity of $\lambda_{\ka F}(A)$ implies that $B_i\not\in \lambda_{\ka
F}(A)_i$. Therefore, $B\not\in \lambda_{\ka F}(A)$, which in turn implies $\lambda_{\ka F}(A)\neq
\lambda_{\ka F}(B)$. In particular, the mapping $\zet_2^d\ni\varepsilon\mapsto\lambda_{\ka F}(A^\varepsilon)$ is one-to-one, and as $|\ka F|=2^d$, our mapping $\lambda_{\ka F}$ has to be onto.
Suppose now that $A$, $B\in \pud X$ and $C$, $D\in \pud X$ are two twin pairs such that $A\cup B=C\cup D$. There is an index $i\in [d]$ such that $$A_{i'}=B_{i'}=C_{i'}=D_{i'}.$$ Thus, for $j\in i'$, we have $C_{j}\in\lambda_{\ka F}(A)_{j}$ and $D_{j}\in\lambda_{\ka F}(A)_{j}$. Of the two elements $C_i$ and $D_i=X_i\setminus C_i$, exactly one belongs to $\lambda_{\ka F}(A)_i$. As a result, $C\in\lambda_{\ka F}(A)$ or $D\in\lambda_{\ka F}(A)$. Consequently, $$\lambda_{\ka F}(A)\in\{\lambda_{\ka F}(C),\lambda_{\ka F}(D)\}.$$ Since $A$ and $B$, and $C$ and $D$ play the same role, we may deduce that $$\{\lambda_{\ka F}(A), \lambda_{\ka F}(B)\}=\{\lambda_{\ka F}(C), \lambda_{\ka F}(D)\}.$$ $\square$
Let $X$ be a $d$-box and let $E$ be a set that consists of $2^d$ elements. A mapping $\lambda \colon \pud X\to E$ is a dyadic labelling if and only if there is a proper suit $\ka F$ for $\pud X$, composed of equicomplementary boxes, and a bijection $g\colon
\ka F\to E$ such that $$\lambda =g\circ\lambda_{\ka F}.$$
The proof of the implication ‘$\Leftarrow$’ is a consequence of Proposition \[diada\]. In order to show that the implication ‘$\Rightarrow$’ is true, it suffices to prove that $\lambda^{-1}(e)$ is an equicomplementary box for each $e\in E$.
By Proposition \[sekw\], for each $A$, $B\in \pud X$, we have $\lambda (B)\in \lambda(\ka C_A)$. Since $\lambda$ is ‘onto’, $|\lambda (\ka C_A)|=2^d=|\ka C_A|$. Consequently, the set $\lambda^{-1}(e)\cap \ka C_A$ is a singleton.
Let $G$, $H\in \lambda^{-1}(e)$. Suppose that $L\in\pud X$ and $L_i\in \{G_i, H_i\}$ for each $i\in [d]$. Let $I=\{i: G_i=L_i\}$. By Proposition \[sekw\] applied to $A=L$ and $B=G$, there is a $K\in \ka C_L$ such that $\lambda(K)=e$ and $K_I=L_I$. Similarly, for $J=\{i: H_i=L_i\}\supseteq [d]\setminus I$, we can show that there is $M\in \ka C_L$ such that $\lambda(M)=e$ and $M_J=L_J$. Since by the previous part $\lambda$ is ‘one-to-one’ on $\ka C_L$ and $I\cup J=[d]$, we have $K=M=L$. Therefore $L\in \lambda^{-1}(e)$, which implies that $\lambda^{-1}(e)$ is a box. The equicomplementarity of $\lambda^{-1}(e)$ is again a consequence of the previous part of the proof. $\square$
Binary codes {#binary codes}
============
As before, let $X$ denote a $d$-box. Suppose that for each $i\in [d]$ we have chosen a mapping $ \beta_i \colon 2^{X_i}\setminus \{\emptyset , X_i\}\to \zet_2$ such that for each $A_i$, $$\label{kod}
\beta_i(A_i)+\beta_i({X_i\setminus A_i})=1.$$ The mapping $\beta \colon \pud X\to \zet_2^d$ defined by $$\beta (A)=(\beta_1 (A_1),\ldots , \beta_d (A_d))$$ is said to be a *binary code* of $\pud X$.
Clearly, each binary code is a dyadic labelling. By Proposition \[ind3\], if $\ka F$ is a proper suit for a polybox $F\subseteq X$, then the set of code words $$\beta(\ka F):=\{\beta (A): A\in\ka F\}$$ depends only on $F$, not on the particular choice of a suit. It yields the following interesting consequence:
- For $\varepsilon\in \zet_2^d$, let, as before, $|\varepsilon|=|\{i: \varepsilon_i=1\}|$. The number $|\beta|_k(F):=|\{A\in\ka F: |\beta(A)|=k\}|$ depends only on $F$. In particular, $|\beta|_k(X)={d\choose k}$.
There are natural examples of binary codes. Assume that a $d$-box $X$ is of odd cardinality (so that $|A_i|$ and $|X_i\setminus A_i|$ always have different parities and (\[kod\]) is satisfied). For each $A\in \pud X$, let us set $$\beta_\text{eo}(A)=(|A_1| (\operatorname{mod} 2),\ldots, |A_d| (\operatorname{mod} 2)).$$ We call $\beta_\text{eo}$ the *even-odd pattern*. In the case of the even-odd pattern $|\beta|_k(F)$ counts the number of boxes belonging to any suit $\ka F$ for $F$ that have exactly $k$ factors of odd cardinality.
Similarly, we can define the binary code $\beta_\text{ml}$ called the *more-less pattern* $$\beta_\text{ml}(A)=([|A_1|>|X_1|/2],\ldots, [|A_d|>|X_d|/2]).$$ Here $|\beta|_k(F)$ counts the number of boxes belonging to any suit $\ka F$ for $F$ that have exactly $k$ factors containing more than half of the elements of the corresponding factors of $X$.
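To make the two patterns concrete, here is a minimal Python sketch (the representation of boxes as tuples of frozensets, the particular box $X$ with $|X_i|=3$ and the chosen suit are illustrative choices of ours, not part of the theory):

```python
from itertools import product

# d = 2; each factor of X has 3 (odd) elements.  A proper box is a tuple of
# non-empty proper subsets of the factors, encoded as frozensets.
X = (frozenset({0, 1, 2}), frozenset({0, 1, 2}))

def beta_eo(A):
    """Even-odd pattern: the i-th bit is |A_i| mod 2."""
    return tuple(len(Ai) % 2 for Ai in A)

def beta_ml(A, X):
    """More-less pattern: the i-th bit is [|A_i| > |X_i|/2]."""
    return tuple(int(len(Ai) > len(Xi) / 2) for Ai, Xi in zip(A, X))

# The simple suit generated by A = ({0}, {0,1}); it is a suit for the whole box X.
A = (frozenset({0}), frozenset({0, 1}))
suit = [tuple(Ai if e == 0 else Xi - Ai for Ai, Xi, e in zip(A, X, eps))
        for eps in product((0, 1), repeat=2)]

print(sorted(beta_eo(B) for B in suit))     # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(sorted(beta_ml(B, X) for B in suit))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Both multisets of code words consist of all four elements of $\zet_2^2$, in agreement with $|\beta|_k(X)={d\choose k}$.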
Indices {#indices}
=======
Let a $d$-box $X$, a suit $\ka F\subseteq\pud X$, a set $F\subseteq X$ and a box $C\subseteq X$ be given. We define two kinds of indices: the *index* of $\ka F$ relative to $C$ $$\Index {\ka F} C=\sum_{A\in \ka F}{\varphi}_C (A),$$ and the *index* of $F$ relative to $C$ $$\index F C =\tilde{\varphi}_C (F).$$ Clearly, if $F$ is a polybox and $\ka F$ is a suit for $F$, then $\tilde{\varphi}_C(F)=\bar{\varphi}_C(F)$ and by (\[ekst\]) both indices are equal. Let us set $$\ka C= \ka C_C:=\{C^\varepsilon: \varepsilon\in\zet_2^d\}\setminus\{\emptyset\}.$$ Obviously, $\ka C$ is a suit for $X$. We call it *simple*. Observe that $\ka C$ is proper if and only if $C$ is a proper box. We say that elements $A$ and $B$ of $\ka C$ *carry the same sign* if $(-1)^{|\varepsilon|}=1$ for the unique $\varepsilon$ such that $A^\varepsilon=B$. If $C\neq X$, then *carrying the same sign* is an equivalence relation with two equivalence classes. Any assignment of $+1$ to one of these classes and of $-1$ to the other is called an *orientation* of $\ka C$. Let $\ka C^+$ be the class to which $+1$ is assigned and $\ka C^-$ be the class to which $-1$ is assigned. Suppose now that the orientation of $\ka C$ is *induced* by $C$, that is, $C\in \ka C^+$. Let $$I=i(C):=\{i\in[d]: C_i\neq X_i\}.$$ Let $ n^+(\ka F, C)$ be the number of all those $A\in \ka F$ such that $A_I\in (\ka C^+)_I$ and, accordingly, let $n^-(\ka F, C)$ be the number of all those $A\in \ka F$ such that $A_I\in (\ka C^-)_I$. Immediately from the definition of the indices we obtain
\[ladunek1\] Let $X$ be a $d$-box and let $\ka F$ be a proper suit for a polybox $F$. For any $C\in \Pud X$, if $C\neq X$, then $$\label{ladunek2}
\index F C =\Index {\ka F} C= n^+(\ka F, C)-n^-(\ka F, C).$$ Moreover, $$\index F X=|\ka F|=|F|_0.$$
In the case of $C\in \pud X$ the meaning of (\[ladunek2\]) is particularly simple: it is the sum of all signs that come from those elements of $\ka C_C$ that belong to the suit $\ka F$ for $F$. In particular, this sum is independent of the choice of a suit for $F$.
\[zerot\] For each proper suit $\ka F$ for a $d$-box $X$ and each $C\in \Pud X$, if $C\neq X$, then $\Index {\ka F} C=0$. In particular, if $C$ is a proper box then the size of ${\ka F}\cap {\ka C_C}$ is even.
\[2kostki\] If $F$ and $G$ are two polyboxes in a $d$-box $X$, and $\{\zeta_i:
i\in [2^{|X|-d}]\}$ is a basis of $\add {X, \er}$, then $F=G$ if and only if $\bar{\zeta}_i(F)=\bar{\zeta}_i(G)$ for each $i\in[2^{|X|-d}]$.
Let $x:=(x_1,\ldots ,x_d)$ be an arbitrary element of $X$, and let $\bar{x}=\{x_1\}\times\cdots\times\{x_d\}$. Then $\bar{x} \in \ka O(X)$ and $\eta_{\bar{x}}\in \add {X,\er}$. Obviously, we have $[x\in F]=\eta_{\bar{x}}(F).$ Since $\zeta_i$, $i\in [2^{|X|-d}]$, is a basis, there are reals $\alpha_i(x)$ such that $\eta_{\bar{x}}=\sum_i \alpha_i(x)\zeta_i$. Thus, $$[x\in F]=\sum_i \alpha_i(x)\bar{\zeta}_i(F).$$ Our result follows now immediately, as the same equation holds true for $G$. $\square$
\[kryt1\] If $F$ and $G$ are two polyboxes in a $d$-box $X$, then $F=G$ if and only if $\index FC=\index GC$ for each $C\in \Pud X$.
Let $\ka B\subset \Pud X$ be as in Theorem \[bazad1\]. Recall that the functions $\varphi_C$, $C\in \ka B$, form an (orthogonal) basis of $\add{X,\er}$, and for each $C\in \Pud X$, $$\index F C =\bar{\varphi}_C (F).$$ Therefore, our theorem is a consequence of the preceding lemma.$\square$
Symmetric polyboxes {#symbox}
===================
Let $X$ and $Y$ be $d$-boxes. Suppose that for each $i\in [d]$ we are given a mapping $\varphi_i\colon\pud {X_i}\to\pud{Y_i}$ such that $$\varphi_i(X_i\setminus A)=Y_i\setminus \varphi_i(A),$$ whenever $A\in \pud {X_i}$. Then one can define the mapping $\Phi:=\varphi_1\times\cdots\times\varphi_d$ from $\pud X$ into $\pud Y$: $$\Phi (B)=\varphi_1(B_1)\times\cdots\times \varphi_d(B_d).$$ It is clear that $\Phi$ sends dichotomous boxes into dichotomous boxes. Moreover, if $A$, $B$ and $C$, $D$ are two twin pairs in $\pud X$ such that $A\cup B = C\cup D$, then $\Phi(A)$, $\Phi(B)$ and $\Phi(C)$, $\Phi(D)$ are two twin pairs as well, and $\Phi(A)\cup\Phi(B)=\Phi(C)\cup\Phi(D)$. In what follows, we shall refer to $\Phi$ as a mapping that *preserves dichotomies*.
\[rowy\] Given two $d$-boxes $X$, $Y$ and a mapping $\Phi\colon \pud X\to \pud Y$, that preserves dichotomies. If $\ka F$, $\ka G\subset \pud X$ are proper suits and $\bigcup \ka F=\bigcup \ka G$, then their images are also proper suits and $\bigcup \Phi(\ka F)=\bigcup\Phi(\ka G)$.
The proof is much the same as that of Proposition \[ind3\]. For each $y\in Y$, let us define $e_y\colon\pud X\to \er$ by the formula $e_y(A)=[y\in \Phi (A)]$. Since $\Phi$ preserves dichotomies, $e_y$ is additive. By Theorem \[add1\], $$\sum_{A\in \ka F} e_y(A)=\sum_{A\in \ka G}e_y(A).$$ Hence $\bigcup\Phi(\ka F)=\bigcup \Phi(\ka G)$.$\square$
By the above proposition, we can extend $\Phi$ to polyboxes. Namely, if $F$ is a polybox in $X$ and $\ka F$ is a suit for $F$, then we may define $\Phi(F)$ by the equation $$\Phi(F)=\bigcup \Phi(\ka F).$$ In particular, if $\varepsilon\in \zet_2^d$, then the polybox $F\naw \varepsilon\subseteq X$ given by $$F^{(\varepsilon)}=\bigcup_{A\in \ka F}A^\varepsilon$$ is well-defined. Observe that if $F$ is an improper box, then $F^{(\varepsilon)}$ does not have to coincide with already defined $F^\varepsilon$, as the latter set is empty if there is an $i\in [d]$ such that $F_i=X_i$ and $\varepsilon_i=1$, while $F\naw \varepsilon$ is non-empty.
\[commute0\] Given two $d$-boxes $X$, $Y$ and a mapping $\Phi\colon \pud X\to \pud Y$ that preserves dichotomies. Then for every polybox $F\subseteq X$ and $\varepsilon\in \zet_2^d$ $$\label{commute}
\Phi(F^{(\varepsilon)})=\Phi(F)^{(\varepsilon)}.$$
It suffices to observe that (\[commute\]) is satisfied when $F$ is a proper box.$\square$
\[suma00\] Let $U$ be an $n$-box, $f\colon U\to \zet_2$ and $V=\{u\in U: f(u)=1\}$. If for every $x\in U$ and every $i\in[n]$ $$\label{suma0}
\sum_{u\in U} f(u)[u_{i'}=x_{i'}]=0,$$ and $V\neq \emptyset$, then $|V|\ge 2^n$. Moreover, for every $v\in V$ there is some $y\in V$ such that $v_i\neq y_i$, $i=1,\ldots, n$.
Let us fix $v\in V$, and define $\gamma\colon V\to \{0,1\}^n$ by the formula $$\gamma(u)_i=[u_i\neq v_i].$$ It suffices to show that $\gamma$ is ‘onto’. Suppose that it is not. Then there is an element $\tau$ with minimal support among the members of $\{0,1\}^n\setminus \gamma (V)$. Let us choose an index $i$ such that $\tau_i=1$ (such $i$ has to exist as $\gamma(v)=0$). Let $\tau'$ be defined so that $\tau'_i=0$ and $\tau'_{i'}=\tau_{i'}$. Since $\tau'$ has its support strictly contained in that of $\tau$, it belongs to $\gamma(V)$. Therefore, there is an $x\in V$ such that $\tau'=\gamma(x)$. By (\[suma0\]), there has to exist $u\in V\setminus\{x\}$ for which $u_{i'}=x_{i'}$. By the definition of $\gamma$, and that of $\tau'$, we conclude that $\gamma(u)=\tau$, which is a contradiction. $\square$
\[wieza\] Let $F$ be a polybox in a $d$-box $X$ and let $\ka B\subset \Pud X$ intersect with each $\ka C_C$, $C\in \Pud X$, at exactly one element. Let $I\subseteq [d]$ be a set of size $n\ge 2$, and $$\ka E=\{C\in \ka B: i(C)=I,\, \index F C \equiv 1 \mod 2\}.$$ Suppose that for each $B\in \ka B$ $$\index F B\equiv 0 \mod 2,$$ whenever $i(B)\subset I$ and $|i(B)|=n-1$. Then $\ka E$ is empty or $|\ka E|\ge 2^n$. Moreover, if $\ka E$ is non-empty, then $n<d$ and for every $C\in \ka E$ there is a $D\in \ka E$ such that $D_i\not\in\{C_i, X_i\setminus C_i\}$ for each $i\in I$.
For each $i\in[d]$ let us pick ${\ka B}_i\subseteq \Pud {X_i}$ so that $\{A, X_i\setminus A\}\cap {\ka B}_i$ is a singleton for each $A\in \Pud {X_i}$. It is clear that it suffices to proceed with $\ka B$ defined as follows $$\ka B=\{ B\in \Pud X: B_i\in \ka B_i\,\, \text{for each $i\in [d]$}\},$$ as for every $B$ and $C\in \Pud X$ if $B\in \ka C_C$, then $$\varphi_B\equiv \varphi_C \mod 2.$$ (Observe that $\ka B$ can be thought of as an equicomplementary box, which was defined in Section \[dilabel\].) Let us order the elements of $I$: $i_1 < \ldots < i_n$. Let $\ka B^\circ=\{C\in \ka B: i(C)=I\}$ and $
U=\{(C_{i_1},\ldots , C_{i_n}): C\in \ka B^\circ \}.
$ Define $f\colon U\to \zet_2$ by the formula $$f(C_{i_1},\ldots , C_{i_n})=\index F C \mod 2,$$ where $C\in \ka B^\circ$. Let $V=\{(C_{i_1},\ldots ,C_{i_n}): C\in \ka E\}$.
It is clear that the function $f$ just defined satisfies the assumptions of Lemma \[suma00\] if and only if the following claim holds true.
**Claim** For every $C\in \ka B^\circ$ and $i\in I$ $$L:=\sum_{D\in \ka B^\circ}\index F D [D_{i'}=C_{i'}]\equiv 0 \mod 2.$$
To prove it, let us define $B\in \ka B$ so that $B_i=X_i$ and $B_{i'}=C_{i'}$. For $D\in \Pud X$, let $\ka D_D$ be defined by the following equivalence $$A\in \ka D_D \Leftrightarrow A\in \pud X\,\,\text{and there is an $E\in \ka C_D$ such that $E_{i(D)}=A_{i(D)}$.}$$
Let $\ka F$ be a proper suit for $F$. It follows from Proposition \[ladunek1\] that $$\index F D\equiv |\ka F\cap \ka D_D |\mod 2.$$ Moreover, since the sets $\ka D_D$, $D\in \ka B^\circ$, are pairwise disjoint and $$\bigcup\{\ka D_D: D\in \ka B^\circ, D_{i'}=C_{i'}\}=\ka D_B,$$ we get $$L\equiv\sum_{D\in \ka B^\circ} |\ka F\cap \ka D_D |[D_{i'}=C_{i'}]=|\ka F\cap\ka D_B|\equiv \index F B \mod 2.$$ But the latter number is even by the assumption.
Now, from Lemma \[suma00\] we conclude that $\ka E$ is either empty or has at least $2^n$ elements. If $\ka E$ were non-empty while $n=d$, then, again by Lemma \[suma00\], for any $G\in\ka E$ there would exist another box $H\in \ka E$ such that $G_i\neq H_i$ for each $i\in [d]$. This fact and the definition of $\ka E$ would imply that both $\ka C_G$ and $\ka C_H$ intersect $\ka F$, which is impossible: if $A\in \ka C_G$ and $B\in \ka C_H$, then $A$ and $B$ are not dichotomous, whereas the elements of the suit $\ka F$ are pairwise dichotomous.$\square$
Let $F$ be a polybox in a $d$-box $X$ and let $\varepsilon\in\zet_2^d$ be different from $0$. If $F^{(\varepsilon)}=F$, then $\index F B\equiv 0 \mod 2$ for every box $B\in \Pud X\setminus \{X\}$. If in addition the number $n:=|i(B)\cap \supp \varepsilon|$ is odd, then $\index F B=0$.
First we prove the second part of the theorem.
Let $\ka F$ be a proper suit for $F$. Since $F^{(\varepsilon)}=F$, we obtain $$\index F B =\sum_{A\in \ka F}\varphi_B(A)=\sum_{A\in \ka F}\varphi_B(A^\varepsilon).$$ It follows from the definitions of $\varphi_B$ and $n$ that $$\varphi_B(A^\varepsilon)=(-1)^n\varphi_B(A)=-\varphi_B(A).$$ Therefore, $\index F B=-\index F B$ and consequently $\index F B=0$.
To prove the first part, we begin by showing that $\index F B\equiv 0 \mod 2$ for any box $B$ for which $i(B)$ is a proper subset of $[d]$. To this end, let us fix $i\not\in i(B)$, and define a $d$-box $Y$ by the equations $Y_{i'}=X_{i'}$, $Y_i=\{0,1\}$. Now, choose $\ka B_i$ as in the proof of the preceding theorem and define $\Phi\colon \pud X\to \pud Y$ so that for every $A\in\pud X$, $\Phi(A)_{i'}=A_{i'}$ and $$\Phi(A)_i=\left\{\begin{array}{ll}
\{0\} & \text{if $A_i\in \ka B_i$,}\\
\{1\} & \text{otherwise.}
\end{array}\right.$$ Clearly, $\Phi$ preserves dichotomies. Let $C=\Phi(B)$. As is seen from the definition of $\Phi$, for each $A\in \pud X$, one has $\varphi_B(A)=\varphi_C(\Phi(A))$. Therefore, if $G=\Phi (F)$, then $$\label{zapis}
\index F B=\index G C.$$ Let $G_0=\{y\in G: y_i=0\}$ and $G_1=\{y\in G: y_i=1\}$. These two sets are polyboxes. By Proposition \[commute0\] and the fact that $F^{(\varepsilon )}=F$, we obtain $G^{(\varepsilon)}=G$. If $\varepsilon_i=0$, then $$\label{warstwy}
G_{\kappa}^{(\varepsilon )}=G_{\kappa},\,\, \text{for $\kappa=0,1$.}$$ If we ignore the $i$-th coordinate, which is constant for elements of both polyboxes $G_\kappa$, then we can think of them as polyboxes in a $(d-1)$-box. Thus, by induction and (\[warstwy\]), we conclude that $\index {G_\kappa} C \equiv 0 \mod 2$. This, together with (\[zapis\]), yields $$\label{dublet}
\index F B =\index G C =\index {G_0} C +\index {G_1} C\equiv 0 \mod 2.$$ If $\varepsilon_i=1$, then $G^{(\varepsilon)}=G$ implies $G_0^{(\varepsilon)}=G_1$ and $G_1^{(\varepsilon)}=G_0$. Hence $$\index {G_0} C\equiv \index {G_1} C \mod 2$$ and (\[dublet\]) holds true also in this case.
The fact that $\index F B\equiv 0 \mod 2$ when $i(B)=[d]$ follows from what has been proved before and Theorem \[wieza\].$\square$
Rigidity {#rikitiki}
========
\[znaki\] Let $\ka F\subset\Pud X\setminus\{X\}$ be a suit for a $d$-box $X$ and let $C\in \ka F$ be chosen so that $$k:=|i(C)|=\max\{|i(D)|\colon D\in \ka F\}.$$ For each $D\in \ka C_C$, let $\varepsilon (D)\in \zet_2^d$ be defined by the equation $D=C^{\varepsilon(D)}$. Then $$\sum_{D\in \ka C_C\cap\ka F} (-1)^{|\varepsilon(D)|}=0.$$
Observe first that for any $D\in \Pud X$, $\index D C=0$ if and only if $i(D)\not\supseteq i(C)$. If $D\in \ka F$ and $i(D)\supseteq i(C)$, then by the definition of $k$, $D\in \ka C_C$. In this case, by Proposition \[ladunek1\], $\index D C =(-1)^{|\varepsilon(D)|}2^{d-k}$. Since $C\neq X$, it follows from Proposition \[zerot\] that $\index X C=0$. Consequently, by Theorem \[sdisjoint\], we obtain $$0=\sum_{D\in \ka F} \index D C=
2^{d-k}\sum_{D\in \ka C_C\cap \ka F} (-1)^{|{\varepsilon (D)}|}.$$ $\square$
\[sztywnypal\] Let $F$ be a polybox in a $d$-box $X$, and $\ka F$ be a suit for $F$. Suppose that $$|\index F C|=|\ka F\cap \ka C_C|,$$ for each $C\in \pud X$. Then $\ka F$ is the only suit for $F$.
Let $X$ be a one dimensional box. Let $\ka E(X)\subseteq \pudd X $ be the family of all equicomplementary boxes. For every $A\in \pud X$, we define $\ka E A$ as the set of all those equicomplementary boxes $B\in \ka E(X)$ such that $A\in B$.
\[intermediolan\] If $X$ is a one dimensional box, $A^1,\dots,A^n$ belong to $\pud X$ and $A^i\not\in\{A^j, X\setminus A^j\}$ for every $i\neq j$, then $$|\ka E A^1 \cap\dots\cap \ka E A^n|=\frac{1}{2^{n}}|\ka E(X)|.$$
Let $\ka A=\{ \{A, X\setminus A\}\colon A\in \pud X\}$. In order to form an equicomplementary box we pick independently one element from each set belonging to $\ka A$. Let $x_1,\dots, x_n$ be different elements of $\ka A$. If for each $i\in [n]$ we fix one element of $x_i$, then the number of equicomplementary boxes which can be formed so that they contain these fixed elements is $\frac{1}{2^{n}}|\ka E(X)|$. Clearly, by our assumptions, the elements $x_i=\{A^i, X\setminus A^i\}$, $i\in [n]$, are different. $\square$
If $X$ is a $d$-box and $A\in\Pud X$, then we define $\breve{A}=\ka E A_1\times\cdots\times \ka E A_d$.
\[baran\] Let $X$ be a $d$-box and let $B$, $C$, $D\in \pud X$ be such that $C\neq D$, and $B$, $D$ are not dichotomous. Then $$\breve{B}\cap\breve{C}\neq \breve{B}\cap\breve{D}.$$
If it were true that $\breve{B}\cap\breve{C}= \breve{B}\cap\breve{D}$, then $$|\breve{B}\cap\breve{C}\cap\breve D|= |\breve{B}\cap\breve{D}|.$$ Therefore, $$\prod^d_{i=1}|\ka E B_i\cap\ka E C_i\cap \ka E D_i|=\prod^d_{i=1}|\ka E B_i\cap \ka E D_i|.$$ As $B$ and $D$ are not dichotomous, the latter product is different from zero. Thus, for each $i$, $$|\ka E B_i\cap\ka E C_i\cap \ka E D_i|=|\ka E B_i\cap \ka E D_i|.$$ Now, it follows from the preceding lemma that $\ka E C_i=\ka E D_i$, for each $i$, which readily implies $C=D$, a contradiction. $\square$
The mapping $\Phi\colon \pud X\to \pud {\breve{X}}$ given by $
\Phi (A)=\breve{A}
$ preserves dichotomies. Suppose that there are two different suits $\ka F$ and $\ka G$ for $F$. Then there is a $B\in \ka G\setminus \ka F$. Moreover, since by Proposition \[rowy\] $\bigcup \Phi(\ka G)=\bigcup \Phi (\ka F)$, we deduce that $\breve{B}\subset \bigcup \Phi(\ka F)$. The set $\breve{B}$ is a $d$-box and $\ka H=\{\breve{B}\cap\breve {A}\colon A\in \ka F\}\setminus \{\emptyset\}$ is a (possibly improper) suit for $\breve{B}$. Clearly, $\breve{B}\not\in \ka H$. By Lemma \[znaki\], there is a $C\in \ka H$ such that $$\label{dupa}
\sum_{D\in \ka C_C\cap \ka H} (-1)^{|\varepsilon (D)|}=0,$$ where we interpret $\ka C_C$ as a subfamily of $\Pud {\breve{B}}$. In accordance with this interpretation, for every $\varepsilon\in \zet_2^d$ and each $i\in [d]$, $$(C^\varepsilon)_i=
\left\{
\begin{array}{ll}
C_i & \textrm{if $\varepsilon_i=0$,}\\
\breve{B}_i\setminus C_i & \textrm{if $\varepsilon_i=1$.}
\end{array}
\right.$$ Let $U\in \ka F$ be chosen so that $C=\breve{B}\cap \breve{U}$. Observe that $$D=C^{\varepsilon (D)}= \breve{B}\cap (\breve{U})^{\varepsilon(D)},$$ for each $D\in \ka C_C\cap \ka H$. As $\Phi$ preserves dichotomies, by Proposition \[commute0\], we get $D=\breve{B}\cap \Phi(U^{\varepsilon(D)})$. Since $D\in \ka H$, there is a $K\in \ka F$ for which one has $D=\breve{B}\cap\breve{K}$. From Lemma \[baran\] it follows that $K=U^{\varepsilon (D)}$. Hence $\{U^{\varepsilon (D)}:D\in \ka C_C\cap \ka H\}\subseteq \ka F$. Consequently, by (\[dupa\]) and the definition of the index, we deduce that $$|\index F U|=|\Index {\ka F} U |<|\ka C_U\cap \ka F|,$$ which contradicts the assumption. $\square$
Modules
=======
Given a non-empty set $Y$, we denote by $\zet Y$ the free $\zet$-module generated by $Y$. Every function $f\in \er^Y$ extends uniquely to the $\zet$-linear function $f^\dag\colon\zet Y\to \er$. Let now $X$ be a $d$-box. Let $E,F\in \pud X$ and $G,H\in \pud X$ be any two twin pairs such that $E\cup F= G\cup H$. If $f\in \add{X,\er}$, then by the definition of additive mappings one has $
f^\dag(E+F-G-H)=0.
$ Let $n(X)$ be the submodule of $\modgen X$ generated by the elements $E+F-G-H$, where $E,F$ and $G,H$ are as described above. Clearly, for each $f\in \add{X,\er}$, $n(X)$ is contained in the set of zeros of $f^\dag$. We show that $n(X)$ is the set of common zeros of the functions $f^\dag$, $f\in \add{X,\er}$:
f^\dag(E+F-G-H)=0.
$ Let $n(X)$ be the submodule of $\modgen X$ generated by the elements $E+F-G-H$, where $E,F$ and $G,H$ are as described above. Clearly, for each $f\in \add{X,\er}$, $n(X)$ is contained in the set of zeros of $f^\dag$. We show that $n(X)$ is the set of common zeros of the functions $f^\dag$, $f\in \add{X,\er}$:
\[zera\] For every $x\in \modgen X\setminus n(X)$ there is an $f\in \add{X,\er}$ such that $f^\dag(x)\neq 0$.
Suppose it is not true. Then one could find the smallest integer $d$ for which there are a $d$-box $X$, an element $x\in \modgen X\setminus n(X)$ which is a common zero of the functions $f^\dag$, $f\in \add{X,\er}$. Fix a one dimensional equicomplementary box $\ka B\subset \pud {X_d}$ and $D\in \ka B$. The element $x$ can be expressed as follows $$x=\sum_{E\colon E_d\in \ka B} k_EE+\sum_{F\colon F_d\not\in \ka B} k_FF,$$ where $E, F\in \pud X$ and $k_E, k_F$ are integers. Observe that for each $F$ one has $$F\equiv F_{d'}\times D + F_{d'}\times (X_d\setminus D)- F_{d'}\times(X_d\setminus F_d)\mod n,$$ where $n=n(X)$. If we now replace each $F$ by the expression on the right, then the formula for $x$ can be rewritten as follows $$x\equiv\sum_{E\colon E_d\in \ka B} l_EE+\sum_{G\in \pud {X_{d'}}} m_G(G\times D +G\times (X_d\setminus D)) \mod n,$$ where $l_E, m_G$ are properly chosen integers. Fix an element $A\in \ka B\setminus\{D\}$, then for each $g\in \add{X_{d'},\er}$ we get $$0=(g\otimes \varphi_{A})^\dag(x)=\sum_{E\colon E_d=A} l_E g(E_{d'}),$$ where $\varphi_A$ is as defined in (\[fione\]). Let $u=\sum_{E\colon E_d=A} l_E E_{d'}$; therefore, $u$ is a common zero of all mappings $g^\dag$, $g\in \add{X_{d'},\er}$. Now, it follows from the assumption on the minimality of $d$ that $u\in n(X_{d'})$. Hence $$u\otimes A=\sum_{E\colon E_d=A}l_EE\in n,$$ and to each $G\in\pud {X_{d'}}$ there correspond integers $p_G$, $r_G$ such that $$x\equiv \sum_{G\in \pud {X_{d'}}} p_G(G\times D) + r_G(G\times (X_d\setminus D))\mod n.$$ Denote the element on the right by $y$ and again take $g\in \add{X_{d'}, \er}$. We have $$0= (g\otimes \chi_{\ka B})^\dag(y)=\sum_{G\in \pud {X_{d'}}} p_Gg(G).$$ By the same argument as before, we deduce that $\sum_{G\in {\pud {X_{d'}}}} p_G(G\times D)\in n$. Similarly, $$0= (g\otimes(1- \chi_{\ka B}))^\dag(y)=\sum_{G\in {\pud {X_{d'}}}} r_Gg(G),$$ and $\sum_{G\in \pud {X_{d'}}} r_G(G\times(X_d\setminus D))\in n$. Hence $x\in n$, which is a contradiction.
$\square$
Let $\addb {X, \er}$ consist of all functions $f\colon\Pud X\to \er$ such that for every twin pair $A$, $B\in \Pud X$, $$f(A\cup B)=f(A)+f(B).$$
\[equiv\] If $f\in \addb {X,\er}$, then there is a function $g\in\add {X,\er }$ such that $f=\bar{g}|\Pud X$.
Let $C\in \Pud X$. If $C$ is not proper, then there is an index $i\not\in i(C)$. Let us choose a twin pair $A$, $B\in \Pud X$ so that $C_{i'}=B_{i'}=A_{i'}$. Then by the fact that $f\in \addb {X,\er }$ we have $f(C)=f(A)+f(B).$ Observe that $i(A)$ and $i(B)$ are both of greater size than $i(C)$. If $A$ and $B$ are not already proper boxes, then we split up each of them into a twin pair. We proceed this way till we obtain a proper suit $\ka F$ for $C$. Clearly, for this suit we have $$f(C)=\sum_{F\in \ka F} f(F).$$ Since $g=f|\pud X$ belongs to $\add {X,\er}$, the latter expression for $f(C)$ shows that $f=\bar{g}|\Pud X$.$\square$
In the remainder of this paper, any element from $\add{X,\er}$ and the corresponding element from $\addb{X,\er}$ will be denoted by the same symbol.
Let us denote by $N(X)$ the submodule of $\Modgen X$ spanned by the elements of the form $(A\cup B)-A-B$, where $A,B\in \Pud X$ run over all twin pairs.
\[modulo\] $\quad\modgen X+ N(X)=\Modgen X.$
It suffices to prove that for each $C\in \Pud X$ there is a $y\in \modgen X$ such that $C-y\in N(X)$. We proceed by induction with respect to $k=d-|i(C)|$. If $k=0$, then $C$ is a proper box and we may take $y=C$. If $k>0$, let us choose a twin pair $A$, $B\in \Pud X$ so that $C=A\cup B$. As $|i(A)|$ and $|i(B)|$ are equal to $|i(C)|+1$, it follows by the induction hypothesis that there are $y'$ and $y''$ in $\modgen X$ such that $A-y'$ and $B-y''$ are both in $N(X)$. Since in addition $C-A-B\in N(X)$, we obtain that $$C-(y'+y'')\in N(X),$$ which completes the proof.$\square$
\[rzut\] Let $\ka F\subset \pud X$ be an equicomplementary box. Let $\ka B\subset \Pud X$ be defined so that $\ka B_i=\ka F_i\cup\{X_i\}$, whenever $i\in[d]$. For any $D\in \ka B$, let $\tau_D\in \add {X, \er}$ be defined so that $\tau_{D}=\tau_{D_1}\otimes\cdots\otimes\tau_{D_d}$ and $$\tau_{D_i}=[D_{i}\in \ka F_i]\varphi_{D_i}+[D_i=X_i](1-\chi_{\ka F_i}),$$ whenever $i\in[d]$. Then $\Modgen X=N(X)\oplus \zet\ka B$. Moreover, the mapping $P\colon \Modgen X\to \zet\ka B$ given by the formula $$P(x)=\sum_{D\in \ka B}{\tau}^\dag_D (x)D$$ is a projection onto $\zet\ka B$, $\ker P=N(X)$ and $P(\modgen X)=\zet\ka B$.
Since ${\tau}^\dag_D$, $D\in \ka B$, are $\zet$-linear, $P$ is $\zet$-linear as well. Thus, it suffices to show that: $(1)$ $P(A)=A$ for $A\in \ka B$; $(2)$ $\ker P=N(X)$; $(3)$ $P(\modgen X)=\zet\ka B$. In order to prove $(1)$, observe first that this claim is easily seen for $d=1$. Therefore, we may assume that $d>1$. By the definition of $P$, one can write $$P(A)=\sum_D{\tau}_{D_{d'}}(A_{d'}){\tau}_{D_{d}}(A_d)\,D_{d'}\times D_d=u\otimes v,$$ where $u=\sum_{E\in\ka B_{d'}}{\tau}_E(A_{d'})E$, $v= \sum_{F\in \ka B_{d}}{\tau}_F(A_{d})F$. By induction, $u=A_{d'}$, while $v=A_d$. Consequently, $P(A)=A$.
From Lemma \[modulo\] it follows that for each $x\in \Modgen X$ there is a $y\in \modgen X$ such that $x\equiv y (\operatorname{mod} N(X))$. Thus, by the definition of $P$, we have $P(x)=P(y)$, which proves $(3)$.
To prove $(2)$ assume in addition that $x\in \ker P$. Then $y\in \ker P$ as well. By the definition of $P$, it means that $\tau_D^\dag(y)=0$ for each $D\in \ka B$. By Lemma \[bazad3\], the system $\tau_D$, $D\in \ka B$, is a basis of $\add {X, \er}$. Thus, combining this fact with Proposition \[zera\] leads to the conclusion $y\in n(X)$. Since $n(X)\subset N(X)$, we obtain $x\in N(X)$. $\square$
\[kryt2\] Let $\ka F$, $\ka G\subset \pud X$ be two suits. Theorem \[kryt1\] can be rephrased so that it can be used to decide whether $\ka F$ and $\ka G$ are suits for the same polybox: it is the case if and only if $\Index {\ka F} C=\Index {\ka G} C$, for every $C\in \Pud X$. This criterion relates to the system $\varphi_C$, $C\in \Pud X$. As indicated by Lemma \[2kostki\], we can replace the latter system by any basis of $\add {X, \er}$, for example, by $\tau_D$, $D\in \ka B$, where $\ka B$ is as in Theorem \[rzut\]. In fact, this theorem gives us a simple computational method to verify whether two suits determine the same polybox. To describe this method, we shall identify, as we tacitly have already done, $\bigotimes_{i=1}^d \Modgen{{X_i}}$ with $\Modgen X$, by letting $B\in\Pud X$ correspond to $B_1\otimes\cdots\otimes B_d$, and extending this correspondence by linearity. Let $P_i: \Modgen{{X_i}}\to \zet\ka B_i$ be the projection with the kernel $N(X_i)$, for each $i\in [d]$. It can be seen that $P=P_1\otimes\cdots\otimes P_d$ (see the next remark). By Theorem \[rzut\] and Lemma \[2kostki\], two, possibly improper, suits $\ka F$ and $\ka G$ define the same polybox if and only if $$\sum_{A\in \ka F} P_1(A_1)\otimes\cdots\otimes P_d(A_d)=\sum_{A\in \ka G} P_1(A_1)\otimes\cdots\otimes P_d(A_d).$$ Observe that $P_i(A_i)$, $i\in[d]$, are easily calculated $$P_i(A_i)=
\begin{cases}
A_i & \text{if $A_i\in {\ka B}_i$},\\
X_i-(X_i\setminus A_i) & \text{if $A_i\not\in {\ka B_i}$}.
\end{cases}$$ Therefore, in order to decide if $\ka F$ and $\ka G$ are suits for the same polybox it suffices to write the sum $\sum_{A\in \ka F} P_1(A_1)\otimes\cdots\otimes P_d(A_d)$, where $P_i(A_i)$ are as described above, and expand it to get an expression of the form $\sum_{C\in \ka B}\alpha_CC$, then repeat the same for $\ka G$. If the resulting expressions coincide, then the suits define the same polybox, otherwise they do not.
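The verification procedure just described is easy to implement for finite boxes. The following Python sketch (the data structures and the particular choice of $\ka B_i$ — the subsets containing a distinguished element, together with $X_i$ — are our own illustrative choices) expands $\sum_{A\in \ka F} P_1(A_1)\otimes\cdots\otimes P_d(A_d)$ into a dictionary of integer coefficients indexed by elements of $\ka B$ and compares the results for two suits:

```python
from itertools import product
from collections import Counter

def expand(suit, X, B):
    """Coefficients of sum_{A in suit} P_1(A_1) x ... x P_d(A_d) in the basis
    given by B, where P_i(A_i) = A_i if A_i lies in B_i, and
    P_i(A_i) = X_i - complement(A_i) otherwise."""
    coeffs = Counter()
    for A in suit:
        factors = []
        for Ai, Xi, Bi in zip(A, X, B):
            factors.append([(Ai, 1)] if Ai in Bi else [(Xi, 1), (Xi - Ai, -1)])
        for combo in product(*factors):
            key = tuple(f for f, _ in combo)
            sign = 1
            for _, s in combo:
                sign *= s
            coeffs[key] += sign
    return {k: c for k, c in coeffs.items() if c != 0}

def same_polybox(F, G, X, B):
    """Two suits determine the same polybox iff their expansions coincide."""
    return expand(F, X, B) == expand(G, X, B)

# Small example: X = {0,1,2} x {0,1,2}; B_i = subsets containing 0, plus X_i.
X1 = frozenset({0, 1, 2})
X = (X1, X1)
B1 = {frozenset({0}), frozenset({0, 1}), frozenset({0, 2}), X1}
B = (B1, B1)

def simple_suit(A):
    return [tuple(Ai if e == 0 else Xi - Ai for Ai, Xi, e in zip(A, X, eps))
            for eps in product((0, 1), repeat=len(X))]

F = simple_suit((frozenset({0}), frozenset({0, 1})))
G = simple_suit((frozenset({1}), frozenset({2})))
print(same_polybox(F, G, X, B))   # True: both are suits for the whole box X
```

In the example the two simple suits both partition the whole box $X$, so their expansions coincide.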
Let us set $R_i^0=N(X_i)$ and $R_i^1=\zet {\ka B_i}$, $i\in [d]$. Then $\Modgen{{X_i}}=R_i^0\oplus R_i^1$ and $$\Modgen X=\bigoplus_{\varepsilon\in\zet_2^d} R_1^{\varepsilon_1}\otimes\cdots\otimes R_d^{\varepsilon_d}
=\zet{\ka B}\oplus\bigoplus_{\varepsilon\neq (1,\ldots ,1)} R_1^{\varepsilon_1}\otimes\cdots\otimes R_d^{\varepsilon_d}.$$ Now, it follows from Theorem \[rzut\] that $$N(X)= \bigoplus_{\varepsilon\neq (1,\ldots ,1)}R_1^{\varepsilon_1}\otimes\cdots\otimes R_d^{\varepsilon_d}.$$ This equation can be the launching point for an alternative approach to our theory. It seems to be even simpler than ours but at the same time less natural. There is yet another approach which we found at the beginning of our investigations. We are going to publish it elsewhere.
Words
=====
Let $S$ be a non-empty set called an *alphabet*. The elements of $S$ will be called *letters*. A permutation $s\mapsto s'$ of the alphabet $S$ such that $s''=(s')'=s$ and $s'\neq s$ is said to be a *complementation*. Each sequence of letters $s_1\cdots s_d$ is called a *word* of length $d$. The set of all words of length $d$ is denoted by $S^d$. Two words $s_1\cdots s_d$ and $t_1\cdots t_d$ are *dichotomous* if there is an $i\in[d]$ such that $s'_i=t_i$.
In connection with applications to cube tilings, it is suitable to consider $d$-boxes of arbitrary cardinality. From now on, $X$ is a $d$-box if $X=X_1\times\cdots\times X_d$ and $|X_i|>2$ for each $i\in [d]$. Definitions of a proper box, a suit, a polybox etc. remain unchanged. Suppose now that for each $i\in[d]$, we have a mapping $f_i:S\to \pud {X_i}$ such that $f_i(s')=X_i\setminus f_i(s)$. Then we can define the mapping $f\colon S^d\to \pud X$ by $$f(s_1\cdots s_d)=f_1(s_1)\times\cdots\times f_d(s_d).$$ There is an obvious parallelism between $f$ and the mappings that preserve dichotomies. Therefore, we shall refer to $f$ as a mapping that *preserves dichotomies* as well. If $W\subseteq S^d$, then $f(W)=\{f(w)\colon w\in W\}$ is said to be a *realization* of the set of words $W$. The realization is said to be *exact* if for each pair of words $v=s_1\cdots s_d$, $w=t_1\cdots t_d$ in $W$, if $s_i\not\in \{t_i, t'_i\}$, then $f_i(s_i)\not\in\{f_i(t_i), X_i\setminus f_i(t_i)\}$.
If $W$ consists of pairwise dichotomous words, then we call it a *(polybox) genome*. Each realization of a genome is a proper suit. Two genomes $V$ and $W$ are *equivalent* if the realizations $f(W)$ and $f(V)$ are suits of the same polybox for each mapping $f$ that preserves dichotomies. Now we collect several criteria of the equivalence. Two of them have been already discussed in previous sections, where they are expressed in terms of suits.
We begin with the criterion which is a consequence of Theorem \[rzut\] and is described in detail in Remark \[kryt2\]. Let us expand the alphabet $S$ by adding an extra element $*$. This new set of symbols will be denoted by $*S$. Let $S_+$ and $S_-$ be arbitrary subsets of $S$ satisfying the following equations $$S_-=\{s'\colon s\in S_+\}=S\setminus S_+.$$ Let $*S_+=S_+\cup \{*\}$. Let us consider the ring $\zet[*S]$ over $\zet$ freely generated by $*S$. Let us identify each $s\in S_-$ with $*-s'$, and denote the ring which we obtain by this identification from $\zet[*S]$ by $R$. It is clear that each word $v\in S^d$ can be expanded in $R$ so that it is written in a unique way in the form of an element $v_+$ of the free module $\zet{*S_+}\subset R$. For a finite $V\subset S^d$, we define $V^\oplus \in \zet{*S_+}$ by the equation $V^\oplus =\sum_{v\in V} v_+$.
\[equigenom1\] Let $V$, $W\subset S^d$ be two genomes. Then $V$ and $W$ are equivalent if and only if $V^\oplus =W^\oplus$.
For each $u\in (*S)^d$, let the mapping $\varphi_u\colon S^d\to \er$ be defined by the formula $$\varphi_u(w)=\prod_{i=1}^d ([s_i=t_i]-[s_i=t'_i]+[s_i=*]),$$ where we let $u=s_1\cdots s_d$, $w=t_1\cdots t_d$. We can define the index for genomes being the counterpart of the index for suits $$\Index W u=\sum_{w\in W} \varphi_u(w).$$
Theorem \[kryt1\] leads to the following result.
\[equigenom2\] Two genomes $V$, $W\subset S^d$ are equivalent if and only if for each $u\in (*S)^d$ $$\Index V u =\Index W u.$$
Let us remark that in order to check the equivalence we do not have to consider all $u$; in fact, it suffices to restrict ourselves to $u\in (*S_+)^d$, or even to those words $u=s_1\cdots s_d$ such that for each $i\in[d]$ at least one of the letters $s_i$, $s_i'$ appears at the $i$-th place of a certain word from $V\cup W$.
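As a small illustration, the Python sketch below (the toy alphabet, its dictionary-encoded complementation and the function names are our own choices) computes the index of a genome and applies the criterion of Theorem \[equigenom2\]; following the remark above, it only ranges over words built from the letters occurring in $V\cup W$ together with the symbol $*$:

```python
from itertools import product

def phi(u, w, comp):
    """phi_u(w) = prod_i ([s_i = t_i] - [s_i = t_i'] + [s_i = '*'])."""
    val = 1
    for s, t in zip(u, w):
        val *= 1 if s == '*' else int(s == t) - int(s == comp[t])
    return val

def genome_index(W, u, comp):
    return sum(phi(u, w, comp) for w in W)

def equivalent(V, W, comp):
    """Equivalence test: the indices of V and W must agree for every u."""
    d = len(next(iter(V)))
    letters = {t for w in V | W for t in w} | {'*'}
    return all(genome_index(V, u, comp) == genome_index(W, u, comp)
               for u in product(letters, repeat=d))

# Toy alphabet {a, A, b, B} with complementation a <-> A, b <-> B.
comp = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
V = {('a', 'b'), ('a', 'B'), ('A', 'b'), ('A', 'B')}
W = {('a', 'a'), ('a', 'A'), ('A', 'a'), ('A', 'A')}
print(equivalent(V, W, comp))             # True: both genomes have 2^d words
print(equivalent(V, {('a', 'b')}, comp))  # False
```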
Let $W\subset S^d$ be a genome and let $v\in S^d$ be a word. We say that $v$ is *covered* by $W$, and write $v\sqsubseteq W$, if $f(v)\subseteq \bigcup f(W)$ for every mapping $f$ that preserves dichotomies. Let us define $g\colon S^d\times S^d\to \zet$ by the formula $$g(v,w)=\prod^d_{i=1}(2[s_i=t_i]+[s_i\not\in\{t_i, t'_i\}]).$$
Let $$E(S)=\{B\subset S\colon |\{s,s'\}\cap B|=1, \text{whenever $s\in S$}\}.$$ Similarly as in Section \[rikitiki\], for each letter $s\in S$, let us put $E s=\{B\in E(S)\colon s\in B\}$. Lemma \[intermediolan\] can be rephrased as follows
\[rikitiki2\] If $S$ is finite, $s_1,\ldots , s_n\in S$ and $s_i\not\in\{s_j, s'_j\}$ for every $i\neq j$, then $$|E s_1 \cap\dots\cap E s_n|=\frac{1}{2^{n}}|E(S)|.$$
For each word $u=s_1\cdots s_d\in S^d$, let us define $\breve{u}=Es_1\times\cdots\times Es_d$. It is clear that the mapping $S^d\ni u\mapsto \breve{u}$ preserves dichotomies.
\[Juventus\] Let $v\in S^d$ and let $W\subset S^d$ be a genome. Then $\sum_{w\in W} g(v,w)\le 2^d$. Moreover, the following statements are equivalent:
- $v \sqsubseteq W$,
- $\breve{v}\subseteq \bigcup_{w\in W}\breve{w}$,
- $\sum_{w\in W} g(v,w)= 2^d$.
Since the number of words involved is finite, the number of letters that constitute these words is finite. Therefore, we may restrict ourselves to a finite subset of the alphabet $S$. (Such a restriction will influence the sets $\breve{u}$, $u\in W\cup \{v\}$, as they depend on the alphabet, but it does not influence the relation (2).) Observe that by Lemma \[rikitiki2\] and the definition of $g$ one has $$|\breve{v}\cap\breve{w}|=\frac{g(v,w)|\breve{v}|}{ 2^d}.$$ Since $\breve{w}$, $w\in W$, form a suit, we get $$\label{brewa}
|\breve{v}|\ge |\breve{v}\cap\bigcup_{w\in W}\breve{w}|=\sum_{w\in W}\frac{g(v,w)|\breve{v}|}{2^d},$$ which immediately implies the first part of our theorem. If $v\sqsubseteq W$, then $\breve{v}\subseteq \bigcup_{w\in W}\breve{w}$ and (\[brewa\]) becomes an equation. Therefore, it remains to show, that if $\sum_{w\in W}g(v,w)=2^d$, then $v\sqsubseteq W$. Equivalently, if $\breve{v}\subseteq \bigcup_{w\in W}\breve{w}$, then $v\sqsubseteq W$. Suppose that $v\not\sqsubseteq W$. Then there is a $d$-box $X$ and a mapping $f\colon S^d\to \pud X$ that preserves dichotomies such that $f(v)\not\subseteq \bigcup f(W)$.
**Claim** If $Y$ is a 1-box and $h\colon S\to \pud Y$ preserves complementarity, that is, $h(s')=Y\setminus h(s)$ for each $s\in S$, then for every $y\in Y$ there is a $C\in E(S)$ such that $$[y\in h(s)]=[s\in C].$$
It suffices to put $C=\{s\colon [y\in h(s)]=1\}$.
Suppose that $x\in f(v)\setminus \bigcup f(W)$. By the Claim and the definition of $f$, there is a $b\in E(S)^d$, $b=(B_1,\ldots, B_d)$, such that for each $u\in S^d$, $u=s_1\cdots s_d$, $$[x\in f(u)]=\prod_{i=1}^d[x_i\in f_i(s_i)]=\prod_{i=1}^d[u_i\in B_i]=[b\in \breve{u}].$$ Therefore, $b\in \breve{v}\setminus \bigcup_{w\in W}\breve{w}$ and consequently $\breve{v}\not\subseteq \bigcup_{w\in W}\breve{w}$. $\square$
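The covering test of Theorem \[Juventus\] is equally easy to apply in practice. A minimal Python sketch (toy alphabet and genome chosen by us for illustration):

```python
def g(v, w, comp):
    """g(v, w) = prod_i (2[s_i = t_i] + [s_i not in {t_i, t_i'}])."""
    val = 1
    for s, t in zip(v, w):
        val *= 2 if s == t else (0 if s == comp[t] else 1)
    return val

def covered(v, W, comp):
    """v is covered by the genome W iff sum_w g(v, w) equals 2^d."""
    return sum(g(v, w, comp) for w in W) == 2 ** len(v)

comp = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b', 'c': 'C', 'C': 'c'}
W = {('a', 'b'), ('a', 'B'), ('A', 'b')}   # a genome with 3 words
print(covered(('c', 'b'), W, comp))   # True:  the g-values sum to 4 = 2^2
print(covered(('c', 'B'), W, comp))   # False: the g-values sum to 2
```

Here the word `('c', 'b')` is covered because any realization of the letter `c` is contained in the union of the realizations of `a` and its complement, whereas `('c', 'B')` is not covered.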
Let $V$, $W\subset S^d$ be two genomes. We say that $W$ *covers* $V$, and write $V\sqsubseteq W$, if each $v\in V$ is covered by $W$.
\[rownopod\] Two genomes $V$ and $W$ contained in $S^d$ are equivalent if and only if $W$ covers $V$ and $|V|=|W|$.
The implication ‘$\Rightarrow$’ is obvious.
We may assume that $S$ is finite. Then $|\breve{u}|=2^{(|S|-2)d/2}$ for each $u\in S^d$. Since $|V|= |W|$, we get $$|\bigcup_{v\in V}\breve{v}|=2^{(|S|-2)d/2}|V|=|\bigcup_{w\in W}\breve{w}|.$$ This equation together with the assumption $V\sqsubseteq W$ gives us $\bigcup_{v\in V}\breve{v}= \bigcup_{w\in W}\breve{w}$. Therefore, for each $w\in W$ we have $\breve{w}\subseteq \bigcup_{v\in V}\breve{v}$ which, by Theorem \[Juventus\], implies $w\sqsubseteq V$.$\square$
By analogy, we can define binary codes for $S^d$. One can deduce from Proposition \[rownopod\] that $V$ and $W$ are equivalent if and only if $\beta (V)=\beta (W)$ for each binary code $\beta$.
For $w\in S^d$, $w=s_1\cdots s_d$, and $\varepsilon \in \zet_2^d$, define $w^\varepsilon=s_1^{\varepsilon_1}\cdots s_d^{\varepsilon_d}$ in the same way as it has been done for boxes. Denote by $C_w$ the set $\{w^\varepsilon\colon \varepsilon\in \zet_2^d\}$. An inspection of the argument used in the proof of Theorem \[sztywnypal\] suggests the following
\[niejednoznacznosc\] Let $v\in S^d$, and $W\subset S^d$ be a genome. If $v\sqsubseteq W$ and $v\not\in W$, then there is a $w\in W$ such that $$|\Index W w|<|C_w\cap W|.$$
Let $C=C_w$ for some $w\in S^d$. As in the case of simple suits, we say that $u$, $v\in C$ carry the same sign if $(-1)^{|\varepsilon|}=1$ for the only $\varepsilon$ such that $u^\varepsilon=v$. The orientation $C^-$, $C^+$ of $C$ is defined accordingly (cf. Section \[indices\]). Suppose that $W$ is a genome and suppose that for each $w\in W$ we have chosen an orientation $C_w^+$, $C_w^-$ of $C_w$. These orientations define the following decomposition of $W$: $W^+=W\cap \bigcup_{w\in W}C^+_w$, $W^-=W\cap \bigcup_{w\in W}C^-_w$. We call it *induced*.
\[rikitikigenom\] If $W$, $V\subset S^d$ are equivalent genomes, $W^+$, $W^-$ is an induced decomposition of $W$ and $W^+\subseteq V$, then $W=V$.
Let $v\in V\setminus W^+$. Then $v\sqsubseteq W^-$. If $v\not\in W^-$, then by the preceding theorem there is a $w\in W^-$ such that $
|\Index {W^-} w|<|C_w\cap W^-|,
$ which contradicts the definition of $W^-$. $\square$
Rigidity of cube tilings
========================
\[prpr\] Let $W\subset S^d$ be a genome. Let $X$ be a $d$-box whose cardinality can be infinite. If $f\colon S^d\to \pud X$ preserves dichotomies, then the realization $f(W)$ is a proper suit for $X$ if and only if $|W|=2^d$.
For each $X_i$ one can find a finite subset $Y_i$ such that $Y$ is a $d$-box and $f(w)\cap Y\in \pud Y$, whenever $w\in W$. If $f(W)$ is a proper suit for $X$, then $\ka F =\{f(w)\cap Y\colon w\in W\}$ is a proper suit for $Y$. Therefore, $\ka F$ has to contain $2^d$ elements (see Section \[minimal\], also [@GKP Theorem 2]). On the other hand, $|\ka F|=|W|.$
Suppose now that there are $W$ and $f$ such that $\bigcup f(W)\neq X$. Let $x\in X\setminus \bigcup f(W)$. Then there is a finite $d$-box $Y$ such that $x\in Y$ and $\ka F =\{f(w)\cap Y\colon w\in W\}\subset \pud Y$ is a proper suit. Since $|W|=2^d$, then, by Theorem \[krakow\], $\ka F$ is a partition of $Y$. Thus $x\in f(w)$ for some $w$, which is a contradiction.$\square$
Our next lemma follows immediately from the preceding proposition and the definition of the cover relation $\sqsubseteq$.
\[stoliczek\] If $W\subseteq S^d$ is a genome that consists of $2^d$ elements, then $v\sqsubseteq W$ for each $v\in S^d$.
\[laga\] Let $W\subset S^d$ be a genome that consists of $2^d$ elements. Let $W^+$ and $W^-$ be an induced decomposition of $W$. If $\{v\}\cup W^+$ is a genome, then $v\in W^-$.
By the above lemma and the fact that $\{v\}\cup W^+$ is a genome, we deduce that $v\sqsubseteq W^-$. The conclusion follows now immediately from the definition of $W^-$ and Theorem \[niejednoznacznosc\]. $\square$
This result verifies the *rigidity conjecture for 2-extremal cube tilings* of Lagarias and Shor [@LS2] which states that $W^+$ determines $W^-$ uniquely if $|C_w\cap W|=2$ for each $w\in W$.
\[minimini\] If $X$ is a $d$-box of arbitrary cardinality and $\ka F$ is a partition of $X$ into proper boxes such that $|\ka F|=2^d$, then $\ka F$ is a suit.
Fix two elements $A$, $B\in \ka F$. Pick a finite box $Y\subseteq X$ so that $\ka F_Y=\{ D\cap Y\colon D\in \ka F\}\subset \pud Y$. By definition, $\ka F_Y$ is a partition of $Y$ which consists of $2^d$ elements. By (\[vol0\]) and Theorem \[vol00\], $\ka F_Y$ is a minimal partition of $Y$. It follows now from Theorem \[minor\] that it is a suit. Hence there is an $i\in [d]$ such that $A_i\cap Y_i=Y_i\setminus B_i$. It is clear that if $A$ and $B$ were not dichotomous, then for properly chosen $Y$ their intersections with $Y$ would not be dichotomous in $Y$. $\square$
Let $T$ be a subset of $\mathbb{R}^d$. The family $[0,1)^d+T=\{I_t\colon t\in T\}$, where $I_t=[0,1)^d+t$, is a *cube tiling* of $\mathbb{R}^d$ if $\mathbb{R}^d=\bigcup_{t\in T}I_t$ and $I_u\cap I_v=\emptyset$ for distinct $u,v \in T$. This cube tiling is said to be $2\mathbb{Z}^d$-*periodic* if $T$ is $2\mathbb{Z}^d$-periodic.
Consider the alphabet $S=(-1,1]$ with complementation $s\mapsto s'$ defined by the equation $|s-s'|=1$. We write the elements of $S^d$ as vectors: $t=(t_1,\ldots, t_d)$. Let $J_t=I_t\cap [0,1]^d$. Define $$W=\{t\in T\colon J_t\neq\emptyset\}\subset S^d.$$ It is easily seen that each box from $\ka G=\{J_t\colon t\in W\}$ contains exactly one vertex of the cube $[0,1]^d$. Therefore $|\ka G|=2^d$. Theorem \[minimini\] implies now that $\ka G$ is a suit. The latter fact leads to the conclusion that $W$ is a genome. Let $W^+$, $W^-$ be an induced decomposition of $W$. Define $T^+=W^++2\zet^d$ and $T^-=W^-+2\zet^d$.
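For concreteness, the following Python sketch extracts $W$ from a simple $2\zet^2$-periodic cube tiling of the plane and verifies that the resulting words are pairwise dichotomous (the particular tiling — columns of unit squares with every second column shifted by $1/2$ — and the helper names are illustrative choices of ours):

```python
def to_S(x):
    """Representative of x modulo 2 in the alphabet S = (-1, 1]."""
    r = x % 2.0
    return r if r <= 1 else r - 2.0

def dichotomous(v, w):
    """Words v, w in S^d are dichotomous if some coordinates differ by exactly 1."""
    return any(abs(vi - wi) == 1 for vi, wi in zip(v, w))

# A 2Z^2-periodic cube tiling of the plane: columns of unit squares, with every
# second column shifted up by 1/2.  One representative per 2Z^2-orbit:
reps = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.5), (1.0, 1.5)]

W = [tuple(to_S(c) for c in t) for t in reps]
print(W)   # [(0.0, 0.0), (0.0, 1.0), (1.0, 0.5), (1.0, -0.5)]
print(all(dichotomous(v, w) for v in W for w in W if v != w))   # True
```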
\[krewmleko\] Let $[0,1)^d +T$ be a cube tiling of $\er^d$, $T^+$, $T^-$ be as defined above and $z\in \er^d$. If $I_z$ is disjoint with $[0,1)^d+T^+$, then $z\in T^-$.
Let $K_z=[0,1]^d+z$. Define $U\subset (-1,1]^d$ so that $u\in U$ if and only if $u+z\in T$ and $I_{u+z}$ intersects $K_z$. As in the case of $W$, the set $U$ is a genome and $|U|=2^d$. Let $U^+=U\cap (T^+-z)$, $U^-=U\cap (T^--z)$. Clearly, $U^+$, $U^-$ is an induced decomposition of $U$. Observe that since $I_z$ is disjoint with $[0,1)^d+T^+$, $0$ is dichotomous to each element of $U^+$. Therefore, by Theorem \[laga\], we have $0\in U^-$, which in turn implies $z\in T^-$. $\square$
N. Alon, T. Bohman, R. Holzman and D. J. Kleitman, On partitions of discrete boxes, *Discrete Math.* **257** (2002), 255–258.

J. Grytczuk, A. P. Kisielewicz and K. Przesławski, Minimal Partitions of a Box into Boxes, *Combinatorica* **24** (2004), 605–614.

O.-H. Keller, Über die lückenlose Erfüllung des Raumes mit Würfeln, *J. Reine Angew. Math.* **163** (1930), 231–248.

O.-H. Keller, Ein Satz über die lückenlose Erfüllung des 5- und 6-dimensionalen Raumes mit Würfeln, *J. Reine Angew. Math.* **177** (1937), 61–64.

K. A. Kearnes and E. W. Kiss, Finite algebras of finite complexity, *Discrete Math.* **207** (1999), 89–135.

J. C. Lagarias and P. W. Shor, Keller’s cube-tiling conjecture is false in high dimensions, *Bull. Amer. Math. Soc.* **27** (1992), 279–287.

J. C. Lagarias and P. W. Shor, Cube tilings and nonlinear codes, *Discr. Comput. Geom.* **11** (1994), 359–391.

J. Mackey, A cube tiling of dimension eight with no face sharing, *Discr. Comput. Geom.* **28** (2002), 275–279.

O. Perron, Über lückenlose Ausfüllung des $n$-dimensionalen Raumes durch kongruente Würfel I, II, *Math. Z.* **46** (1940), 1–26, 161–180.

S. Stein and S. Szabó, *Algebra and Tiling: Homomorphisms in the Service of Geometry*, Mathematical Association of America, Washington DC, 1994.
---
abstract: |
We study how network structure affects the dynamics of collateral in presence of rehypothecation. We build a simple model wherein banks interact via chains of repo contracts and use their proprietary collateral or re-use the collateral obtained by other banks via reverse repos. In this framework, we show that total collateral volume and its velocity are affected by characteristics of the network like the length of rehypothecation chains, the presence or not of chains having a cyclic structure, the direction of collateral flows, the density of the network. In addition, we show that structures where collateral flows are concentrated among few nodes (like in core-periphery networks) allow large increases in collateral volumes already with small network density. Furthermore, we introduce in the model collateral hoarding rates determined according to a Value-at-Risk (VaR) criterion, and we then study the emergence of collateral hoarding cascades in different networks. Our results highlight that network structures with highly concentrated collateral flows are also more exposed to large collateral hoarding cascades following local shocks. These networks are therefore characterized by a trade-off between liquidity and systemic risk.
Keywords: Rehypothecation, Collateral, Repo Contracts, Networks, Liquidity, Collateral-Hoarding Effects, Systemic Risk.
JEL Codes: G01, G11, G32, G33.
author:
- Duc Thi Luu
- 'Mauro Napoletano [^1]'
- Paolo Barucca
- Stefano Battiston
bibliography:
- 'references.bib'
date: 'January 31, 2018'
title: 'Collateral Unchained: Rehypothecation networks, concentration and systemic effects'
---
Introduction {#sec:intro}
============
This paper investigates the collateral dynamics when banks are connected in a network of financial contracts and they have the ability to rehypothecate the collateral along chains of contracts. Collateral is of increasing importance for the functioning of the global financial system. One reason for this is that the non-bank/bank nexus has become considerably more complex over the past two decades, in part because the separation between hedge funds, mutual funds, insurance companies, banks, and broker/dealers has become blurred as a result of financial innovation and deregulation [@singh2016collateral; @pozsar2011nonbank]. Another reason for the significant increase in collateral volumes [comparable to M2 until the recent financial crisis, see e.g. @singh2011velocity] has been the diffusion of rehypothecation agreements. The role of collateral in lending agreements is to protect the lender against a borrower’s default. Rehypothecation[^2] consists in the right of the lender to re-use the collateral to secure another transaction in the future [see @monnet2011rehypothecation].
Rehypothecation of collateral has clear advantages for liquidity in modern financial systems [see @FSB2017]. In particular, it allows parties to increase the availability of assets to secure their loans, since a given pool of collateral can be re-used to support different financial transactions. As a result, rehypothecation increases the funding liquidity of agents [see @brunnermeier2008market]. At the same time, rehypothecation also implies risks for market players. First, one risk associated with the additional funding liquidity allowed by rehypothecation can be the building-up of excessive leverage in the market [see e.g. @bottazzi2012securities; @singh2012deleveraging; @Capel2014collateral]. Second, rehypothecation implies that several agents are counting on the same set of collateral to secure their transactions. It follows that rehypothecation may represent yet another channel through which agents’ balance sheets become interlocked[^3], and thus a source of distress propagation and of systemic risk. For instance in the face of idiosyncratic shocks, some institutions may start to precautionarily hoard collateral, which in turn constrains the availability of collateral and its re-use for the downstream institutions in chains of repledges. This may consequently lead to an inefficient market freeze if participants lack the necessary assets to secure their loans [@Leitner2011; @monnet2011rehypothecation; @gorton2012securitized]. The latter is the distress channel we focus on in this paper.
To analyze economic benefits and systemic consequences of rehypothecation, we develop a model of collateral dynamics over a network of repurchase agreements (repos) across banks. To keep the model as simple as possible we abstract from many features of markets with collateral, and we assume that the amount of collateral available for repo financing is set as a constant fraction of total collateral available to each agent. The latter includes the proprietary collateral endowment of each bank as well as the collateral obtained from other banks via reverse repos. Although simple, our model allows us to highlight what features of rehypothecation network topology determine (i) the overall volume of collateral in the market, and (ii) the velocity of collateral [@singh2011velocity]. We show that both variables are an increasing function of the length of open chains. However, for a given length, cyclic chains (i.e. those where banks are organized in a closed chain of repo contracts) produce higher collateral volumes than acyclic chains. Furthermore, we show that the direction of collateral flows also matters. In particular, concentrating collateral flows among a few nodes organized in a cyclic chain allows large increases in collateral volume and in velocity even with small chain lengths. Finally, we investigate total collateral under some typical network architectures, which capture different modes of organization of financial relations in markets, and in particular different degrees of heterogeneity in the distribution of repo contracts and of collateral flows. We show that total collateral is an increasing function of the density of financial contracts both in the random network (where heterogeneity is mild) and in the core-periphery network (where heterogeneity is high). However, core-periphery structures allow for a faster increase in collateral.

The above model with exogenous collateral hoarding rates is useful to analyse the effects of network topology on collateral flows. At the same time, it is unfit to study the systemic risk implications of rehypothecation, as hoarding behaviour could also reflect the liquidity position of agents in the network. We thus extend the basic model to introduce hoarding behaviour that accounts for liquidity risk. More precisely, we assume that hoarding rates are set according to a Value-at-Risk (VaR) criterion, aimed at minimizing liquidity default risk. In this framework, we show that the equilibrium hoarding rate of each bank is a function of the hoarding rates and the collateral levels of the banks to which it is directly and indirectly connected. This introduces important collateral hoarding externalities in the dynamics, as an increase in hoarding at some banks may indirectly cause higher hoarding at other banks, even ones not directly connected to them. We then use the extended model to study the impact on total collateral losses of small uncertainty shocks hitting a fraction of banks in the network, and how those losses vary with the structure of the rehypothecation networks. We show that core-periphery structures are the most exposed to large collateral losses when shocks hit the central nodes in the network, i.e. the ones concentrating collateral flows. As core-periphery structures are also the ones that generate larger collateral volumes, our results highlight that these structures are characterized by a trade-off between liquidity and systemic risk.
Our work contributes to the recent theoretical literature on the consequences of collateral rehypothecation [see e.g. @bottazzi2012securities; @andolfatto2017rehypothecation; @GottardiMaurinMonnet2017; @singh2016collateral]. This literature has highlighted the role of rehypothecation in determining repo rates [e.g. @bottazzi2012securities], or in softening borrowing constraints of market participants and in shaping the interactions in repo markets [@GottardiMaurinMonnet2017; @andolfatto2017rehypothecation] or, finally, it has contributed to the evaluation of some welfare aspects of policies aimed at regulating rehypothecation [@andolfatto2017rehypothecation]. However, to the best of our knowledge, our paper is the first to study the role of the structure of the network of collateral exchanges and to explore how different network structures determine overall collateral volumes and velocity. Furthermore, our work also contributes to the literature on liquidity hoarding cascades, and it is particularly related to the work of @gai2011complexity. However, differently from that work, our model introduces hoarding rates that are responsive to the liquidity position of the single bank and to the position occupied in the network. In addition, it shows that liquidity hoarding dynamics can have quite different consequences depending on the particular structure of the network.
The paper is organized as follows. Section \[sec:model\] introduces the basic definitions used throughout the paper and the model with fixed hoarding rates. Section \[sec:rehyp-netw-endog\] studies in detail how the structure of rehypothecation networks determines collateral volume and its velocity. Next, Section \[sec: Net liquidity position, Value-at-Risk, and hoarding effects\] extends the model to feature time-varying hoarding rates determined according to a VaR criterion. Section \[sec:coll-hoard-casc\] uses the latter model to study collateral hoarding cascades in different rehypothecation networks. Finally, Section \[Conclusions\] concludes, also by discussing some implications of our work.
A model of collateral dynamics on networks {#sec:model}
==========================================
In this section we build the network model that we then use to analyse the ability of the financial system to generate endogenous collateral in the presence of rehypothecation and, next, to study the dynamics of collateral hoarding cascades in the presence of shocks. We start with basic definitions that we shall use throughout the paper. We then introduce the laws governing collateral dynamics in the presence of rehypothecation and of fixed hoarding coefficients by banks.
Definitions {#sec:definitions}
-----------
Consider a set of $N$ financial institutions (“banks” for brevity in the following). Banks invest in an external asset, which yields an exogenously fixed return $r_{EA}$ and can also be used as collateral. In addition, they lend to each other by using only secured loans that involve exchange of collateral as in @singh2011velocity.[^4] More precisely, we assume that all debt contracts are “repo” contracts and are thus secured by collateral. A “repo”, or “repurchase agreement”, is the sale of securities together with an agreement for the seller to buy back the securities at a later date.[^5] A “reverse repo” is the same contract from the point of view of the buyer. The haircut rate of a repo, which we denote by $h$, is a percentage that is subtracted from the market value of an asset that is being used in a repo transaction.[^6] To collect funds via repo contracts each bank $i$ ($1\leq i \leq N$) can use the collateral that it has in its “box”. The box includes both the proprietary collateral and the collateral obtained via reverse repos, which can then be re-pledged or rehypothecated for further repo transactions. Repo transactions among banks using proprietary and non-proprietary collateral give rise to a directed network $\mathcal {G}$, which we shall label the “rehypothecation network”. To explain in detail the dynamics of collateral in each bank’s box and the timing of the events occurring through the network $\mathcal {G}$, it is useful to introduce the following notation:
- $A^{C^{out}}_i$: the total amount of collateral flowing out of bank $i$’s box at each step, i.e. the total amount of collateral that the bank $i$ uses to obtain loans from other banks.
- $A^{C^{rm}}_i$: the total amount of (re-pledgeable) collateral remaining inside the box.
- $A^C_i$: the total amount of (pledgeable) collateral flowing into the box of the bank $i$. At every step, the collateral that flows into the box must equal the collateral that remains in the box plus the collateral that flows out of the box. Hence, we have $A^C_i=A^{C^{out}}_i+A^{C^{rm}}_i$. Notice that $A^C_i$ includes both proprietary as well as non-proprietary assets received from other banks.
- $A^0_i$: the value of the proprietary collateral of the bank $i$. This means the bank $i$ is the original owner of $A^0_i$.
- $B_i$: the borrowers’ set of bank $i$, i.e. the banks that obtained funding from $i$ via repos and thus provided collateral to $i$. In the rehypothecation network, $B_i$ is also the “in-neighborhood” of $i$.
- $L_i$: the lenders’ set of bank $i$, i.e. the banks that obtained collateral from $i$ and thus provided funding to $i$. In the rehypothecation network, $L_i$ is also the “out-neighborhood” of $i$.
- Unless specified otherwise, each of the above mentioned symbols written without indices denotes the vector of the same variable for all the banks in the system, for instance $A^C = [A^C_1, A^C_2, ... A^C_i, ... A^C_N ] $, and similarly for the other quantities.
Furthermore, let the variable $a_{i\leftarrow j}$ capture the direction of collateral flow from bank $j$ to bank $i$. In particular, for every pair of banks $i$ and $j$, $a_{i\leftarrow j}=1$ if bank $j$ has given collateral to bank $i$ and $a_{i\leftarrow j}=0$ otherwise. Two additional variables related to the direction of collateral flows are the “out-degree” of a bank $i$, $k_i^{out}$, which measures the total number of outgoing links of the bank, and thus the number of banks to which bank $i$ provided collateral, and the “in-degree” of a bank $i$, $k_i^{in}$, which is the total number of banks that provided collateral to $i$.
Finally, we assume that each bank hoards a fraction $1-\theta_i$ of the collateral it has in the box. More precisely, for every monetary unit of collateral, bank $i$ will keep $(1-\theta_i)$ inside its box and give away $\theta_i$. Moreover, to keep the model simple we assume that each bank spreads its non-hoarded collateral homogeneously across its lenders. Let $s_{i\leftarrow j}$ be the share of bank $j$’s outgoing collateral flowing into the box of bank $i$. If $L_j= \varnothing$ (i.e. $k_j^{out}=0$), then all shares $s_{i\leftarrow j}$ are equal to zero. If the lenders’ set is not empty, $L_j \neq \varnothing $, that is if $k_j^{out}>0$, then for the total outgoing collateral pledged or re-pledged by bank $j$, $A^{C^{out}}_j$, it holds that: $$A^{C^{out}}_j = \sum_{i\in L_j} s_{i\leftarrow j}A^{C^{out}}_j,$$ since the shares $s_{i\leftarrow j}$ satisfy the constraint $\sum_{i\in L_j} s_{i\leftarrow j}=1$. Notice that this means that each non-zero column of the matrix of shares $\mathcal {S}= \{ s_{i\leftarrow j} \}_{N\times N}$ associated with the network $\mathcal {G}$ sums to 1. In addition, recall that collateral is spread homogeneously across lenders. This implies that $$s_{i\leftarrow j}=\frac{1}{k_j^{out}}, \quad \forall i \in L_j,$$ and that the elements of the matrix $\mathcal {S}$ can be expressed as $$\begin {cases}
s_{i\leftarrow j}=\frac{a_{i\leftarrow j}}{k_j^{out}}, \ \mbox{if} \ k_j^{out}>0, \\
s_{i\leftarrow j}=0, \ \mbox{otherwise}. \\
\end {cases}
\label{eq_ weights_size_2}$$
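As an illustration of the construction above, the following minimal Python sketch (our own, purely illustrative; the function name `share_matrix` is not part of the paper) builds the share matrix $\mathcal{S}$ from the directed adjacency relation $a_{i\leftarrow j}$, under the homogeneous-spreading assumption.

```python
import numpy as np

def share_matrix(A):
    """Share matrix S from a directed adjacency matrix A.

    A[i, j] = 1 means bank j has given collateral to bank i (a_{i<-j} = 1).
    Collateral is spread homogeneously across lenders, so each non-zero
    column of S sums to one: s_{i<-j} = a_{i<-j} / k_j^out.
    """
    A = np.asarray(A, dtype=float)
    k_out = A.sum(axis=0)                     # out-degree k_j^out (column sums)
    S = np.divide(A, k_out, out=np.zeros_like(A), where=k_out > 0)
    return S
```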
Collateral dynamics {#sec:coll-dynam-with-const}
-------------------
To describe collateral dynamics in our model let us assume, in line with @bottazzi2012securities, that the amount of collateral that can be re-hypothecated never exceeds the haircutted amount of collateral. Furthermore, let us assume for simplicity that the haircut rate $h \in [0,1]$ is the same for all banks. On these grounds, we can write the following expression for the dynamics of $A^{C^{out}}_i$, the total amount of collateral flowing out of the box of bank $i$: $$A^{C^{out}}_i=A^{0^{out}}_i+(1-h) \delta_i \theta_i \sum_{j \in B_i} s_{i\leftarrow j}A^{C^{out}}_j,
\label{eq_collateral_box_out}$$ where $A^{0^{out}}_i= \delta_i \theta_i A_i^0$ is the proprietary amount of outgoing collateral of bank $i$. The parameter $\theta_i$ accounts for the fraction that is not hoarded. The second term of the equation captures the amount of collateral received by $i$ from its borrowers $j$ that is re-pledged. Notice that bank $i$ can only re-pledge a fraction $(1 - h) \theta_i$ of what it receives, where $(1 - h)$ accounts for the fraction remaining after the haircut is applied, and $\theta_i$ accounts for what is not hoarded. Finally, $\delta_i$ is an indicator equal to one if bank $i$ engages in at least one repo contract, so that its out-degree is positive, and equal to zero otherwise (i.e. $\delta_i= 1$ if $k_i^{out}>0$, and zero otherwise).
Similarly, the dynamics of the total amount of re-pledgeable collateral remaining inside the box of the bank $i$ is described by the following equation $$A^{C^{rm}}_i=A^{0^{rm}}_i+(1-h) (1-\delta_i \theta_i) \sum_{j \in B_i} s_{i\leftarrow j}A^{C^{out}}_j,
\label{eq_collateral_box_rm}$$ where $A^{0^{rm}}_i= {A^{0}_i- A^{0^{out}}_i}=(1- \delta_i \theta_i) A_i^0$ is the initial remaining collateral.\
The use and re-use of collateral in our model are fully described by the recursive process defined by the above two equations. Notice that both Equations (\[eq\_collateral\_box\_out\]) and (\[eq\_collateral\_box\_rm\]) imply that at the initial step every bank $i$ gives away $A^{0^{out}}_i$ of collateral to its outgoing neighbors and keeps $A^{0^{rm}}_i$ inside its box. In addition, for an amount $s_{i\leftarrow j}A^{C^{out}}_j$ that bank $i$ receives from a neighbour $j$, it re-pledges $(1-h)\delta_i \theta_i s_{i\leftarrow j}A^{C^{out}}_j$ and hoards an amount $[1-(1-h)\delta_i \theta_i] s_{i\leftarrow j}A^{C^{out}}_j$. However, only the amount $(1-h) (1-\delta_i \theta_i) s_{i\leftarrow j}A^{C^{out}}_j$ of this hoarded collateral can be re-pledged to obtain further funding later, because the haircutted amount of collateral, $h s_{i\leftarrow j}A^{C^{out}}_j$, is kept in a segregated account that can only be accessed in the case of a credit event [see @bottazzi2012securities for details].
We can also determine the expression for the total amount of re-pledgeable collateral flowing into the box of each bank $i$: $$A^{C}_i= A^{C^{out}}_i+ A^{C^{rm}}_i=A^{0}_i+ (1-h) \sum_{j \in B_i} s_{i\leftarrow j}A^{C^{out}}_j.
\label{eq_collateral_box_in_1}$$ The last equation makes clear that the total collateral flowing into the box of a bank, $A^{C}_i$, includes the proprietary assets (i.e. $A^{0}_i$) as well as the re-pledgeable non-proprietary assets (i.e. $(1-h) \sum_{j \in B_i} s_{i\leftarrow j}A^{C^{out}}_j$) received from other banks via reverse repos. Notice that $A^{C^{rm}}_i= (1-\delta_i \theta_i) A^{C}_i$ and that $A^{C^{out}}_i= \delta_i \theta_i A^{C}_i$. Substituting the latter expression into equation (\[eq\_collateral\_box\_in\_1\]) we obtain a system of equations in the variables $A^{C}$: $$A^{C}_i=A^{0}_i+ (1-h) \sum_{j \in B_i} s_{i \leftarrow j} \delta_j \theta_j A^{C}_j.
\label{eq_collateral_box_in_2}$$ So far we have not said anything about timing in our model. However, the possibility of collateral use and re-use changes over time, as the inflows and outflows of collateral in a bank’s box evolve as a consequence of the different uses and re-uses of collateral made by other banks in the network. In addition, the very possibility of re-using collateral is clearly constrained by the maturity of a repo contract, $T_{repo}$. In what follows, we shall assume that the maturity of repo contracts is longer than the time scale of the rehypothecation process.[^7]
Moreover, we shall focus on equilibrium collateral. This equilibrium corresponds to the amount of collateral flow generated by the system over an infinite number of steps of collateral use and re-use. Finally, we shall henceforth focus only on the equilibrium value of the outflowing collateral $A^{C^{out}}$ (“equilibrium collateral” henceforth). Indeed, first, via Equation (\[eq\_collateral\_box\_in\_1\]) the remaining collateral $A^{C^{rm}}$ is also determined in equilibrium once the amount of outflowing collateral $A^{C^{out}}$ and the initial proprietary collateral $A^{0}$ are known. Second, outflowing collateral is a very interesting variable in our model, as it captures each bank’s contribution to overall collateral flows, and thus to overall funding liquidity in the market.
To find the equilibrium of outflowing collateral, let us start by writing the equation for outgoing collateral (\[eq\_collateral\_box\_out\]) in matrix form: $$A^{C^{out}}=A^{0^{out}}+(1-h)\mathcal {M}A^{C^{out}}
\label{Eq:outgoing_collateral_matrix}$$ where $\mathcal {M}$ is the weighted matrix associated with the rehypothecation network $\mathcal {G}$, with elements $m_{i\leftarrow j}$ defined as: $$\begin{cases}
m_{i\leftarrow j}=\delta_i \theta_i s_{i\leftarrow j} =\frac{\delta_i \theta_i a_{i\leftarrow j}}{k_j^{out}}, \ \mbox {if} \ k_j^{out}>0,\\
m_{i\leftarrow j}=0, \ \mbox {if} \ k_j^{out}=0. \\
\end{cases}$$ Given the network of collateral flows $\mathcal {G}$, the haircut rate $h$, and the vector of non-hoarding rates $\theta= \{ \theta_i \}_{i=1}^{n}$, we can obtain the equilibrium value of $A^{C^{out}}$ by solving equation (\[Eq:outgoing\_collateral\_matrix\]) as follows: $$A^{C^{out}}=(\mathcal {I}-(1-h) \mathcal {M})^{-1}A^{0^{out}}=\mathcal {B}_1 A^{0^{out}},
\label{eq_tot_outflow_collateral}$$ where $\mathcal {I}$ is the identity matrix of size $N$ and $\mathcal {B}_1=(\mathcal {I}-(1-h) \mathcal {M})^{-1}$. The above equation indicates that equilibrium collateral will in general be a function of the entire topology of the rehypothecation network $\mathcal {G}$ and of the vector of non-hoarding rates $\theta= \{ \theta_i \}_{i=1}^{n}$. In the next sections we first study the role of the network topology in affecting collateral flows and in determining different levels of equilibrium collateral.
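As a complement to the derivation above, the short Python sketch below (our own illustration; names such as `equilibrium_outflow` are ours and not part of the paper) computes the equilibrium outgoing collateral $A^{C^{out}}=(\mathcal{I}-(1-h)\mathcal{M})^{-1}A^{0^{out}}$ from the adjacency relation, the non-hoarding rates and the haircut rate.

```python
import numpy as np

def equilibrium_outflow(A, theta, h, A0):
    """Equilibrium outgoing collateral A^{C,out} = (I - (1-h) M)^{-1} A^{0,out}.

    A     : adjacency matrix, A[i, j] = a_{i<-j} (j gave collateral to i)
    theta : vector of non-hoarding rates theta_i
    h     : common haircut rate
    A0    : vector of proprietary collateral A^0_i
    """
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    k_out = A.sum(axis=0)
    delta = (k_out > 0).astype(float)                  # delta_i = 1 if k_i^out > 0
    S = np.divide(A, k_out, out=np.zeros_like(A), where=k_out > 0)
    M = (delta * theta)[:, None] * S                   # m_{i<-j} = delta_i theta_i s_{i<-j}
    A0_out = delta * theta * np.asarray(A0, dtype=float)   # initial outgoing collateral
    B1 = np.linalg.inv(np.eye(N) - (1.0 - h) * M)
    return B1 @ A0_out
```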
Rehypothecation networks and endogenous collateral {#sec:rehyp-netw-endog}
==================================================
We shall now describe how the structure of the rehypothecation network affects collateral flows and the equilibrium collateral determined according to the model developed in the previous section. To perform our investigation it is useful to define some aggregate indicators measuring the performance of a network in affecting collateral flows. The first one is the aggregate amount of outgoing collateral, $S^{out}$, or “total collateral” henceforth, which is defined as: $$S^{out}=\sum_{i=1}^{i=N} A^{C^{out}}_i,
\label{eq_aggregate_outflow}$$ In addition, we also introduce the multiplier of the aggregate amount of proprietary collateral[^8], $m$, or “collateral multiplier”, henceforth. It is defined as: $$m= \frac {\sum_{i=1}^{i=N} A^{C^{out}}_i} {\sum_{i=1}^{i=N} A^{0^{out}}_i} = \frac {S^{out}}{S^{0^{out}}},
\label{eq_multiplier_outflow}$$ where $S^{0^{out}}=\sum_{i=1}^{i=N} A^{0^{out}}_i$ is the total outflowing proprietary collateral. Throughout the paper, we shall focus on $S^{out}$ and $m$ when analyzing the collateral creation allowed by a given network $\mathcal{G}$. Notice that $S^{out}$ captures the aggregate flow of collateral provided by banks in the financial system. A higher (lower) amount of this flow indicates a more (less) liquid market, i.e. one where agents can easily find collateral to secure their financial transactions. Furthermore, notice that the denominator on the right-hand side of equation (\[eq\_multiplier\_outflow\]) is the aggregate amount of initial outflowing collateral. Accordingly, $m$ captures the velocity of collateral when rehypothecation is allowed [see also @singh2011velocity]. Again, a higher value of $m$ indicates a more liquid market, and in particular one where the same set of collateral can secure a larger set of secured lending contracts. In that respect, we shall also say that a rehypothecation network $\mathcal{G}$ generates “endogenous collateral” whenever the collateral multiplier associated with it is larger than one ($m>1$). Clearly, both performance indicators are affected by the hoarding behavior of banks in the network. To simplify the analysis, in this section we shall assume that non-hoarding and hoarding rates are fixed, so that $\theta_i=\bar{\theta}$ and $(1-\theta_i)=(1-\bar{\theta})$, $\forall i$. In Section 4, we shall remove this restriction, and we shall discuss how banks set these coefficients endogenously, according to a VaR criterion in the presence of liquidity shocks.
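For illustration only, the two aggregate indicators can be computed directly from the equilibrium vector, reusing the `equilibrium_outflow` sketch introduced above; the three-bank cycle and the parameter values below are hypothetical examples, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-bank cycle: bank 1 -> 2 -> 3 -> 1 (A[i, j] = a_{i<-j}).
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
theta = np.full(3, 0.9)          # non-hoarding rates
A0 = np.full(3, 100.0)           # proprietary collateral
h = 0.1

Ac_out = equilibrium_outflow(A, theta, h, A0)
S_out = Ac_out.sum()             # total collateral S^out
# All banks have positive out-degree here, so S^{0,out} = sum_i theta_i A^0_i.
m = S_out / (theta * A0).sum()   # collateral multiplier m
print(S_out, m)                  # m > 1: the network creates endogenous collateral
```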
We shall begin our analysis by providing stylized examples and by stating propositions that show how the aggregate amount of collateral going out of the boxes of all banks, $S^{out}$, and the collateral multiplier, $m$, are influenced by some key characteristics of rehypothecation networks, like the length of rehypothecation chains, the presence of cyclic chains, or the direction of collateral flows. In addition, these results shed light on the core mechanisms driving the generation of endogenous collateral and of collateral hoarding cascades in more complex network architectures.
Length of chains and network cycles {#sec:length-chains-cycles}
-----------------------------------
Let us start with simple examples of chains composed of three banks, as shown in Figure \[fig\_three\_nodes\]: panel (a) a star chain, panel (b) an open or “a-cyclic” chain, and panel (c) a closed chain or “cycle”.
![Examples of rehypothecation chains among three banks.[]{data-label="fig_three_nodes"}](figures/three_nodes){width="15cm"}
*Star chains*
In the first case (i.e. the star chain in Figure \[fig\_three\_nodes\] (a)), B2 receives collateral from B1 and B3, and does not re-use it. We will show that without rehypothecation there is no endogenous collateral created in the system. Before discussing the example, we denote by $A^{C^{out}}_{i, T}$ the cumulated amount of collateral outgoing from the box of bank $i$ after $T$ steps. In addition, $ S_{a, T}^{out}$ indicates total collateral after $T$ steps and $ m_{a, T}$ the corresponding multiplier.[^9] At $t = 1$, the initial total amounts of collateral outgoing from the boxes of B1, B2, B3 are $$\begin{cases}
A_{1,t = 1}^{C^{out}}=A^{0^{out}}_1= \theta_1A_{1}^{0} \ \mbox {(going to bank $2$)} \\
A_{2,t = 1}^{C^{out}}=A^{0^{out}}_2=0\\
A_{3,t = 1}^{C^{out}}=A^{0^{out}}_3=\theta_3A_{3}^{0} \ \mbox {(going to bank $2$)} \\
\end{cases}$$ At $t= T \geq 2$, $A^{C^{out}}_i$ remains constant $\forall i$ since there is no re-use of collateral. We can write $${
\begin{pmatrix}\ifx\relaxA^{C^{out}}_{1,t= T}\relax\elseA^{C^{out}}_{1,t= T}\\\fiA^{C^{out}}_{3,t= T}\\#3\end{pmatrix}
}= {
\begin{pmatrix}\ifx\relaxA^{0^{out}}_1\relax\elseA^{0^{out}}_1\\\fiA^{0^{out}}_2\\#3\end{pmatrix}
} + (1-h) M_a
{
\begin{pmatrix}\ifx\relaxA^{0^{out}}_1\relax\elseA^{0^{out}}_1\\\fiA^{0^{out}}_2\\#3\end{pmatrix}
},$$ where $$M_a=\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}.$$ It follows that total collateral in the example of Figure \[fig\_three\_nodes\] (a) is always equal to the sum of the initial proprietary collateral outflowing from banks’ boxes, and thus that there is no creation of endogenous collateral. That is, we get: $$S_{a}^{out}= S_{a, T}^{out}= \sum_{i=1}^{i=3} A^{C^{out}}_{i, T}= A^{0^{out}}_1 + A^{0^{out}}_3,$$ and $$m_{a}= m_{a, T}= \frac {\sum_{i=1}^{i=3} A^{C^{out}}_{i, T}}{\sum_{i=1}^{i=3} A^{0^{out}}_i} =1.$$
*A-cyclic chains*
We now consider the second example, represented by the open chain or a-cyclic chain in Figure \[fig\_three\_nodes\] (b). In this case B2 can re-use the collateral that it receives from B1. In the presence of rehypothecation the network generates endogenous collateral, $S^{out}>S^{0^{out}}$. However, the possibilities of endogenous collateral creation are constrained by the length of the open chain (equal to 2 in the example shown in the figure). At $t = 1$, the initial amounts of collateral outgoing from the boxes of B1, B2, B3 are $$\begin{cases}
A_{1,t = 1}^{C^{out}}=A^{0^{out}}_1 = \theta_1A_{1}^{0} \ \mbox {(going to bank $2$)} \\
A_{2,t = 1}^{C^{out}}=A^{0^{out}}_2=\theta_2 A_2^0 \ \mbox {(going to bank $3$)} \\
A_{3,t = 1}^{C^{out}}=A^{0^{out}}_3=0\\
\end{cases}$$ At $t = 2$, bank $2$ will re-use a fraction $\theta_2$ of the additional (re-pledgeable) collateral that it received from bank $1$ at time $t=1$. Therefore, the amounts of collateral outgoing from the boxes of B1, B2, B3 are $$\begin{cases}
A_{1,t = 2}^{C^{out}}= A^{0^{out}}_1\\
A_{2,t = 2}^{C^{out}}= A^{0^{out}}_2 + (1-h) \theta_2 A^{0^{out}}_1\\
A_{3,t = 2}^{C^{out}}= A^{0^{out}}_3\\
\end{cases}$$ which in matrix form reads $$\begin{pmatrix} A^{C^{out}}_{1,t=2}\\ A^{C^{out}}_{2,t=2}\\ A^{C^{out}}_{3,t=2}\end{pmatrix}= \begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix} + (1-h) M_b
\begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix},$$ where now $$M_b=\begin{bmatrix}
0 & 0 & 0 \\
\theta_2 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}.$$ Since all elements of $M_b^{t}$ are equal to zero for all $t \geq 2$, we get that the equilibrium values of total outflowing collateral and of the corresponding multiplier are: $$\begin{pmatrix} A^{C^{out}}_{1,t=T}\\ A^{C^{out}}_{2,t=T}\\ A^{C^{out}}_{3,t=T}\end{pmatrix}=\begin{pmatrix} A^{C^{out}}_{1,t=2}\\ A^{C^{out}}_{2,t=2}\\ A^{C^{out}}_{3,t=2}\end{pmatrix} \quad (\forall T \geq 2).$$ In addition, $$S_{b}^{out}= S_{b,T}^{out}= \sum_{i=1}^{i=3} A^{C^{out}}_{i, t=T}= A^{0^{out}}_1 + A^{0^{out}}_2 + \theta_2 (1-h) A^{0^{out}}_1,$$ and $$m_{b}= m_{b,T}= \frac {\sum_{i=1}^{i=3} A^{C^{out}}_{i, t=T}}{\sum_{i=1}^{i=3} A^{0^{out}}_i} =1 + \theta_2 (1-h) \frac {A^{0^{out}}_1} {A^{0^{out}}_1 + A^{0^{out}}_2}.$$ Notice that the above collateral multiplier is larger than 1 as long as $h<1$ and $\theta_2>0$.
*Cyclic chains*
We now consider the third case, in which rehypothecation processes among banks create a closed chain or a “cycle”, like the one in Figure \[fig\_three\_nodes\] (c). Notice that in this example every bank has a positive out-degree, i.e. $k^{out}_i >0, \forall i=1, 2, 3$ and accordingly, $\delta_i =1, \forall i=1, 2, 3$ (cf. Section \[sec:coll-dynam-with-const\]). We will now show that the creation of endogenous collateral is no longer constrained by the length of the chain, and thus that in the end total collateral and the multiplier are larger than in the previous examples.
At $t=1$, the initial total amounts of outgoing collateral are: $$\begin{cases}
A^{C^{out}}_{1,t=1}=A^{0^{out}}_1 = \theta_1 A^{0}_{1} \ \mbox {(going to bank $2$)} \\
A^{C^{out}}_{2,t=1} =A^{0^{out}}_2 = \theta_2 A^{0}_{2} \ \mbox {(going to bank $3$)} \\
A^{C^{out}}_{3,t=1} =A^{0^{out}}_3 =\theta_3 A^{0}_{3}\ \mbox {(going to bank $1$)} \\
\end{cases}$$ Furthermore, at $t=2$, each bank $i$ will re-use a fraction $\theta_i$ of the additional re-pledgeable collateral that it has received from other banks at the previous time. We thus get: $$\begin{cases}
A^{C^{out}}_{1,t=2}= A^{0^{out}}_1+ \theta_1 (1-h) A^{0^{out}}_3 \\
A^{C^{out}}_{2,t=2}= A^{0^{out}}_2 + \theta_2 (1-h) A^{0^{out}}_1 \\
A^{C^{out}}_{3,t=2}=A^{0^{out}}_3 + \theta_3 (1-h) A^{0^{out}}_2 \\
\end{cases}$$ and in matrix form $$\begin{pmatrix} A^{C^{out}}_{1,t=2}\\ A^{C^{out}}_{2,t=2}\\ A^{C^{out}}_{3,t=2}\end{pmatrix}= \begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix} + (1-h)M_c
\begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix},$$ where $$M_c=\begin{bmatrix}
0 & 0 & \theta_1 \\
\theta_2 & 0 & 0 \\
0 & \theta_3 & 0
\end{bmatrix}.$$ Notice that $\theta_1 (1-h) A^{0^{out}}_3$, $ \theta_2 (1-h) A^{0^{out}}_1$, $ \theta_3 (1-h) A^{0^{out}}_2$ are respectively the additional amounts of collateral that banks $1$, $2$, and $3$ receive from all other banks. Moreover, at $t=3$, each bank $i$ will again re-use a fraction $\theta_i$ of the additional re-pledgeable collateral that it has received from other banks at time $t=2$. Therefore, $$\begin{cases}
A^{C^{out}}_{1,t=3}= A^{0^{out}}_1+ \theta_1 A^{0^{out}}_3 (1-h) + \theta_1 (1-h) \theta_3 (1-h) A^{0^{out}}_2\\
A^{C^{out}}_{2,t=3}= A^{0^{out}}_2 + \theta_2 A^{0^{out}}_1 (1-h) + \theta_2 (1-h) \theta_1 (1-h) A^{0^{out}}_3 \\
A^{C^{out}}_{3,t=3}=A^{0^{out}}_3 + \theta_3 A^{0^{out}}_2 (1-h) + \theta_3 (1-h) \theta_2 (1-h) A^{0^{out}}_1\\
\end{cases}
\label{eq:closed_chain_system}$$ which in matrix form reads $$\begin{pmatrix} A^{C^{out}}_{1,t=3}\\ A^{C^{out}}_{2,t=3}\\ A^{C^{out}}_{3,t=3}\end{pmatrix}= \begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix} + [(1-h) M_c]^1
\begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix} + [(1-h) M_c]^2
\begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix}.$$ In general, at $t=T+1$, i.e. after $T$ steps of collateral re-use, the cumulated amounts of outgoing collateral are: $$\begin{pmatrix} A^{C^{out}}_{1,T+1}\\ A^{C^{out}}_{2,T+1}\\ A^{C^{out}}_{3,T+1}\end{pmatrix}= \left\{ I+[(1-h) M_c]^1+ [(1-h) M_c]^2+\dots+[(1-h) M_c]^T \right\}
\begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix}.$$ Expressed differently, $$\begin{pmatrix} A^{C^{out}}_{1,T+1}\\ A^{C^{out}}_{2,T+1}\\ A^{C^{out}}_{3,T+1}\end{pmatrix}= \begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix} + (1-h) M_c \begin{pmatrix} A^{C^{out}}_{1,T}\\ A^{C^{out}}_{2,T}\\ A^{C^{out}}_{3,T}\end{pmatrix}.$$ Clearly, the additional collateral in the system is now equal to $$\left[ ((1-h) M_c)^1+ ((1-h) M_c)^2+\dots+((1-h) M_c)^T \right]
\begin{pmatrix} A^{0^{out}}_1\\ A^{0^{out}}_2\\ A^{0^{out}}_3\end{pmatrix}.$$
Finally, when $T \to \infty$, we obtain the equilibrium values for $S_{c}^{out}$ and for $m_{c}$. Their expressions are the following: $$S_{c}^{out}= \lim_{t \to\infty} S_{c, t}^{out}=\lim_{t \to\infty} \sum_{i=1}^{i=3} A^{C^{out}}_{i, t},$$ and $$m_{c}= \lim_{t \to\infty} m_{c, t}= \lim_{t \to\infty} \frac {\sum_{i=1}^{i=3} A^{C^{out}}_{i, t}}{\sum_{i=1}^{i=3} A^{0^{out}}_i}.$$ To conclude, it is interesting to notice that, as long as $\{ A_i^0 \}_{i=1}^{i=3}$ and $\{ \theta_i \}_{i=1}^{i=3}$ are homogeneous across banks, we have the following ranking: $S_c^{out} >S_b^{out}> S_a^{out}$ and $m_c >m_b> m_a$.
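The ranking above can be checked numerically. The sketch below (our own illustration with hypothetical homogeneous parameters, reusing `equilibrium_outflow` from the sketch in Section \[sec:coll-dynam-with-const\]) computes total collateral for the star, a-cyclic and cyclic chains of Figure \[fig\_three\_nodes\].

```python
import numpy as np

# Adjacency matrices (A[i, j] = a_{i<-j}) for the three chains of
# Figure "three_nodes": (a) star, (b) a-cyclic, (c) cycle.
chains = {
    "star (a)":     np.array([[0, 0, 0], [1, 0, 1], [0, 0, 0]]),
    "a-cyclic (b)": np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]]),
    "cycle (c)":    np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]]),
}
theta = np.full(3, 0.9)      # hypothetical homogeneous non-hoarding rates
A0 = np.full(3, 100.0)       # hypothetical homogeneous proprietary collateral
h = 0.1

for name, A in chains.items():
    S_out = equilibrium_outflow(A, theta, h, A0).sum()
    print(name, round(S_out, 2))
# Expected ranking: S_a < S_b < S_c
```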
Direction of collateral flows, collateral sinks and cycles’ length {#sec:the role of the direction of collateral flows}
------------------------------------------------------------------
The above examples have clarified the fundamental properties that a rehypothecation network must have in order to create endogenous collateral, and thus additional liquidity in the system. In particular, the second example (the a-cyclic chain) makes clear that the possibilities of additional collateral creation are determined by the length of the re-pledging chains among banks. However, the presence of cycles in networks, like the third example above, allows one to go beyond that, and to maximize the amount of collateral creation. Furthermore, in the presence of cyclic networks the direction of collateral flows also matters. In particular, networks wherein collateral flows all end up in a cycle will, *ceteris paribus*, create more endogenous collateral than networks where some collateral leaks out from cycles and sinks in some nodes of the system. To better clarify the foregoing statement, we consider in Figure \[fig\_five\_nodes\] three example networks of five nodes with the same number of links. In addition, in these networks all out-degrees are positive, and consequently collateral from each node will flow into a cycle after going through some directed edges[^10], and there is no leakage from the cycle. The only difference among the three networks is in the length of their cycles. Nevertheless, we will show that as long as non-hoarding rates are constant and homogeneous, the three networks generate the same equilibrium total collateral and have the same equilibrium multiplier. This is summarized in the following proposition.
Let $\theta_i =\theta \ (\forall i)$. The networks in panels $\alpha=a, b, c$ of Figure \[fig\_five\_nodes\] generate the same total collateral $S_{\alpha, t}^{out}= \sum_{i=1}^{i=5} A^{C^{out}}_{i, t}, \;\;\ \forall t\geq 1$. \[prop:independence-of-cycles-length\]
See appendix.
![Different rehypothecation process among five banks: same number of links, different lengths of cycles but same total amounts of equilibrium collateral. []{data-label="fig_five_nodes"}](figures/five_nodes){width="15cm"}
Let us next consider three other examples of rehypothecation processes among five banks, the ones in Figure \[fig\_five\_nodes\_2\]. The difference with the examples of Figure \[fig\_five\_nodes\] is that now in panels (a) and (b) one node (i.e. node 5) has zero out-degree. In addition, the network structure in these two panels implies that some collateral leaks out of a cycle and gets stuck at the node with zero out-degree, which then plays the role of a “collateral sink”. The consequence is that the three networks will generate different amounts of total equilibrium collateral. This is stated in the following proposition.
Let $\theta_i =\theta \ (\forall i)$. For the networks in panels $\alpha=a, b, c$ of Figure \[fig\_five\_nodes\_2\], $S_{a, t}^{out} < S_{b, t}^{out} < S_{c, t}^{out}, \;\;\, \forall t\geq 2.$ \[prop:dependence-cycles-length\]
See appendix.
![Different rehypothecation process among five banks: same number of links but different total amounts of equilibrium collateral. []{data-label="fig_five_nodes_2"}](figures/five_nodes_2){width="15cm"}
The above two propositions deliver interesting implications about the role of network topology in determining total collateral flows. First, Proposition \[prop:dependence-cycles-length\] shows that the total amount of collateral is maximized when the longest possible cycle in the network has been created (a cycle of length 5 in example (c) of Figure \[fig\_five\_nodes\_2\]). At the same time, Proposition \[prop:independence-of-cycles-length\] shows that cycle length is irrelevant when collateral sinks are not present in the network and all banks have positive out-degree, i.e. they have at least one repo with some other bank in the system. It also follows that, in that case, it might be advantageous from the viewpoint of total collateral creation in the network to concentrate collateral flows among a few nodes that form a cyclic chain among them. These results provide key insights to understand the behaviour of collateral flows in the different network architectures that we shall discuss in the next section. They are also central to understand some of the results about collateral hoarding cascades that we will present in Section \[sec:coll-hoard-casc\].
The next two propositions generalize the above results to any network architecture. The first of them shows that adding an arbitrary number of links to a cycle of length equal to the size of the network changes neither total collateral nor the value of the multiplier. The second one identifies the upper bounds for the equilibrium values of total collateral and of the collateral multiplier, and shows that these upper limits are attained as long as every bank has at least one outgoing link in the network.
Consider a rehypothecation network $\mathcal{G}$ of size $N$. Let $\theta_i =\theta$ and $A_i^0=A^0 \;\ (\forall i)$. Adding arbitrary links to an initial cycle of size $N$ does not change the values of $S^{out}$ and $m$. More generally, as long as the largest cycle is present, $S^{out}$ and $m$ remain unchanged as the density inside that cycle increases. \[prop:proposition\_k\_reg\]
See the appendix.
Consider a rehypothecation network $\mathcal{G}$ of size $N$. Let $\theta_i =\theta$ and $A_i^0=A^0 \;\ (\forall i)$. If $k_i^{out} >0 \;\ (\forall i)$ then equilibrium values of $S^{out}$ and $m$ are equal to the following upper limits: $$S^{out}= \frac {\theta}{1-(1-h)\theta} N A^0,$$ and $$m =\frac {1}{1-(1-h)\theta}.$$ \[prop:proposition\_positive\_out\_degree\_basic\]
See the appendix.
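As a quick sanity check (ours, with hypothetical parameters), the closed-form limits of the last proposition can be compared with the matrix solution for a cycle of size $N$, again reusing the `equilibrium_outflow` sketch:

```python
import numpy as np

N, h, theta_bar, A0_bar = 5, 0.1, 0.9, 100.0
A_cycle = np.roll(np.eye(N, dtype=int), 1, axis=0)   # bank i gives to bank i+1 (mod N)

S_num = equilibrium_outflow(A_cycle, np.full(N, theta_bar), h,
                            np.full(N, A0_bar)).sum()
S_closed = theta_bar / (1 - (1 - h) * theta_bar) * N * A0_bar
m_closed = 1 / (1 - (1 - h) * theta_bar)
print(S_num, S_closed, m_closed)   # S_num and S_closed should coincide
```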
Network architecture and collateral creation {#sec:netw-arch-coll}
--------------------------------------------
We now address the issue of how the network structure more generally affects collateral creation. We shall focus on three very different classes of network structures. These classes represent general archetypes of networks in the literature, and they also capture some idealized modes of organization of financial contracts in the market. The first class consists of the closed k-regular graphs of size $N$, $\mathcal{G}_{reg}$, wherein each node has $k$ in-coming neighbors as well as $k$ out-going neighbors. This archetype corresponds to a market where repo contracts are homogeneously spread across banks, so that each bank has exactly the same number of repos and of reverse repos. We consider different types of closed regular graphs of varying levels of density $\frac{1}{N-1}\leq p\leq 1$. Special cases of this structure are the cycle of size $N$ (where $p=\frac{1}{N-1}$) and the complete network ($p=1$), wherein each bank has a repo with every other bank in the network (and vice-versa) and where the number of incoming and outgoing links is the same for all banks and equal to $N-1$. The second class consists of the random graphs $\mathcal{G}_{rg}$, in which there is a mild degree of heterogeneity in the distribution of financial contracts across banks. Here, the probability of a directed link (and thus of the existence of a repo) between every two nodes is equal to the density $p$ ($0\leq p \leq 1$) of the network. Notice that as $p\to1$ the random graph converges to the complete graph. Finally, the third class we examine here consists of core-periphery networks, $\mathcal{G}_{cp}$, where (i) the number of nodes in the core, $N_{core}$, is fixed; (ii) each node in the periphery has only out-going links, and all of them point to nodes in the core; (iii) nodes in the core are also randomly connected among themselves with probability $p_{core}$ ($0\leq p_{core} \leq 1$), and there are no directed links from the core to the periphery nodes. Notice that the latter type of structure exacerbates heterogeneity in the distribution of financial contracts and centralizes collateral flows among nodes in the core. This structure is also interesting from an empirical viewpoint, as high concentration of collateral flows is often observed in actual markets [see e.g. @singh2011velocity].
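To fix ideas, the following sketch (our own; the generators and their names are illustrative, and the periphery wiring, with each periphery bank pledging to a single randomly chosen core bank, is an assumption of ours) builds instances of the three classes of rehypothecation networks as directed adjacency matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def closed_k_regular(N, k):
    """Closed k-regular graph: bank j gives collateral to the next k banks."""
    A = np.zeros((N, N), dtype=int)
    for j in range(N):
        for step in range(1, k + 1):
            A[(j + step) % N, j] = 1       # a_{i<-j} = 1
    return A

def random_graph(N, p):
    """Directed random graph: each possible link exists with probability p."""
    A = (rng.random((N, N)) < p).astype(int)
    np.fill_diagonal(A, 0)                 # no self-loops
    return A

def core_periphery(N, N_core, p_core):
    """Core banks form a directed random graph with density p_core;
    each periphery bank has a single outgoing link to a random core bank."""
    A = np.zeros((N, N), dtype=int)
    A[:N_core, :N_core] = random_graph(N_core, p_core)
    for j in range(N_core, N):
        A[rng.integers(0, N_core), j] = 1
    return A
```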
We begin our analysis of the three structures by characterizing the behaviour of endogenous collateral formation in the closed k-regular graphs.
Let $\theta_i =\theta$ and $A_i^0=A^0 \;\ (\forall i)$. A closed k-regular rehypothecation network $\mathcal{G}_{reg}$ of size $N$ and density $p=\frac{k}{N-1}$ always returns the same equilibrium values of total collateral $S_{reg}^{out}$ and of the collateral multiplier $m_{reg}$ for any density $\frac{1}{N-1}\leq p\leq 1$. The equilibrium values are given by the limits stated in Proposition \[prop:proposition\_positive\_out\_degree\_basic\]. \[prop:new\_prop\_k\_reg\]
The above statement follows directly from Proposition \[prop:proposition\_k\_reg\] above. A closed k-regular graph of density $\frac{1}{N-1}$ already embeds the longest cycle that it is possible to create in a network of size $N$. Accordingly, adding further links does not bring any change in the equilibrium values of total collateral and of the multiplier, which are always equal to the upper bounds stated in Proposition \[prop:proposition\_positive\_out\_degree\_basic\]. In contrast to the closed k-regular graph, random networks and core-periphery networks display some variation in total collateral and in the multiplier for increasing levels of density. The following proposition characterizes the behavior of the latter two variables in these two network structures.[^11]
Let $\theta_i =\theta$ and $A_i^0=A^0 \;\ (\forall i)$, then:
1. A random graph $\mathcal{G}_{rg}$ of size $N$ creates more equilibrium collateral and a higher multiplier with a higher level of density $p$.
2. A core-periphery graph $\mathcal{G}_{cp}$ of size $N$ creates more equilibrium collateral and a higher multiplier with a higher level of density $p_{core}$ of the core.
3. For any graph density $0\leq p \leq 1$, $\exists \, p_{th}=p_{th}(N,N_{core},p)$ such that a core-periphery graph $\mathcal{G}_{cp}$ creates more equilibrium collateral and a higher multiplier than a random graph $\mathcal{G}_{rg}$ for any $p_{core}>p_{th}$.
4. As the overall density $p$ in the random graph (or the density $p_{core}$ of the core in the core-periphery graph) goes to 1, the equilibrium values of total collateral and the multiplier converge to the limits stated in Proposition \[prop:proposition\_positive\_out\_degree\_basic\].
\[prop:proposition\_density\_rand\]
See the appendix.
The plots in Figures \[fig:collateral\_outflowing\_example\] and \[fig:collateral\_multiplier\_example\] help to visualize the results contained in the last two propositions. The plots show equilibrium values of total collateral and of the multiplier resulting from numerical simulations[^12] using each of the three network topologies examined above (closed k-regular, random graph, and core-periphery) and with different levels of density (of $p_{core}$ for the core-periphery). First, the plots show that both total collateral and the multiplier do not change with the level of density in the closed k-regular graph (plot (a) in both figures). In contrast, both variables increase with the level of density in the random graph and in the core-periphery network (respectively, panels (b) and (c) of the two figures), before eventually converging to the same value as the closed k-regular graph (determined by the expressions in Proposition \[prop:proposition\_positive\_out\_degree\_basic\]). The main intuition for the latter result is that increasing the level of density (in the overall network or in the core) increases both the number and the length of cycles in the network[^13]. These two factors have a positive impact on endogenous collateral creation in the network, as we explained in Section \[sec:the role of the direction of collateral flows\]. However, once the longest possible cycle in the network (for the random graph) or in the core (for the core-periphery graph) is formed, adding further links no longer increases endogenous collateral. Furthermore, both the third statement of Proposition \[prop:proposition\_density\_rand\] and the plots in the figures indicate that the core-periphery network generates a much higher total collateral than the random graph already with small increases in density. For instance, the inspection of Figure \[fig:collateral\_multiplier\_zoom\] reveals that already with $N=50$ banks in the network a tiny increase in overall density (from $0.02$ to $0.03$) has the effect of more than doubling the value of the multiplier (from $2$ to almost $5$). In contrast, a much larger change in density is required to produce a similar effect in the random graph. This result generalizes the insights discussed in the previous section (cf. Proposition \[prop:independence-of-cycles-length\]). Once all banks have positive out-degree, and are thus all contributing with outflowing collateral, concentrating all collateral flows in a small cycle (like the one in the core) already generates the largest possible total collateral. This result also has implications for market organization, as it indicates that concentrating collateral flows among few nodes has great advantages for the velocity of collateral and thus for the overall liquidity of the market. At the same time, in the next section we shall show that - when liquidity hoarding externalities are present - networks with highly concentrated collateral flows are also more exposed to larger collateral hoarding cascades following small local shocks.
![Collateral multiplier ($m$) as a function of the network density in the random graph and in the core-periphery network. Notice the double scale for the overall network density and the density of links within the core of the core-periphery network.[]{data-label="fig:collateral_multiplier_zoom"}](figures/figure_multiplier_total_out_basic_model_C_P_RG_1)
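A minimal density sweep in the spirit of the reported simulations (parameter values as in the footnote on the numerical setup are assumed; the script reuses the `random_graph` generator and the `equilibrium_outflow` sketch introduced earlier and is purely illustrative) looks as follows:

```python
import numpy as np

N, h, theta_bar, A0_bar = 50, 0.1, 0.9, 100.0
theta, A0 = np.full(N, theta_bar), np.full(N, A0_bar)

for p in (0.02, 0.05, 0.1, 0.3, 1.0):
    A = random_graph(N, p)
    k_out = A.sum(axis=0)
    delta = (k_out > 0).astype(float)
    S_out = equilibrium_outflow(A, theta, h, A0).sum()
    m = S_out / (delta * theta * A0).sum()      # multiplier at density p
    print(p, round(m, 2))
# m increases with p and approaches 1 / (1 - (1-h) * theta_bar)
```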
Value at Risk and Collateral Hoarding {#sec: Net liquidity position, Value-at-Risk, and hoarding effects}
======================================
So far we have worked with the assumption that non-hoarding rates $\{\theta_i\}_{i=1}^N$ are constant across time and homogeneous across banks. This has simplified the analysis and has allowed us to highlight the role of the characteristics of network topology in determining collateral flows in the financial system. At the same time, this hypothesis is also quite restrictive, as banks’ hoarding and non-hoarding rates might be responsive to the liquidity risk situation of banks and, accordingly, also to the level of available collateral [see e.g. @Acharya2010precautionaryhoarding; @Berrospide2012liquidityhoarding; @deHaan2013liquidityhoarding]. In addition, recent accounts of collateral dynamics [@singh2012deleveraging] have documented the sizeable reduction in the velocity of collateral in the aftermath of the last financial crisis as a result of increased collateral hoarding by banks. To account for these important phenomena, in this section we extend the basic model presented in Section \[sec:model\] to introduce time-varying non-hoarding rates determined by liquidity risk considerations.
Again, to keep the model as simple as possible we abstract from many important aspects concerning the liquidity position of the banks. We assume that all funding is secured. For every bank $j$ let $NL_j$ be its net liquidity position. Recall that the amount of pledgeable collateral of bank $j$ that can be used to get external funds with a haircut $h$ is $A_j^C$. At the same time, if $L_j \neq \varnothing $, a fraction $\theta_j$ of this amount of collateral is already pledged (i.e. an amount $A^{C^{out}}_j= \theta_j A_j^C$). The net liquidity position of bank $j$ is thus given by: $$NL_j=(1-h)(1-\theta_j)A_j^C(\theta_1,\theta_2,...,\theta_N, A^0, \mathcal {G})-\epsilon_j,
\label{eq_net_liquidity_position}$$ where $\epsilon_j$ captures payments due within the period, i.e. a liquidity shock, which is assumed to be an i.i.d. normally distributed random variable with mean $\mu_j$ and standard deviation $\sigma_j$. The notation $A_j^C(\theta_1,\theta_2,...,\theta_N, A^0, \mathcal {G})$ emphasizes the fact that the total collateral position of a bank also depends on the fractions of non-hoarded collateral of all banks in the network $ \mathcal {G}$. Notice that the above equation implies that the more the borrowers of $j$ hoard collateral, the lower is the value of the collateral $A^C_j$, and thus the higher the need for $j$ to hoard collateral.
Let us start by observing that if the liquidity shock is large enough, bank $j$ defaults (i.e. $NL_j < 0$). This occurs when $$\epsilon_j > (1-h)(1-\theta_j) A_j^C.$$ Given the assumption on the random variable $\epsilon_j$, the default of $j$ is an event occurring with probability $$\text {prob.} (NL_j<0)=\text {prob.}(\epsilon_j > (1-h)(1-\theta_j)A_j^C).$$
Furthermore, following @Adrian2010Liquidity_VaR and @Adrian2013Leverage_VaR, we assume that each bank $j$ employs a Value-at-Risk (VaR) strategy to determine the fraction $1-\theta_j$ of collateral to hoard, so that the above probability of default is not higher than a target $(1-c_j)$ (where $0<c_j <1$). If we assume that returns on external assets held by $j$ are higher than the repo rate, then each bank $j$ will decide the optimal fraction $1-\theta_j$ such that $$\text {prob.}(\epsilon_j > (1-h)(1-\theta_j)A_j^C) = 1-c_j.
\label{eq_VaR_condition}$$
Given that $\epsilon_j$ is a normally distributed random variable with mean $\mu_j$ and standard deviation $\sigma_j$, we have $$\text {prob.} (NL_j<0)=\text {prob.}(\epsilon_j > (1-h)(1-\theta_j)A_j^C)=\frac {1}{2} \left[1- \text {erf} \left(\frac {(1-h)(1-\theta_j)A_j^C-\mu_j}{\sigma_j \sqrt {2}}\right)\right],$$ where $\text{erf}$ is the Gauss error function, defined as $$\text {erf} (x)= \frac {1}{\sqrt{\pi}}\int_{-x}^{x} \text {e}^{-t^2} dt.$$
Under the VaR constraint, bank $j$ sets its hoarded share of collateral, $1-\theta^*_j$, such that $$\frac {1}{2} \left[1- \text {erf} \left(\frac {(1-h)(1-\theta_j)A_j^C-\mu_j}{\sigma_j \sqrt {2}}\right)\right]= 1-c_j
\label{eq_VaR_condition_normal1}$$ $ \Leftrightarrow $ $$\text {erf} (\frac {(1-h)(1-\theta_j)A_j^C-\mu_j}{\sigma_j \sqrt {2}})= 2c_j-1
\label{eq_VaR_condition_normal2}$$ $ \Leftrightarrow $ $$\theta_j= 1- \frac {\sigma_j \sqrt {2} \text {argerf} (2c_j-1) +\mu_j}{(1-h) A_j^C},
\label{eq_VaR_condition_normal3}$$ where $\text {argerf}$ is the inverse error function, defined on $(-1, 1)$ with values in $\mathbb{R}$, such that $$\text {erf}(\text {argerf}(x)) =x.$$
Equation (\[eq\_VaR\_condition\_normal3\]) indicates that $\theta_j$ is a decreasing function of the VaR target $c_j$, of the uncertainty about the liquidity shock (captured by $\sigma_j$), of the mean of the liquidity shock $\mu_j$, and of the haircut rate $h$. Moreover, it is an increasing function of the value of the collateral $A^C_j$, as well as of the shares of non-hoarded collateral of other banks in the network. Denoting $$c_j^0=\sigma_j \sqrt {2} \text {argerf} (2c_j-1) +\mu_j,
\label{eq_c_0_Norm}$$ we then obtain the following final expression for the optimal $\theta_j$ under the assumption of normally distributed liquidity shocks: $$\theta_j= 1- \frac {c_j^0}{(1-h) A_j^C(\theta_1,\theta_2,...,\theta_N, A^0, \mathcal {G})}.
\label{eq_VaR_theta_Norm}$$
Notice that the endogenous level of non-hoarding, $\theta_j$, now depends not only on the uncertainty about $\epsilon_j$ but also on the haircut rate $h$, as well as on the value of the collateral $A^C_j$. The interdependence between $\theta_j$ and $A^C_j$ implies that each bank will adjust its hoarding preference (i.e. hold more or less collateral) in anticipation of expected “losses” or “gains” in its total amount of collateral. It also follows that a change in hoarding rates at bank $j$ will induce a change in hoarding rates at the banks to which $j$ is connected.
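The mapping from the VaR target to the non-hoarding rate in Equation (\[eq\_VaR\_theta\_Norm\]) can be written compactly as follows (our own sketch; clipping $\theta_j$ to $[0,1]$ is an assumption we add for numerical convenience, and the example parameters are hypothetical):

```python
import numpy as np
from scipy.special import erfinv

def var_nonhoarding_rate(A_C, c, mu, sigma, h):
    """theta_j = 1 - c0_j / ((1 - h) * A_C_j), with
    c0_j = sigma_j * sqrt(2) * erfinv(2 c_j - 1) + mu_j
    (normally distributed liquidity shocks)."""
    c0 = sigma * np.sqrt(2.0) * erfinv(2.0 * c - 1.0) + mu
    theta = 1.0 - c0 / ((1.0 - h) * A_C)
    return np.clip(theta, 0.0, 1.0)     # theta_j is a share, kept in [0, 1]

# Hypothetical example: higher uncertainty (sigma) lowers the non-hoarding rate.
print(var_nonhoarding_rate(A_C=200.0, c=0.99, mu=10.0, sigma=5.0, h=0.1))
print(var_nonhoarding_rate(A_C=200.0, c=0.99, mu=10.0, sigma=20.0, h=0.1))
```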
We now investigate the existence of equilibria in non-hoarding rates. Let us start by noticing that Equation (\[eq\_VaR\_theta\_Norm\]) indicates that, for every bank $j$, if $L_j \neq \varnothing $, its hoarding rate can in general be expressed as: $$(1- \theta_j)= \frac {c_j^0}{(1-h) A_j^C(\theta_1,\theta_2,...,\theta_N, A^0, \mathcal {G})},
\label{eq_theta_VaR}$$ where the variable $c_j^0$ defined by equation (\[eq\_c\_0\_Norm\]) captures the effects of the uncertainty in $\epsilon_j$ on $(1- \theta_j)$. Since $\theta_j \in [0, 1]$, it follows that $A_j^C(\theta_1,\theta_2,...,\theta_N, A^0, \mathcal {G})$ must lie in $[\frac {c_j^0}{1-h}, \infty)$. From equation (\[eq\_theta\_VaR\]), it follows that $$(1-h) \theta_j=\frac {(1-h)A^C_j- c^0_j}{ A^C_j}.
\label{eq_theta_VaR2}$$ Equivalently, $$(1-h) \frac {\theta_j}{k_j^{out}} =\frac {(1-h)A^C_j- c^0_j}{A^C_j k_j^{out}}.
\label{eq_VaR_share_dynamic}$$
Recall that under the assumption that banks homogeneously spread collateral across their lenders, we have $$w_{i \leftarrow j}=\theta_j s_{i\leftarrow j}= \frac{\theta_j}{k_j^{out}}, \ \forall i \in L_j \neq \varnothing.$$ Therefore, $$(1-h) w_{i \leftarrow j}= \frac {(1-h)A^C_j - c^0_j}{A^C_j k_j^{out}}, \ \forall i \in L_j \neq \varnothing.$$ Next, denote by $\mathcal {\tilde {W}}^{VaR}= \{ \tilde {w}_{i\leftarrow j} \}$ the matrix of size $N\times N$ where $$\begin {cases}
\tilde {w}_{i\leftarrow j}=\frac {(1-h)A^C_j - c^0_j}{A^C_j k_j^{out}}, \ \forall i \in L_j \neq \varnothing\\
\tilde {w}_{i\leftarrow j}=0, \ \mbox {elsewhere}\\
\end {cases}
\label{w_VaR_model}$$ The level of collateral $A_i^C$ is then obtained by solving the following equation: $$A_i^C=A^0_i+\sum_{j \in B_i} \tilde {w}_{i \leftarrow j} A^C_j, \ \forall i=1,2,...,N.
\label{eq_collateral_box_VaR_mt}$$ Finally, the non-hoarding rates $\theta_i$ can be obtained by substituting $A_i^C$ into equation (\[eq\_theta\_VaR2\]).
In general, the solution to the system of equations in (\[eq\_collateral\_box\_VaR\_mt\]) might not be unique. However, the following proposition establishes sufficient conditions for the uniqueness of the solution.
Let $\tilde {b}$ be a column vector of size $N \times 1$, given by $$\tilde {b}= A^0- \mathcal {S}\mathcal {C}^0.
\label {parameter_b_VaR}$$ Define the matrix $\tilde {\mathcal{A}}=\{ \tilde {a}_{i\leftarrow j}\}$ of size $N \times N$ as $$\tilde {\mathcal{A}} =\mathcal {I}-(1-h)\mathcal {S}.
\label{parameter_A_VaR}$$ If $0 < h < 1$ and $\tilde {\mathcal{A}}^{-1} \tilde {b} \geq \frac {\mathcal {C}^0}{1-h}$, where $\mathcal {C}^0=[c^0_1, c^0_2,..., c^0_{N-1}, c^0_N]^T$ is the column vector capturing the effects of the net liquidity shock on hoarding preferences, then the system (\[eq\_collateral\_box\_VaR\_mt\]) has the unique solution[^14]: $$A^{C}=\tilde {\mathcal{A}}^{-1}\tilde {b}.
\label {eq_total_collateral_solution}$$ \[proposition\_solution\_A\_C\_VaR\]
See the appendix.
Finally, by substituting $A^{C}$ from (\[eq\_total\_collateral\_solution\]) into equation (\[eq\_theta\_VaR2\]), we obtain the equilibrium non-hoarding rates as follows: $$\theta_j= 1- \frac {c_j^0}{(1-h) [\tilde {\mathcal{A}}^{-1}\tilde {b}]_j}.
\label{eq_theta_VaR_solution}$$ In the next section, we use the results obtained from the above proposition about the determination of banks’ equilibrium collateral $A_j^{C}$ and non-hoarding rates $\theta_j$ to study the emergence of collateral hoarding cascades under different network structures when some banks are hit by uncertainty shocks, captured by an increase in the variable $c^0_j$.
Collateral hoarding cascades {#sec:coll-hoard-casc}
============================
We now use the VaR collateral hoarding model developed in the previous section to study how different structures of rehypothecation networks react when a fraction of banks in the network is hit by adverse shocks. In particular, we focus on uncertainty shocks[^15] that cause the variable $c^0_i$ defined in Equation (\[eq\_c\_0\_Norm\]) to increase to $c^1_i=c^0_i(1+\tilde {c}^0)$ for some banks $i$, where $\tilde {c}^0 \geq 0$. The rise in uncertainty will lead those banks to increase their hoarding of collateral. In turn, this will trigger a cascade of hoarding effects at banks that are either directly or indirectly connected to the banks initially hit by the uncertainty shock. Indeed, higher hoarding at bank $i$ will also cause a loss in the collateral flowing into bank $i$’s neighbors. As a consequence, the latter banks will also increase their hoarding rates, causing further losses in collateral inflows and thus further adjustments in hoarding rates at other banks in the system. The final result of the foregoing cascade will be a new equilibrium characterized, in general, by a lower level of total collateral in the system. On the grounds of the results stated in Proposition \[proposition\_solution\_A\_C\_VaR\] we can determine the new equilibrium vector of collateral in the aftermath of the local uncertainty shock. More formally, let $\mathcal {C}^1=[c^1_1, c^1_2,..., c^1_{N-1}, c^1_N]^T$ be the new vector of uncertainty factors in the aftermath of the local uncertainty shock. By substituting $\mathcal {C}^1$ into equations (\[eq\_total\_collateral\_solution\]) and (\[parameter\_b\_VaR\]), we obtain the new equilibrium solution, $A^{C^{1}}$, to $A^{C}$ at the end of the hoarding cascade as $$A^{C^{1}}=\tilde {\mathcal{A}}^{-1} [A^0 - \mathcal {S}\mathcal {C}^1].
\label {eq_total_collateral_solution_shock_uncertainty}$$
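For completeness, a short sketch of the pre- and post-shock equilibria (our own illustration; it assumes that the sufficient conditions of Proposition \[proposition\_solution\_A\_C\_VaR\] hold, and the shocked bank and parameter values are hypothetical):

```python
import numpy as np

def var_equilibrium_collateral(A, A0, C, h):
    """Solve (I - (1-h) S) A^C = A^0 - S C for the equilibrium collateral
    under VaR-determined hoarding, where S is the homogeneous share matrix."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    k_out = A.sum(axis=0)
    S = np.divide(A, k_out, out=np.zeros_like(A), where=k_out > 0)
    return np.linalg.solve(np.eye(N) - (1.0 - h) * S, A0 - S @ C)

# Hypothetical 3-bank cycle, pre- and post-shock (c^1 = c^0 (1 + 0.5) at bank 1).
A = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
A0, C0, h = np.full(3, 100.0), np.full(3, 10.0), 0.1
C1 = C0.copy()
C1[0] *= 1.5
print(var_equilibrium_collateral(A, A0, C0, h))
print(var_equilibrium_collateral(A, A0, C1, h))   # lower collateral after the shock
```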
Let us now investigate how after-shock equilibrium collateral behaves in different networks. The plots in Figure \[compare\_structures\_extended\_model\_hoarding\] show the pre-shock levels of the ratio between equilibrium total outflowing collateral $S^{out}$ and total proprietary collateral $S^{0}$ (left scale). Moreover, they show equilibrium total outflowing collateral after the shock relative to its pre-shock value (right scale). The two variables are plotted as functions of network density and for the three different structures examined in Section \[sec:netw-arch-coll\], namely the closed k-regular network $\mathcal{G}_{reg}$, the random network $\mathcal{G}_{rg}$, and the core-periphery network $\mathcal{G}_{cp}$. The plots refer to numerical investigations of equilibrium collateral before and after a fraction $f=20\%$ of banks in the network experiences a $50\%$ increase in the uncertainty factor (so that $\tilde {c}^0=0.5$), and are performed under two different shock scenarios. In the first of them (“random attack”, left-side plots) the banks hit by the uncertainty shock are randomly selected. In the second scenario (“targeted attack”, right-side plots) the shocked banks are selected according to their centrality[^16] in the network (in descending order).[^17] Notice that the second scenario is relevant only for the random network and for the core-periphery network, as all nodes have the same centrality in the closed k-regular network.
The analysis of Figure \[compare\_structures\_extended\_model\_hoarding\] reveals first that - before the shock - the behaviour of total collateral under the three network structures is the same as the one discussed in Section \[sec:netw-arch-coll\] for the case of constant hoarding rates[^18]. Second, the effects of the local uncertainty shock vary with density only in the random network and in the core-periphery network. In contrast, the loss in total collateral does not change with density in the closed k-regular graph. Third, the overall impact of the shock is very different across the scenarios considered. In particular, the maximal impact is quite small (only a $10\%$ loss) for all three structures in the random attack scenario. In addition, all the structures generate the same total loss as overall density converges to 1. In contrast, in the targeted attack scenario, the total loss generated in the presence of a core-periphery network is much larger than in the other network structures (up to $50\%$ of the initial total collateral value) and increases with density.
Thus, the core-periphery network generates very large losses in collateral when central nodes are hit by local uncertainty shocks. Recall that those nodes are precisely the ones concentrating collateral flows, and thus generating the large increases in endogenous collateral stressed in Section \[sec:netw-arch-coll\]. It follows that the core-periphery network displays a trade-off between liquidity and systemic (liquidity) risk. On the one hand, concentrating collateral flows generates large gains in market liquidity already with a low density of the network. On the other hand, high concentration produces large liquidity losses when small local shocks occur. Indeed, by concentrating collateral flows, more central nodes also have a larger impact on collateral at the peripheral nodes that have connections with them, and they thus trigger a much stronger adjustment in hoarding rates thereafter. Notice that these concentration effects underlying the large liquidity losses in the core-periphery network are much smaller in the random graph (where link heterogeneity is small). In addition, they are completely absent in the closed k-regular graph, where all nodes have the same centrality.
To conclude, it is useful to stress that the presence of liquidity hoarding externalities is central for the above results about the trade-off observed in core-periphery networks. In other terms, targeted local shocks would have a small impact in core-periphery networks if hoarding rates were not responsive to changes in the liquidity positions of banks. To understand why, notice that with constant hoarding rates a loss in collateral value or an exogenous increase in hoarding rates at one bank $i$ will only have an $n^{th}$-order effect at other banks directly or indirectly connected to it. This is because the initial shock is dampened by haircut rates and hoarding rates at other banks along the chain. For instance, in the very simple case of the cycle (i.e. closed chain) displayed in Figure \[fig\_three\_nodes\] (c), a shock to outflowing collateral at node 1 ($A^{0^{out}}_1$) will have an effect only of order $\theta_3 (1-h) \theta_2 (1-h)$ on outflowing collateral at node 3 (see also Equation (\[eq:closed\_chain\_system\])). In contrast, in the case of VaR-determined hoarding rates, the effect can be large because it is reinforced by the process of agents revising their hoarding rates at each step along the rehypothecation chain.[^19]
Concluding remarks {#Conclusions}
==================
We have introduced and analyzed a simple model to study collateral flows over a network of repo contracts among banks. We have assumed that, to obtain secured funding, banks may pledge their proprietary collateral or re-pledge the collateral obtained from other banks via reverse repos. The latter practice is known as “rehypothecation” and it has clear advantages for market liquidity, as it allows banks to secure more transactions with the same set of collateral. At the same time, re-pledging other banks’ collateral may also raise liquidity risk concerns, as several banks rely on the same collateral for their repo transactions. We have focused on investigating which characteristics of rehypothecation networks are key in order to increase the velocity of collateral flows, and thus increase the liquidity in the market. We have first assumed that banks hoard a constant fraction of their collateral. Under this hypothesis we have shown that characteristics of the network like the length of re-pledging chains, the presence of cyclic chains or the direction of collateral flows are key determinants of the level of endogenous collateral in the system, defined as an overall level of collateral larger than the initial proprietary endowments of banks. In particular, we have shown that the level of endogenous collateral increases with chain length. However, cyclic chains allow, *ceteris paribus*, for larger endogenous collateral than a-cyclic chains of the same length. Finally, we have shown that endogenous collateral is large already with small cyclic chains if the banks involved centralize collateral flows. The foregoing features of network topology underlie the results about the determination of total collateral in more general network architectures. In particular, we showed that total collateral increases with density in random networks (displaying a mild heterogeneity in collateral flows across banks) and in core-periphery networks (displaying high concentration of collateral flows). Nevertheless, core-periphery networks generate larger collateral than random networks already with smaller increases in the density of the network. The foregoing results have implications for the micro-structure of markets where collateral is important, as they highlight a new factor besides network density - i.e. concentration in collateral flows - that allows for significant gains in market liquidity. A market with highly concentrated collateral flows generates higher velocity of collateral, and it is thus more liquid, even if banks are not tied by a dense network of financial contracts.
Furthermore, we have extended the model to allow for endogenous hoarding rates that depend on the liquidity position of each bank in the network. More precisely, we assumed that banks set their hoarding rates by adopting a Value-at-Risk criterion aimed at minimizing the risk of liquidity defaults. We have shown that, in this framework, the hoarding rates of each single bank are in general dependent on other banks’ rates and collateral levels, a feature which introduces important collateral hoarding externalities into the analysis. We have then used the above framework to study the overall impact on collateral flows of local adverse shocks leading to an increase in payments’ uncertainty at some banks in the market, in particular investigating how the response may vary with the topology of the rehypothecation network. We have highlighted that core-periphery networks generate larger losses in overall collateral compared to other network structures (closed k-regular networks, random networks) when the nodes experiencing the shock are the most central nodes in the network, i.e. the ones concentrating collateral flows. This result has interesting implications for the regulatory analysis of markets with collateral, as it shows that the same network structures allowing for the largest increase in collateral velocity - i.e. core-periphery networks - are also the ones more exposed to the largest collateral hoarding cascades in case of local shocks. A trade-off between liquidity and systemic (liquidity) risk thus emerges in those networks. In addition, our results also suggest that - in the presence of rehypothecation - regulatory liquidity and collateral requirements imposed on banks need to account for the structure of the network of lending contracts across banks. In particular, such requirements should account both for the systemic role (e.g. centrality) of the banks in the collateral flow network and for the whole topology of the network (e.g. for the presence of hierarchical structures such as the core-periphery architecture).
Our work could be extended at least in three ways. First, in this work we have abstracted from many important aspects of real-world secured lending markets, such as heterogeneous collateral quality and endogenous haircut rates. Introducing these elements could probably enrich our results. In particular, in the model the haircut rate affects the extent of rehypothecation. Accordingly, endogenous haircut rates that reflect different levels of counterparty risk can be an additional source of externalities in the model. Second, we have used the Value-at-Risk criterion to determine hoarding rates in our model. However, hoarding rates may also result from liquidity requirements imposed on banks. It would then be interesting to extend the model to study how different requirements that have been proposed so far may impact on market liquidity in the presence of rehypothecation, and how these requirements should be designed in order to minimize the liquidity-systemic risk trade-off that we highlighted above. Finally, we have focused on bilateral repo contracts. However, it would be interesting to study how our results might change in the presence of tri-party repo structures, and in the presence of central clearing counterparties that interact with banks re-using their collateral.
Acknowledgments {#acknowledgments .unnumbered}
===============
Appendix: Proofs of Propositions {#Appendix}
================================
[^1]: Corresponding author: `[email protected]`
[^2]: Throughout this paper, the terms “re-use”, “rehypothecation”, and “re-pledge” are interchangeable.
[^3]: See @battiston2016complexity for an account of the sources of banks’ interconnectedness in financial systems.
[^4]: See also @Aguiar_Map_Collateral2016 for a more comprehensive discussion of the structure of collateral flows.
[^5]: The repurchase price should be greater than the original sale price, the difference effectively representing interest, and sometimes called the repo rate.
[^6]: The size of the haircut usually reflects the perceived risk associated with holding the asset.
[^7]: Notice that in many cases, the rehypothecation process ends already after a small number of steps, typically smaller than the number of banks in the system. In addition, the effective duration of the rehypothecation process exponentially decreases with the levels of the non-hoarding rates and of the haircut rates (see also next section).
[^8]: See [@FSB2017_measure] for discussions of other collateral re-use measures.
[^9]: Notice that $A^{C^{out}}_{i}$ mentioned in equation (\[eq\_tot\_outflow\_collateral\]) is equilibrium collateral, and it corresponds to the amount of outflowing collateral when $T \to \infty$.
[^10]: It is also possible to show that, as long as all agents have positive out-degree, collateral flows will end up in a cycle after a finite number of steps. For the sake of brevity we do not report this proposition and the related proof. However, it is available from the authors upon request.
[^11]: In the next proposition we determine the equilibrium for banks’ collateral in the case of an average system, that is, instead of considering each single sample of collateral networks, we consider the expected value of the amount of collateral for each bank and solve a single average system. For all cases, numerical evidence strongly supports the analytical results.
[^12]: The numerical simulation is implemented with $N=50$, $h=0.1$, $1-\theta=0.1$, and $A^0=100$ for all banks.
[^13]: In addition, for the random graph, the number of nodes with positive out-degree also increases with density.
[^14]: Throughout this paper, for any two vectors $X$ and $Y$ of size $n$, $X \geq Y$ means $X_i \geq Y_i, \forall i=1,\dots,n.$
[^15]: We also performed analysis of the impact of aggregate shocks to the value of collateral. However, the results and the dynamics were similar to the ones reported in the paper.
[^16]: We use “PageRank” centrality [see @newman2010networks; @battiston2012debtrank for more details]. However, the main conclusions still hold when the degree centrality is employed.
[^17]: The other parameters of the simulation were set so that the number of banks is $N=50$, and $h=0.1$, $A_i^0=100$, $1-\theta_i=0.1$ and $c^0_i=1$ for all banks.
[^18]: Indeed, all the results of Propositions \[prop:new\_prop\_k\_reg\] and \[prop:proposition\_density\_rand\] hold also in the model where hoarding rates are determined according to a VaR criterion. For the sake of brevity, we do not report here these propositions and the related proofs. However, they are available from the authors upon request.
[^19]: In particular, in the example of Figure \[fig\_three\_nodes\] (c) with VaR-based hoarding rates the effect of a shock to $A^{0^{out}}_1$ on the outflowing collateral of node 3 will be of the order $\left(1-h\right)^2\left(\theta_3 \theta_2+\theta_3\frac{\partial \theta_2}{\partial A^{0^{out}}_1}+\theta_2\frac{\partial \theta_3}{\partial A^{0^{out}}_1}\right).$
|
---
abstract: 'We address continuous variable quantum teleportation in Gaussian quantum noisy channels, either thermal or squeezed-thermal. We first study the propagation of a twin-beam and evaluate a threshold for its separability. We find that the threshold for purely thermal channels is always larger than for squeezed-thermal ones. On the other hand, we show that squeezing the channel improves teleportation of squeezed states and, in particular, we find the class of squeezed states that are better teleported in a given noisy channel. Finally, we find regimes where optimized teleportation of squeezed states improves amplitude-modulated communication in comparison with direct transmission.'
address:
- '${}^1$Dipartimento di Fisica and Unità INFM, Università degli Studi di Milano, via Celoria 16, I-20133 Milano, Italia'
- '${}^2$INFM UdR Pavia, Italia'
author:
- 'Stefano Olivares${}^1$, Matteo G. A. Paris${}^2$, and Andrea R. Rossi${}^1$'
title: Optimized teleportation in Gaussian noisy channels
---
Introduction {#s:intro}
============
In a quantum channel, information is encoded in a set of quantum states, which are in general nonorthogonal and thus, even in principle, cannot be observed without disturbance. Therefore, their faithful transmission requires that the entire communication protocol is carried out by a physical apparatus that works without knowing or learning anything about the travelling signal. In this respect, quantum teleportation provides a remarkable means for indirectly sending quantum states.
The key ingredient of quantum teleportation is an entangled bipartite state used to support the quantum communication channel [@bennett2]. This allows the preparation of an arbitrary quantum state at a distant place without directly transmitting it. In optical implementations of continuous variable quantum teleportation (CVQT), the entangled source is typically a twin-beam state of radiation (TWB), whose two modes are shared between the two parties. A faithful transmission of quantum information through the channel requires a large input-output fidelity, which in turn is an increasing function of the amount of entanglement. However, the propagation of a TWB in noisy channels unavoidably leads to degradation of entanglement, due to decoherence induced by losses and noise. Indeed, the effect of decoherence on TWB entanglement and, in turn, on teleportation fidelity, has been addressed by many authors [@wilson; @jlee1; @jspb; @vukics; @bowen; @progphys]. Thresholds for separability of the TWB have been established and teleportation of both classical and nonclassical states has been explicitly analyzed [@banjpa; @noise:take]. In particular, in Ref. [@noise:take] it was investigated how much nonclassicality can be transferred by noisy teleportation in a zero temperature thermal bath. Moreover, the stability of squeezed states in a squeezed environment has been recently studied, showing that such nonclassical states lose their coherence faster than coherent states even if coupled with a nonclassical reservoir [@grewal]. The open question is then whether there exist situations where squeezed states are favoured with respect to coherent ones, especially for quantum communication purposes.
In this paper we investigate the behavior of a TWB propagating through a Gaussian noisy channel, either thermal or squeezed-thermal, and address its performance for applications in quantum communication [@niel:chuang; @oliv:paris]. As we will see, in the presence of noise along the channel, teleportation of a suitable class of squeezed states can be an effective and robust protocol for amplitude-based communication compared to direct transmission.
Squeezed environments were addressed by many authors for the preservation of macroscopic quantum coherence. In fact, if squeezed quantum fluctuations are added to dissipation, a macroscopic superposition state preserves its coherence longer than in the presence of dissipation alone [@sq:ken]. Reference [@sq:munro] showed that the interference fringes due to a superposition of two macroscopically distinct coherent states (“Schrödinger’s cat states”) could be improved by the inclusion of squeezed vacuum fluctuations. An interesting physical realization of an environment with squeezed quantum fluctuations based on quantum-non-demolition feedback was proposed in reference [@sq:vitali]. Effective squeezed-bath interactions were studied in references [@sq:clark; @sq:clark2], where the technique of quantum-reservoir engineering [@sq:bath:eng] was actually used to couple a pair of two-state atoms to an [*effective*]{} squeezed reservoir.
The paper is structured as follows: in sections \[s:interaction\] and \[s:sep\] we describe the evolution of a TWB in a squeezed-thermal bath and study its separability by means of the partial transposition criterion; section \[s:tele\] addresses the TWB coupled with the nonclassical environment as a resource for quantum teleportation of squeezed states; in section \[s:comm\] we compare the performance of direct transmission and teleportation. In section \[s:conclusions\] we draw some concluding remarks.
Twin beam coupled with a squeezed thermal bath {#s:interaction}
==============================================
The propagation of a TWB interacting with a squeezed-thermal bath can be modelled as the coupling of each part of the state with a non-zero temperature squeezed reservoir. The dynamics can be described by the two-mode Master equation [@WM] $$\begin{aligned}
\frac{d\rho_t}{dt} &=& \{ \Gamma (1+N) L[a]+\Gamma (1+N)
L[b]+\Gamma N L[a^{\dag}] + \Gamma N L[b^{\dag}] \nonumber\\
&\mbox{}& \hspace{1cm}+ \Gamma M \mathcal{M}[a^{\dag}] + \Gamma M^*
\mathcal{M}[a] + \Gamma M \mathcal{M}[b^{\dag}] + \Gamma M^* \mathcal{M}[b]
\}\rho_t \,,
\label{master:squeezed}\;\end{aligned}$$ where $\rho_t\equiv\rho(t)$ is the system’s density matrix at the time $t$, $\Gamma$ is the damping rate, $N$ and $M$ are the effective photon number and the squeezing parameter of the bath respectively, $L[O]$ is the Lindblad superoperator, $L[O]\rho_t=O\rho_t O^{\dag}-\frac12 O^{\dag}O\rho_t - \frac12
\rho_t O^{\dag} O$, and $\mathcal{M}[O]\rho_t = O\rho_t O -
\frac12 O O \rho_t - \frac12 \rho_t O O $. The terms proportional to $L[a]$ and $L[b]$ describe the losses, whereas the terms proportional to $L[a^{\dag}]$ and $L[b^{\dag}]$ describe a linear phase-insensitive amplification process. Of course, the dynamics of the two modes are independent of each other.
Thanks to the differential representation of the superoperators in equation (\[master:squeezed\]), the corresponding Fokker-Planck equation for the two-mode Wigner function $W\equiv W(x_1,y_1; x_2,y_2)$ is $$\begin{aligned}
\partial_t W & = & \frac{\Gamma}{2}\sum_{j=1}^{2}\:
\left(\partial_{x_j} x_j + \partial_{y_j} y_j\right)W +
\frac{\Gamma}{2} \sum_{j=1}^{2}\: \left\{\frac{1}{2}
\Re{\rm e}[M]\left( \partial^{2}_{x_j x_j} - \partial^{2}_{y_j y_j}\right)
\right. \nonumber \\
&\mbox{}& \quad + \left. \Im{\rm m}[M]\: \partial^{2}_{x_j y_j} +
\frac{1}{2}\left(N+\frac{1}{2}\right)\left(\partial^{2}_{x_j x_j}
+ \partial^{2}_{y_j y_j}\right) \right\} W\;,\end{aligned}$$ which, introducing $\tau = \Gamma t/\gamma$ and $\gamma = (2N+1)^{-1}$, reduces to the standard form $$\begin{aligned}
\partial_{\tau} W = \left\{ -\sum_{j=1}^4 \partial_{x_{j}}\:
a_{j}(\underline{x}) + \frac{1}{2} \sum_{i,j=1}^{4}
\partial^{2}_{ x_{i} x_{j}}\: d_{ij} \right\} W\,,
\label{wigner:master:squeezed}\end{aligned}$$ where, for the sake of simplicity, we put $\underline{x} =
(x_{1},y_{1};x_{2},y_{2}) \equiv (x_{1},x_{2};x_{3},x_{4})$. In equation (\[wigner:master:squeezed\]) $a_j(\underline{x})$ and $d_{ij}$ are the matrix elements of the drift and diffusion matrices $\mathbf{A}(\underline{x})$ and $\mathbf{D}$ respectively, which are given by $$\begin{aligned}
\mathbf{A}(\underline{x}) = -\frac{\gamma}{2} \,
\underline{x}\,,\end{aligned}$$ $$\begin{aligned}
\mathbf{D}= \left( \begin{array}{cccc}
\frac{1}{4}+\frac{\gamma}{2} \, \Re{\rm e}[M] & \gamma \, \Im{\rm
m}[M] &
0 & 0 \\
\gamma \, \Im{\rm m}[M] & \frac{1}{4}-\frac{\gamma}{2} \, \Re{\rm
e}[M] &
0 & 0 \\
0 & 0 & \frac{1}{4}+\frac{\gamma}{2} \, \Re{\rm e}[M] & \gamma
\, \Im{\rm
m}[M] \\
0 & 0 & \gamma \, \Im{\rm m}[M] & \frac{1}{4}-\frac{\gamma}{2} \,
\Re{\rm e}[M]
\end{array} \right)\,.\end{aligned}$$ Notice that in our case the drift term is linear in $\underline{x}$ and the diffusion matrix does not depend on $\underline{x}$. We assume $M$ to be real and take a TWB as the starting state [*i.e.*]{} $\rho_0\equiv\rho_{\rm TWB}
=|{\rm TWB}\rangle\rangle\langle\langle {\rm TWB} |$, where $|{\rm TWB}
\rangle\rangle= \sqrt{1-x^2} \sum_p\: x^{a^{\dag}a}\: {\vert p \rangle}{\vert p \rangle}$. The TWB corresponds to the Wigner function $$\begin{aligned}
W_{0}(x_{1},y_{1};x_{2},y_{2}) = \frac{
\exp\left\{
\displaystyle{
- \frac{(x_{1}+x_{2})^{2}}{4\sigma_{+}^{2}}
- \frac{(y_{1}+y_{2})^{2}}{4 \sigma_{-}^{2}}
- \frac{(x_{1}-x_{2})^{2}}{4 \sigma_{-}^{2}}
-\frac{(y_{1}-y_{2})^{2}}{4 \sigma_{+}^{2}}
}
\right\}
}{(2 \pi)^{2}\:
\sigma_{+}^{2}\sigma_{-}^{2}}\end{aligned}$$ with $\sigma_{\pm}^2 = \frac14 e^{\pm 2 \lambda}$ and $\lambda$, $x=\tanh
\lambda$, being the squeezing parameter of the TWB. Now the solution of the Fokker-Planck equation (\[wigner:master:squeezed\]) is given by [@WM] $$\begin{aligned}
W_{\tau}(x_{1},y_{1},x_{2},y_{2}) = \frac{
\exp\left\{
\displaystyle{
-\frac{(x_{1}+x_{2})^{2}}{4
\Sigma_{1}^{2}}-\frac{(y_{1}+y_{2})^{2}}{4 \Sigma_{2}^{2}}
-\frac{(x_{1}-x_{2})^{2}}{4
\Sigma_{3}^{2}}-\frac{(y_{1}-y_{2})^{2}}{4 \Sigma_{4}^{2}}
}
\right\}
}
{(2 \pi)^2\:
\Sigma_{1}\: \Sigma_{2}\: \Sigma_{3}\: \Sigma_{4}}
\label{wigner:evol:sq}\end{aligned}$$ where $\Sigma_{j}^2 = \Sigma_{j}^2(\lambda, \Gamma, n_{\rm th}, n_{\rm
s})$, $j=1,2,3,4$, are $$\begin{array}{lll}
\Sigma_{1}^{2}=\sigma_{+}^{2} e^{-\Gamma t}+D_{+}^{2}(t)\,,
&\quad&
\Sigma_{2}^{2}=\sigma_{-}^{2} e^{-\Gamma t}+D_{-}^{2}(t)\,, \\
\mbox{}\\
\Sigma_{3}^{2}=\sigma_{-}^{2} e^{-\Gamma t}+D_{+}^{2}(t)\,,
&\quad&
\Sigma_{4}^{2}=\sigma_{+}^{2} e^{-\Gamma t}+D_{-}^{2}(t)\,,
\end{array}\label{sigma:1234}$$ and $$\begin{aligned}
\label{sq:D:pm}
D_{\pm}^{2}(t) = \frac{1 + 2 N \pm 2 M}{4} \left(1 -e^{-\Gamma
t}\right)\end{aligned}$$ with $\left| M \right| \leq (2 N + 1)/2$. The latter condition is already enforced by the positivity condition for the Fokker-Planck’s diffusion coefficient, which requires $$\begin{aligned}
\label{10}
M\leq\sqrt{N(N+1)}\,.\end{aligned}$$ If we assume the environment to be composed of a set of oscillators excited in a squeezed-thermal state of the form $\nu= S
(r)\rho_{\rm th} S^\dag (r)$, with $S(r)=\exp\{\frac12 r [a^{\dag
2}-a^2]\}$ and $\rho_{\rm th}=(1+n_{\rm th})^{-1} [n_{\rm
th}/(1+n_{\rm th})]^{a^\dag a}$, then we can rewrite the parameters $N$ and $M$ in terms of the squeezing and thermal number of photons $n_{\rm s}=\sinh^2 r$ and $n_{\rm th}$ respectively. Then we get [@grewal] $$\begin{aligned}
M &=& \left(1 + 2\,n_{\rm th}\right)\sqrt{n_{\rm s}(1 + n_{\rm s})}\,,
\label{phys:par:M} \\
N &=& n_{\rm th} + n_{\rm s}(1 + 2\,n_{\rm th}) \label{phys:par:N}
\:.\end{aligned}$$ Using this parametrization, the condition (\[10\]) is automatically satisfied.
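For later convenience, the expressions above are easy to evaluate numerically. The following Python sketch (the parameter values are illustrative only, and the helper functions are ours) computes $N$ and $M$ from $n_{\rm th}$ and $n_{\rm s}$, checks condition (\[10\]), and returns the evolved variances $\Sigma_j^2(t)$ of equations (\[sigma:1234\]):

```python
# Minimal numerical sketch of equations (sigma:1234)-(phys:par:N);
# the parameter values below are illustrative only.
import numpy as np

def bath_params(n_th, n_s):
    """Effective photon number N and squeezing parameter M of the bath."""
    M = (1 + 2*n_th) * np.sqrt(n_s * (1 + n_s))
    N = n_th + n_s * (1 + 2*n_th)
    return N, M

def evolved_variances(lam, Gt, n_th, n_s):
    """Variances Sigma_j^2(t), j = 1..4, of the evolved TWB Wigner function."""
    N, M = bath_params(n_th, n_s)
    sp2, sm2 = 0.25*np.exp(2*lam), 0.25*np.exp(-2*lam)    # sigma_+^2, sigma_-^2
    Dp2 = (1 + 2*N + 2*M)/4 * (1 - np.exp(-Gt))           # D_+^2(t)
    Dm2 = (1 + 2*N - 2*M)/4 * (1 - np.exp(-Gt))           # D_-^2(t)
    return (sp2*np.exp(-Gt) + Dp2, sm2*np.exp(-Gt) + Dm2,
            sm2*np.exp(-Gt) + Dp2, sp2*np.exp(-Gt) + Dm2)

N, M = bath_params(n_th=0.5, n_s=0.3)
print(M <= np.sqrt(N*(N + 1)))                            # condition (10) holds
print(evolved_variances(lam=1.5, Gt=0.2, n_th=0.5, n_s=0.3))
```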
Separability {#s:sep}
============
A quantum state of a bipartite system is [*separable*]{} if its density operator can be written as $\varrho=\sum_k p_k \sigma_k
\otimes \tau_k$, where $\{p_k\}$ is a probability distribution and $\tau$’s and $\sigma$’s are single-system density matrices. If a state is separable the correlations between the two systems are of purely classical origin. A quantum state which is not separable contains quantum correlations [*i.e.*]{} it is entangled. A necessary condition for separability is the positivity of the density matrix $\varrho^T$, obtained by partial transposition of the original density matrix (PPT condition) [@peres]. In general PPT has been proved to be only a necessary condition for separability; however, for some specific sets of states, PPT is also a sufficient condition. These include states of $2\times 2$ and $2 \times 3$ dimensional Hilbert spaces [@2x] and Gaussian states (states with a Gaussian Wigner function) of a bipartite continuous variable system, [*e.g.*]{} the states of a two-mode radiation field [@geza; @simon]. Our analysis is based on these results. In fact, the Wigner function of a twin-beam produced by a parametric source is Gaussian and the evolution inside active fibers preserves this character. Therefore, we are able to characterize the entanglement at any time and find conditions on the fiber’s parameters to preserve it after a given fiber length. The density matrix’s PPT property can be rephrased as a condition on the covariance matrix of the two-mode Wigner function $W(x_1,y_1;x_2,y_2)$. We have that a state is separable iff $$\begin{aligned}
\label{15}
\mathbf{V}+\frac{i}{4} \mathbf{\Omega}\geq0\end{aligned}$$ where $$\begin{aligned}
\begin{array}{cc}
\mathbf{\Omega} = \left( \begin{array}{cc} \mathbf{J} & \mathbf{0} \\
\mathbf{0} & \mathbf{J}
\end{array} \right) &
\mbox{and} \quad
\mathbf{J} = \left( \begin{array}{cc} 0 & 1 \\
-1 & 0
\end{array} \right)\,.
\end{array}\end{aligned}$$ and $$\begin{aligned}
V_{pk} = \langle \Delta\xi_p\: \Delta\xi_k \rangle = \int\!\!
{\rm d}^4\xi\: \: \Delta\xi_p\: \Delta\xi_k\: W(\xi)\,,\end{aligned}$$ with $\Delta\xi_j = \xi_j - \langle \xi_j \rangle$, and $\underline{\xi}=\{x_1,y_1,x_2,y_2 \}$. The explicit expression of the covariance matrix associated to the Wigner function (\[wigner:evol:sq\]) is $$\begin{aligned}
\mathbf{V}= \frac{1}{2} \left(
\begin{array}{cccc}
\Sigma_{1}^{2}+\Sigma_{3}^{2} & 0 & \Sigma_{1}^{2}-\Sigma_{3}^{2} & 0 \\
0 & \Sigma_{2}^{2}+\Sigma_{4}^{2} & 0 & \Sigma_{2}^{2}-\Sigma_{4}^{2} \\
\Sigma_{1}^{2}-\Sigma_{3}^{2} & 0 & \Sigma_{1}^{2}+\Sigma_{3}^{2} & 0 \\
0 & \Sigma_{2}^{2}-\Sigma_{4}^{2} & 0 &
\Sigma_{2}^{2}+\Sigma_{4}^{2}
\end{array} \right)\,,\end{aligned}$$ and, then, condition (\[15\]) is satisfied when $$\begin{aligned}
\label{20}
\Sigma_{1}^{2}\:\Sigma_{4}^{2} \geq \frac{1}{16}\,,\quad\quad
\Sigma_{2}^{2}\:\Sigma_{3}^{2} \geq \frac{1}{16}\,.\end{aligned}$$ Notice that changing the sign of $M$ leaves conditions (\[20\]) unaltered.
By solving these inequalities with respect to time $t$, we find that the two-mode state becomes separable for $t > t_{\rm s}$, where the threshold time $t_{\rm s} = t_{\rm s}(\lambda,\Gamma,n_{\rm th},
n_{\rm s})$ is given by $$\begin{aligned}
\label{tau:sep:squeezed}
t_{\rm s} = \frac{1}{\Gamma}\log\left( f + \frac{1}{1+2 n_{\rm th}}
\sqrt{f^2 + \frac{n_{\rm s}(1+n_{\rm s})} {n_{\rm th}(1+n_{\rm
th})}} \right)\,,\end{aligned}$$ and we defined $$\begin{aligned}
f \equiv f(\lambda,n_{\rm th}, n_{\rm s}) =\frac{(1+2\,n_{\rm th})\:
\left[ 1+2\,n_{\rm th}-e^{-2\,\lambda}(1+2\,n_{\rm s}) \right]}
{4\,n_{\rm th}(1+n_{\rm th})}\,.\end{aligned}$$ As one may expect, $t_{\rm s}$ decreases as $n_{\rm th}$ and $n_{\rm s}$ increase. Moreover, in the limit $n_{\rm\rm s}
\rightarrow 0$, the threshold time (\[tau:sep:squeezed\]) reduces to the case of a non squeezed bath, in formula [@jspb; @progphys] $$\begin{aligned}
\label{25}
t_{0} &=& t_{\rm s}(\lambda,\Gamma,n_{\rm th}, 0)\nonumber\\
&=& \frac{1}{\Gamma} \log \left( 1+\frac{1 - e^{-2 \,
\lambda}}{2\,n_{\rm th}} \right)\,.\end{aligned}$$ In order to see the effect of a squeezed bath on the entanglement time we define the function $$\begin{aligned}
\label{26}
G(\lambda, n_{\rm th}, n_{\rm s}) &\equiv& \frac{t_{\rm s}-t_{\rm
0}}{t_0}\,.\end{aligned}$$ In this way, when $G>0$, the squeezed bath gives a threshold time longer than the one obtained with $n_{\rm s}=0$, and shorter otherwise. These results are illustrated in figure \[f:threshold\], where we plot equation (\[26\]) as a function of $n_{\rm s}$ for different values of $n_{\rm th}$ and $\lambda$. Since $G$ is always negative, we conclude that coupling a TWB with a squeezed-thermal bath destroys the correlations between the two channels faster than the coupling with a non-squeezed environment.
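Numerically, the threshold can also be obtained directly from conditions (\[20\]), without using the closed form: the following Python sketch (illustrative parameters only) locates the time at which $\Sigma_2^2\,\Sigma_3^2$ reaches $1/16$ and evaluates the ratio $G$ of equation (\[26\]):

```python
# Minimal numerical sketch: the separability threshold located directly from
# condition (20), i.e. the time at which Sigma_2^2 Sigma_3^2 reaches 1/16;
# illustrative parameters only.
import numpy as np
from scipy.optimize import brentq

def threshold_time(lam, Gamma, n_th, n_s):
    M = (1 + 2*n_th) * np.sqrt(n_s * (1 + n_s))
    N = n_th + n_s * (1 + 2*n_th)
    sm2 = 0.25 * np.exp(-2*lam)                          # sigma_-^2
    def gap(t):
        x = np.exp(-Gamma * t)
        Dp2 = (1 + 2*N + 2*M)/4 * (1 - x)
        Dm2 = (1 + 2*N - 2*M)/4 * (1 - x)
        return (sm2*x + Dm2) * (sm2*x + Dp2) - 1/16
    return brentq(gap, 0.0, 50.0/Gamma)

lam, Gamma, n_th = 1.0, 1.0, 0.1
t0 = threshold_time(lam, Gamma, n_th, 0.0)               # thermal bath, equation (25)
for n_s in (0.1, 0.5, 1.0):
    ts = threshold_time(lam, Gamma, n_th, n_s)
    print(n_s, (ts - t0)/t0)                             # G of equation (26): negative
```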
Optimized quantum teleportation {#s:tele}
===============================
In this section we study continuous variable quantum teleportation (CVQT) assisted by a TWB propagating through a squeezed-thermal environment. Let us recall the CVQT protocol: the sender and the receiver, say Alice and Bob, share a two-mode state described by the density matrix $\rho_{12}$, where the subscripts refer to modes 1 and 2 respectively: mode 1 is sent to Alice, the other to Bob. The goal of CVQT is teleporting an unknown state $\sigma$, corresponding to the mode 3, from Alice to Bob. In order to implement the teleportation, Alice first performs a heterodyne detection on modes 3 and 1, [*i.e.*]{} she jointly measures a couple of two-mode quadratures. The POVM of the measurement is given by $$\begin{aligned}
\Pi_{13}(z) = \frac{1}{\pi}\:D_1(z) {\vert \mathbb{I} \rangle\rangle}_{13}
{}_{31}{\langle\langle \mathbb{I} \vert} D_{1}^{\dag}(z)\,,\label{het:POVM}\;\end{aligned}$$ where ${\vert \mathbb{I} \rangle\rangle}_{13} \equiv \sum_{v}\:{\vert v \rangle}_{1}{\vert v \rangle}_{3}$, and $D_1(z) \equiv \exp\{z a^{\dag}-z^* a\}$ is the displacement operator acting on mode 1. Each measurement outcome is a complex number $z$, which is sent to Bob via a classical communication channel, and used by him to apply a displacement $D(z)$ to mode 2 such to obtain the quantum state $\rho_{\rm tele}$ which, in an ideal case, coincides with the input signal $\sigma$ [@braun1; @furusawa]. The Wigner function of the heterodyne POVM is given by [@rsp] $$W[\Pi_{13}(z)](x_1,y_1;x_3,y_3) = \frac{1}{\pi^2}\: \delta\big(
(x_1 - x_3) + x \big)\: \delta\big( (y_1 + y_3) - y \big)\,,
\label{wig:het}\;$$ with $z=x+iy$, and since, using Wigner functions, the trace between two operators can be written as [@glauber] $$\begin{aligned}
{\rm Tr}\:[O_1\:O_2] = \pi\: \int\!\! d^2 w\: W[O_1](w)\: W[O_2](w)\,,\end{aligned}$$ the heterodyne probability distribution is given by [@oliv] $$\begin{aligned}
p(z) &=& \pi^3 \int\!\!\!\!\int\!\! dx_1\,dy_1
\int\!\!\!\!\int\!\! dx_2\,dy_2
\int\!\!\!\!\int\!\! dx_3\,dy_3 \: W[\sigma](x_3,y_3)\:\nonumber\\
&\mbox{}& \hspace{1cm}
\times W[\rho_{12}](x_1,y_1;x_2,y_2)\:
\nonumber\\
&\mbox{}& \hspace{1.5cm} \times W[\Pi_{13}(z)](x_1,y_1;x_3,y_3)\,
W[\mathbb{I}_2](x_2,y_2)\,,
\label{wigner:het:prob}\;\end{aligned}$$ while the conditional state of mode 2 is $$\begin{aligned}
W[\rho_2(z)](x_2,y_2) &=& \frac{\pi^2}{p(z)}\:
\int\!\!\!\!\int\!\! dx_1\,dy_1 \int\!\!\!\!\int\!\! dx_3\,dy_3 \:
W[\sigma](x_3,y_3)\:
\nonumber\\
&\mbox{}& \hspace{1cm} \times W[\rho_{12}](x_1,y_1;x_2,y_2)\:
\nonumber\\
&\mbox{}& \hspace{1.5cm} \times W[\Pi_{13}(z)](x_1,y_1;x_3,y_3)\,
W[\mathbb{I}_2](x_2,y_2)\,,
\label{wig:rho:cond}\end{aligned}$$ where $W[\mathbb{I}_2](x_2,y_2)=\pi^{-1}$. Thanks to equation (\[wig:het\]) and after the integration with respect to $x_3$ and $y_3$, we have $$\begin{aligned}
W[\rho_2(z)](x_2,y_2) &=& \frac{1}{\pi\,p(z)}\:
\int\!\!\!\!\int\!\! dx_1\,dy_1 \: W[\sigma](x_1+x,-y_1+y)\: \nonumber\\
&\mbox{}& \hspace{2cm} \times
W[\rho_{12}](x_1,y_1;x_2,y_2)\nonumber\\
&=& \frac{1}{\pi\, p(z)}\:
\int\!\!\!\!\int\!\! dx_1\,dy_1 \: W[\sigma](x_1,y_1)\: \nonumber\\
&\mbox{}& \hspace{2cm} \times W[\rho_{12}](x_1-x,-y_1+y;x_2,y_2)\,.
\label{wig:rho:cond:2}\end{aligned}$$ Now we perform the displacement $D(z)$ on mode 2. Since $$W[D(z)\:\rho\:D^{\dag}(z)](x_j,y_j) = W[\rho](x_j-x,y_j-y)\,,$$ we obtain $$\begin{aligned}
W[\rho'_2(z)](x_2,y_2) &=& \frac{1}{\pi\,p(z)}\:
\int\!\!\!\!\int\!\! dx_1\,dy_1 \: W[\sigma](x_1,y_1)\: \nonumber\\
&\mbox{}& \hspace{1cm}\times W[\rho_{12}](x_1-x,-y_1+y;x_2-x,y_2-y)\,.
\label{wig:rho:cond:3bis}\end{aligned}$$ with $\rho'_{2}(z) \equiv D(z)\:\rho_{2}(z)\:D^{\dag}(z)$. The output state of CVQT is obtained integrating equation (\[wig:rho:cond:3bis\]) with respect to all the possible outcomes of heterodyne detection $$W[\rho_{\rm tele}](x_2,y_2)=\int\!\! d^2 z\: p(z)\:W[\rho'_2(z)](x_2,y_2)\,.
\label{tele:out:gen}$$ Finally, when the shared state is the one given in equation (\[wigner:evol:sq\]), equation (\[tele:out:gen\]) rewrites as follows $$\begin{aligned}
W[\rho_{\rm tele}](x_2,y_2) &=& \int\!\!\!\!\int\!\! \frac{dx'\:dy'}{4 \pi
\:\Sigma_{2}\:\Sigma_{3}}\: \nonumber\\
&\mbox{}&\hspace{0.5cm} \times
\exp\left\{ - \frac{(x'-x_2)^2}{4 \Sigma_{3}^2}-
\frac{(y'-y_2)^2}{4 \Sigma_{2}^2}\right\}\: W[\sigma](x',y') \\
&=& \int\!\! \frac{d^2 w}{4 \pi \:\Sigma_{2}\:\Sigma_{3}}\:
\exp\left\{ - \frac{(\Re{\rm e}[w])^2}{4\Sigma_{3}^2} -
\frac{(\Im{\rm m}[w])^2}{4 \Sigma_{2}^2}\right\}\:\nonumber\\
&\mbox{}&\hspace{0.5cm} \times
W[D(w)\:\sigma\:D^{\dag}(w)](x_2,y_2)\,, \label{wig:output}\end{aligned}$$ which shows that the map $\mathcal{L}$, describing CVQT assisted by a TWB propagating through a squeezed-thermal environment, is given by $$\begin{aligned}
\rho_{\rm tele}&\equiv&\mathcal{L}\sigma\nonumber\\
&=& \int\!\! \frac{d^2 w}{4 \pi \:\Sigma_{2}\:\Sigma_{3}}\:
\exp\left\{ - \frac{(\Re{\rm e}[w])^2}{4\Sigma_{3}^2} -
\frac{(\Im{\rm m}[w])^2}{4 \Sigma_{2}^2}\right\}\:\nonumber\\
&\mbox{}&\hspace{0.5cm} \times
D(w)\:\sigma\:D^{\dag}(w)\,,
\label{squeezed:CVQT:map}\;\end{aligned}$$ [*i.e.*]{} the teleportation protocol corresponds to a generalized Gaussian noise. Notice that if $n_{\rm s}\rightarrow 0$, from equations (\[sigma:1234\]), (\[phys:par:M\]) and (\[phys:par:N\]) one has $$\Sigma_2^2, \Sigma_3^2 \rightarrow
\sigma_{-}^2\: e^{-\Gamma t} + \frac{1+2 n_{\rm th}}{4}
(1 - e^{-\Gamma t})\,,$$ which is the noise due to a thermalized quantum channel [@banjpa]. The map (\[squeezed:CVQT:map\]) can be extended to the case of a general Gaussian noise as follows $$\mathcal{L}_{\rm gen}\sigma =
\int\!\! \frac{d^2 w}{\pi \:\sqrt{{\rm det}[\mathbf{C}]}}\:
\exp\left\{ - \underline{w}\:\mathbf{C}\:\underline{w}^{T}
\right\}\: D(w)\:\sigma\:D^{\dag}(w)\,,
\label{squeezed:CVQT:map:gen}\;$$ where $\underline{w}$ is the row vector $\underline{w} = (\Re{\rm e}[w],
\Im{\rm m}[w])$ and $\mathbf{C}$ is the covariance matrix of the noise [@wilson].
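Since the map (\[squeezed:CVQT:map\]) amounts to adding Gaussian noise of variances $2\Sigma_3^2$ and $2\Sigma_2^2$ to the two quadratures, equation (\[tele:squeezed\]) is easily checked by Monte Carlo sampling of the (positive, Gaussian) Wigner functions involved. In the following Python sketch the values of $\Sigma_2$, $\Sigma_3$ and $\zeta$ are arbitrary illustrative numbers:

```python
# Monte Carlo sketch of the map (squeezed:CVQT:map): teleportation through the
# noisy channel adds Gaussian noise of variance 2*Sigma_3^2 to the x quadrature
# and 2*Sigma_2^2 to the y quadrature.  Sigma_2, Sigma_3 and zeta below are
# arbitrary illustrative numbers.
import numpy as np

rng = np.random.default_rng(1)
Sigma2, Sigma3, zeta = 0.35, 0.55, 0.4
n_samp = 200_000

# samples of the (positive, Gaussian) Wigner function of |alpha, zeta>, alpha = 0:
x = rng.normal(0.0, np.exp(-zeta)/2, n_samp)      # var = e^{-2 zeta}/4
y = rng.normal(0.0, np.exp(+zeta)/2, n_samp)      # var = e^{+2 zeta}/4

# random displacements D(w) with Re(w) ~ N(0, 2 Sigma_3^2), Im(w) ~ N(0, 2 Sigma_2^2)
x_out = x + rng.normal(0.0, np.sqrt(2)*Sigma3, n_samp)
y_out = y + rng.normal(0.0, np.sqrt(2)*Sigma2, n_samp)

# compare with the variances read off equation (tele:squeezed)
print(x_out.var(), (np.exp(-2*zeta) + 8*Sigma3**2)/4)
print(y_out.var(), (np.exp(+2*zeta) + 8*Sigma2**2)/4)
```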
Now, in order to use CVQT as a resource for quantum information processing, we look for a class of squeezed states which achieves an average teleportation fidelity greater than the one obtained teleporting coherent states in the same conditions. The Wigner function of the squeezed state $\sigma = {\vert \alpha,\zeta \rangle} {\langle \alpha,\zeta \vert}$, ${\vert \alpha,\zeta \rangle}=D(\alpha)S(\zeta) |0\rangle$, is given by (we assume the squeezing parameter $\zeta$ as real) $$\begin{aligned}
W[\sigma](x_3,y_3) = \frac{2}{\pi}\: \exp\Bigg\{ -
\frac{2\: (x_3 - a)^2}{e^{-2\zeta}} - \frac{2\: (y_3 - b)^2}{e^{2\zeta}} \Bigg\}
\label{w1}\;,\end{aligned}$$ with $a = \Re{\rm e}[\alpha] $, $b = \Im{\rm m}[\alpha] $. Thanks to equations (\[wig:output\]) and (\[w1\]), we have $$\begin{aligned}
W[\rho_{\rm tele}](x,y) =
\frac{2\: \exp \left\{
\displaystyle{
- \frac{2\: (x - a)^2}{e^{-2\zeta} + 8\:
\Sigma_{3}^2} - \frac{2\: (y - b)^{2}}{e^{2\zeta} + 8\: \Sigma_{2}^2}
}
\right\} }{\pi \sqrt{(e^{2\zeta} + 8\: \Sigma_{2}^2) (e^{-2\zeta} + 8\:
\Sigma_{3}^2)}}\: \label{tele:squeezed}\;,\end{aligned}$$ where we suppressed all the subscripts. The average teleportation fidelity is thus given by $$\begin{aligned}
\nonumber
\overline{F}_{\zeta,{\rm tele}}(\lambda,\Gamma, n_{\rm th}, n_{\rm s})
&\equiv& \pi \int\!\!\!\!\int\!\! {\rm d}x\: {\rm d}y\:
W[\sigma](x,y)\: W[\rho_{\rm out}](x,y)\\
&=& \Big( \sqrt{(e^{2\zeta} + 4\: \Sigma_{2}^2)
(e^{-2\zeta} + 4\: \Sigma_{3}^2)} \Big)^{-1}\,,
\label{squeezed:fid:gen}\end{aligned}$$ which attains its maximum when $$\begin{aligned}
\label{zeta:max}
\zeta = \zeta_{\rm max} \equiv
\frac12 \log \left( \frac{\Sigma_2}{\Sigma_3} \right)\,,\end{aligned}$$ and, after this maximization, reads as follows $$\begin{aligned}
\label{squeezed:fid:max}
\overline{F}_{\rm tele}(\lambda,\Gamma, n_{\rm th}, n_{\rm s}) =
\frac{1}{1+4\:\Sigma_2\:\Sigma_3}\,.\end{aligned}$$
For $n_{\rm s} \rightarrow 0$ we have $\Sigma_2 = \Sigma_3$, and thus $\zeta_{\rm max}\rightarrow 0$, [*i.e.*]{} the input state that maximizes the average fidelity (\[squeezed:fid:gen\]) reduces to a coherent state. In other words, in a non-squeezed environment the teleportation of coherent states is more effective than that of squeezed states. Moreover, equation (\[squeezed:fid:max\]) shows that once the TWB becomes separable, [*i.e.*]{} $\Sigma_2^2\:\Sigma_3^2 \ge \frac{1}{16}$ (see equations (\[20\])), one has $\overline{F}_{\rm tele}\leq 0.5$. We recall that when the average fidelity is less than 0.5, the same results can be achieved using [*classical*]{} (non-entangled) shared states [@braun1; @braun2]: in our case, it could be possible to verify the separability of the shared state simply by studying the fidelity achieved in teleporting squeezed states. Notice that the classical limit $\overline{F}_{\rm tele} = 0.5$, which was derived in the case of coherent state teleportation [@braun2], still holds when we wish to teleport a squeezed state with a fixed squeezing parameter. Finally, the asymptotic value of $\overline{F}_{\rm tele}$ for $\Gamma t \rightarrow
\infty$ is $$\begin{aligned}
\overline{F}_{\rm tele}^{(\infty)} = \frac{1}{2\: (1 + n_{\rm th})}\,,\end{aligned}$$ which does not depend on the number of squeezed photons and is equal to $0.5$ only if $n_{\rm th} = 0$. This last result is equivalent to saying that in the presence of a zero-temperature environment, no matter whether it is squeezed or not, the TWB is non-separable at all times.
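As a quick numerical companion to equations (\[zeta:max\])–(\[squeezed:fid:max\]), the following Python sketch (illustrative parameters only; the helper functions are ours) compares the average fidelity obtained with coherent ($\zeta=0$) and optimally squeezed ($\zeta=\zeta_{\rm max}$) input states:

```python
# Minimal sketch of equations (squeezed:fid:gen)-(squeezed:fid:max);
# illustrative parameters only.
import numpy as np

def variances(lam, Gt, n_th, n_s):
    """Sigma_2^2 and Sigma_3^2 of the evolved TWB, equations (sigma:1234)."""
    M = (1 + 2*n_th) * np.sqrt(n_s*(1 + n_s))
    N = n_th + n_s*(1 + 2*n_th)
    sm2 = 0.25*np.exp(-2*lam)
    Dp2 = (1 + 2*N + 2*M)/4 * (1 - np.exp(-Gt))
    Dm2 = (1 + 2*N - 2*M)/4 * (1 - np.exp(-Gt))
    return sm2*np.exp(-Gt) + Dm2, sm2*np.exp(-Gt) + Dp2

def fidelity(zeta, S2, S3):
    """Average teleportation fidelity, equation (squeezed:fid:gen); S2, S3 are Sigma^2."""
    return 1.0/np.sqrt((np.exp(2*zeta) + 4*S2)*(np.exp(-2*zeta) + 4*S3))

lam, n_th, n_s = 1.5, 0.5, 0.3
for Gt in (0.1, 0.5, 1.0):
    S2, S3 = variances(lam, Gt, n_th, n_s)
    zmax = 0.25*np.log(S2/S3)                 # equation (zeta:max), with Sigma^2 inputs
    print(Gt, fidelity(0.0, S2, S3),          # coherent input
          fidelity(zmax, S2, S3),             # optimally squeezed input
          1.0/(1 + 4*np.sqrt(S2*S3)))         # equation (squeezed:fid:max): same value
```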
In figure \[f:fidelity\] we plot $\overline{F}_{\rm tele}$ as a function of $\Gamma t$ for different values of $\lambda$, $n_{\rm th}$ and $n_{\rm
s}$. As $n_{\rm s}$ increases, the nonclassicality of the bath starts to affect the teleportation fidelity and we observe that the best results are obtained when the state to be teleported is the squeezed state that maximizes (\[squeezed:fid:gen\]). Furthermore, the difference between the two fidelities increases as $n_{\rm s}$ increases. Notice that there is an interval of values for $\Gamma t$ such that the coherent state teleportation fidelity is less than the classical limit $0.5$, although the shared state is still entangled.
Teleportation vs direct transmission {#s:comm}
====================================
This section is devoted to investigating whether the results obtained in the previous sections can be used to improve quantum communication using nonclassical states. We consider a communication protocol where information is encoded onto the field amplitude of a set of squeezed states of the form ${\vert \alpha,\zeta \rangle}$ with fixed squeezing parameter. In figure \[f:scheme\] we show a schematic diagram for direct and teleportation-assisted communication. As one can see from the figure, the direct transmission line’s length $L$ is twice the effective length of the teleportation-assisted scheme: this is due to the fact that the two modes of the shared state are chosen to be propagating in opposite directions.
When we directly send the squeezed state (\[w1\]) through a squeezed noisy quantum channel, the state arriving at the receiver is $$\begin{aligned}
W[\rho_{\rm dir}](x,y) =
\frac{2\: \exp \left\{
\displaystyle{
- \frac{2\: (x - a\: e^{-\Gamma t'/2})^2}{e^{-2\zeta- \Gamma t'} + 4
D_{+}^2} - \frac{2\: (y - b\: e^{-\Gamma t'/2})^{2}}{e^{2\zeta
- \Gamma t'} + 4 D_{-}^2}}\right\}}
{\pi \sqrt{(e^{-2\zeta - \Gamma t'} + 4 D_{+}^2) (e^{2\zeta -
\Gamma t'} + 4 D_{-}^2)}}\: \label{evol:squeezed}\;,\end{aligned}$$ with $D_{\pm}^2$, evaluated at time $t'$, given in equation (\[sq:D:pm\]) and time $t'$ is twice the time $t$ implicitly appearing in equation (\[tele:squeezed\]), because of the previously explained choice. Equation (\[evol:squeezed\]) is the Wigner function of the state $\rho_{\rm dir}$, solution of the single-mode Master equation $$\begin{aligned}
\frac{{d}\rho_t}{{d}t} = \{ \Gamma (1+N) L[a]+ \Gamma N L[a^{\dag}] +
\Gamma M \mathcal{M}[a^{\dag}] + \Gamma M^* \mathcal{M}[a] \}\rho_t \,,
\label{sm:master:squeezed}\;\end{aligned}$$ where $\Gamma$, $N$, $M$ and the superoperators $L[O]$ and $\mathcal{M}[O]$ have the same meaning as in equation (\[master:squeezed\]). As in case of quantum teleportation, we can define the direct transmission fidelity (see equation (\[squeezed:fid:gen\])), obtaining $$\begin{aligned}
F_{\zeta,\alpha,{\rm dir}}(\Gamma, n_{\rm th}, n_{\rm s}) =
\frac{\exp \left\{
\displaystyle{
- \frac{a^2 (1 - e^{-\Gamma t'/2})^2}
{2\:\Sigma_{a}^2(t')} -
\frac{b^2 (1 - e^{-\Gamma t'/2})^{2}}
{2\:\Sigma_{b}^2(t')}}\right\}}
{2\sqrt{
\Sigma_{a}^2(t')\:\Sigma_{b}^2(t')}}\: \label{squeezed:dir:fid:gen}\;.\end{aligned}$$ where $$\begin{aligned}
\Sigma_{a}^2(t') &=& \mbox{$\frac14$} e^{-2\zeta}(1 + e^{-\Gamma t'})
+ D_{+}^2(t')\;,\\
\Sigma_{b}^2(t') &=& \mbox{$\frac14$} e^{2\zeta}(1 + e^{-\Gamma t'})
+ D_{-}^2(t')\,.\end{aligned}$$ Since $F_{\zeta,\alpha,{\rm dir}}$ depends on the amplitude $\alpha = a +i
b$ of the state to be transmitted, in order to evaluate the average fidelity here we assume that the transmitter sends squeezed states with a fixed squeezing parameter and with amplitudes distributed according to the Gaussian $$\begin{aligned}
\mathcal{P}(\alpha) = \frac{1}{2 \pi \Delta^2}
\exp\Bigg\{-\frac{|\alpha|^2}{2\Delta^2}\Bigg\}
\label{gaussian:comm:distrib}\,.\end{aligned}$$ The average direct transmission fidelity reads as follows $$\begin{aligned}
\overline{F}_{\zeta,{\rm dir}} &=&
\int\!\! d^2\alpha\: \mathcal{P}(\alpha)\:F_{\zeta,\alpha,{\rm
dir}}(\Gamma, n_{\rm th}, n_{\rm s})\\
&=& \frac12 \Bigg\{\sqrt{
\big[ (1 - e^{-\Gamma t'/2})^2\Delta^2 + \Sigma_{a}^2(t')\big]
\big[ (1 - e^{-\Gamma t'/2})^2\Delta^2 + \Sigma_{b}^2(t')\big]
}\Bigg\}^{-1}
\label{direct:fid:gen}\;,\end{aligned}$$ which, for $\Gamma t \rightarrow \infty$ and using equation (\[zeta:max\]), reduces to $$\begin{aligned}
\overline{F}_{{\rm dir}}^{(\infty)} =
\frac12 \Big(\sqrt{g_{+}(n_{\rm th}, n_{\rm s})\:
g_{-}(n_{\rm th}, n_{\rm s})}\Big)^{-1} \,,\end{aligned}$$ with $$\begin{aligned}
g_{\pm}(n_{\rm th}, n_{\rm s}) &=&
\frac14 \Bigg[
1 + \sqrt{1 + 8\: n_{\rm s}(1 + n_{\rm s}) \pm 4\:(1 + 2\:n_{\rm s})
\sqrt{n_{\rm s}(1 + n_{\rm s})}}
\nonumber\\
&\mbox{}&\hspace{0.5cm}
+ 2 \Big(
n_{\rm s} + n_{\rm th} + 2\: n_{\rm s} n_{\rm th} \pm (1 + 2\: n_{\rm th})
\sqrt{n_{\rm s }(1 + n_{\rm s})}
\Big)
\Bigg] + \Delta^{2}\,.\nonumber\end{aligned}$$
Teleportation is a good resource for quantum communication in a noisy channel when $\overline{F}_{\zeta,{\rm tele}}\geq\overline{F}_{\zeta,{\rm dir}}$, which gives a threshold $\Delta_{\rm th}^2$ on the width $\Delta^2$ of the distribution (\[gaussian:comm:distrib\]) $$\begin{aligned}
\Delta_{\rm th}^2(\lambda,\Gamma, n_{\rm th}, n_{\rm s}) &=&
\frac{1}{2\:(1 - e^{-\Gamma t})^2}
\bigg\{
-\big[\Sigma_{a}^2(2t) + \Sigma_{b}^2(2t)\big]\nonumber\\
&\mbox{}& \hspace{0.5cm}
+ \sqrt{\big[\Sigma_{a}^2(2t) - \Sigma_{b}^2(2t)\big]^2 +
\big( \overline{F}_{\zeta,{\rm tele}} \big)^{-2}}
\bigg\}
\label{Delta:threshold}\;,\end{aligned}$$ where $\overline{F}_{\zeta,{\rm tele}}$ of Eq. (\[squeezed:fid:gen\]) is evaluated at time $t$ and, then, $t' = 2t$.
In figure \[f:comm\] we plot $\overline{F}_{\zeta,{\rm tele}}$ and $\overline{F}_{\zeta,{\rm dir}}$ with $\zeta = \zeta_{\rm max}$ for different values of the other parameters. We see that teleportation is an effective and robust resource for communication as the channel becomes more noisy and $\Delta^2$ larger. Moreover, when $n_{\rm th}, n_{\rm s}
\rightarrow 0$, one obtains the following finite value for the threshold $$\begin{aligned}
\Delta_{\rm th}^2(\lambda,\Gamma,0,0) = \frac{e^{\Gamma t} - 1 + e^{-2
\lambda}}{2\:e^{-\Gamma t}(1 - e^{-\Gamma t})^2}\,,\end{aligned}$$ [*i.e.*]{} teleportation assisted communication can be more effective than direct transmission even for pure dissipation at zero temperature.
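The comparison between the two schemes can also be reproduced numerically. The Python sketch below (illustrative parameters only; the helper functions are ours) evaluates $\overline{F}_{\zeta,{\rm tele}}$ at time $t$ and $\overline{F}_{\zeta,{\rm dir}}$ at $t'=2t$ with $\zeta=\zeta_{\rm max}$, and shows teleportation overtaking direct transmission once $\Delta^2$ is large enough:

```python
# Minimal sketch comparing equation (squeezed:fid:gen) (teleportation at time t,
# zeta = zeta_max) with equation (direct:fid:gen) (direct transmission over
# t' = 2t); illustrative parameters only.
import numpy as np

def bath(n_th, n_s):
    return (1 + 2*n_th)*np.sqrt(n_s*(1 + n_s)), n_th + n_s*(1 + 2*n_th)   # M, N

def F_tele(lam, Gt, n_th, n_s):
    """Maximized teleportation fidelity and the optimal squeezing zeta_max."""
    M, N = bath(n_th, n_s)
    sm2 = 0.25*np.exp(-2*lam)
    S2 = sm2*np.exp(-Gt) + (1 + 2*N - 2*M)/4*(1 - np.exp(-Gt))   # Sigma_2^2
    S3 = sm2*np.exp(-Gt) + (1 + 2*N + 2*M)/4*(1 - np.exp(-Gt))   # Sigma_3^2
    return 1/(1 + 4*np.sqrt(S2*S3)), 0.25*np.log(S2/S3)

def F_dir(zeta, Delta2, Gtp, n_th, n_s):
    """Average direct-transmission fidelity, equation (direct:fid:gen)."""
    M, N = bath(n_th, n_s)
    Dp2 = (1 + 2*N + 2*M)/4*(1 - np.exp(-Gtp))
    Dm2 = (1 + 2*N - 2*M)/4*(1 - np.exp(-Gtp))
    Sa2 = 0.25*np.exp(-2*zeta)*(1 + np.exp(-Gtp)) + Dp2
    Sb2 = 0.25*np.exp(+2*zeta)*(1 + np.exp(-Gtp)) + Dm2
    w = (1 - np.exp(-Gtp/2))**2 * Delta2
    return 0.5/np.sqrt((w + Sa2)*(w + Sb2))

lam, n_th, n_s, Gt = 1.5, 0.5, 0.3, 0.3
Ft, zmax = F_tele(lam, Gt, n_th, n_s)
for Delta2 in (0.5, 5.0, 20.0):
    # direct transmission wins for small Delta2, teleportation for large Delta2
    print(Delta2, Ft, F_dir(zmax, Delta2, 2*Gt, n_th, n_s))
```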
Conclusions {#s:conclusions}
===========
In this work we have studied the propagation of a TWB through a Gaussian quantum noisy channel, either thermal or squeezed-thermal, and have evaluated the threshold time after which the state becomes separable. Moreover, we have explicitly found the completely positive map for the teleported state using the Wigner formalism.
We have found that the threshold for a squeezed environment is always shorter than for a purely thermal one. On the other hand, we have shown that squeezing the channel is a useful resource when entanglement is used for teleportation of squeezed states. In particular, we have found the class of squeezed states which optimize teleportation fidelity. The squeezing parameter of such states depends on the channel parameters themselves. In these conditions, the teleportation fidelity is always larger than the one achieved by teleporting coherent states. Moreover, there are no regions of useless entanglement, [*i.e.*]{} the fidelity approaches the classical limit $\overline{F}=0.5$ when the TWB becomes separable.
Finally, we have found regimes where the optimized teleportation of squeezed states can be used to improve the transmission of amplitude-modulated signals through a squeezed-thermal noisy channel. The transmission performances have been investigated by means of input-output fidelity, comparing the direct transmission with the teleportation one. Actually, decoherence mechanisms are different between these two channels: in the teleportation channel the fidelity is reduced due to the interaction of the TWB with the squeezed-thermal bath; in direct transmission the signal is directly coupled with the non-classical environment and, then, fidelity is affected by the degradation of the signal itself. The performance of CVQT as a quantum communication channel in nonclassical environment obviously depends on the parameters of the channel itself, but our analysis has shown that if the signal is drawn from the class of squeezed states that optimize teleportation fidelity, and the probability distribution of the transmitted state amplitudes is wide enough, then teleportation is more effective and robust as the environment becomes more noisy.
[99]{} C. H. Bennett et al., Phys. Rev. Lett [**70**]{}, 1895 (1993). D. Wilson, J. Lee and M. S. Kim, quant-ph/0206197 v3 (2002). J. Lee, M. S. Kim and H. Jeong, Phys. Rev A [**62**]{}, 032305 (2000). J. S. Prauzner-Bechcicki, quant-ph/0211114 v2 (2003). A. Vukics, J. Janszky and T. Kobayashi, Phys. Rev. A [**66**]{}, 023809 (2002). W. P. Bowen et al., Phys. Rev. A [**67**]{}, 032302 (2003). M. G. A Paris, [*Entangled light and applications*]{} in [*Progress in Quantum Physics Research*]{}, V. Krasnoholovets Ed., Nova Publisher, in press. M. Ban, M. Sasaki and M. Takeoka, J. Phys. A: Math. Gen. [**35**]{}, L401 (2002). M. Takeoka, M. Ban and M. Sasaki, J. Opt. B: Quantum Semiclass. Opt. [**4**]{}, 114 (2002). K. S. Grewal, Phys Rev A [**67**]{}, 022107 (2003). M. A. Nielsen and I. L. Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, 2000). S. Olivares and M. G. A. Paris, [*Binary optical communication in single-mode and entangled quantum noisy channels*]{}, preprint quant-ph/0309096 T. A. B. Kennedy and D. F. Walls, Phys. Rev. A [**37**]{}, 152 (1988). W. J. Munro and M. D. Reid, Phys. Rev. A [**52**]{}, 2388 (1995). P. Tombesi and D. Vitali, Phys. Rev. A [**50**]{}, 4253 (1994). S. G. Clark and A. S. Parkins, Phys. Rev. Lett. [**90**]{}, 047905 (2003). S. G. Clark, A. Peng, M. Gu and S. Parkins, quant-ph/0307064. N. Lütkenhaus, J. I. Cirac and P. Zoller, Phys. Rev. A [**57**]{}, 548 (1998). D. F. Walls and G. J. Milburn, [*Quantum Optics*]{} (Springer-Verlag, Berlin, 1994). A. Peres, Phys. Rev. Lett. [**77**]{}, 1413-1415 (1996). P. Horodecki, M. Lewenstein, G. Vidal and I. Cirac, Phys. Rev. A [**62**]{}, 032310 (2000). Lu-Ming Duan, G. Giedke, J. I. Cirac and P. Zoller, Phys. Rev. Lett. [**84**]{} 2722 (2000). R. Simon, Phys. Rev. Lett. [**84**]{} 2726 (2000). S. L. Braunstein and H. J. Kimble, Phys. Rev. Lett. [**80**]{}, 869 (1998). A. Furusawa et. al., Science [**282**]{}, 706 (1998). M. G. A. Paris, M. Cola and R. Bonifacio, J. Opt. B [**5**]{}, S360 (2003). K. Cahill and R. Glauber, Phys. Rev. [**177**]{}, 1857 (1969). S. Olivares, M. G. A. Paris and R. Bonifacio, Phys. Rev. A [**67**]{}, 032314 (2003). S. L. Braunstein, C. A. Fuchs and H. J. Kimble, J. Mod. Opt. [**47**]{}, 267 (2000).
![Plots of the ratio $G = (t_{\rm s}-t_{0})/t_{0}$ as a function of the number of squeezed photons $n_{\rm s}$ for different values of the TWB parameter $\lambda$ and of the number of thermal photons $n_{\rm th}$. The values of $n_{\rm th}$ are chosen to be: (a) $n_{\rm th} = 10^{-6}$, (b) $10^{-3}$, (c) $10^{-1}$ and (d) $1$, while the solid lines, from bottom to top, refer to $\lambda$ varying from 0.1 to 1.0 with steps of 0.15.[]{data-label="f:threshold"}](threshold.eps){width="70.00000%"}
![Plots of the average teleportation fidelity. The solid and the dashed lines represent squeezed and coherent state fidelity, respectively, for different values of the number of squeezed photons $n_{\rm s}$: (a) $n_{\rm s} = 0$, (b) $0.1$, (c) $0.3$, (d) $0.7$. In all the plots we put the TWB parameter $\lambda = 1.5$ and number of thermal photons $n_{\rm th}
= 0.5$. The dot-dashed vertical line indicates the threshold $\Gamma
t_{\rm s}$ for the separability of the shared state: when $\Gamma t >
\Gamma t_{\rm s}$ the state is no longer entangled. Notice that, in the case of squeezed state teleportation, the threshold for the separability corresponds to $\overline{F} = 0.5$.[]{data-label="f:fidelity"}](fidelity.eps){width="70.00000%"}
![ Direct and teleportation-assisted transmission. (a) In direct transmission, the sender directly sends state $\sigma$ through the Gaussian noisy channel: the state arriving at the receiver is $\rho_{\rm dir}$. (b) In teleportation-assisted transmission, the sender mixes at the balanced beam splitter BS the state $\sigma$ to be transmitted with one of the two modes of the shared state, arriving from the Gaussian noisy channel, and then he measures the quadratures $x$ and $y$, respectively, of the output modes. This result is classically communicated to the receiver, who applies a displacement $D(z)$, $z=x+iy$, to the output state, obtaining $\rho_{\rm tele}$ (see section \[s:tele\] for details). Notice that the length of the direct transmission line is twice the effective length of the teleportation-assisted transmission one.[]{data-label="f:scheme"}](scheme.eps){width="70.00000%"}
![Plots of average teleportation (equation (\[squeezed:fid:gen\])) and direct communication (equation (\[direct:fid:gen\])) fidelity as functions of $\Gamma t$ for different values of $n_{\rm th}$ and $n_{\rm
s}$: (a) $n_{\rm th} = n_{\rm s} = 0$; (b) $n_{\rm th} = 0.3$ and $n_{\rm
s} = 0$; (c) $n_{\rm th} = 0.5$ and $n_{\rm s} = 0$; (d) $n_{\rm th} = 0.5$ and $n_{\rm s} = 0.3$. In all the plots the solid line refers to $\overline{F}_{\rm tele}$ with $\lambda = 1.5$, whereas the dashed lines are $\overline{F}_{\rm dir}$ with (from top to bottom) $\Delta^2 = 0.1,
0.5, 1, 5 $. The squeezing parameter is chosen to be $\zeta = \zeta_{\rm
max}$, which maximizes teleportation fidelity. Notice that direct transmission fidelity is evaluated at a time equal to twice the teleportation time (see the scheme in figure \[f:scheme\]). The dot-dashed vertical line indicates the threshold $\Gamma t_{\rm s}$ for the separability of the shared state used in teleportation.[]{data-label="f:comm"}](comm.eps){width="70.00000%"}
|
---
author:
- |
Ziyang Hu[^1]\
D.A.M.T.P.\
University of Cambridge
bibliography:
- 'submersion.bib'
title: |
On the General Problem of\
Structure-Preserving Submersions
---
Introduction {#sec:introduction}
============
In physics and other fields where we use a geometrical space as a model for doing computations, situations that can be roughly described as “structure-preserving submersions” arise frequently. The archetypal example is given by the problem of rigid motion in classical mechanics where the geometrical structure of the moving body is preserved by the motion. In [@me:000] we see that a straightforward generalisation to special relativity is possible. Many techniques have been devised to solve these problems. However, these techniques are adapted to the particular problem at hand and, for example, techniques useful in the study of classical mechanical rigidity would not be very useful in the study of relativistic rigidity.
In this paper we describe a general geometrical framework encompassing all of the geometrical problems that can be described as a “structure-preserving submersion”. This programme shall enable us to study many properties common to all such problems in an abstract way, while at the same time understanding more deeply where many of the differences that do occur in the different problems arise. We shall also see that this framework gives a very robust foundation for doing computations, and problems that were computationally too expensive to carry out before are now much simpler. Most importantly, this framework gives us a very clear *geometrical* picture of what is happening in a problem, and hence can effectively guide us through the computations without getting lost in the details.
Our plan for this paper is as follows. In section \[sec:structures-manifold\] we will first describe how to formulate the concept of a “manifold with structure” within the Cartan framework, which covers all classical geometries and their inhomogeneous generalisations, for example, Riemannian geometry. In section \[sec:subm-immers-topol\] we use the framework to first discuss how structure-preserving immersion fits into our programme, then go into structure-preserving submersion proper. The reason for dealing with immersion first is that this problem is simpler but shares some structural similarities with the submersion case, so that we can gain some experience, and more importantly, as we shall see, every submersion problem contains a family of immersion problems. We then give many simple examples of how familiar problems can be formulated in our framework, especially the examples of Riemannian and conformal submersions. Next, in section \[sec:riemannian-geometry\], we use our framework to derive some very rigid constraints on the forms of rigid flows in Riemannian spaces (and with a few changes of sign, pseudo-Riemannian spacetimes). Our results will include a form of generalisation of the Herglotz-Noether theorem, namely that a rotational rigid motion in conformally flat space must be isometric in the Riemannian case, and a rotational conformally rigid motion in flat space must be conformally isometric in the conformal geometry case. This is a further extension to the result derived in [@me:000], where more basic results are obtained using more down-to-earth methods but with much less theoretical underpinning. This will show the utility of our approach to solving problems. Along the way we will discuss several further directions worth pursuing, which we will summarise in section \[sec:conclusion\].
Structures on a manifold {#sec:structures-manifold}
========================
In order to define a *structure-preserving submersion* on a manifold we first need to understand what it means for a manifold to have a geometrical structure. A smooth manifold can be locally represented by ${\mathbb{R}}^{n}$, which by itself is assumed to have no structure at all, and the simplest way to give it a structure is to specify the usual Euclidean metric on it. A general way of specifying the geometrical structure is to consider the transformation group preserving this geometrical structure instead: this is Klein’s Erlangen programme. However, Klein’s programme considers only *homogeneous* geometrical structures, whereas in our application this condition is obviously too restrictive, and we need the full power of Cartan’s extension of *espaces généralisés*, which we shall now describe.
Cartan geometry {#sec:cartan-geometry}
---------------
Let $M$ be a manifold with a geometrical structure $\Gamma$ defined on it. A (local) *admissible transformation* $f:M\rightarrow M$ is a (local) automorphism such that $f^{*}\Gamma = \Gamma$. We assume that the set of all admissible transformations $f$ locally has the structure of a Lie group $G$ — this is not a very restrictive assumption at all. We assume further that $G$ acts transitively (otherwise we will simply use a quotient group) on $M$ on the right, and for any point $x\in M$, the isotropy groups $H_{x}$, consisting of those $f$ such that $f(x)=x$, are all isomorphic, so we write $H_{x}\cong H$. Then the geometrical property $\Gamma$ is encoded in a very particular coframing on the principal $H$-bundle over $M$. First recall that a principal $H$ bundle is a smooth fibre bundle $\xi = (P,M,\pi,H)$ together with a right action $P\times H\rightarrow P$ that is fibre preserving and acts simply transitively on each fibre. Note that the fibre is isomorphic to the group, but which point in a fibre to choose as the identity element is arbitrary. This arbitrariness is captured by the right action. We are now ready to give the definition of a *Cartan geometry*:
Let $G$ act locally transitively on $M$ with isotropy group $H$. Let $\mathfrak{g}$ and $\mathfrak{h}$ be their respective Lie algebras. Then the Cartan geometry $\xi=(P,\omega)$ on $M$ modelled on $(\mathfrak g, \mathfrak h)$ consists of the following data:
- the smooth manifold M;
- the principal right $H$ bundle $P$ over $M$;
- the $\mathfrak g$-valued 1-form $\omega$ on $P$, called the *Cartan connection*, satisfying
1. for each point $p\in P$, the linear map $\omega_{p}:T_{p}(P)\rightarrow\mathfrak g$ is an isomorphism;
2. $(R_{h})^{*}\omega=\operatorname{Ad}(h^{-1})\omega$ for all $h\in H$ where $R$ denotes the right action of a group on itself and $\operatorname{Ad}$ denotes the adjoint action;
3. $\omega(X^{\dagger})=X$ for all $X\in\mathfrak h$ where $X^{\dagger}$ is the vector field on $P$ generated by the infinitesimal group action $X\in\mathfrak g$.
The $\mathfrak g$-valued 2-form $\Omega=d\omega+\tfrac{1}{2}[\omega\wedge\omega]$ is called the *curvature*. If $\Omega$ takes values in the subalgebra $\mathfrak h$, then the geometry is called *torsion free*. If $\Omega=0$, then the geometry is *flat*.
When a geometry is flat, then $P\cong G$ and $\omega$ is the left-invariant Maurer-Cartan form on $G$. Even in the non-flat case, condition 3 above shows that restricted to each fibre $\omega$ is still the Maurer-Cartan form on $H$. The curvature measures the extent of the deformation of $P$ from the Lie group $G$. In the flat case, $M\cong G/H$ is a homogeneous space.
We will often need the notion of basic and semibasic forms. Let $H\rightarrow E\xrightarrow{\pi}M$ be a fibre bundle over $M$ with group action $H$. Then for the space of forms we have $\pi^{*}:\Lambda^{p}(M)\hookrightarrow\Lambda^{p}(E)$. For $\omega\in\Lambda^{p}(E)$, $\omega$ is *basic* if it lies in the image of $\pi^{*}$, whereas it is *semi-basic* if $\omega(V_{1},\dots,V_{p})=0$ whenever $V_{1}$ is tangential to a fibre. An easy criterion for a form to be basic is that it is semi-basic and right $H$-invariant.
Examples: Riemannian geometry and conformal geometry {#sec:exampl-riem-geom}
----------------------------------------------------
We now have all the abstract machinery we will need from Cartan’s geometry. Next we will give two key examples of how to apply this programme. These examples will be used later when we construct immersions and submersions on them.
\[Riemannian geometry in the Cartan framework\] Let us see how we can capture the information contained in a Riemannian metric in the Cartan framework. We will take the model to be $$\begin{aligned}
\label{eq:1}
G&=\left\{
\begin{pmatrix}
1&0\\
V&A
\end{pmatrix}\in M_{n+1}({\mathbb{R}})\mid A\in O(n)
\right\}\\
H&=\left\{
\begin{pmatrix}
1&0\\
0&A
\end{pmatrix}\in M_{n+1}({\mathbb{R}})\mid A\in O(n)
\right\}\\
\mathfrak g &=\left\{
\begin{pmatrix}
0&0\\
v&a
\end{pmatrix}\in M_{n+1}({\mathbb{R}})\mid a\in \mathfrak{o}(n)
\right\}\\
\mathfrak h &=\left\{
\begin{pmatrix}
0&0\\
0&a
\end{pmatrix}\in M_{n+1}({\mathbb{R}})\mid a\in \mathfrak{o}(n)
\right\}\end{aligned}$$ i.e., for the Lie algebra part, $a+a^{t}=0$. Using the form of the Lie algebra, the Cartan connection and curvature are $$\begin{aligned}
\label{eq:3}
\omega&=
\begin{pmatrix}
0&0\\
\theta&\alpha
\end{pmatrix}\\
\Omega&=
\begin{pmatrix}
0&0\\
d\theta+\alpha\wedge\theta&d\alpha+\alpha\wedge\alpha
\end{pmatrix}\end{aligned}$$ The forms $\theta$ are basic, whereas the forms $\alpha$ are tangential to the fibre. We also see that the torsion free condition is just the usual “Cartan’s first structural equations”: $$\label{eq:4}
d\theta=-\alpha\wedge\theta$$ or, if we choose a suitable basis for $\mathfrak g$ and put on some indices (Einstein’s summation convention applies), $$\label{eq:5}
d\theta^{i}=-\alpha^{i}{}_{j}\wedge\theta^{j}.$$ Therefore the data captured by a Riemannian metric is also captured by the 1-forms $\theta^{i}$ and $\alpha^{i}{}_{j}$ satisfying the torsion-free condition . For flat space (Euclidean space), we of course also have $$\label{eq:6}
d\alpha^{i}{}_{j}=-\alpha^{i}{}_{k}\wedge\alpha^{k}{}_{j}.$$
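As a concrete check of the torsion-free condition (\[eq:5\]), the following short sympy sketch verifies the first structural equation for a standard example of our own choosing, the round 2-sphere with orthonormal coframe $\theta^1=d\phi$, $\theta^2=\sin\phi\, d\psi$ and connection form $\alpha^1{}_2=-\cos\phi\, d\psi$:

```python
# A check of the first structural equation (eq:5) for a standard example of our
# own choosing: the round 2-sphere with orthonormal coframe theta^1 = dphi,
# theta^2 = sin(phi) dpsi and connection form alpha^1_2 = -cos(phi) dpsi.
# A 1-form f dphi + g dpsi is represented by the pair (f, g); 2-forms by their
# coefficient of dphi ^ dpsi.
import sympy as sp

phi, psi = sp.symbols('phi psi')

def d(w):
    """Exterior derivative of a 1-form (f, g), as the coefficient of dphi ^ dpsi."""
    f, g = w
    return sp.simplify(sp.diff(g, phi) - sp.diff(f, psi))

def wedge(a, b):
    """Wedge product of two 1-forms, as the coefficient of dphi ^ dpsi."""
    return sp.simplify(a[0]*b[1] - a[1]*b[0])

theta = {1: (1, 0), 2: (0, sp.sin(phi))}
alpha = {(1, 1): (0, 0), (1, 2): (0, -sp.cos(phi)),
         (2, 1): (0, sp.cos(phi)), (2, 2): (0, 0)}       # alpha is o(2)-valued

for i in (1, 2):
    lhs = d(theta[i])
    rhs = -sum(wedge(alpha[(i, j)], theta[j]) for j in (1, 2))
    print(i, sp.simplify(lhs - rhs))                     # prints 0 for both indices
```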
A conformal structure on a manifold $M$ is an equivalence class of Riemannian structures on $M$: two Riemannian metrics $g$ and $g'$ belong to the same class if and only if there is a function $\Lambda:M\rightarrow {\mathbb{R}}$ such that $g'=\Lambda g$. We need to encode this information into a linear Lie group acting on the tangent space, and it is fairly complicated, as shown below.
We will use the Möbius model: let $\mathbb{L}={\mathbb{R}}^{n+2}$ be equipped with the indefinite metric $$\label{eq:42}
\Sigma_{n+1,1}=
\begin{pmatrix}
0&0&-1\\
0&I_{n}&0\\
-1&0&0
\end{pmatrix}$$ this is the so-called light-cone version of the metric. Without regard of the metric, we can form the projective space $\mathbb{P}(\mathbb{L})$ of $\mathbb{L}$, i.e., two points are identified if and only if they lie on the same straight line through the origin. Using the metric now, we define the *Möbius $n$-space* $M_{0}$ to be the set of lightlike points in $\mathbb{P}(\mathbb{L})$. Note that the affine map ${\mathbb{R}}^{n+1}\rightarrow\mathbb{L}$ $$\label{eq:43}
(y_{0},y_{1},\dots,y_{n})\mapsto\left(\frac{1}{\sqrt{2}}(1+y_{0}),y_{1},\dots,y_{n},\frac{1}{\sqrt{2}}(1-y_{0})\right)$$ induces a diffeomorphism between the $n$-sphere and the Möbius $n$-space, hence also the terminology *Möbius $n$-sphere*.
The Lorentz group is the symmetry group of the metric $\Sigma_{n+1,1}$ fixing the origin, and the symmetry group of the Möbius $n$-sphere is a subgroup. The kernel of the group action of Lorentz group on $M_{0}$ is $\{\pm I\}$. For the isotropy subgroup, we will take the one fixing the north pole of the Möbius sphere. Hence we will take as our model the *Möbius model* $$\begin{aligned}
\label{eq:44}
G&=O(n+1,1)/\{\pm 1\}\\
H&=\{h\in G\mid h[e_{0}]=[e_{0}]\},\qquad\text{$[e_{0}]$ refers to the north pole.}\end{aligned}$$ The matrix representation of $H$ is given by $$\label{eq:45}
H=\left\{
\begin{pmatrix}
z&0&0\\
0&a&0\\
0&0&z^{-1}
\end{pmatrix}
\begin{pmatrix}
1&q&r\\
0&1&q^{t}\\
0&0&1
\end{pmatrix}
\mid
z\in{\mathbb{R}}^{+},a\in O(n),q^{t}\in{\mathbb{R}}^{n},r=\frac{1}{2}qq^{t}.
\right\}$$ and for the Lie algebra pair $(\mathfrak g,\mathfrak h)$, we will take $$\begin{aligned}
\label{eq:46}
\mathfrak g&=\left\{
\begin{pmatrix}
z&q&0\\
p&s&q^{t}\\
0&p^{t}&-z
\end{pmatrix}
\mid s+s^{t}=0\right\}
\\
\mathfrak h&=
\left\{
\begin{pmatrix}
z&q&0\\
0&s&q^{t}\\
0&0&-z
\end{pmatrix}
\mid s+s^{t}=0\right\}\end{aligned}$$ Note that the adjoint action of $h\in H$ on $p\in\mathfrak g/\mathfrak h$ is $$\label{eq:52}
\operatorname{Ad}(h)p=az^{-1}p$$ as expected, i.e., rotation and scaling.
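As a quick sanity check of the matrix form (\[eq:46\]) (our own numerical verification, here for $n=3$ with random entries), one can confirm that such matrices lie in $\mathfrak{o}(n+1,1)$ for the light-cone metric (\[eq:42\]), i.e. satisfy $X^{T}\Sigma_{n+1,1}+\Sigma_{n+1,1}X=0$:

```python
# Quick numerical verification (our own check, n = 3, random entries) that
# matrices of the form (eq:46) lie in o(n+1,1) for the light-cone metric
# Sigma_{n+1,1} of (eq:42), i.e. satisfy X^T Sigma + Sigma X = 0.
import numpy as np

n = 3
rng = np.random.default_rng(0)

Sigma = np.zeros((n + 2, n + 2))
Sigma[1:n + 1, 1:n + 1] = np.eye(n)
Sigma[0, -1] = Sigma[-1, 0] = -1.0

z = rng.normal()
p = rng.normal(size=(n, 1))
q = rng.normal(size=(1, n))
s = rng.normal(size=(n, n)); s = s - s.T                 # s + s^T = 0

X = np.block([[np.array([[z]]),  q,   np.zeros((1, 1))],
              [p,                s,   q.T],
              [np.zeros((1, 1)), p.T, np.array([[-z]])]])

print(np.allclose(X.T @ Sigma + Sigma @ X, 0.0))         # True
```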
With $(\mathfrak g,\mathfrak h)$ and $H$, we are now able to define a Cartan geometry modelled on Möbius $n$-sphere. We will denote the Cartan connection by $$\label{eq:47}
\omega=
\begin{pmatrix}
\epsilon&\nu&0\\
\theta&\alpha&\nu^{t}\\
0&\theta^{t}&-\epsilon
\end{pmatrix}$$ note that the forms $\theta$ play the same role as the Riemannian $\theta$: they are basic for the canonical projection of the bundle. The curvature is $$\begin{aligned}
\label{eq:49}
\Omega&=
\begin{pmatrix}
E&Y&0\\
\Theta&A&Y^{t}\\
0&\Theta^{t}&-E
\end{pmatrix}\\
&=
\begin{pmatrix}
d\epsilon+\nu\wedge\theta&d\nu+\epsilon\wedge\nu+\nu\wedge\alpha&0\\
d\theta+\theta\wedge\epsilon+\alpha\wedge\theta&d\alpha+\theta\wedge\nu+\alpha\wedge\alpha+\nu^{t}\wedge\theta^{t}&\star\\
0&\star&\star
\end{pmatrix}\end{aligned}$$ The $A$ block of the curvature plays the same role as the curvature $d\alpha+\alpha\wedge\alpha$ of the torsion-free Riemannian case. Call this part of the Lie algebra the $\mathfrak{s}$-block. We also define the Ricci homomorphism, the abstract version of the Ricci contraction of the Riemann tensor, defined in the present context by $$\begin{aligned}
\label{eq:50}
\operatorname{Ricci}:\hom(\Lambda^{2}(\mathfrak g/\mathfrak h),\mathfrak s)&\cong\Lambda^{2}(\mathfrak g/\mathfrak h)^{*}\otimes\mathfrak s\\
&\xrightarrow{id\otimes\operatorname{ad}}\Lambda^{2}(\mathfrak g/\mathfrak h)^{*}\otimes\mathfrak g/\mathfrak h\otimes(\mathfrak g/\mathfrak h)^{*}\\
&\xrightarrow{\text{contract}\otimes id}(\mathfrak g/\mathfrak h)^{*}\otimes(\mathfrak g/\mathfrak h)^{*}\end{aligned}$$ In the above, $\operatorname{ad}$ is the adjoint action of $\mathfrak s$ on $\mathfrak g/\mathfrak h$. This is the usual interpretation of $\mathfrak s$ as a matrix of transformations. The last step is the map $$\label{eq:51}
t^{*}\wedge u^{*}\otimes v\otimes w^{*}\mapsto (u^{*}(v)t^{*}-t^{*}(v)u^{*})\otimes w^{*}$$ If the curvature blocks $E=0$ (no scaling curvature), $\Theta=0$ (no torsion), and $A$ is in the kernel of $\operatorname{Ricci}$, then the Möbius geometry is called *normal*. *Normal Möbius geometries are in 1-1 correspondence with conformal metrics on a manifold*. This completes the formulation of conformal geometry in the Cartan language.
Let us remark that the term *Cartan geometry* does not stand for a single, concrete model of geometry, as in the case of Euclidean geometry. Rather, it is an abstract geometrical language for describing other geometries. In particular, we should note that though in a concrete problem the Cartan connection $\omega$ can be given by an explicit expression, it is more useful in the Cartan framework to first study its properties abstractly, without any particular expression attached, just as we can study the properties of a Riemannian metric in Riemannian geometry without giving it any particular expression. The Cartan connection is the central concept of Cartan geometry, as it encodes all data about the geometry.
Lifting and the method of equivalence {#sec:lift-into-princ}
-------------------------------------
In fact, the theory of Cartan geometry arises as an application of Cartan’s algorithm for solving equivalence problems. The method of equivalence is, however, more general, and solves a large class of problems in differential systems. It is very powerful due to its algorithmic nature: once a problem can be formulated in its terms a solution is sure to be found, though the computation may be so complicated that the use of a computer is required.
We shall now briefly outline the method of equivalence so as to make a more computational interpretation of our later manipulations possible. We will gloss over a large number of technical issues, since for our needs keeping a simple picture in mind is sufficient, especially when we discuss in section \[sec:equiv-meth-fails\] why the structure-preserving submersion problem *cannot* be solved this way. For further details of the method, see [@LBryant:1991p8950; @Fels:1998p7335; @Fels:1999p7361].
Basically, the method of equivalence is concerned with the following question: given two manifolds $M$ and $N$, each with a set of differential 1-forms $\theta_{M}^{i}$ and $\theta_{N}^{i}$ on it, does there exist a diffeomorphism $\Phi:M\rightarrow N$ such that $$\label{eq:78}
\Phi^{*}\theta_{N}^{i}=g^{i}{}_{j}\theta_{M}^{j},$$ where $g^{i}{}_{j}$ takes values in a specified Lie group $G$ acting linearly on the column vector of 1-forms? It is neither assumed that the 1-forms are linearly independent nor that they span the cotangent space, though in the case where they do not span the cotangent space we need to complement them with other forms. The 1-forms encode the geometrical structure in the problem with respect to the cotangent space, and the Lie group $G$ is simply the group that preserves this structure. For example, for Riemannian geometry *an* orthonormal coframe encodes the same amount of information as the metric: we can write $g=\sum\theta^{i}\otimes\theta^{i}$ for some 1-forms $\theta^{i}$, using the Gram-Schmidt algorithm. The group $G$ is then simply the orthogonal group acting on the cotangent space in the usual way, and formulated in this way, this problem is the same as deciding whether two manifolds are “the same” within the realm of Riemannian geometry, the problem Riemann first studied. However, this condition is not symmetric in $M$ and $N$, and it is cumbersome to have elements of $G$ explicitly around, so we use the following method: we “lift” the problem into the spaces $M\times G$ and $N\times G$, and define the lifted 1-forms by $\omega^{i}_{M}=g\cdot\pi_{M}^{*}\theta^{i}_{M}$ and $\omega^{i}_{N}=g\cdot\pi_{N}^{*}\theta^{i}_{N}$, where $\pi_{M}:M\times G\rightarrow M$ is the projection and similarly for $N$. It can then be proved that the original problem is satisfied if and only if there exists a diffeomorphism $\tilde \Phi:M\times G\rightarrow N\times G$ such that $\tilde\Phi^{*}\omega_{N}^{i}=\omega_{M}^{i}$. Now we can see how the principal bundle in Cartan geometry arises. The case of the Cartan connection arises also from this procedure: if the original 1-forms *are* a coframe, then $\omega_{M}^{i}$ etc., already form a basis of basic 1-forms. The vertical 1-forms arise by considering the differentials: $$\begin{aligned}
\label{eq:79}
d\omega^{i}&=dg\wedge\theta^{i}+g\cdot d\theta^{i}\\
&=dg\, g^{-1}\wedge g\cdot\theta^{i}+\gamma^{i}{}_{jk}\theta^{j}\wedge\theta^{k}\\
&=dg\, g^{-1}\wedge \omega^{i}+\Gamma^{i}{}_{jk}\omega^{j}\wedge\omega^{k}
\end{aligned}$$ now $dg\,g^{-1}$ is the Maurer-Cartan form in the fibre direction, thus we have a basis of 1-forms on the lifted manifold. The functions $\Gamma^{i}{}_{jk}$ are called *torsion* and depend explicitly on the group elements. It is possible to absorb some or all of the torsion by redefining the Maurer-Cartan forms by adding linear combinations of $\omega^{i}$ while still keeping them Lie algebra-valued. The aim is to make the torsion independent of the group elements by a series of such absorptions, and then the remaining essential torsions are just differential invariants of the problem, and their functional relationships determine whether the original two sets of 1-forms are equivalent. In the case of geometrical coframes, after absorption we have the Cartan connection.
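A minimal illustration of the lifting, chosen here only to make the mechanism concrete: take single nowhere-vanishing 1-forms $\theta_{M}$, $\theta_{N}$ on 1-dimensional manifolds, with $G={\mathbb{R}}^{+}$ acting by scaling. On $M\times{\mathbb{R}}^{+}$ the lifted form is $\omega_{M}=t\,\pi_{M}^{*}\theta_{M}$, and $$d\omega_{M}=dt\wedge\pi_{M}^{*}\theta_{M}=\frac{dt}{t}\wedge\omega_{M},$$ so the only extra form needed is the Maurer-Cartan form $dt/t$ of ${\mathbb{R}}^{+}$, and there is no essential torsion at all; accordingly, any two such 1-forms-up-to-scale are locally equivalent, as one expects.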
Structure-preserving maps {#sec:subm-immers-topol}
=========================
We start by recalling the definition of immersion and submersion in differential *topology*:
A smooth map $f: M^{m}\rightarrow N^{n}$ with constant rank $r$ is called an *immersion* if $r=m$, and a *submersion* if $r=n$.
For an immersion or a submersion, an easy application of the implicit function theorem yields the following well-known result:
Let $f: M^{m}\rightarrow N^{n}$ be an immersion (resp. submersion). Then for each point $p\in M$ there are coordinate systems $(U,\varphi),(V,\psi)$ about $p$ and $f(p)$, respectively, such that the composite $\psi f \varphi^{-1}$ is a restriction of the coordinate inclusion $\imath : {\mathbb{R}}^{m}\rightarrow{\mathbb{R}}^{m}\times{\mathbb{R}}^{n-m}$ (resp. a restriction of the coordinate projection $\pi: {\mathbb{R}}^{n}\times {\mathbb{R}}^{m-n}\rightarrow{\mathbb{R}}^{n}$).
This important property tells us two things. First, if we only care about local properties of an immersion or submersion, then the above proposition allows us to construct charts such that $f(M)$ in the case of an immersion or $N$ in the case of a submersion is locally a topological manifold. Note that this may be true only locally: a map $f:{\mathbb{R}}\rightarrow{\mathbb{R}}^{2}$ given by $t\mapsto (t, at)$ with $a$ irrational, composed with the covering map $p:{\mathbb{R}}^{2}\rightarrow T^{2}={\mathbb{R}}^{2}/\mathbb{Z}^{2}$, is an immersion, but $pf({\mathbb{R}})$ is not globally a topological manifold since the image is dense in $T^{2}$. Nonetheless, if we only focus on the local properties, such subtleties will not bother us. The second thing this property tells us about is the *equivalence problem* of an immersion or submersion: given two local immersions (resp. submersions) in a manifold, does there exist an *admissible* transformation of the manifold such that the first immersion (resp. submersion) can be transformed to coincide with the second? Here admissible transformations refer to all (local) diffeomorphisms of the manifold since we are working in the category of smooth manifolds. Since the property has given us coordinate charts in which all immersions or submersions look locally the same, the solution to the equivalence problem is
In the category of smooth manifolds, all immersions (submersions) are locally equivalent, provided the dimensions of the relevant manifolds match.
Note that the implicit function theorem is really *implicit*: in general, there is no explicit algorithm telling us what the required transformation is. In the study that follows, which deals with geometry rather than topology, we will want explicit algorithms whenever possible.
For many applications, the category of smooth manifolds is too general to be of any use. For example, we would certainly object to the statement that a sphere is locally the same as a plane in Euclidean space. The reason is that a Euclidean space has more structure (namely the Euclidean metric) that distinguishes differences not seen by the smooth structure. By focusing on the extra structure, we have made the transition from the field of differential topology to differential geometry. In what follows, we shall investigate the equivalence problem of immersions and submersions in many differential geometrical settings. We will ask ourselves the following two questions:
- Given an immersion or submersion $f:M\rightarrow N$ where in the case of immersion $N$ has an extra geometrical structure (e.g. a Riemannian metric) defined on it and in the case of submersion $M$ has an extra geometrical structure defined on it, when does it follow that $f(M)$ in the case of immersion or $N$ in the case of a submersion is also a manifold with extra geometrical structure?
- In the case where we can define an extra geometrical structure on $f(M)$ or $N$, to solve the equivalence problem taking the extra geometrical structures into consideration.
As we are not interested in any particular configuration but instead are concerned with the general problem, it is a tremendous advantage if we can work in an explicitly coordinate-independent way. Of course, this is now easy for us, as we already know how to apply Cartan’s constructions to give geometrical structures to manifolds, as discussed in section \[sec:structures-manifold\].
Structure-preserving immersions {#sec:immers-subm}
-------------------------------
Although this paper is mainly concerned with the problem of structure-preserving submersions, there are two reasons why we should discuss structure-preserving immersions here first. One reason is that submersions and immersions have many structural similarities, and when we study immersions we gain some experience of how to deal with certain quantities; a more important reason is that, as we will see, every submersion problem contains a large class of immersion problems.
To keep discussion short, we will just give two examples which should sufficiently illustrate the methods dealing with immersion problems. For the general theory, see [@Fels:1999p7361].
\[Riemannian immersion\] Suppose $N$ is equipped with a Riemannian geometry and we have an immersion $f:M\rightarrow N$. First we need to place our bundle in a normalised form. Using $R_{h}^{*}\omega=\operatorname{Ad}(h^{-1})\omega$, if we apply the action of $$\label{eq:7}
h=
\begin{pmatrix}
1&0\\
0&r
\end{pmatrix}$$ with $r\in O(n)$, then an easy calculation shows that $R_{h}^{*}\theta=r^{-1}\theta$. Therefore, we can use this action to align the first $m$ of the basic forms $\theta$ to be tangential to $f(M)$. Then, restricted to $f(M)$, the Cartan connection will be of the following form $$\label{eq:8}
\omega=
\begin{pmatrix}
0&0&0\\
\theta&\alpha&-\beta^{t}\\
0&\beta&\gamma
\end{pmatrix}$$ where $\alpha$ is a skew $m\times m$ matrix of 1-forms and $\gamma$ is a skew $(n-m)\times(n-m)$ matrix of 1-forms. To keep the connection in this nice form, we are no longer allowed arbitrary right actions of $H$. Instead, now the allowed actions must be of the form $$\label{eq:9}
h=
\begin{pmatrix}
1&0&0\\
0&r&0\\
0&0&s
\end{pmatrix}$$ where $r\in O(m)$, $s\in O(n)$. This is a *reduction of the principal bundle $P$*. Let us still call the reduced bundle $P$. In the reduced bundle, $\alpha$ and $\gamma$ are still tangential to the fibre, but the tangential directions in which $\beta$ lives are now gone. Since $\beta$ derives from the Maurer-Cartan form on $H$, it now cannot have anything to do with $\alpha$ and $\gamma$. Since $\alpha$, $\beta$, $\gamma$ still constitute a basis of 1-forms on $P$, we must have $\beta$ a linear combination of the basic forms $\theta$. If we use lowercase Latin indices for the tangential directions and uppercase for the normal directions, we have $$\label{eq:10}
\beta^{i}{}_{A}=M^{i}{}_{jA}\theta^{j}$$ for *functions* $M^{i}{}_{jA}$ on $P$. The torsion-free condition also has to be imposed. An easy calculation shows that this now amounts to the condition $$\label{eq:11}
\beta\wedge\theta=0$$ or, using the expansion of $\beta$ above, $$\label{eq:12}
M^{i}{}_{jA}\theta^{i}\wedge\theta^{j}=0$$ from which we conclude that the functions $M^{i}{}_{jA}$ are symmetric in the tangential indices. To summarise, for an immersion $f:M\rightarrow N$ in Riemannian geometry, we have the following pieces of data:
- the Cartan connection $$\label{eq:13}
\begin{pmatrix}
0&0\\
\theta&\alpha
\end{pmatrix}$$ on $f(M)$, which determines the induced geometry on $M$ (and the induced Riemannian metric);
- the Ehresmann connection $\gamma$ on $f(M)$, which determines the geometry that is normal to $f(M)$;
- the gluing data $M^{i}{}_{jA}$ which ties together the tangential and normal part of $f(M)$.
We can of course dig deeper by studying interesting geometrical properties if the ambient space is Euclidean, and study the classification by asking the equivalence question. Since these are well-studied in the literature, we shall be content with only sketching the above geometrical framework for beginning such investigations.
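As a concrete instance (stated here only as an illustration, with the sign depending on the choice of unit normal): for the unit sphere $S^{n-1}$ immersed in Euclidean ${\mathbb{R}}^{n}$ there is a single normal direction, the normal connection $\gamma$ vanishes identically since the normal bundle is a line bundle, the induced Cartan connection is that of the round metric, and in an adapted frame the gluing data is simply $$M^{i}{}_{jA}=\pm\delta_{ij},$$ the moving-frames version of the classical fact that the second fundamental form of the unit sphere is, up to sign, the identity.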
\[Conformal immersion\] Let $N$ be a normal Möbius geometry and $f:M\rightarrow N$ an immersion. As in the Riemannian case, the first step is to use the adjoint action to reduce the principal bundle by aligning the first several $\theta$ with the submanifold, and then to restrict to the submanifold. After this reduction, the Cartan connection becomes $$\label{eq:53}
\omega=
\begin{pmatrix}
\epsilon&\nu&\mu&0\\
\theta&\alpha&-\beta^{t}&\nu^{t}\\
0&\beta&\gamma&\mu^{t}\\
0&\theta^{t}&0&-\epsilon
\end{pmatrix}$$ the group $H$ has been reduced to $$\label{eq:54}
H=\left\{
\begin{pmatrix}
1&p&q&s\\
0&1&0&p^{t}\\
0&0&1&q^{t}\\
0&0&0&1
\end{pmatrix}
\begin{pmatrix}
z&0&0&0\\
0&a&0&0\\
0&0&c&0\\
0&0&0&z^{-1}
\end{pmatrix}
\mid a\in O(m),c\in O(n-m),s=\frac{1}{2}(pp^{t}+qq^{t})
\right\}$$ i.e., in terms of Lie algebra $$\label{eq:55}
\begin{pmatrix}
\star&\star&\star&0\\
0&\star&0&\star\\
0&0&\star&\star\\
0&0&0&\star
\end{pmatrix}$$ As in the Riemannian case, after the reduction $\beta$ becomes a basic form. Hence we write $$\label{eq:56}
\beta^{A}{}_{i}=M^{A}{}_{ij}\theta^{j}$$ the torsion free condition again means that we need to ensure $$\label{eq:57}
0=\beta\wedge\theta=M^{A}{}_{ij}\theta^{j}\wedge\theta^{i}$$ i.e., $M^{A}{}_{ij}$ is symmetric in $i$ and $j$.
In the Riemannian case, we stopped here since those were all the *structural* things we could do. But since Möbius geometry has more freedom, now we can do still more. Taking $h$ to be an element of $H$ of the form given above, we can verify that $$\label{eq:58}
R_{h}^{*}M^{A}{}_{ij}=z^{-1}a^{-1}(c(M-qI))a=z^{-1}(a^{-1})^{i}{}_{k}(c^{A}{}_{B}(M^{B}{}_{kl}-q^{B}\delta_{kl}))a^{l}{}_{j}$$ By choosing $q$, we can make $M$ trace-free: $\sum_{i}M^{A}{}_{ii}=0$. After this second reduction, the group becomes $$\label{eq:59}
H=\left\{
\begin{pmatrix}
1&p&0&s\\
0&1&0&p^{t}\\
0&0&1&0\\
0&0&0&1
\end{pmatrix}
\begin{pmatrix}
z&0&0&0\\
0&a&0&0\\
0&0&c&0\\
0&0&0&z^{-1}
\end{pmatrix}
\mid a\in O(m),c\in O(n-m),s=\frac{1}{2}pp^{t}
\right\}$$ i.e., in terms of Lie algebra, $$\label{eq:60}
\begin{pmatrix}
\star&\star&0&0\\
0&\star&0&\star\\
0&0&\star&0\\
0&0&0&\star
\end{pmatrix}$$ The Cartan connection still has the form displayed at the beginning of this example, but now in addition $\mu$ is semi-basic. We note here that the forms $\mu$ and $\nu$ are not very important since for $n\neq 2$ they are completely determined by the requirement of a normal geometry.
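The trace-removal step used above can be made concrete: taking the trace of the transformation rule for $M^{A}{}_{ij}$ over the tangential indices and using $a\in O(m)$ gives $$\sum_{i}R_{h}^{*}M^{A}{}_{ii}=z^{-1}c^{A}{}_{B}\Big(\sum_{k}M^{B}{}_{kk}-m\,q^{B}\Big),$$ so the choice $q^{B}=\tfrac{1}{m}\sum_{k}M^{B}{}_{kk}$ indeed achieves $\sum_{i}M^{A}{}_{ii}=0$ after the action.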
Before studying submersions, let us remark here that immersions are much nicer than submersions. Given an immersion map and a geometrical structure on the range of the map, in most circumstances we can define a geometrical structure on the domain of the map for which the map is structure-preserving, and in a large number of cases this structure is even unique. Also, as in the last part of our discussion of Riemannian immersions, it is possible to consider only the “ambient geometry”, without reference to the ambient space at all, discarding all information that is not relevant to the immersion.
Anatomy of the structure-preserving submersion {#sec:anat-struct-pres}
----------------------------------------------
I hope the above examples of structure-preserving immersions at least give a rough idea of what a structure-preserving submersion should look like. Now let us consider the following. Suppose $\sigma:M\rightarrow B$ is a submersion that can be described as “structure-preserving”. Then *both* $M$ and $B$ must have structures attached to them. As we saw before, the structures are best considered in the principal bundle, hence we shall aim to find a map which covers $\sigma$ in the diagram below: $$\label{eq:40}
\xymatrix{
M\times H\ar[r]^{\approx}\ar[rd]_{\text{proj}_{1}}&P\ar@{..>}[r]^{?}\ar[d]_{\pi_{M}}&Q\ar[d]^{\pi_{B}}&B\times H'\ar[l]_{\approx}\ar[ld]^{\text{proj}_{1}}\\
&M\ar[r]^{\sigma}&B&
}$$ It is obvious that $H'$ needs to be a subgroup of $H$. Also, for this to be a covering map, the map we are looking for needs to be surjective. However, this implies there must be a surjective homomorphism from $H$ to its subgroup $H'$. In many applications where a structure-preserving submersion is obvious, the group $H$ is simple, and hence no such homomorphism can exist. Therefore we need to look for a more complicated setting.
Let us for the moment forget that we are dealing with *structure-preserving* maps and focus only on the submersion. The submersion locally defines a foliation on the manifold $M$, and a subgroup $H_{L}\subset H$ can be defined that preserves this foliation. In other words, let $\theta^{A}$ and $\theta^{i}$ together be a basis of 1-forms on the manifold which generates the Cartan connection in the principal bundle, and let the $\theta^{i}$ define the Frobenius distribution defining the submersion. Then $$\label{eq:61}
H_{L}=\{g\in H\mid g\cdot \omega^{i}\equiv0\mod \{\omega^{j}\},\forall i\}$$ Geometrically, this amounts to a reduction of the principal bundle, as pictured below: $$\label{eq:48}
\xymatrix{
M\times H\ar[r]^{\approx}\ar[rd]_{\text{proj}_{1}}&P\ar[d]_{\pi_{M}}&P'\ar[l]_{\rho}\ar[d]^{\pi'_{M}}&M\times H_{L}\ar[l]_{\approx}\ar[ld]^{\text{proj}_{1}}\\
&M&\ar[l]_{=}M&
}$$ By considering its action on $\theta^{A}$, we can see that as a subgroup of $GL(n)$, the group $H_{L}$ has block upper-triangular structure: $$\label{eq:70}
g\cdot
\begin{pmatrix}
\{\theta^{i}\}\\
\{\theta^{A}\}
\end{pmatrix}
=
\begin{pmatrix}
g_{ij}&g_{iA}\\
0&g_{AB}
\end{pmatrix}
\begin{pmatrix}
\{\theta^{i}\}\\
\{\theta^{A}\}
\end{pmatrix}.$$ An important note: the above representation of the group should be understood in abstract terms, together with the labelling: as we saw in the case of conformal geometry, it is not necessarily true that the group action is always aligned in this way (though if we use bigger matrix and padding a sufficient number of zeros in the column vector, it is always possible to do so, and the way to do it should be clear from the problem at hand, as in the case of conformal geometry).
Since we must have a submersion before we can talk about structure-preservation, it makes sense that we take this as our starting point. Therefore, we look for the map indicated by $\tilde\sigma$ below: $$\label{eq:71}
\xymatrix{
M\times H\ar[d]^{\approx}&\ar[l]_{\rho}M\times H_{L}\ar[d]^{\approx}\ar[r]^{\tilde\sigma}& B\times H'\ar[d]^{\approx}\\
P\ar[d]^{\pi_{M}}&\ar[l]_{\rho}P'\ar[d]^{\pi'_{M}}\ar[r]^{\tilde\sigma}&Q\ar[d]^{\pi_{B}}\\
M&\ar[l]_{=}M\ar[r]^{\sigma}&B
In the above diagram, we require all squares to commute, all arrows to the left to be injective, and all arrows to the right to be surjective. Note that both $P$ and $Q$ are Cartan geometries in their own right, and hence we can use the above maps to pull back their Cartan connections to $P'$: $$\xymatrix@R=3pt{
P&\ar[l]_{\rho}P'\ar[r]^{\tilde\sigma}&Q\\
\omega_{P}\ar@{|->}[r]&\rho^{*}\omega_{P}&\\
&\tilde\sigma^{*}\omega_{Q}&\ar@{|->}[l]\omega_{Q}.
}$$ We now want to find a basis of 1-forms on $P'$. This should also give us the constraints on the map $\tilde\sigma$. The pulled-back forms on $P'$ are (we will omit some pullback signs when this should cause no confusion of on which space we are currently working): $$\label{eq:73}
\begin{array}{rcccccc}
\rho^{*}\omega_{P}:&\theta_{P}^{A}&\theta_{P}^{i}&\omega_{P}^{AB}&\omega_{P}^{ij}&\omega_{P}^{Ai}&\omega_{P}^{iA}\\
\tilde\sigma^{*}\omega_{Q}:&&\theta_{Q}^{i}&&\omega_{Q}^{ij}&&
\end{array}$$ It is obvious that the forms $\rho^{*}\omega_{P}$ span the cotangent space of $P'$, whereas the forms $\tilde\sigma^{*}\omega_{Q}$ are merely linearly independent on $P'$. Moreover, as both $\theta^{i}_{P}$ and $\theta^{i}_{Q}$ are basic with respect to the projection to $M$, they span the same subspace of the cotangent bundle of $P'$. However, there is an important difference between them.
Suppose we choose a point $q\in Q$, and consider $\tilde L(q)=\tilde\sigma^{-1} (q)$. This set can be given a submanifold structure, of dimension $\dim H_{L}-\dim H'+\dim M-\dim B$. It can even be given an $H'$-bundle structure. Both $\theta^{i}_{P}(q)$ and $\theta^{i}_{Q}(q)$ are 1-forms on this submanifold, but the ambiguity in the form $\theta^{i}{}_{Q}$ is a right action of $H'$ (we can still permute the 1-forms), whereas the ambiguity in $\theta^{i}_{P}$ is $H'\times M/B$ (we can apply the group action independently at each point above $L=M/B$, in contrast to the first case).
On the bundle $\tilde L(q)$ we now use the right $H'$ action to align all the $\theta^{i}_{P}$ with $\theta^{i}_{Q}$. Depending on the group $H'$, it is conceivable that this might not always be possible, but we will only consider cases where this is possible, since otherwise it is clear that such a submersion cannot legitimately be called “structure-preserving”. After this is done for $\tilde L(q)$ for all $q\in Q$, we have a situation that is quite similar to, but not the same as, a reduction of the principal bundle $P'$. If we refer to the block decomposition above, the subgroup consisting of the elements $g_{ij}$ still acts on $P'$, but once this action is chosen at a point $x\in P'$, it is determined for all $y\in\tilde L(\tilde\sigma(x))$. The rest of the group elements do not have this restriction. Let us call this situation a *semi-reduction* of the principal bundle.
An important simplification occurs when $g_{iA}=0$ in the block decomposition above. This occurs frequently, as we shall see, since the group $H\subset GL(n)$ can have symmetries that force this block to vanish once the lower left block vanishes.
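For example (a standard linear-algebra observation), if $H=O(n)$, as in the Riemannian case, block upper-triangular already forces block diagonal: the first block of columns of $g$ has the form $\begin{pmatrix}g_{ij}\\0\end{pmatrix}$ with $g_{ij}$ itself orthogonal, and orthogonality of the remaining columns to these forces $g_{iA}=0$; thus $H_{L}\cong O(k)\times O(n-k)$, where $k$ is the number of the forms $\theta^{i}$, and the simplification applies.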
As we have hinted several times before, the connection components $\theta^{A}_{P}$ and $\omega^{AB}_{P}$ when restricted to a single leaf $\tilde L(q)$, form the Cartan connection on the leaf. For example, in the Riemannian case this reduces to equation \[eq:8\], where we just set the transversal basic forms to zero and all our techniques for dealing with immersions apply.
Let us return to the list of pulled-back forms above. Now we have $\theta^{i}_{P}=\theta^{i}_{Q}$. First note that on $P'$, the forms $\omega^{iA}_{P}$ are *basic*, and if the special situation where $g_{iA}=0$ occurs, $\omega^{Ai}_{P}$ are basic as well. The best way to see this is to ponder the relationship between the Lie algebra and the group. Hence, we can write $$\label{eq:75}
\omega^{Ai}_{P}=K^{Ai}{}_{B}\omega^{B}_{P}+M^{Ai}{}_{j}\omega^{j}{}_{Q}.$$ Note that $K^{Ai}{}_{B}$ and $M^{Ai}{}_{j}$ are ordinary functions on the bundle $P'$. They can be considered tensor quantities on $M$, but since it is much easier to work with ordinary functions than tensors, we shall always do calculations on $P'$ and not on the base. If both blocks are basic, the two sets of expansion functions are related to each other in an obvious way; for example, the orthogonal group would give $\omega^{Ai}_{P}=-\omega^{iA}{}_{P}$.
Considering things from the point of view of first giving the basic forms $\theta^{i}_{Q}$ and $\theta^{A}_{P}$ and then determining whether the distribution annihilated by the $\theta^{i}_{Q}$ is actually integrable, the Frobenius theorem tells us that this is the case if and only if $$\label{eq:76}
d\theta^{i}_{Q}\equiv\theta^{A}_{P}\wedge\omega^{A}{}_{iP}\equiv0\mod\{\theta^{i}_{Q}\}.$$ This is the *integrability condition* that we need to enforce. If $\omega^{iA}_{P}$ and $\omega^{Ai}_{P}$ are linearly dependent, this will reduce to a condition on $K^{Ai}{}_{B}$.
Using $R^{*}_{h}(\omega)=\operatorname{Ad}(h^{-1})\omega$, we can verify that after the semi-reduction the 1-forms $\omega^{ij}_{P}$ also have no ambiguity left along each $\tilde L$. However, it is not necessarily true that $\omega^{ij}_{P}=\omega^{ij}_{Q}$. In fact, we can calculate $d\theta^{i}_{P}$ and $d\theta^{i}_{Q}$ on $P$ and $Q$ separately, and then use $\theta^{i}_{P}=\theta^{i}_{Q}$ to compare the results. We have $$\label{eq:77}
\omega^{ij}_{P}=\omega^{ij}_{Q}-M^{A}{}_{ij}\theta^{A}_{P}$$ However, we can think of both $\omega_{P}^{ij}$ and $\omega_{Q}^{ij}$ as $\mathfrak{h'}$-valued 1-forms, where $\mathfrak{h'}$ denotes the Lie algebra of $H'$, and hence this shows that $M^{A}{}_{ij}\theta^{A}_{P}$ is also an $\mathfrak{h'}$-valued 1-form. This is the *Lie algebra compatibility condition*.
These are all the conditions that we can deduce from our motivation. We can now finally put everything together and give our definition for structure-preserving submersion:
\[Structure-preserving submersion\] Let $B$ and $M$ be Cartan geometries with groups $H'$ and $H$, respectively. A submersion $\sigma:M\rightarrow B$ is called *structure-preserving* if, after reduction of the principal bundle of $M$ adapted to the submersion to $P'$, we can find functions $M^{A}{}_{ij}$ and $K^{Ai}{}_{B}$ on $P'$ such that, after a right $H_{L}$ action if needed, on the bundle we have
- $\theta^{i}_{P}=\theta^{i}_{Q}$,
- $\omega^{ij}_{P}=\omega^{ij}_{Q}-M^{A}{}_{ij}\theta^{A}_{P}$ (this implies the Lie algebra compatibility condition),
- $\omega^{Ai}_{P}=K^{Ai}{}_{B}\omega^{B}_{P}+M^{Ai}{}_{j}\omega^{j}_{Q}$ (this implies the integrability condition, since we started with a submersion).
Note that we specified the geometries on $B$ and $M$ independently. *It is important to specify at least what class of geometry they each belong to*: for example, in the Riemannian case, if a structure-preserving submersion exists then it can be interpreted uniquely in our above framework. However, if we allow $B$ to be non-torsion-free, then this is no longer the case: $M^{A}{}_{ij}$ can now take *any* value subject to Lie-algebra compatibility, as any value different from the one singled out above would simply contribute to torsion in the geometry $B$.
Although we started with the space $P$ and derived what a “structure-preserving submersion” should look like, making the space $Q$ look derivative, in our definition the centre stage is not given to $P$. This is deliberate: if we look at the definition of a structure-preserving immersion, i.e., the ambient geometry in section \[sec:immers-subm\], we see that the centre stage is given to what the geometrical construction, i.e., the immersion, can “see”, and we omitted altogether those parts unavailable to it. Here we do the same thing, as defining things this way is more general. In addition, later in section \[sec:anoth-view-struct\] we shall see that there are cases where $Q$ instead of $P$ is considered more immediately available to us.
Now, as usual, we will give examples of how this construction realises in the case of Riemannian and conformal geometries.
\[Riemannian submersion\]
Now let us return to the torsion-free Cartan geometry modelled on Euclidean geometry. Let $f:M\rightarrow N$ be a submersion. Using essentially the same reduction of the principal bundle $P$ as we did before, we now have the Cartan connection on $P$ over $M$ of the form $$\label{eq:14}
\omega=
\begin{pmatrix}
0&0&0\\
\theta&\alpha&-\beta^{t}\\
\psi&\beta&\gamma
\end{pmatrix}$$ where $\alpha$ is $n\times n$ and $\gamma$ is $(m-n)\times(m-n)$, i.e., the forms $\theta$ constitute a basis of basic forms for the projection to $N$. As before, after reduction the forms $\beta$ become basic, hence $$\label{eq:15}
\beta^{A}{}_{i}=M^{A}{}_{ij}\theta^{j}+K^{A}{}_{Bi}\psi^{B}.$$
Now the submersion we have is very specific: it endows the space $N$ with a Riemannian structure. First, let us now use the right action to align all the forms $\theta$ so that they are basic for the projection $\pi: M\rightarrow N$. After this is done, we are no longer allowed to apply the right action of the group element $$\label{eq:16}
h=
\begin{pmatrix}
1&0&0\\
0&r&0\\
0&0&1
\end{pmatrix}$$ arbitrarily on the whole space $M$: $h$ can still be applied, but once it is applied to a point in a leaf, the same action must be applied to all points on the same leaf. In other words, the degree of freedom is now in $N$, not $M$.
For the Riemannian submersion, the space $N$ also has a torsion-free Cartan connection $$\label{eq:17}
\begin{pmatrix}
0&0\\
\theta&\bar\alpha
\end{pmatrix}$$ notice we have used the same $\theta$ as in the space $M$. To be more precise, the $\theta$ on $M$ is now the pullback of the $\theta$ on $N$, but for brevity we omit the pullback signs. This should not cause any confusion. Calculating $d\theta$ in two ways, first on $N$, then on $M$, we obtain $$\label{eq:18}
d\theta=\bar\alpha\wedge\theta=\alpha\wedge\theta-\beta^{t}\wedge\psi.$$ Let us see what kind of restriction this places on our quantities. Expanding using the expression for $\beta$ above, we have (recall: exterior derivatives commute with pullbacks) $$\begin{aligned}
\label{eq:19}
\bar\alpha^{i}{}_{j}\wedge\theta^{j}&=\alpha^{i}{}_{j}\wedge\theta^{j}-M^{A}{}_{ij}\theta^{j}\wedge\psi^{A}-K^{A}{}_{Bi}\psi^{B}\wedge\psi^{A}\\
&=(\alpha^{i}{}_{j}+M^{A}{}_{ij}\psi^{A})\wedge\theta^{j}-K^{A}{}_{Bi}\psi^{B}\wedge\psi^{A}\end{aligned}$$ comparing coefficients, we learn that $\alpha^{i}{}_{j}=\bar\alpha^{i}{}_{j}-M^{A}{}_{ij}\psi^{A}$, that $M^{A}{}_{ij}$ is skew-symmetric in $i$ and $j$ (due to the skew-symmetry of $\bar\alpha$), and that $K^{A}{}_{Bi}$ is symmetric in $A$ and $B$. We remark that the redefinition of $\alpha$ by $\bar\alpha$ is the same technique used in the equivalence method for absorbing torsion. In fact, if we do not do this, the quantities $M^{A}{}_{ij}$ really correspond to geometrical torsion of the base space $N$.
Putting everything together, our Cartan connection is now $$\label{eq:22}
\omega=
\begin{pmatrix}
0&0&0\\
\theta&\bar\alpha-M\psi&-\theta^{t}M^{t}-\psi^{t}K^{t}\\
\psi&M\theta+K\psi&\gamma
\end{pmatrix}$$ It should be noted that this is a vast simplification compared with the general form, because only $\gamma$ and $\psi$ are not pullbacks of forms defined on $N$, and the functions $M$ and $K$ have quite strong symmetries. Let us also calculate the curvature $$\label{eq:23}
\Omega=
\begin{pmatrix}
0&0&0\\
(2,1)&(2,2)&\star\\
(3,1)&(3,2)&(3,3)
\end{pmatrix}$$ where the independent entries are $$\begin{aligned}
\label{eq:24}
(2,1)&=d\theta+\bar\alpha\wedge\theta-M\psi\wedge\theta-\theta^{t}M^{t}\wedge\psi-\psi^{t}K^{t}\wedge\psi=0\\
\label{eq:x3.1}
(3,1)&=d\psi+M\theta\wedge\theta+K\psi\wedge\theta+\gamma\wedge\psi=0\\\label{eq:x2.2}
(2,2)&=d\bar\alpha-dM\wedge\psi-Md\psi+\bar\alpha\wedge\bar\alpha-\theta^{t}M^{t}\wedge M\theta\\
&\quad-\psi^{t}K^{t}\wedge M\theta-\theta^{t}M^{t}\wedge K\psi+M\psi\wedge M\psi-\psi^{t}K^{t}\wedge K\psi\\
\label{eq:x3.3}
(3,3)&=d\gamma+\gamma\wedge\gamma-M\theta\wedge M\theta-K\psi\wedge\theta^{t}M^{t}-M\theta\wedge\psi^{t}K^{t}-K\psi\wedge\psi^{t}K^{t}\\
(3,2)&=dM\wedge\theta+Md\theta+dK\wedge\psi+Kd\psi\\
&\quad +M\theta\wedge\bar\alpha+\gamma\wedge M\theta+K\psi\wedge\alpha+\gamma\wedge K\psi-M\theta\wedge M\psi-K\psi\wedge M\psi\end{aligned}$$ the first two are set to zero because of the torsion-free requirement.
\[Conformal submersions\] Now let us investigate the submersion problem. Again we first align the forms $\theta$ so that they are basic for the submersion, after which we have the Cartan connection $$\label{eq:62}
\omega=
\begin{pmatrix}
\epsilon&\nu&\mu&0\\
\theta&\alpha&-\beta^{t}&\nu^{t}\\
\psi&\beta&\gamma&\mu^{t}\\
0&\theta^{t}&\psi^{t}&-\epsilon
\end{pmatrix}$$ for the conformal submersion, the space $N$ also has a normal Möbius geometry defined on it: $$\label{eq:63}
\begin{pmatrix}
\epsilon&\bar\nu&0\\
\theta&\bar\alpha&\bar\nu^{t}\\
0&\theta^{t}&-\epsilon
\end{pmatrix}$$ again we have aligned the $\theta$ on $M$ and on $N$. Note that using the group elements $p$, the $\epsilon$ on both space have also been made the same. This requires that the group degrees of freedom of the elements $$\label{eq:64}
\begin{pmatrix}
1&p&0&s\\
0&1&0&p^{t}\\
0&0&1&0\\
0&0&0&1
\end{pmatrix}
\begin{pmatrix}
z&0&0&0\\
0&a&0&0\\
0&0&1&0\\
0&0&0&z^{-1}
\end{pmatrix}$$ are now only available on $N$, not on $M$ anymore. Once more, $$\label{eq:65}
\bar\alpha\wedge\theta=\alpha\wedge\theta-\beta^{t}\wedge\psi$$ writing $$\label{eq:66}
\beta^{A}{}_{i}=M^{A}{}_{ij}\theta^{j}+K^{A}{}_{Bi}\psi^{B}$$ we have the same restriction as in the Riemannian case: $M^{A}{}_{ij}$ is skew in $i$ and $j$, whereas $K^{A}{}_{Bi}$ is symmetric in $A$ and $B$. In the conformal case, there is actually a hidden condition in this: consider the adjoint $H$ action by the element $$\label{eq:67}
\begin{pmatrix}
1&0&q&s\\
0&1&0&0\\
0&0&1&q^{t}\\
0&0&0&1
\end{pmatrix}$$ the degree of freedom of which is still on the whole of $M$. We have $$\label{eq:68}
\beta\mapsto \beta-\theta^{t}q,\qquad M^{A}{}_{ij}\mapsto M^{A}{}_{ij}-\delta_{ij}q^{A}$$ hence, to maintain the antisymmetry of $M^{A}{}_{ij}$, we must set $q=0$. Let us recap here: the degree of freedom on $N$ is as displayed above, whereas the degree of freedom on $M$ is only $$\label{eq:69}
\begin{pmatrix}
1&0&0&0\\
0&1&0&0\\
0&0&c&0\\
0&0&0&1
\end{pmatrix}$$ i.e., only rotation of the $\psi$ is allowed.
Now, by exterior differentiating the forms, we can obtain constraints similar to those obtained in the Riemannian case. As we will not use them in subsequent discussions, we omit them here.
The inapplicability of the equivalence method {#sec:equiv-meth-fails}
---------------------------------------------
Suppose that we are explicitly given two structure-preserving submersions $M\rightarrow N$ and $M'\rightarrow N'$, and we ask whether they are equivalent. This can be solved by Cartan’s equivalence method: first, we can confirm by calculation that both of the submersions are really structure-preserving. Then, using the method of equivalence, we can check that the two are equivalent as submersions. Finally, by uniqueness, the two are equivalent as structure-preserving submersions.
This is all very good, but if the problem can be solved straightforwardly using the equivalence method, then why do we need to develop our method using a mixture of Cartan geometry and lifting, which now seems to complicate things? The answer lies in the fact that the above algorithm can only be applied when the structure-preserving submersions are given explicitly, for example by expressions involving local coordinates. Suppose that we want to prove some general properties of a certain class of structure-preserving submersions, for example, classification. Now if we apply the equivalence method to general submersions, then, as we do not have explicit expressions for the coframe, there is in general no way that we can ensure or check that the submersion is structure-preserving within the framework of the method of equivalence.
The problem is that, as we have shown, though in the structure-preserving case we still have a coframe problem, the relations between the old and new coframes are no longer expressed by a simple Lie group transformation. Indeed, for any two coframes at two base points, we can still find an element of the Lie group that transforms the old coframe into the new one, but now this transformation depends on the base point in a non-trivial and non-local way: if we specify the transformation at one base point, *parts, but not all,* of the transformation at some other points are determined, yet at another set of points they remain completely undetermined.
It is reasonable to ask if a straightforward extension of the method of equivalence could be used for the structure-preserving submersion problems. After all, we still have group actions, and an absorption procedure is still possible, if we separate the parts that can be applied to all points and the parts that can only be applied independently to a subset. Indeed this is possible, but the main problem is that this does not get us very far: the usual formulation of the method of equivalence has few, if any, constraints on the torsion elements. However, as we will see in section \[sec:riem-rigid-flow\], structure-preserving submersion questions are characterised by a large number of interconnected constraints on $M^{A}{}_{ij}$ and $K^{Ai}{}_{B}$. Thus the problem lies not so much in finding the correct “frame”, but in extracting meaning from the messy constraints. In simple cases, for example when the codimension of the submersion is 1, as in \[sec:riem-rigid-flow\], it is possible to get results by elementary means. However, in yet more complicated cases a more efficient method for dealing with the constraints needs to be developed.
Some examples {#sec:some-examples}
-------------
Here we give several examples of how real, specific problems arising in physics can be fitted to our framework.
\[Pseudo-Riemannian submersion of codimension 2 in 4-dimensional spacetime\] In using structure-preserving submersions, it is not necessary to start by giving the Cartan geometry on the total space. Instead, we can specify some extra constraints and deduce certain things about the total space. Let us try imposing the following conditions:
1. $M\rightarrow B$ is a structure-preserving submersion.
2. $H=SO(3,1)$, $H'=SO(1,1)$.
3. The Cartan connection on each $\tilde L$ satisfies the structural equation of the 2-sphere.
4. The Cartan connection on $P$ lies in the kernel of the Ricci homomorphism.
Item 3 above just says that each leaf has the full rotational symmetry, whereas item 4 says that the total space is Einstein. It is shown in [@Alvarez:2007p4209] that the unique solution satisfying these conditions is the Schwarzschild solution for black holes. It is desirable to use our method to study the existence of other black holes that can be described as a structure-preserving submersion.
\[String fluid\] Another problem that can be put into our framework in a straightforward fashion is the problem of string fluid. This arises, for example, in physics where the effective action for unstable D-branes reduces to that of a relativistic string fluid of moving electric flux lines, see [@Gibbons:2000p8621]. Using the Riemannian submersion framework with a few changes of sign we have a working model for relativity, as we have already used for the above example. The string itself is 1-dimensional, so its trajectory in spacetime (the leaves) has group $SO(1,1)$, corresponding to $\gamma$ in the submersion connection. $\bar\alpha$ then describes the geometry of spacetime that the string fluid “sees”. We see that in our framework it is now possible to give meaning to “a string fluid moving rigidly in spacetime”, and using the curvature equations it is now possible to calculate its properties.
\[Rigid motion in Newtonian spacetime\] We consider a 3-dimensional body moving in Newtonian spacetime. The relevant group is now the Galilean group acting on spacetime: $$\label{lie1}
\begin{pmatrix}
1\\
t'\\
\mathbf{x}'
\end{pmatrix}
=
\begin{pmatrix}
1&0&\mathbf{0}^{T}\\
t_{0}&1&\mathbf{0}^{T}\\
\mathbf{x}_{0}&\mathbf{v}&R
\end{pmatrix}
\begin{pmatrix}
1\\
t\\
\mathbf{x}
\end{pmatrix}$$ where $R\in SO(3)$. The Cartan connection is, $$\label{eq:80}
\omega=
\begin{pmatrix}
0&0&0\\
\tau&0&0\\
\xi&\nu&\rho
\end{pmatrix}$$ where $\rho\in\mathfrak{so}(3)$. Note that this is without doing the first reduction. Note also that, since in Newtonian mechanics the form $\tau$ is *always* aligned with motion and this alignment is preserved by the Galilean group, *this is also the form of the 1-forms after reduction*, i.e., the first reduction does nothing at all! Hence, in contrast to the previous two cases, in Newtonian spacetime rigidity places no constraints on the system whatsoever.
We saw earlier that in general it is impossible to construct a covering map directly from the bundle of $M$ to the bundle of $B$, but as the above example shows, in Newtonian spacetime such a map is trivially available to us. The following generalisation of this observation is immediate.
A structure-preserving submersion places no extra constraints on the system (i.e., $M^{A}{}_{ij}$ and $K^{Ai}{}_{B}$ vanish identically) if and only if in the original Lie algebra the block $\omega^{iA}$ vanishes identically.
The “if” part is clear. For the “only if” part, if this block in the original Lie algebra does not vanish, we can construct geometries where the Cartan connection corresponding to this part is non-zero. After the first reduction, there are also frames where this part, now consisting of basic forms only, remains non-zero, and the structure equations then yield non-trivial constraints.
Another view of structure-preserving submersions {#sec:anoth-view-struct}
------------------------------------------------
In all the examples we saw above, the space $M$ is considered more immediately available to us. The space $B$ often seems no more than a mathematical construct, as even when $M$ is flat, $B$ need not be. Furthermore, in all applications except that of the black hole, the space $M$ is given to us together with its geometry. This need not be the case: there are examples where $B$ is immediately available to us whereas $M$ is somehow hidden from view:
\[Kaluza-Klein\] Let $M=\mathbb{M}\times S^{1}_{r}$, where $\mathbb{M}$ is 4-dimensional Minkowski spacetime and $S^{1}_{r}$ is the circle with radius $r$, each given the usual metric. The space $M$ is given the product metric. Then the projection $\pi:M\rightarrow\mathbb{M}$ is a structure-preserving submersion. This submersion corresponds to the usual dimensional reduction in Kaluza-Klein theory.
By itself this example is not very interesting: after all locally $M$ is the same as 5-dimensional Minkowski spacetime, which we already studied before. However, if we go to the closely-related concept of gauge theory, especially Yang-Mills theory, we see new things popping up. For example, in [@Watson:1983] which studies Yang-Mills theory on $S^{4}$, the following diagram is relevant: $$\label{eq:72}
\xymatrix{
&SU(2)\ar[r]^{\pi_{4}}\ar[d]&P_{1}({\mathbb{C}})\ar[d]\\
U(1)\ar[r]&S^{7}\ar[r]^{\pi_{2}}\ar[d]_{\pi_{1}}&P_{3}({\mathbb{C}})\ar[dl]^{\pi_{3}}\\
&S^{4}&
}$$ where all $\pi_{i}$ are Riemannian submersions provided that each space in the diagram is equipped with the standard metric. However, this approach is unsatisfactory since it relies on the introduction of Riemannian metrics on the various spaces, while Yang-Mills theory itself is built from group-theoretic pieces. Hence it is desirable to study such problems using our general, group-theoretic structure-preserving submersions alone. This programme will be pursued in a subsequent paper.
In-depth example: rigid Riemannian flow {#sec:riemannian-geometry}
=======================================
The case of codimension-1 Riemannian submersion is interesting in that under some additional mild conditions the geometry of the whole flow is rigidly determined. We shall now begin to derive these seemingly surprising results. For background information, see [@me:000; @born1909; @herglotz1910; @noether1910; @pirani1962; @rayner1959; @Wahlquist:1967p2556; @Wahlquist:1966p2548; @Estabrook:1964p2608; @Giulini:2006p66].
Up until now we have avoided doing massive computations. Since this is unavoidable now, we need a clear way to express our quantities instead of carrying exterior derivatives of forms around all the time. Here it is convenient to use covariant derivatives in the principal bundle. Covariant derivatives on the bundle can only be defined in cases where the vector space $\mathfrak g/\mathfrak h$, which is isomorphic to the tangent space at each point, is invariant under the $\operatorname{Ad}(H)$ representation when considered as a Lie module, meaning that the interpretation of the usual “horizontal subspaces” makes invariant sense. Since any tensor on the base manifold is interpreted as a vector-valued *function* on the bundle, where the vector space is a representation of the group $H$, given a “direction” $\mathbf{v}\in\mathfrak g/\mathfrak h$ the inverse of the Cartan connection $\omega$ maps this direction to a vector field on $P$: $\mathbf{v}^{\dagger}=\omega^{-1}(\mathbf{v})$. Then given a tensor, which we will write as $$\label{eq:30}
T=T^{A}\otimes\mathbf{e}_{A}$$ where $\mathbf{e}_{A}$ is a basis for the vector space in which $T$ takes values, the covariant derivative is then $$\label{eq:31}
\nabla_{\mathbf{v}}T=\omega^{-1}(\mathbf{v})(T^{A})\otimes\mathbf{e}_{A}\equiv T^{A}{}_{;\mathbf{v}}\otimes\mathbf{e}_{A}$$ A trick for calculating the covariant derivative of a function is to just calculate the exterior derivative of the scalar part to get a 1-form: the terms involving the horizontal 1-forms are the covariant derivatives, e.g.: $$\label{eq:36}
dT^{A}|_{\theta^{1}}\equiv \text{the terms in $dT^A$ involving $\theta^{1}$}=T^{A}{}_{;1}\theta^{1}.$$
Riemannian rigid flow in homogeneous space {#sec:riem-rigid-flow}
------------------------------------------
This is the case where $\dim M-\dim N = 1$, which gives us vast simplification: now $\gamma = 0$ and (let us denote by $0$ the only index taken by $A,B,\dots$) $$\label{eq:21}
M^{0}{}_{ij}\equiv M_{ij},\qquad K^{0}{}_{0i}\equiv K_{i}.$$
For a Riemannian rigid flow, if the ambient space is homogeneous, then the functions $M_{ij}$ are basic for the projection $\pi:M\rightarrow N$.
Using the curvature expressions above, we can collect the terms in the $(2,2)$ entry that lie along the basis $\theta\wedge\theta$, i.e., $$\label{eq:25}
(2,2)|_{\theta\wedge\theta}=d\bar\alpha+\bar\alpha\wedge\bar\alpha-\theta^{t}M^{t}\wedge M\theta-MM\theta\wedge\theta.$$ Now the left hand side is constant since the ambient space is homogeneous, while $d\bar\alpha+\bar\alpha\wedge\bar\alpha$ is the curvature of the space $N$ and hence is basic. This shows that the expression $$\label{eq:26}
\theta^{t}M^{t}\wedge M\theta+MM\theta\wedge\theta$$ must be basic as well. Using indices, this means that the function $$\label{eq:27}
R_{ijkl}=M_{ik}M_{jl}-M_{il}M_{jk}+2M_{ij}M_{lk}$$ is constant along each leaf. Now, for example $$\label{eq:34}
R_{1212}=-M_{12}M_{21}+2M_{12}M_{21}=-M_{12}^{2}$$ since the function $M$ is smooth, this shows that $M_{12}$ is constant along each leaf. The claim is proved by noting that we can choose indices to generate $-M_{ij}^{2}=\text{const.}$ along each leaf for any pair of $(i,j)$.
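Explicitly, for any fixed pair $i\neq j$ (no summation) the same choice of indices gives $$R_{ijij}=M_{ii}M_{jj}-M_{ij}M_{ji}+2M_{ij}M_{ji}=M_{ij}M_{ji}=-M_{ij}^{2},$$ where the first term vanishes because $M$ is skew-symmetric.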
Let us remark again that it is not true that we can always use the above algebraic method to show that a function is basic for a projection: usually the right $H$ action will change the value of the function along the leaf. Our method makes sense in this case due to the fact that by using the pulled back form $\theta$ on the principal bundle over $M$, we have already killed the degree of freedom that affects $M_{ij}$, i.e., in the reduced bundle, the function $M_{ij}$ is constant in each fibre, as can be shown by explicit calculation.
Also, for homogeneous spaces we can set $(2,2)=0$ straight away by a process called mutation: instead of considering the Euclidean model, we can use a spherical model $$\label{eq:32}
\mathfrak g'=\left\{
\begin{pmatrix}
0&-v^{t}\\
v&a
\end{pmatrix}\in M_{n+1}({\mathbb{R}})\mid a\in \mathfrak o(n)
\right\}$$ or a hyperbolic model $$\label{eq:33}
\mathfrak g''=\left\{
\begin{pmatrix}
0&v^{t}\\
v&a
\end{pmatrix}\in M_{n+1}({\mathbb{R}})\mid a\in \mathfrak o(n)
\right\}$$ such a mutation of the model does not affect the geometry in any way, but globally subtracts a constant from any curvature function (in the spherical model, a sphere of unit diameter is “flat” whereas a plane is not). By setting the scale correctly, we can comfortably set all curvature blocks in $M$ to zero.
\[kmmagic\] Under the assumptions of the above proposition, $K_{[i;j]}=0$.
We exterior differentiate the torsion-free condition $(3,1)=0$ again. Note that we obtain a term $K_{[i;j]}\psi^{A}\wedge\theta^{i}\wedge\theta^{j}$. Hence we need to collect the other terms that have the basis $\theta\wedge\theta\wedge\psi$. It is easy to show that the only contribution is proportional to $MM\theta\wedge\theta\wedge\psi$. However, since $M_{ij}$ is skew-symmetric, so that $M_{ik}M_{kj}$ is symmetric in $i$ and $j$, it is clear that this term vanishes.
\[basick\] Under the assumptions of the above proposition, if in addition $M\neq 0$, then $K$ is basic.
We again use the curvature equations, but now focus on terms containing $\theta\wedge\psi$. The only term that needs discussion is $-dM\wedge\psi$, but since $M$ is basic, $dM$ is as well. Hence we have an expression of the form $$\label{eq:35}
B\wedge\psi = -MK\theta\wedge\psi-\psi^{t}K^{t}\wedge M\theta-\theta^{t}M^{t}\wedge K\psi$$ where $B$ is some basic form for the projection $\pi:M\rightarrow N$. Writing out using coordinates, this simplifies to the fact that $K_{i}M_{jk}$ is basic. Since some component of $M$ does not vanish, it follows that $K$ must also be basic.
\[herglotz1\] If the ambient space is homogeneous and $M\neq 0$, then a Riemannian rigid flow is isometric.
The easiest way to show this is to take a section of the bundle $P$ and calculate the metric. The Riemannian metric is now $$\label{eq:37}
g=\psi^{0}\otimes\psi^{0}+\sum_{i}\theta^{i}\otimes\theta^{i}.$$ Let $\mathbf{I}_{0}$ be dual to $\psi^{0}$, $\mathbf{I}_{i}$ be dual to $\theta^{i}$. For a vector field $\mathbf{V}=\lambda\mathbf{I_{0}}$ where $\lambda$ is a function on $M$, we have $$\label{eq:38}
\mathcal{L}_{\mathbf{V}}g=\sum_{ij}\lambda M_{ij}(\theta^{i}\otimes\theta^{j}+\theta^{j}\otimes\theta^{i})+\mathbf{I}_{0}(\lambda)\psi^{0}\otimes\psi^{0}+(\mathbf{I}_{i}(\lambda)-\lambda K_{i})(\theta^{i}\otimes\psi^{0}+\psi^{0}\otimes\theta^{i}).$$ If for some $\lambda$ the whole thing vanishes, we have an isometry. The term involving $M_{ij}$ vanishes automatically due to antisymmetry. The question now reduces to: can we find $\lambda$ such that $$\label{eq:39}
\mathbf{I}_{0}(\lambda)=0,\qquad \mathbf{I}_{i}(\lambda)=\lambda K_{i}$$ The first condition just says that $\lambda$ is required to be basic for the projection $\pi:M\rightarrow N$ as well. Hence we will look for a solution on $N$. The second condition can be rewritten as $$\label{eq:41}
K_{i}=\frac{{\partial}\log\lambda}{{\partial}x^{i}}$$ for some coordinate system $x^{i}$ on $N$. Poincaré Lemma says that this can be locally solved as long as $K_{[i;j]}=0$, but this we already know. Finally, since $K_{i}$ is basic, $\lambda$ constructed this way is also basic.
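As a sanity check (an example of ours, not needed elsewhere): for the rigid rotation about the $z$-axis in Euclidean ${\mathbb{R}}^{3}$, the unit flow direction is $\mathbf{I}_{0}=r^{-1}(-y\,{\partial}_{x}+x\,{\partial}_{y})$, where $r$ is the distance from the axis; $r$ is constant along the flow lines, and taking $\lambda=r$ gives $$\lambda\mathbf{I}_{0}=-y\,{\partial}_{x}+x\,{\partial}_{y},$$ the rotational Killing field, exactly as the theorem predicts.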
Riemannian rigid flow in conformally flat space {#sec:conf-flat-case}
-----------------------------------------------
While we are at it, let us prove the following conceptually simple but computationally very intensive result
If the ambient space is conformally flat and $M\neq 0$, then a Riemannian rigid flow is isometric.
By the proof of theorem \[herglotz1\], we need to show that $K_{i}$ in this case is basic and $K_{[i;j]}=0$. Corollary \[kmmagic\] does not use properties of the ambient space, so the latter condition still holds; by the argument of corollary \[basick\], the former condition reduces to showing that $M_{ij}$ is basic. Let us first state our strategy:
1. eliminate all terms containing $K$ and derivatives of $M$ in equations, leaving only products of $M$ and other functions;
2. eliminate all non-basic functions by using the conformal flatness condition;
3. the remaining equations then give algebraic constraints on $M$ in terms of basic functions.
To prepare for our calculation, let us rewrite the curvature equations in components using a basis for the Lie algebra, and use covariant derivatives. We have the following conditions $$\begin{aligned}
R_{0i0j}&=-K_{(i;j)}+K_{i}K_{j}+M_{kj}M^{k}{}_{i}\label{1stweyl}\\
R_{0ijk}&=-M_{ik;j}+M_{ij;k}-2K_{i}M_{kj}\\
R_{ij0k}&=-M_{ij;k}-K_{i}M_{jk}+K_{j}M_{ik}+K_{k}M_{ij}\\
R_{ijkl}&=\widetilde R_{ijkl}+M_{ik}M_{jl}-M_{il}M_{jk}-2M_{ij}M_{lk}\\
R_{00}&=-K^{i}{}_{;i}+K_{i}K^{i}+M^{ij}M_{ij}\\
R_{0i}&=M^{j}{}_{i;j}+2K^{j}M_{ij}\\
R_{ij}&=\widetilde R_{ij}+2M_{ik}M_{j}{}^{k}-K_{i}K_{j}+\tfrac{1}{2}(K_{i;j}+K_{j;i})\\
R&=\widetilde R+2K^{i}{}_{;i}-2K_{i}K^{i}+M_{ij}M^{ij}\end{aligned}$$ where we have repeatedly used theorem \[kmmagic\]. Here $R$ denotes the Riemannian curvature *function* on $M$ and the associated Ricci curvature functions, whereas $\tilde R$ denotes the same thing on $N$. Before, we used the homogeneous condition to set all $R$ to be constants and used algebraic conditions hidden in the set of equations to force the components of $M$ to be constant. Now the problem is more difficult, since $R$ could vary from point to point even along each leaf. We can only be sure that the Weyl part of the curvature function vanishes. The Weyl curvature function is defined in terms of the Riemann curvature function as $$\begin{aligned}
\label{eq:100}
W_{\mu\nu\rho\lambda}&=R_{\mu\nu\rho\lambda}-\frac{2}{n-2}(\delta_{\mu[\rho}R_{\lambda]\nu}-\delta_{\nu[\rho}R_{\lambda]\mu})+\frac{2}{(n-1)(n-2)}R\delta_{\mu[\rho}\delta_{\lambda]\nu}\\
&=R_{\mu\nu\rho\lambda}-\delta_{\mu\rho}F_{\lambda\nu}+\delta_{\mu\lambda}F_{\rho\nu}+\delta_{\nu\rho}F_{\lambda\mu}-\delta_{\nu\lambda}F_{\mu\rho}\end{aligned}$$ where in the last line $$\label{eq:20}
F_{\lambda\nu}=\frac{1}{n-1}R_{\lambda\nu}+\frac{1}{2(n-1)(n-2)}R\delta_{\lambda\nu}$$ Using the above, let us calculate the Weyl equation corresponding to the equation for $R_{0i0j}$ above: $$\begin{aligned}
W_{0i0j}&=\left(\frac{1}{n-2}\widetilde R_{ij}-\frac{1}{(n-1)(n-2)}\widetilde R\delta_{ij}\right)\label{aline}\\
&\quad+\left(M_{kj}M^{k}{}_{i}+\frac{2}{n-2}M_{ik}M_{j}{}^{k}-\frac{1}{(n-1)(n-2)}M_{kl}M^{kl}\right)\label{bline}\\
&\quad+\left(\frac{n-3}{n-2}(K_{i}K_{j}-K_{(i;j)})+\frac{2}{(n-1)(n-2)}\delta_{ij}(K_{k}K^{k}-K^{k}{}_{;k})\right)\label{cline}\end{aligned}$$ Observe: the left hand side and the first line are basic functions for the projection to $N$, the second line is a quadratic function of the components of $M_{ij}$, and the third line is exactly the terms containing $K$ and its derivatives occurring in $F_{ij}$ above. This means that by substitution we can exchange $F_{\lambda\nu}$ for a sum of basic functions and quadratic products of the components of $M_{ij}$.
W_{ijkl}=R_{ijkl}-\delta_{ik}F_{lj}+\delta_{il}F_{kj}+\delta_{jk}F_{li}-\delta_{jl}F_{ik}$$ now $R_{ijkl}$ is a sum of basic functions and quadratic products of $M_{ij}$ as well. So schematically we have $$\label{eq:2}
0=(\text{quadratic terms of $M_{ij}$})+(\text{basic functions})$$ expanding, we get $$\label{eq:29}
M_{ik}M_{jl}-M_{il}M_{jk}-2M_{ij}M_{lk}=\text{basic function}$$ hence, by the same index argument as in the homogeneous case, $M_{ij}$ is basic as well. As all terms involving $M$ are now basic, the problem simplifies greatly and the calculation analogous to corollary \[basick\] shows that $K$ is basic as well, as long as $M\neq 0$.
Even though we have not yet started investigating conformal geometry, we already have the following result:
For a flat conformal geometry, a conformally rigid flow is automatically conformally isometric as long as $M\neq0$ in some representative Riemannian structure.
Since a conformal structure allows scaling by non-zero functions, the condition $M\neq 0$ is conformally invariant. Since the flow is conformally rigid, it preserves the metric up to scale, and by using the scaling we can find a Riemannian representative that is conformally flat and in which the flow is a Riemannian rigid flow. Hence in this representative the flow is an isometry, and in the conformal geometry the original flow must be conformally isometric.
The Ricci-flat case {#sec:ricci-flat-case}
-------------------
The relevant equations in this case are: $$\begin{aligned}
\label{eq:74}
R_{00}&=0=-K^{i}{}_{;i}+K_{i}K^{i}+M^{ij}M_{ij}\\
R_{0i}&=0=M^{j}{}_{i;j}+2K^{j}M_{ij}\\
R_{ij}&=0=\widetilde R_{ij}+2M_{ik}M_{j}{}^{k}-K_{i}K_{j}+\tfrac{1}{2}(K_{i;j}+K_{j;i})\\
R&=0=\widetilde R+2K^{i}{}_{;i}-2K_{i}K^{i}+M_{ij}M^{ij}\end{aligned}$$ Using the first and last equations we get $$\label{eq:81}
\sum_{ij}M^{ij}M_{ij}=\sum_{i}(K^{i}{}_{;i}-K_{i}K^{i})=\text{constant along each leaf}$$ Now there are not enough indices for doing the kind of computations we did before, and we can only conclude that the sum of the squares of all components of $M_{ij}$ is constant, whereas $M_{ij}$ itself can undergo rotation in the leaf direction. However, there are still other constraints to satisfy, and this still places serious restrictions on the existence of rigid flows.
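To spell out why $M^{ij}M_{ij}$ is constant along each leaf (a short elimination written out here for convenience, using only the two displayed equations): substituting $K^{i}{}_{;i}-K_{i}K^{i}=M^{ij}M_{ij}$ from the $R_{00}$ equation into the scalar equation gives $$\widetilde R+3M^{ij}M_{ij}=0,$$ so $M^{ij}M_{ij}=-\tfrac{1}{3}\widetilde R$ is a basic function and is therefore constant along each leaf.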
The uniqueness of rigid flow
----------------------------
Even in a general curved background, where the Herglotz-Noether theorem and its generalisations fail (there might not be any Killing vectors at all), we still have the following uniqueness result concerning rotational rigid flows. \[sec:exist-uniq-flow\]
Assume that we have a rigid flow in an arbitrary Riemannian manifold $M$ and let $T\subset M$ be a hypersurface transversal to the flow. If the value of $M_{ij}$ is known on $T$, then it is known throughout $M$. In addition, for any open set containing $T$ in which $M_{ij}$ does not vanish, the value of $K_{i}$ throughout the open set is determined by the value of $M_{ij}$ on $T$.
Look at and below. For the equation involving $R_{ijkl}$, giving $M_{ij}$ at one point determines $\tilde R_{ijkl}$ for all points along the leaf, and together with values of $R_{ijkl}$ along the leaf, all values of $M_{ij}$ along the same leaf are determined. If in an open set $M_{ij}\neq 0$, then the equation for $R_{0i}$ will allow us to solve for $K^{i}$ uniquely given $M_{ij}$.
Of course, there is no reason why any rigid flow should exist at all in a general space. An example of a rotational rigid flow in a non-conformally-flat spacetime is given in [@pirani1964].
On the other hand, in a flat space if the flow is non-rotating but isometric, the above theorem is false, as one can easily give many examples by “dragging the flow along”. See [@Giulini:2006p66] for an explicit construction.
Conclusion {#sec:conclusion}
==========
In this paper we have described a general framework for dealing with problems involving structure-preserving submersions between manifolds. We gave some examples of how real problems can be adapted to our framework, and by using it we successfully extended the classical Herglotz-Noether theorem to all conformally flat spaces in all dimensions. Several interesting projects that fit within our framework and could be undertaken in the future are:
- Study the structure-preserving submersions arising in Yang-Mills and other gauge theories without introducing any metric, using group properties only.
- Study black hole solutions in higher dimensions whose constraints can be given by structure-preserving submersions.
- Study structure-preserving submersions arising in the fluid description of string theory, in the conformal framework [@Bhattacharyya:2007p31].
- Explore the adaptation of the framework to the supersymmetric case.
[^1]: `[email protected]`
|
---
abstract: 'A fundamental property of QCD is the presence of the chiral anomaly, which is the dominant component of the $\pi^0\rightarrow\gamma\gamma$ decay rate. Based on this anomaly and its small ($\simeq$ 4.5%) chiral correction, a prediction of the $\pi^0$ lifetime can be used as a test of QCD at confinement scale energies. The interesting experimental and theoretical histories of the $\pi^0$ meson are reviewed, from discovery to the present era. Experimental results are in agreement with the theoretical prediction, within the current ($\simeq$ 3%) experimental error; however, they are not yet sufficiently precise to test the chiral corrected result, which is a firm QCD prediction and is known to $\simeq$ 1% uncertainty. At this level there exist experimental inconsistencies, which require attention. Possible future work to improve the present precision is suggested.'
author:
- |
A. M. Bernstein\
Physics Department and Laboratory for Nuclear Science\
M.I.T., Cambridge Mass., 02139 USA\
Barry R. Holstein\
Department of Physics LGRT\
University of Massachusetts, Amherst. Mass, 01003 USA
bibliography:
- 'pi0\_review.bib'
title: Neutral Pion Lifetime Measurements and the QCD Chiral Anomaly
---
|
---
abstract: 'Hamiltonian systems with linearly dependent constraints (irregular systems), are classified according to their behavior in the vicinity of the constraint surface. For these systems, the standard Dirac procedure is not directly applicable. However, Dirac’s treatment can be slightly modified to obtain, in some cases, a Hamiltonian description completely equivalent to the Lagrangian one. A recipe to deal with the different cases is provided, along with a few pedagogical examples.'
author:
- |
Olivera Mišković$^{1,2}$ and Jorge Zanelli$^{1}$\
$^{1}$Centro de Estudios Científicos (CECS), Casilla 1469, Valdivia, Chile.\
$^{2}$Departamento de Física, Universidad de Santiago de Chile,\
Casilla 307, Santiago 2, Chile.\
`[email protected], [email protected]`
title: Irregular Hamiltonian Systems
---
Introduction
============
Dirac’s Hamiltonian analysis provides a systematic method for finding the gauge symmetries present in a theory. The analysis identifies and classifies the constraints, which are local functions of the phase space coordinates. Consistency requires that the constraints be preserved during the evolution (for a review of the Hamiltonian analysis see Refs. [@Dirac]–[@Chitaia-Gogilidze-Surovtsev]). However, if the constraints are not functionally independent, then Dirac’s procedure is not applicable. The test of functional independence is provided by the so-called regularity conditions, and those systems which fail the test are said to be *irregular*.
Irregular systems are not necessarily intractable or exotic. A simple example is a relativistic massless particle ($p^{\mu }p_{\mu }=0$), which is irregular at the origin of momentum space ($p^{\mu }=0$). This point in phase space is exceptional, as it is unclear whether this would be an observable state for a photon, say. On the other hand, we know the configuration $p^{\mu }=0$ to be a very important one: the ground state. There are other physical circumstances in which regularity is violated, and not only for isolated states but on large portions of the region in phase space where the system evolves. Chern-Simons theories in $2n+1$ spacetime dimensions are examples where, for some initial configurations, regularity can fail at all times and one is forced to live with this problem.
Here we discuss the possible ways in which the constraints can fail the test of functional independence, and how the Hamiltonian treatment of Dirac must be modified in each case.
Regularity Conditions
=====================
If we call $z^{i}\equiv (q,p)$ ($i=1,\ldots ,2n$) the coordinates in the phase space $\Gamma $, the constraints $\phi ^{r}(z)\approx 0$ ($r=1,...R)$ define the *constraint surface* $\Sigma $ given by $$\Sigma =\left\{ \bar{z}\in \Gamma \mid \phi ^{r}(\bar{z})=0\;\left(
r=1,\ldots ,R\right) \left( R\leq 2n\right) \right\} .$$
**Regularity Conditions** (RCs): *The constraints $\phi ^{r}\approx 0$ are regular if and only if their small variations $\delta \phi ^{r}$, evaluated on $\Sigma $, are $R$ linearly independent functions of $\delta z^{i}$.*
To first order in $\delta z^{i}$, the variations of the constraints are $\delta \phi ^{r}=J_{i}^{r}\delta z^{i}$, where $J_{i}^{r}\equiv \left. \frac{\partial \phi ^{r}}{\partial z^{i}}\right| _{\Sigma }$. Consequently, the RCs can also be defined as [@Henneaux-Teitelboim]:
*The set of constraints $\phi ^{r}\approx 0$ is regular if and only if the Jacobian $J_{i}^{r}\equiv \left. \frac{\partial \phi ^{r}}{\partial z^{i}}\right| _{\Sigma }$ has maximal rank:* $\Re (J)=R$.
A simple classical mechanical example of functionally *dependent* constraints occurs in a $2$-dimensional phase space with coordinates $\left( q,p\right) $ and constraints $\phi ^{1}\equiv q\approx 0$ and $\phi ^{2}\equiv pq\approx 0$. In this case, $J=\left[
\begin{array}{ll}
1 & p \\
0 & q
\end{array}
\right] _{\Sigma }$ and $\Re \left[ \frac{\partial \left( \phi ^{1},\phi
^{2}\right) }{\partial \left( q,p\right) }\right] _{q=0}=1$. A system of just one constraint can also fail the test of regularity. Consider the constraint $\phi =q^{2}\approx 0$ in a $2$-dimensional phase space. In this case, $J=\left[
\begin{array}{l}
2q \\
0
\end{array}
\right] _{q^{2}=0}=0$ and $\Re \left( J\right) =0$. The same problem occurs with the constraint $q^{7}\approx 0$, which has a zero of seventh order at the constraint surface, or with any other constraint which is not linear in $z^{i}-\bar{z}^{i}$.
Classification of Irregular Constraints
=======================================
Irregular constraints can be classified according to their approximate behavior near the surface $\Sigma $.
**A. Linear constraints**. The Jacobian has constant, non-maximal rank throughout $\Sigma $, and $$\phi ^{r}\equiv J_{i}^{r}\left( z^{i}-\bar{z}^{i}\right) \approx 0,\qquad
\Re (J)=R^{\prime }<R.$$ These are regular systems in disguise. Regularity fails simply because $R-R^{\prime }$ constraints are redundant and should be discarded. The regular system gives the correct description.
**B. Multilinear constraints**. In the vicinity of $\Sigma $, the constraints are of the form $$\phi \equiv \frac{1}{m!}\,S_{i_{1}\ldots i_{m}}\left( z^{i_{1}}-\bar{z}%
^{i_{1}}\right) \cdots \left( z^{i_{m}}-\bar{z}^{i_{m}}\right) \approx
0\qquad \left( m\leq 2n\right) ,$$ where the coefficients $S_{i_{1}\ldots i_{m}}$ vanish if any two indices are equal. Thus, $\phi $ has *simple zeros* on surfaces of dimension $2n-1$, and zeros of higher order occur at the intersections of these surfaces. The RCs fail at the points of intersection, where $\phi $ has multiple zeros. At those intersections, $\phi $ can be replaced by the equivalent[^1] set of constraints, $$\varphi ^{1}\equiv z^{i_{1}}-\bar{z}^{i_{1}}\approx 0,\qquad \cdots \qquad ,%
\varphi ^{m}\equiv z^{i_{m}}-\bar{z}^{i_{m}}\approx 0.$$ At the intersections, the set $\left\{ \varphi ^{1},...,\varphi ^{m}\right\}
$ is regular and therefore this provides a recipe for substituting the irregular multilinear constraint $\phi $ by a regular set of linear constraints. For example, the constraint $\phi \equiv qp\approx 0$ is irregular at $(0,0)$ because it admits a linear approximation everywhere except at this point. Replacing $\phi $ by the linear constraints $\left\{
q\approx 0,p\approx 0\right\} $ at $(0,0)$ regularizes the system.
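To make the failure of regularity explicit (a small computation added here for illustration), for $\phi \equiv qp$ the Jacobian is $$J=\left. \left( \frac{\partial \phi }{\partial q},\frac{\partial \phi }{\partial p}\right) \right| _{\Sigma }=\left. \left( p,q\right) \right| _{qp=0},$$ which has rank $1$ on the branches $q=0,\;p\neq 0$ and $p=0,\;q\neq 0$, but vanishes at their intersection $(0,0)$, which is precisely where the RCs fail.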
**C. Higher order constraints.** In the vicinity of $\Sigma $, the constraints do not possess a linear approximation: $$\phi \equiv C\left( z^{s}-\bar{z}^{s}\right) ^{k}\approx 0\qquad \left(
k>1\right) . \label{high-order}$$ The Jacobian vanishes everywhere on the constraint surface. Naively, it would seem possible to choose $z^{s}-\bar{z}^{s}\approx 0$ as an equivalent regular constraint, but it turns out that this could change the dynamics of the original theory, as we show below.
These three classes are generic and, in general, there can be combinations of these three types occurring simultaneously near a constraint surface. By selecting a sufficiently small neighborhood of a point on $\Sigma $, one can always expect to be in one and only one of the three situations just described.
In correspondence with the three generic cases in which regularity can fail, the nature of the constraint surface $\Sigma $ falls into one of three categories:
**A.** *The RCs are satisfied on the whole constraint surface.* These are regular systems, either disguised or not.
**B.** *The RCs fail on a submanifold of* $\Sigma $: $\Re \left(
J\right) =R$ except on a submanifold $\Sigma _{0}\subset \Sigma $ where $\Re
\left( J\right) =R^{\prime }<R$.
**C.** *The RCs fail everywhere on* $\Sigma $*:* $J$ has constant rank lower than $R$ on $\Sigma $.
The first case is treated in the standard texts and will not be further discussed here.
In the second case, the constraint surface can be decomposed into two non-empty submanifolds, $\Sigma _{0}$ and $\Sigma _{R}$, such that $\Sigma =\Sigma _{0}\cup \Sigma _{R}$ and $\Sigma _{0}\cap \Sigma _{R}$ is empty. Then the rank of the Jacobian jumps from $\Re \left( J\right) =R$ on $\Sigma _{R}$ to $\Re \left( J\right) =R^{\prime }$ on $\Sigma _{0}$. As mentioned above, in this case it is possible to replace $\phi $ at $\Sigma _{0}$ by a set of regular constraints $\left\{ \varphi ^{1}\approx 0,\cdots ,\varphi ^{m}\approx 0\right\} $ which regularize the system at $\Sigma _{0}$.
The important question is how to proceed in the third case. If the RCs fail everywhere on $\Sigma $, the previous approach is not applicable, because there is no guarantee that the resulting Hamiltonian dynamics will be equivalent to that of the original Lagrangian system.
This can be seen in the example of a Lagrangian in a three-dimensional configuration space, $$L\left( x,y,z\right) =\dot{x}\dot{z}+yz^{2}. \label{three particles}$$ The general solution of the Euler-Lagrange equations describes a system with *one* degree of freedom – a free particle, whose time evolution $\bar{x}=p_{0}\,t+x_{0}$ is determined by two initial conditions $p_{0}$ and $x_{0}$. The remaining fields are $\bar{y}(t)$, a Lagrange multiplier with indeterminate evolution, and $\bar{z}(t)$, with trivial evolution, $\bar{z}(t)=0$.
The Hamiltonian approach gives just two (first class) constraints, $p_{y}\approx 0$, $z^{2}\approx 0$ ($R=2$), and *one* degree of freedom, as expected from the Lagrangian approach. However, this is only superficially correct, because the Jacobian has rank 1, and not 2. This is because the constraint function $\phi =z^{2}$ has no linear approximation at $z=0$.
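For the reader’s convenience, here is a minimal sketch of the Dirac analysis behind this count (an illustration added here, with standard conventions). The canonical momenta are $$p_{x}=\dot{z},\qquad p_{z}=\dot{x},\qquad p_{y}=0,$$ so $p_{y}\approx 0$ is the primary constraint and the canonical Hamiltonian is $H=p_{x}p_{z}-yz^{2}$. Preservation of $p_{y}$ gives $\dot{p}_{y}=\left\{ p_{y},H\right\} =z^{2}\approx 0$, while $\left\{ z^{2},H\right\} =2zp_{x}$ vanishes weakly on $z^{2}\approx 0$, so no further constraints appear and both constraints are first class.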
On the other hand, if we naively take $z\approx 0$, which is regular and equivalent to $z^{2}\approx 0$, then the Hamiltonian analysis generates *three* first class constraints $p_{y}\approx 0,\;z\approx
0,\;p_{x}\approx 0$, which leave *zero* physical degrees of freedom. This result is not consistent with the Lagrangian description where there is one degree of freedom.
As seen in the previous example, even if two constraints are equivalent in defining the same constraint surface, they may yield different dynamics and should be treated more carefully.
Suppose $\phi \approx 0$ is a constraint of the form (\[high-order\]), which is equivalent to the *regular* constraint $$\chi \equiv \phi ^{1/k}\approx 0.$$ The question is whether $\chi \approx 0$ also gives the correct dynamics. The answer depends on whether $\chi $ is a first or second class constraint. Namely, it makes a difference whether or not the linear constraint $\chi $ can generate a transformation in phase space that leaves the Hamiltonian action unchanged. As shown in [@Miskovic-Zanelli], if the linearized constraint $\chi $ is second class, then it is not only geometrically equivalent to $\phi $ in the sense that it defines the same $\Sigma $, but the substitution also yields the same dynamical description as the Lagrangian approach. On the other hand, if $\chi $ is first class, then the substitution generates a system whose dynamics is different from the one obtained from the Euler-Lagrange equations.
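This criterion is consistent with the example above (a remark added for illustration): there the linearized constraint $\chi =z\approx 0$ Poisson-commutes with $p_{y}\approx 0$ and with the secondary constraint $p_{x}\approx 0$ that it generates, so $\chi $ is first class, and indeed the substitution changed the count from one physical degree of freedom to zero.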
Conclusions
===========
The recipe for treating the non-regular constraints is:
- Every linear or multilinear set of constraints can be replaced by an equivalent regular set. This allows one to carry out Dirac’s procedure in the standard way.
- A higher order constraint $\phi =C\left( z^{s}-\bar{z}^{s}\right) ^{k}\approx 0$ can be replaced by the equivalent linear constraint $\chi =\phi ^{1/k}\approx 0$. If $\chi $ is a second class constraint, the dynamics of the new system is equivalent to the Lagrangian one. If $\chi $ is first class, the substitution yields a system which is not dynamically equivalent to the Lagrangian one. In this latter case, one should view the original Lagrangian as an incomplete, if not totally inconsistent, description of a dynamical system.
Acknowledgments {#acknowledgments .unnumbered}
===============
We are deeply grateful to Ricardo Troncoso for many enlightening insights and discussions. This work is partially funded by grants FONDECYT 1020629, 7020629, 1010450, 2010017, and MECESUP USA 9930 scholarship. The generous support of Empresas CMPC to the CECS is also acknowledged. CECS is a Millennium Science Institute and it is funded in part by grants from Fundación Andes and the Tinker Foundation.
[9]{}
P. A. M. Dirac, *Lectures on Quantum Mechanics* (Yeshiva University, New York, 1964).
M. Henneaux and C. Teitelboim, *Quantization of Gauge Systems* (Princeton University Press, Princeton, 1992).
M. Blagojević, *Gravity and Gauge Symmetries* (Institute of Physics Publishing, London, 2001).
N. P. Chitaia, S. A. Gogilidze and Yu. S. Surovtsev, *Phys. Rev.* **D 56** (1997) 1135; *Phys. Rev.* **D 56** (1997) 1142.
O. Mišković and J. Zanelli, *Dynamical Structure of Irregular Constrained Systems* (in preparation).
[^1]: Two sets of constraints are said to be *equivalent* if they define the same constraint surface $\Sigma $.
|
---
abstract: 'We give a construction of NC-smooth thickenings (a notion defined by Kapranov [@Kapranov]) of a smooth variety equipped with a torsion free connection. We show that a twisted version of this construction realizes all NC-smooth thickenings as $0$th cohomology of a differential graded sheaf of algebras, similarly to Fedosov’s construction in [@Fed]. We use this dg resolution to construct and study sheaves on NC-smooth thickenings. In particular, we construct an NC version of the Fourier-Mukai transform from coherent sheaves on a (commutative) curve to perfect complexes on the canonical NC-smooth thickening of its Jacobian. We also define and study analytic NC-manifolds. We prove NC-versions of some of GAGA theorems, and give a $C^\infty$-construction of analytic NC-thickenings that can be used in particular for Kähler manifolds with constant holomorphic sectional curvature. Finally, we describe an analytic NC-thickening of the Poincaré line bundle for the Jacobian of a curve, and the corresponding Fourier-Mukai functor, in terms of $A_\infty$-structures.'
author:
- Alexander Polishchuk and Junwu Tu
title: 'DG-resolutions of NC-smooth thickenings and NC-Fourier-Mukai transforms'
---
[^1]
Introduction
============
Background.
-----------
In the remarkable paper [@Kapranov] Kapranov initiated the study of noncommutative algebras and schemes which are complete with respect to the commutator filtration (called NC-complete algebras and NC-schemes). The commutator filtration on a ring $R$ is defined by $$\label{F-filtration-eq}
F^dR=\sum_{n_1+\ldots+n_m-m=d} R\cdot R^{{\operatorname{Lie}}}_{n_1}\cdot R\ldots\cdot R\cdot R^{{\operatorname{Lie}}}_{n_m}\cdot R,$$ where $R^{{\operatorname{Lie}}}$ is $R$ viewed as a Lie algebra, and $R^{{\operatorname{Lie}}}_n$ is the $n$th term of its lower central series. In other words, $F^dR$ is the two-sided ideal spanned by all expressions containing $d$ commutators (possibly nested). The ring $R$ is called NC-complete if it is complete with respect to the filtration $(F^dR)$. Such a ring can be viewed as an inverse limit of the [*NC-nilpotent*]{} rings $R/F^dR$ which are nilpotent extensions of the abelianization of $R$, $R^{{\operatorname{ab}}}=R/F^1R$. [*NC-schemes*]{} are defined in [@Kapranov] as ringed spaces constructed by gluing formal spectra of NC-complete rings. Every NC-scheme $X$ has the abelianization $X^{{\operatorname{ab}}}$ which is a usual scheme, and we say that $X$ is an [*NC-thickening*]{} (or simply [*thickening*]{}) of $X^{{\operatorname{ab}}}$ (in particular, $X$ and $X^{{\operatorname{ab}}}$ have the same underlying topological spaces). Intuitively one can think of NC-schemes as analogues of formal schemes where the formal direction is allowed to be noncommutative.
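As a simple illustration of this filtration (an example added here, not taken from [@Kapranov]): for the free algebra $R=k{\langle}x,y{\rangle}$ one has $[x,y]\in R^{{\operatorname{Lie}}}_2$, so $[x,y]\in F^1R$, while $[x,[x,y]]\in R^{{\operatorname{Lie}}}_3$ and the product $[x,y]\cdot [x,y]$ both lie in $F^2R$; the abelianization is $R^{{\operatorname{ab}}}=k[x,y]$.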
The important notion of [*NC-smoothness*]{} is introduced in [@Kapranov] by analogy with the notion of smoothness of associative algebras due to Cuntz-Quillen [@CQ]. The only difference is that in Kapranov’s setup the lifting property is required to hold only in the class of NC-nilpotent algebras. One can think of an NC-smooth thickening of a smooth variety $X$ as an analog of deformation quantization, where the algebra of functions on $X$ is replaced by its universal Poisson envelope. Kapranov showed in [@Kapranov] that every affine smooth variety $X$ admits a unique NC-smooth thickening (up to a non-canonical isomorphism). For non-affine smooth schemes there are in general obstructions to the existence of NC-smooth thickenings, and they are not necessarily unique. Kapranov in [@Kapranov] constructs examples of NC-smooth thickenings for projective spaces, Grassmannians and for some versal families of vector bundles.
There was a very limited development of the theory of NC-smooth thickenings after Kapranov (see [@Cortinas-deRham], [@Cortinas], [@LeBruyn]). In particular, before our work all constructions of smooth NC-thickenings followed one of the two patterns: either step-by-step thickening using universal central extensions (this works in the affine case), or via defining a functor on the category of NC-nilpotent algebras and proving its representability.
Summary of main results I: algebraic theory.
--------------------------------------------
In the present paper we give a different construction of NC-smooth thickenings which is analogous to Fedosov’s construction of the deformation quantization of a symplectic manifold in [@Fed]. Our construction begins with the following definition.
\[alge-nc-defi\] Let $X$ be a smooth algebraic variety. Let $\Omega^\bullet$ denote the algebraic de Rham complex of $X$, and $\hat{T}_{{\cal O}}({\Omega}^1)$ the completed tensor algebra generated by one forms. An *algebraic NC-connection* is a *degree one, square zero derivation* $D$ of the graded algebra (graded by the de Rham degree) $${{\cal A}}_X:={\Omega}^\bullet{\otimes}_{{{\cal O}}} \hat{T}_{{\cal O}}({\Omega}^1)$$ extending the de Rham differential on ${\Omega}^\bullet$ and acting on ${\Omega}^1{\subset}\hat{T}_{{\cal O}}({\Omega}^1)$ by an operator of the form $$D(1\otimes \alpha)=\alpha{\otimes}1+\nabla_1(\alpha)+\nabla_2(\alpha)+\nabla_3(\alpha)+\cdots,$$ where $\nabla_i(\alpha)\in {\Omega}^1{\otimes}T^i({\Omega}^1)$.
Note that $\nabla_1: {\Omega}^1\to {\Omega}^1{\otimes}{\Omega}^1$ is an algebraic connection on ${\Omega}^1$, while for $i\geq 2$ the maps $\nabla_i$ are ${{\cal O}}$-linear. The condition $D^2=0$ implies that $\nabla_1$ corresponds to a torsion free connection on $X$. Conversely, we show that every torsion free connection extends to an algebraic NC-connection and that the corresponding differential graded algebra is a resolution of an NC-smooth thickening of $X$.
\[main-nc-thm\] Let $X$ be a smooth scheme endowed with an algebraic torsion free connection $\nabla$. Then
- there exists an algebraic NC-connection $D$ on $X$ with $\nabla_1=\nabla$;
- one has $\underline{H}^i({{\cal A}}_X,D)=0$ for $i\geq 1$;
- the sheaf $\underline{H}^0({{\cal A}}_X,D)$ with its induced algebra structure is an NC-smooth thickening of $X$.
Furthermore, such an NC-connection $D$ is determined uniquely up to conjugation by an automorphism of $\hat{T}_{{\cal O}}({\Omega}^1)$.
Since every smooth affine variety admits a torsion-free connection, we get an explicit construction of an NC-smooth thickening in this case, along with a dg-resolution. In the non-affine case we prove that every NC-smooth thickening has a dg-resolution obtained by a twisted version of our construction, involving a sheaf of algebras locally isomorphic to $\hat{T}_{{\cal O}}({\Omega}^1)$.
For an abelian variety $A$ our construction applied to the natural connection gives an NC-smooth thickening that we call [*standard*]{}. If $J=J(C)$ is the Jacobian of a smooth projective curve $C$ then there is another NC-smooth thickening of $J$ constructed by Kapranov in [@Kapranov] using the functor of noncommutative families of line bundles on $C$. We prove that this modular thickening is isomorphic to the standard one for $J$ viewed as an abelian variety.
Another theme of this paper is the study of ${{\cal O}}_X^{NC}$-modules for an NC-smooth thickening ${{\cal O}}_X^{NC}\to{{\cal O}}_X$. We are partially guided in this study by the analogy with modules over algebraic (or complex analytic) deformation quantization algebras that have been studied e.g. in [@KS], [@BBP], [@ABP], [@BGP], [@Pecharich]. In the case when ${{\cal O}}_X^{NC}$ is obtained from a torsion-free connection on $X$ we mimic our construction of the dg-resolution for modules. Namely, starting with a quasicoherent sheaf ${{\cal F}}$ on $X$ with a connection (not necessarily flat) we construct a dg-module over ${{\cal A}}_X$ that does not depend on a connection up to an isomorphism. Passing to the $0$th cohomology we get an ${{\cal O}}_X^{NC}$-module, which is locally free over ${{\cal O}}_X^{NC}$ in the case when ${{\cal F}}$ is a vector bundle. For an arbitrary NC-smooth thickening ${{\cal O}}_X^{NC}$ a similar construction works if the connection on ${{\cal F}}$ is flat. Furthermore, in this way we get an exact functor from $D$-modules on $X$ to ${{\cal O}}_X^{NC}$-bimodules.
One can ask which vector bundles extend to locally free modules (resp., bimodules) over a given NC-smooth thickening ${{\cal O}}_X^{NC}\to{{\cal O}}_X$. The first order obstruction to extend a vector bundle on $X$ to an ${{\cal O}}_X^{NC}$-module was computed by Kapranov in terms of the Atiyah classes (see [@Kapranov Rem. (4.6.9)]). We extend this to the case of bimodules and also answer the question of extendability completely in the case of the standard NC-smooth thickening of an abelian variety $A$ and of line bundles on $A$.
In the case of abelian varieties it is natural to look for analogs of the Fourier-Mukai transform involving their NC-smooth thickenings. We show how to extend the Poincaré line bundle on $A\times\hat{A}$ to a line bundle (a bimodule) on $A^{NC}\times A^\natural$, where $A^{NC}$ is the standard NC-smooth thickening of $A$ and $A^\natural$ is the universal extension of $\hat{A}$ by a vector space (the functions on $A^\natural$ are in the center). One can descend this NC-Poincaré line bundle to a twisted line bundle on $A^{NC}\times\hat{A}$ (since the projection $A^\natural\to \hat{A}$ splits locally). If we restrict to a curve $C$ in $\hat{A}$ then we can untwist it and get a line bundle on $A^{NC}\times C$. Using this line bundle we define an integral transform from the derived category $D^b({\operatorname{Coh}}(C))$ to the derived category of quasicoherent sheaves on $A^{NC}$. We prove that objects in the image of this transform are perfect, i.e., can be represented by complexes of vector bundles on $A^{NC}$.
Summary of main results II: analytic theory.
--------------------------------------------
The second main topic of this paper may be called *analytic NC geometry*. One advantage of the dg resolution $({{\cal A}}_X,D)$ of an NC-smooth scheme $X^{NC}$ considered above is that it enables us to define the *analytification* of $X^{NC}$: simply take the analytification of $({{\cal A}}_X,D)$, and then apply $\underline{H}^0$. For a general complex analytic manifold $X$ (which might not admit any holomorphic torsion-free connection), we define an analytic NC-smooth thickening of $X$ to be a sheaf of algebras over $X$ that is locally, in the analytic topology, modeled on an algebra of the form $\underline{H}^0({{\cal A}},D)$.
Similarly to the case of complex geometry, in analytic NC geometry it is often useful to work with $C^\infty$ bundles and their Dolbeault complexes. The following definition generalizes Definition \[alge-nc-defi\].
\[nc-connection-def\] Let $X$ be a complex manifold. Let $\Lambda_X$ denote the $C^\infty$ de Rham algebra of $X$, and $\Omega^{1,0}_X$ the space of $(1,0)$-type forms. An *NC-connection* on $X$ is a *degree one, square zero derivation* ${{\cal D}}$ of the graded algebra (graded by the de Rham degree of $\Lambda_X$) $$\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C_X^\infty}(\Omega_X^{1,0})$$ extending the de Rham differential on $\Lambda_X$, and acting on $\Omega_X^{1,0}{\subset}\hat{T}_{C_X^\infty}(\Omega_X^{1,0})$ by an operator of the form $${{\cal D}}(1{\otimes}\alpha)=\alpha{\otimes}1+\overline{\partial}\alpha+\nabla\alpha+A^\vee_2(\alpha)+A^\vee_3(\alpha)+\cdots$$ where $\nabla$ is a $(1,0)$-type connection on the bundle $\Omega_X^{1,0}$, and $A^\vee_k: \Omega_X^{1,0} {\rightarrow}\Lambda^1_X{\otimes}(\Omega_X^{1,0})^{{\otimes}k}$ is a $C^\infty_X$-linear morphism for each $k\geq 2$.
Analogously to Theorem \[main-nc-thm\], we prove the following results about NC connections, which demonstrates the important role they play in analytic NC geometry.
\[analytic-nc-thm\] Let $X$ be a complex manifold. Then
- one has $\underline{H}^i(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C_X^\infty}(\Omega_X^{1,0}),{{\cal D}})=0$ for $i\geq 1$;
- the sheaf $\underline{H}^0(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C_X^\infty}(\Omega_X^{1,0}),{{\cal D}})$ with its induced algebra structure is an analytic NC-smooth thickening of $X$;
- there exists an NC-connection ${{\cal D}}$ on $X$ if and only if $X$ admits an analytic NC-smooth thickening.
\[examples-thm\] Let $(X,g)$ be a Kähler manifold with constant holomorphic sectional curvature. Then $X$ admits an NC-connection. Thus, by Theorem \[analytic-nc-thm\], it also admits an analytic NC-smooth thickening.
Kähler manifolds with constant holomorphic sectional curvature are classified. There are three classes of such manifolds: quotients of $({{\Bbb C}}^n,\omega_0=\sqrt{-1}/2\sum dz_id\overline{z}_i)$, complex projective spaces, and complex hyperbolic spaces. Of these three classes of examples, the first one admits a holomorphic torsion-free connection, hence the existence of an NC-smooth thickening also follows from the holomorphic version of Theorem \[main-nc-thm\]. An NC-smooth thickening of the projective space was constructed by Kapranov [@Kapranov Section 5.1]. Our construction for hyperbolic spaces seems to give new examples of NC-smooth thickenings.
It is natural to ask whether analogs of GAGA results (see [@GAGA]) hold for algebraic NC geometry versus analytic NC geometry. We first deal with the algebraicity of analytic NC manifolds. Let $X$ be a smooth algebraic variety over ${{\Bbb C}}$. Following Kapranov [@Kapranov], we denote by $Th_X$ the groupoid whose objects are NC-smooth thickenings of $X$, and morphisms are morphisms of locally ringed spaces that are identity after taking Abelianization. In the analytic setting, we have the corresponding groupoid $Th_{X^{an}}$ consisting of analytic NC-smooth thickenings of $X^{an}$.
\[main-an-thm\] Let $X$ be a smooth projective variety over ${{\Bbb C}}$. Then the analytification functor is an equivalence of categories $Th_X {\rightarrow}Th_{X^{an}}$.
Next we consider analytic NC vector bundles versus algebraic ones. Again let $X$ be a smooth projective variety over ${{\Bbb C}}$. Let $X^{NC}$ be an algebraic NC-smooth thickening of $X$, and $X^{nc}$ its analytification. Denote by ${\operatorname{Per}}^{gl}(X^{NC})$ the (global) perfect derived category of locally free ${{\cal O}}_X^{NC}$-modules of finite rank, and by ${\operatorname{Per}}^{gl}(X^{nc})$ the similar perfect derived category for ${{\cal O}}_{X^{an}}^{nc}$-modules. Since the analytic topology is finer than the Zariski topology, and algebraic sections are holomorphic sections, there is a canonical functor $${\operatorname{An}}: {\operatorname{Per}}^{gl}(X^{NC}) {\rightarrow}{\operatorname{Per}}^{gl}(X^{nc}).$$
\[nc-gaga-thm\] The functor ${\operatorname{An}}$ is an equivalence of categories.
Thus, we can also use complex analytic methods to study NC vector bundles. As an example we present an analytic construction of the NC thickening of the Poincaré bundle $P$ over $C\times J$ studied earlier with algebraic methods. Using the analytic description we clarify the relationship between the NC Fourier transform of $P$ in a formal neighborhood of a point $L\in J$ and another such functor constructed by the first named author [@P-BN] using different methods.
The paper is organized as follows. In Section \[connection-constr-sec\], after reviewing basic notions on NC-thickenings, we present the Fedosov type construction of the NC-connection associated with a torsion free connection. Then we show that an NC-connection gives rise to an NC-smooth thickening ${{\cal O}}_X^{NC}$ and give a proof of Theorem \[main-nc-thm\]. We also show in Theorem \[twisted-dg-thm\] that all NC-smooth thickenings can be described in terms of twisted NC-connections.
In Section \[module-sec\] we consider the analog of the construction of NC-connections for ${{\cal O}}_X$-modules. In \[D-mod-sec\] we construct an exact functor from $D$-modules on $X$ to ${{\cal O}}_X^{NC}$-bimodules and in \[extendability-sec\] we study the question of extendability of vector bundles on $X$ to locally free ${{\cal O}}_X^{NC}$-modules (resp., bimodules).
In Section \[Jac-FM-sec\] we study NC-thickenings of the Jacobian $J=J(C)$ of a curve $C$. We show that the canonical NC-smooth thickening $J^{NC}$ of $J$ constructed by Kapranov [@Kapranov Sec. 5.4] is isomorphic to the standard NC-thickening of $J$ as an algebraic variety. In \[NC-Poincare-sec\] we construct an NC-thickening of the Fourier-Mukai functor from $D^b(C)$ to the perfect derived category of the NC-thickening of $J$.
In Section \[NC-analytic-sec\] we define and study analytic NC manifolds. We prove Theorem \[main-an-thm\] and consider the relation between analytic NC-manifolds and $C^\infty$ NC-connections (see Definition \[nc-connection-def\]). We also construct such NC-connections on Kähler manifolds with constant holomorphic sectional curvature.
In Section \[analytic-vb-sec\] we study the analytification functor for vector bundles over NC-smooth thickenings and prove Theorem \[nc-gaga-thm\].
Finally, in Section \[ainf-sec\] we use the analytic approach to describe the Poincaré line bundle over $C\times J^{NC}$ and the corresponding Fourier-Mukai functor using $A_\infty$-structures.
[*Conventions.*]{} We work over a base field $k$ of characteristic zero. Starting from Section \[NC-analytic-sec\] we assume that $k={{\Bbb C}}$. For a commutative ring $A$ we denote by $A{\langle\langle}x_1,\ldots,x_n{\rangle\rangle}$ the ring of noncommutative power series in independent variables $x_1,\ldots,x_n$ with coefficients in $A$. For a smooth variety we set $D^b(X)=D^b({\operatorname{Coh}}(X))$. By a curve we mean a smooth projective curve. For an abelian group (resp., a sheaf of abelian groups) $A$, ${\operatorname{Mat}}_n(A)$ denotes the group (resp., sheaf) of $n\times n$-matrices with values in $A$. For a sheaf of rings ${{\cal O}}$ we denote by ${\operatorname{mod}}-{{\cal O}}$ the category of right ${{\cal O}}$-modules. For a ringed space $X$ we denote the derived category of right ${{\cal O}}_X$-modules by ${{\cal D}}(X)$.
[*Acknowledgments.*]{} Part of this work was done while the first author was visiting the Mathematical Sciences Research Institute and the Institut des Hautes Études Scientifiques, and the second author was visiting the Chinese University of Hong Kong. We would like to thank these institutions for excellent working conditions. Also, we are grateful to Kevin Costello, Weiyong He and Dmitry Tamarkin for useful discussions.
NC-thickenings from torsion free connections {#connection-constr-sec}
============================================
NC-algebras and NC-schemes
--------------------------
We start by recalling the main definitions from [@Kapranov]. Along the way we prove some technical lemmas that will be needed later.
A $k$-algebra $R$ is called [*NC-nilpotent of degree $d$*]{} (resp., [*NC-nilpotent*]{}) if $F^{d+1}R=0$ (resp., $F^nR=0$ for $n\gg 0$). The [*NC-completion*]{} of an algebra $R$ is $$R_{[[ab]]}={\varprojlim}R/F^nR.$$ An algebra $R$ is [*NC-complete*]{} if it is complete with respect to the filtration $(F^nR)$, i.e., the natural map $R\to R_{[[ab]]}$ is an isomorphism.
For an NC-complete algebra $R$ the affine NC-scheme $X={\operatorname{Spf}}(R)$ is defined as the locally ringed space $({\operatorname{Spec}}(R^{ab}),{{\cal O}}_X)$, where ${{\cal O}}_X$ is defined as the inverse limit of sheaves of rings obtained from $R/F^dR$ by localization (similarly to the case of usual affine schemes). The category of NC-schemes is the full subcategory of the category of locally ringed spaces consisting of spaces locally isomorphic to affine NC-schemes. For every NC-scheme $X$ the structure sheaf ${{\cal O}}_X$ has a filtration by sheaves of ideals $F^nX$, and the quotients ${{\cal O}}_X/F^{n+1}{{\cal O}}_X$ define NC-subschemes $X^{\le n}{\subset}X$ with the same underlying topological space as $X$. In particular, we set $X_{ab}=X^{\le 0}$.
\[NC-mor-lem\] To give a morphism $f:X\to Y$ of NC-schemes is the same as to give a collection of compatible morphisms $f_n:X^{\le n}\to Y$.
[[*Proof*]{}]{}. Since the construction $X\mapsto X^{\le n}$ is compatible with passing to open subsets, it is enough to consider the case when both $X$ and $Y$ are affine. In this case, by [@Kapranov Prop. 2.2.3], our statement is that a homomorphism of NC-complete algebras $R\to S$ is determined by a collection of compatible homomorphisms $R\to S/F^nS$, which follows immediately from NC-completeness of $S$.
\(i) An NC-scheme $X$ is said to be of [*finite type*]{} if it is NC-nilpotent, i.e. $F^n{{\cal O}}_X=0$ for $n\gg 0$, $X_{ab}$ is a scheme of finite type over $k$ and the sheaves $F^n{{\cal O}}_X/F^{n+1}{{\cal O}}_X$ on $X_{ab}$ are coherent.
\(ii) An NC-scheme $X$ is called [*$d$-smooth*]{} if $F^{d+1}X=0$, $X$ is of finite type and for every extension of NC-nilpotent algebras $$\label{nilp-extension-eq}
0\to I\to {\Lambda}'\to {\Lambda}\to 0$$ where $I$ is a nilpotent ideal, the map ${\operatorname{Mor}}({\operatorname{Spf}}({\Lambda}'),X)\to {\operatorname{Mor}}({\operatorname{Spf}}({\Lambda}),X)$ is surjective. An algebra $R$ is called $d$-smooth if ${\operatorname{Spf}}(R)$ is $d$-smooth.
\(iii) An NC-scheme $X$ is called [*NC-smooth*]{} if $X^{\le d}$ is $d$-smooth for every $d$. An algebra $R$ is called NC-smooth if $R/F^{d+1}R$ is $d$-smooth for every $d$.
It is easy to see that if an NC-scheme $X$ is $d$-smooth then $X^{\le d-1}$ is $(d-1)$-smooth. In particular, $X_{ab}$ is smooth as a commutative scheme over $k$. If $X$ is a $d$-smooth (resp. NC-smooth) NC-scheme then we say that $X$ is a [*$d$-smooth (resp., NC-smooth) thickening*]{} of the commutative smooth scheme $X_{ab}$.
Recall that an extension is called central if $I^2=0$ and $I$ lies in the center of ${\Lambda}'$. It is easy to see that any surjection ${\Lambda}'\to {\Lambda}$ of NC-nilpotent algebras whose kernel is a nilpotent ideal is a composition of central extensions (cf. [@Kapranov Prop. 1.2.3]). Thus, in the definition of $d$-smoothness it is enough to work with central extensions of NC-nilpotent algebras.
\[smooth-local-lem\] Let $X$ be an NC-scheme. If $X$ is locally NC-smooth, i.e., admits an open covering $(U_i)$ such that $U_i$ are NC-smooth, then $X$ is NC-smooth.
[[*Proof*]{}]{}. Given a central extension $0\to I\to{\Lambda}'\to{\Lambda}\to 0$ of NC-nilpotent algebras we need to check that any map $f:{\operatorname{Spec}}({\Lambda})\to X$ extends to a map ${\operatorname{Spec}}({\Lambda}')\to X$. By assumption, we can assume that there is an affine open covering $({\operatorname{Spec}}({\Lambda}_i))$ of ${\operatorname{Spec}}({\Lambda})$ such that the corresponding restrictions $f_i:{\operatorname{Spec}}({\Lambda}_i)\to X$ extend to $f'_i:{\operatorname{Spec}}({\Lambda}'_i)\to X$, where $0\to I_i\to {\Lambda}'_i\to {\Lambda}_i\to 0$ is the induced central extension of ${\Lambda}_i$. On intersections the liftings $f'_i$ and $f'_j$ differ by a derivation of ${{\cal O}}_X$ with values in $I$. Since $I$ is a central bimodule, such derivations correspond to homomorphisms $f^*{\Omega}_X^1\to I$. Thus, we get a Cech $1$-cocycle on ${\operatorname{Spec}}({\Lambda}^{ab})$ with values in $(f^*{\Omega}_X^1)^\vee{\otimes}I$. Since $H^1({\operatorname{Spec}}({\Lambda}^{ab}),(f^*{\Omega}_X^1)^\vee{\otimes}I)=0$, we can modify the liftings $f'_i$ so that they are compatible on intersections.
Let $R$ be a $d$-smooth algebra. Then $R$ is a universal central extension of the $(d-1)$-smooth algebra $R/F^dR$ (see [@Kapranov Thm. 1.6.1, Prop. 1.6.2]). For example, a $1$-smooth algebra $R$ is a universal central extension of $R_{ab}$. The universality is equivalent to the property that the commutator bracket $R_{ab}\times R_{ab}\to F^1R$ is the universal skew-symmetric biderivation, so $F^1R\simeq{\Omega}^2_{R_{ab}}$ (see [@Kapranov Sec. 1.3]). An analogous property holds in the nonaffine setting. Namely, a central extension of sheaves of $k$-algebras $$0\to {{\cal I}}\to {\widetilde}{{{\cal O}}}\to{{\cal O}}_Y\to 0$$ on a smooth commutative scheme $Y$ is $1$-smooth if and only if the commutator pairing $$(f,g)\mapsto [{\widetilde}{f},{\widetilde}{g}]\in {{\cal I}},$$ (where ${\widetilde}{f},{\widetilde}{g}\in{\widetilde}{{{\cal O}}}$ are any liftings of $f$ and $g$) viewed as a biderivation with values in ${{\cal I}}$ induces an isomorphism ${\Omega}^2_Y\to {{\cal I}}$.
\[standard-1-thickening\](cf. [@Kapranov Ex. (1.3.9)]) Let $X$ be a smooth scheme. The [*standard $1$-smooth thickening*]{} of $X$ is the sheaf of $k$-vector spaces ${\widetilde}{{{\cal O}}}={\Omega}^2_X\oplus {{\cal O}}_X$ equipped with the product $$({\alpha},f)\cdot ({\alpha}',f')=(f'{\alpha}+f{\alpha}'+df{\wedge}df', ff').$$
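As a quick consistency check (added here for illustration), lifting $f,g\in{{\cal O}}_X$ to $(0,f),(0,g)\in{\widetilde}{{{\cal O}}}$ gives $$[(0,f),(0,g)]=(df{\wedge}dg,fg)-(dg{\wedge}df,fg)=(2\,df{\wedge}dg,0),$$ so the commutator biderivation is $(f,g)\mapsto 2\,df{\wedge}dg$, which induces an isomorphism ${\Omega}^2_X\to {{\cal I}}={\Omega}^2_X$, in agreement with the criterion above.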
Thus, for any NC-smooth scheme $X$ one has a natural isomorphism $$F^1{{\cal O}}_X/F^2{{\cal O}}_X\simeq{\Omega}^2_{X_{ab}}.$$ The description of the quotients $F^n{{\cal O}}_X/F^{n+1}{{\cal O}}_X$ is more complicated. Kapranov proves that $$F^n{{\cal O}}_X/F^{n+1}{{\cal O}}_X\simeq P_n({{\cal O}}_{X_{ab}}),$$ where $P(A)=\bigoplus_{n\ge 0}P_n(A)$ denotes the Poisson envelope of a commutative algebra $A$. However, the description of $P_n(A)$ for a smooth algebra $A$ in terms of some polynomial functors proposed in [@Kapranov Thm. 4.1.3] is only known to hold locally (see [@Cortinas Thm. 1.4]; the proof of the general case in [@Kapranov] has a gap). In this work we will consider another algebra filtration on the structure sheaf of an NC-smooth scheme for which the graded quotients will have a simple description as polynomial functors.
For any ring (or a sheaf of rings) $R$ and $n\ge 1$ we define the two-sided ideal $$\label{I-n-def-eq}
I_n(R)=\sum_{i_1\ge 2,\ldots,i_m\ge 2,i_1+\ldots+i_m\ge n} R\cdot R_{i_1}^{{\operatorname{Lie}}}\cdot R\cdot\ldots\cdot R\cdot R^{{\operatorname{Lie}}}_{i_m}\cdot R.$$ Note that $R_i^{{\operatorname{Lie}}}{\subset}F^{i-1}R$, so $I_n(R){\subset}F^{[n/2]}R$. On the other hand, it is clear from the definition that $F^{n-1}R{\subset}I_n(R)$ for $n\ge 2$. Thus, $(I_n(R))$ define the same topology on $R$ as $(F^nR)$.
For any ring $R$ the filtration $I_n(R)$ is an algebra filtration, so we can consider the associated graded algebra $${\operatorname{gr}}^\bullet_I(R)=\bigoplus_{n\ge 0} I_n(R)/I_{n+1}(R),$$ where we set $I_0(R)=R$. Note that ${\operatorname{gr}}^0_I(R)=R^{ab}$. Note also that $[R,I_n(R)]{\subset}I_{n+1}(R)$, so $R^{ab}$ is in the center of ${\operatorname{gr}}^\bullet_I(R)$.
For a vector space $V$ we denote by ${{\cal L}ie}(V)=\bigoplus_{n\ge 1}{{\cal L}ie}_n(V)$ the free Lie algebra generated by $V$ and we set $$\label{Lie+eq}
{{\cal L}ie}_+(V)=\bigoplus_{n\ge 2}{{\cal L}ie}_n(V).$$ There is a natural ${\operatorname{GL}}(V)$-action on ${{\cal L}ie}_n(V)$, so we can define the functor ${{\cal L}ie}_n(?)$ on the category of vector bundles over a scheme. We consider the grading on ${{\cal L}ie}_+(V)$ such that ${{\cal L}ie}_n(V)$ lives in degree $n$.[^2] We equip the universal enveloping algebra $U({{\cal L}ie}_+(V))$ with the induced grading, so that $$U({{\cal L}ie}_+(V))_n{\subset}U({{\cal L}ie}(V))_n=V^{{\otimes}n}.$$
\[homomorphism-lem\] Assume that $R^{ab}$ is smooth. There is a natural surjective homomorphism of graded algebras $$\label{Lie-map-eq}
U({{\cal L}ie}_+^{R^{ab}}({\Omega}^1_{R^{ab}}))\to gr^\bullet_I(R)$$ sending $f_0[df_1,[df_2,\ldots,[df_{n-1},df_n]\ldots]]$ to ${\widetilde}{f_0}[{\widetilde}{f_1},[{\widetilde}{f_2},\ldots,[{\widetilde}{f_{n-1}},{\widetilde}{f_n}]\ldots]]$, where ${\widetilde}{f_i}\in R$ is a lifting of $f_i\in R^{ab}$.
[[*Proof*]{}]{}. First, one can easily check that the map $$\label{tensor-lie-map}
T^n({\Omega}^1_{R^{ab}})\to gr^n_I(R):
f_0df_1{\otimes}df_2{\otimes}\ldots{\otimes}df_n\mapsto
{\widetilde}{f_0}[{\widetilde}{f_1},[{\widetilde}{f_2},\ldots,[{\widetilde}{f_{n-1}},{\widetilde}{f_n}]\ldots]]$$ is well defined for $n\ge 2$. We can also define similar maps for iterated brackets in a different order. More precisely, let $L=\bigoplus_n L(n)$ be the Lie operad. Then we get a natural map $$L(n){\otimes}_{k[S_n]} T^n({\Omega}^1_{R^{ab}})\to gr^n_I(R).$$ Recall that ${{\cal L}ie}_n(V)=L(n){\otimes}_{S_n} T^n(V)$ (see [@KM Sec. 3]). Thus, we obtain a map of Lie algebras $${{\cal L}ie}_+^{R^{ab}}({\Omega}^1_{R^{ab}})\to gr^\bullet_I(R)$$ which extends to . To check surjectivity of it is enough to see that ${\operatorname{gr}}^\bullet_I(R)$ is generated as $R^{ab}$-algebra by the images of $R_n^{{\operatorname{Lie}}}{\subset}I_n(R)$. But this follows from the inclusion $$R\cdot R_{i_1}^{{\operatorname{Lie}}}\cdot R\cdot R_{i_2}^{{\operatorname{Lie}}}\cdot \ldots\cdot R\cdot R^{{\operatorname{Lie}}}_{i_m}\cdot R{\subset}R\cdot R_{i_1}^{{\operatorname{Lie}}}\cdot R_{i_2}^{{\operatorname{Lie}}}\cdot \ldots\cdot R^{{\operatorname{Lie}}}_{i_m}+I_{i_1+\ldots+i_m+1}(R),$$ which holds since commuting factors from $R$ to the left creates additional commutators.
We will show in Corollary \[I-filtration-gen-cor\] that the map is an isomorphism if and only if $R$ is NC-smooth.
NC-thickening of the affine space {#NC-affine-space-sec}
---------------------------------
Let us recall Kapranov’s description of the completion $k{\langle}e_1,\ldots,e_n{\rangle}_{[[ab]]}$ of the free algebra $k{\langle}e_1,\ldots,e_n{\rangle}$ with respect to the commutator filtration (see [@Kapranov Sec. 3]). For a polynomial $P(y_1,\ldots,y_n)\in k[y_1,\ldots,y_n]$ we define $[[P(e)]]\in k{\langle}e_1,\ldots,e_n{\rangle}$ by replacing every monomial $cy_1^{i_1}\ldots y_n^{i_n}$ in $P$ by the ordered monomial $ce_1^{i_1}\ldots e_n^{i_n}$. Let $V$ be the $k$-vector space with the basis $e_1,\ldots,e_n$, so we can identify $k{\langle}e_1,\ldots,e_n{\rangle}$ with the tensor algebra $T(V)$. Let ${{\cal B}}=\{{\beta}_1,{\beta}_2,\ldots\}$ be a $k$-basis of ${{\cal L}ie}_+(V)$ (see ), ordered in such a way that elements of ${{\cal L}ie}_n(V)$ precede elements of ${{\cal L}ie}_{n+1}(V)$. For any function ${\lambda}:{{\cal B}}\to{{\Bbb Z}}_{\ge 0}$ with finite support we consider the monomial $$M_{\lambda}={\beta}_1^{{\lambda}({\beta}_1)}{\beta}_2^{{\lambda}({\beta}_2)}\ldots \in T(V)$$ (where we view ${{\cal L}ie}_+(V)$ as a subspace of $T(V)$). The definition of the commutator filtration implies that every infinite series $$\sum_{{\lambda}:{{\cal B}}\to{{\Bbb Z}}_{\ge 0}} [[a_{\lambda}(e)]] M_{\lambda},$$ where $a_{\lambda}(y)\in k[y_1,\ldots,y_n]$, converges in $T(V)_{[[ab]]}$. In fact, every element of $T(V)_{[[ab]]}$ can be written uniquely as such a series and a multiplication rule is given by [@Kapranov Prop. 3.4.3].
We can rewrite the above description by realizing $T(V)_{[[ab]]}$ as a subalgebra in the algebra of formal noncommutative power series in $e_1,\ldots,e_n$ over the commutative ring $A=k[x_1,\ldots,x_n]$. Namely, consider the completed tensor algebra of the free $A$-module $V{\otimes}A$, $$\hat{T}_A(V{\otimes}A):=\prod_{n\ge 0}T^n_A(V{\otimes}A)\simeq A{\langle\langle}{e_1,\ldots,e_n}{\rangle\rangle}.$$ Let ${\Omega}^{\bullet}_A={\Omega}^\bullet_{A/k}$ be the de Rham complex of $A$. We equip the algebra ${\Omega}^\bullet_A{\otimes}_A \hat{T}_A(V{\otimes}A)$ with the differential $D$, the unique derivation extending the de Rham differential on ${\Omega}^\bullet_A$ such that $$D(e_i)=dx_i \text{ for } i=1,\ldots,n.$$ Let $\pi:\hat{T}_A(V{\otimes}A)\to A$ be the natural projection (sending all tensors of degree $\ge 1$ to zero). Consider the $k$-algebra homomorphism $${\delta}: T(V)\to \hat{T}_A(V{\otimes}A): e_i\mapsto e_i-x_i.$$ Clearly the image is contained in the kernel of $D$, and $\pi\circ{\delta}:T(V)\to A$ is the homomorphism sending $e_i$ to $-x_i$.
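As a simple sanity check (an illustration added here, for $n=1$): ${\delta}(e_1^2)=(e_1-x_1)^2=e_1^2-2x_1e_1+x_1^2$, and since $e_1$ has degree $0$ and commutes with one-forms in ${\Omega}^\bullet_A{\otimes}_A\hat{T}_A(V{\otimes}A)$, $$D(e_1^2-2x_1e_1+x_1^2)=2\,dx_1\,e_1-2\,dx_1\,e_1-2x_1\,dx_1+2x_1\,dx_1=0,$$ so ${\delta}(e_1^2)$ indeed lies in ${\operatorname{ker}}(D)$, while $\pi({\delta}(e_1^2))=x_1^2$.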
\[local-constr-thm\] The homomorphism ${\delta}$ extends to an isomorphism of the NC-completion $T(V)_{[[ab]]}$ with the subalgebra ${\operatorname{ker}}(D){\subset}\hat{T}_A(V{\otimes}A)$.
[[*Proof*]{}]{}. Note that any Lie word $w\in {{\cal L}ie}_+(V)$ in generators $e_i$ is mapped by ${\delta}$ to itself viewed as an element in $T(V){\subset}T_A(V{\otimes}A)$. Hence, the same is true for the monomials $M_{\lambda}$, where ${\lambda}:{{\cal B}}\to{{\Bbb Z}}_{\ge 0}$. Thus, the homomorphism ${\delta}$ extends to a homomorphism $$\hat{{\delta}}: T(V)_{[[ab]]}\to \hat{T}_A(V{\otimes}A)$$ mapping the infinite series $\sum_{{\lambda}} [[a_{\lambda}]] M_{\lambda}$ to $\sum_{{\lambda}} [[a_{\lambda}(e_i-x_i)]] M_{\lambda}$, which converges in $\hat{T}_A(V{\otimes}A)$ and still belongs to ${\operatorname{ker}}(D)$. We need to check that the image of $\hat{{\delta}}$ is the entire ${\operatorname{ker}}(D)$. Note that any element $x\in\hat{T}_A(V{\otimes}A)$ can be written as $x=\sum_{\lambda}[[f_{\lambda}(e)]] M_{\lambda}$, where $f_{\lambda}(y)$ are some formal power series in commuting variables $y_1,\ldots,y_n$ with coefficients in $A$. The condition $D(x)=0$ means that each $f_{{\lambda}}(y)$ is of the form $f_{{\lambda}}(y)=a_{{\lambda}}(y_i-x_i)$ for some power series $a_{{\lambda}}$ with coefficients in $k$. Note that $a_{{\lambda}}(-x)=f_{\lambda}(0)$ is the constant coefficient of $f_{{\lambda}}(y)$, hence $a_{{\lambda}}\in A$.
Global construction {#NC-constr-sec}
-------------------
Let $(V, \nabla)$ be a vector bundle with a connection over a smooth scheme $X$, and let $\iota:T\to V$ be a homomorphism of vector bundles. We still denote by $\nabla$ the induced connection on the dual bundle $V^*$. Then we can consider the completed tensor algebra $\hat{T}_{{{\cal O}}}(V^*)$ and define the differential $D=D_{\nabla,\iota}$ on the algebra ${\Omega}^\bullet{\otimes}_{{{\cal O}}}\hat{T}_{{{\cal O}}}(V^*)$ as the unique derivation extending the de Rham differential on ${\Omega}^\bullet$ and satisfying $$D(\alpha)=\nabla(\alpha)+\iota^*(\alpha){\otimes}1 \text{ for } \alpha\in V^*.$$
\[torsion-free-lem\] One has $D^2=0$ if and only if $\nabla$ is flat and $\iota$ is torsion free, i.e., satisfies the equation $$\label{torsion-free-eq}
\iota([t_1,t_2])=\nabla_{t_1}(\iota(t_2))-\nabla_{t_2}(\iota(t_1)).$$
[[*Proof*]{}]{}. Let ${\widetilde}{\nabla}:{\Omega}^1{\otimes}V^*\to{\Omega}^2{\otimes}V^*$ and ${\widetilde}{\iota^*}:{\Omega}^1{\otimes}V^*\to {\Omega}^2$ be the maps given by $${\widetilde}{\nabla}({\omega}{\otimes}\alpha)=d{\omega}{\otimes}\alpha-{\omega}{\wedge}\nabla(\alpha),\ \ {\widetilde}{\iota^*}({\omega}{\otimes}{\alpha})={\omega}{\wedge}\iota^*({\alpha}).$$ Then $$D^2({\alpha})={\widetilde}{\nabla}\nabla({\alpha})+[d\iota^*({\alpha})-{\widetilde}{\iota^*}\nabla({\alpha})]\in {\Omega}^2{\otimes}V^*\oplus{\Omega}^2.$$ Hence $D^2=0$ if and only if ${\widetilde}{\nabla}\nabla=0$ and $$d\circ\iota^*={\widetilde}{\iota^*}\circ\nabla.$$ The former condition just means that $\nabla$ is flat. To show that the latter condition is equivalent to we note that for a pair of local vector fields $t_1$, $t_2$ and for ${\alpha}\in V^*$ one has $$d\iota^*({\alpha})(t_1, t_2)=t_1\cdot {\alpha}(\iota(t_2))-t_2\cdot{\alpha}(\iota(t_1))-{\alpha}(\iota([t_1,t_2])),$$ $${\widetilde}{\iota^*}(\nabla({\alpha}))(t_1,t_2)=\nabla_{t_1}({\alpha})(\iota(t_2))-\nabla_{t_2}({\alpha})(\iota(t_1)).$$ It remains to use the formulas $t_1\cdot {\alpha}(\iota(t_2))=\nabla_{t_1}({\alpha})(\iota(t_2))+{\alpha}(\nabla_{t_1}(\iota(t_2)))$ and a similar formula for $t_2\cdot {\alpha}(\iota(t_1))$.
Now let us specialize to the case $V=T$ and $\iota={\operatorname{id}}$, so that $\nabla$ is a connection on the tangent bundle. Assume that $\nabla$ is torsion free and flat. Then the previous construction gives an algebraic NC-connection on $X$ in the sense of Definition \[alge-nc-defi\].
Let $D$ be an arbitrary algebraic NC-connection on $X$, i.e., a differential of degree $1$ on the algebra $${{\cal A}}_X:={\Omega}^\bullet{\otimes}_{{{\cal O}}}\hat{T}_{{{\cal O}}}({\Omega}^1),$$ where the grading on ${{\cal A}}_X$ is given by the natural grading on ${\Omega}^\bullet$, such that $$D(1\otimes \alpha)=\alpha{\otimes}1+\nabla_1(\alpha)+\nabla_2(\alpha)+\nabla_3(\alpha)+\cdots$$ for ${\alpha}\in{\Omega}^1$. The same calculation as in Lemma \[torsion-free-lem\] shows that $\nabla_1$ corresponds to a torsion free connection on $X$. In Theorem \[dg-constr-thm\] below we show that conversely, any torsion free connection $\nabla$ (not necessarily flat) gives rise to an NC-connection with $\nabla_1=\nabla$. The idea is to add higher terms to the differential that would kill the effect of the nonzero curvature.
Below we view the universal enveloping algebra $U({{\cal L}ie}_+(V))$ as a graded subalgebra of $T(V)=U({{\cal L}ie}(V))$.
\[dg-constr-thm\] (i) Let $\nabla$ be a torsion free connection on the tangent bundle on a smooth scheme $X$. Then there exists a natural construction of an algebraic NC-connection $D$ with $\nabla_1=\nabla$, the induced connection on ${\Omega}^1$.
\(ii) For any two algebraic NC-connections $D,D'$ on $X$ (possibly for different torsion free connections) there exists an algebra automorphism $\phi$ of $\hat{T}_{{\cal O}}({\Omega}^1)$, of the form $\phi({\alpha})={\alpha}+\phi_2({\alpha})+\phi_3({\alpha})+\ldots$ for ${\alpha}\in{\Omega}^1$, with $\phi_i({\alpha})\in T^i({\Omega}^1)$, such that $D'=({\operatorname{id}}{\otimes}\phi) D ({\operatorname{id}}{\otimes}\phi)^{-1}$, where ${\operatorname{id}}{\otimes}\phi$ is the induced automorphism of ${{\cal A}}_X$.
\(iii) Let $G=G_D$ be the sheaf of groups of automorphisms $\phi$ of $\hat{T}_{{\cal O}}({\Omega}^1)$, such that for ${\alpha}\in{\Omega}^1$, $\phi({\alpha})={\alpha}+\phi_2({\alpha})+\phi_3({\alpha})+\ldots$ with $\phi_i({\alpha})\in T^i({\Omega}^1)$, and such that $D=({\operatorname{id}}{\otimes}\phi) D ({\operatorname{id}}{\otimes}\phi)^{-1}$. The group $G$ has a decreasing filtration by normal subgroups $$G_n=\{\phi\in G\ |\ \phi_i=0 \text{ for } 2\le i\le n\},$$ where $n\ge 1$ (so $G_1=G$). Then $$G_n/G_{n+1}\simeq \underline{{\operatorname{Hom}}}({\Omega}^1,U({{\cal L}ie}_+({\Omega}^1))_{n+1})$$ as a sheaf of (abelian) groups.
First, we need to describe a certain version of de Rham complex for the free algebra (it is different from the well known noncommutative forms considered in [@CQ]).
\[nc-dR-prop\] Let $V$ be a finite-dimensional vector space. Let us equip the algebra $${{\cal A}}_V={\bigwedge}^\bullet(V){\otimes}T(V)$$ with a dg-algebra structure, where the grading is given by the natural grading of the exterior algebra (so $T(V)$ is in degree $0$) and the differential $\tau$ is the unique derivation, trivial on $\bigwedge^\bullet(V)$ and satisfying $\tau(1{\otimes}v)=v{\otimes}1$. Then we have $H^i({{\cal A}}_V)=0$ for $i>0$ and $$H^0({{\cal A}}_V)=U({{\cal L}ie}_+(V)){\subset}T(V).$$
[[*Proof*]{}]{}. It is easy to check that for any $v\in V$ and $x\in T(V)$ one has $$\tau(vx-xv)=v\tau(x)-\tau(x)v.$$ This implies that $\tau({{\cal L}ie}_+(V))=0$ and hence $U({{\cal L}ie}_+(V)){\subset}{\operatorname{ker}}(\tau)$. Let $e_1,\ldots,e_n$ be a basis of $V$. By the PBW-theorem, we can write any element of $T(V)$ uniquely as a finite sum $$\label{x-M-la-eq}
x=\sum_{{\lambda}:{{\cal B}}\to{{\Bbb Z}}_{\ge 0}} [[f_{{\lambda}}(e)]] M_{\lambda}$$ with $f_{\lambda}\in S(V)$, where we use the notation of Section \[NC-affine-space-sec\]. Since $M_{\lambda}$ are elements in $U({{\cal L}ie}_+(V))$, for $x$ given by \[x-M-la-eq\] we have $$\tau(x)=\sum_{i=1}^n e_i{\otimes}\sum_{{\lambda}}[[{\partial}_if_{\lambda}(e)]] M_{\lambda}.$$ Let ${\overline}{M}_{\lambda}$ be the monomials $M_{\lambda}$ viewed as elements of $S({{\cal L}ie}_+(V))$. Then we have a natural isomorphism $${\Omega}^\bullet_{S(V)}{\otimes}_k S({{\cal L}ie}_+(V))\to {{\cal A}}_V: f\cdot de_{i_1}{\wedge}\ldots{\wedge}de_{i_r}{\otimes}{\overline}{M}_{\lambda}\mapsto
de_{i_1}{\wedge}\ldots{\wedge}de_{i_r}{\otimes}[[f(e)]] M_{\lambda},$$ where $f\in S(V)$. Under this isomorphism the differential $\tau$ corresponds to the differential $d_{dR}{\otimes}{\operatorname{id}}$ on ${\Omega}^\bullet_{S(V)}{\otimes}_k S({{\cal L}ie}_+(V))$, where $d_{dR}$ is the de Rham differential. This immediately implies our assertion, since the $k$-linear span of the monomials $M_{\lambda}$ is contained in the subalgebra $U({{\cal L}ie}_+(V)){\subset}T(V)$.
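As a low-degree illustration of Proposition \[nc-dR-prop\]: on $T^1(V)=V$ the differential is $\tau(1{\otimes}v)=v{\otimes}1$, which is injective, while on $T^2(V)$ one computes $$\tau(1{\otimes}vw)=v{\otimes}w+w{\otimes}v,$$ so that $\ker(\tau)\cap T^2(V)$ is spanned by the commutators $vw-wv$; this agrees with the components of $U({{\cal L}ie}_+(V))$ in degrees $1$ and $2$.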
\[homotopy-cor\] In the above situation there exists a ${\operatorname{GL}}(V)$-equivariant homotopy operators $h=(h_n)$, where $$h_n: {\bigwedge}^n(V){\otimes}T^i(V)\to {\bigwedge}^{n-1}(V){\otimes}T^{i+1}(V)$$ satisfying $h_{n+1}\tau+\tau h_n={\operatorname{id}}$ on $\bigwedge^n(V){\otimes}T(V)$ for $n\ge 1$. Hence, for any vector bundle ${{\cal V}}$ on a scheme $X$ there exist similar operators on $\bigwedge^\bullet({{\cal V}}){\otimes}T({{\cal V}})$.
[[*Proof*]{}]{}. This immediately follows from the vanishing of $H^{>0}({{\cal A}}_V)$ and from the reductivity of $GL(V)$.
We will need the following simple calculation with graded Lie commutators.
\[graded-Lie-lem\] Let ${{\bf L}}$ be the graded Lie algebra over ${{\Bbb Z}}[\frac{1}{2}]$ with generators $D_0,\ldots,D_n$, $F_0,\ldots,F_n$ of degree $1$ and relations $$\sum_{i=0}^m [D_i,D_{m-i}]=\sum_{i=0}^m [D_i,F_{m-i}]=0$$ for $m=0,\ldots,n$. Then one has $$\sum_{i=1}^n [D_0,[D_i,D_{n+1-i}]]=\sum_{i=1}^n [D_0,[D_i,F_{n+1-i}]]=0$$ in ${{\bf L}}$.
[[*Proof*]{}]{}. Note that ${{\bf L}}$ has a Lie endomorphism, identical on $D_i$ and sending $F_i$ to $D_i$. Thus, it is enough to prove the identity $$\label{formal-Lie-identity}
\sum_{i=1}^n [D_0,[D_i,F_{n+1-i}]]=0.$$ Applying Jacobi identity we can rewrite the left-hand side as follows: $$\sum_{i=1}^n [D_0,[D_i,F_{n+1-i}]]=
\sum_{i=1}^n [[D_0,D_i],F_{n+1-i}]+
\sum_{i=1}^n [[D_0,F_{n+1-i}],D_i].$$ Next, applying the relations in ${{\bf L}}$ we get $$\sum_{i=1}^n[[D_0,D_i],F_{n+1-i}]=-\frac{1}{2}\sum_{i\ge 1,j\ge 1}[[D_i,D_j],F_{n+1-i-j}],$$ $$\sum_{i=1}^n [[D_0,F_{n+1-i}],D_i]=-\sum_{i\ge 1,j\ge 1}[[D_j,F_{n+1-i-j}],D_i]$$ with the convention $F_m=0$ for $m<0$. Now applying the Jacobi identity we get $$\sum_{i\ge 1,j\ge 1}[[D_i,D_j],F_{n+1-i-j}]=2\sum_{i\ge 1,j\ge 1}[D_i,[D_j,F_{n+1-i-j}]]$$ and the assertion follows.
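For instance, in the simplest case $n=1$ of Lemma \[graded-Lie-lem\] the argument specializes as follows: the relation for $m=1$ gives $2[D_0,D_1]=0$ (a graded commutator of two degree $1$ elements is symmetric), so $[D_0,D_1]=0$ over ${{\Bbb Z}}[\frac{1}{2}]$, and the Jacobi identity then yields $$[D_0,[D_1,D_1]]=2[[D_0,D_1],D_1]=0,$$ which is the asserted identity for $n=1$.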
[*Proof of Theorem \[dg-constr-thm\].*]{} (i) Let us denote by $D_1$ the unique derivation of ${{\cal A}}_X$ extending the de Rham differential on ${\Omega}^\bullet$ and such that $D_1(1{\otimes}{\alpha})=\nabla({\alpha})\in {\Omega}^1{\otimes}{\Omega}^1$ for ${\alpha}\in{\Omega}^1$ (the existence of $D_1$ follows from the Leibnitz identity for $\nabla$). We also set $D_0=\tau$, which is the differential on ${{\cal A}}_X$, zero on ${\Omega}^\bullet$, and satisfying $\tau(1{\otimes}{\alpha})={\alpha}{\otimes}1$ for ${\alpha}\in{\Omega}^1$ (see Proposition \[nc-dR-prop\]). Note that $D_0^2=0$ and the condition that our connection is torsion-free is equivalent (cf. the proof of Lemma \[torsion-free-lem\]) to the identity $$[D_0,D_1]=D_0D_1+D_1D_0=0,$$ where we use $[\cdot,\cdot]$ to denote the supercommutator. This implies that $[D_0,D_1^2]=0$, hence $$D_0(D_1^2(1{\otimes}{\alpha}))=0$$ for any ${\alpha}\in{\Omega}^1$. Note that the derivation $D_1^2$ is zero on ${\Omega}^\bullet$, so it is ${{\cal O}}$-linear. By Corollary \[homotopy-cor\], this implies that the ${{\cal O}}$-linear map $$\nabla_2({\alpha}):=-h_2(D_1^2(1{\otimes}{\alpha}))$$ satisfies $D_0(\nabla_2({\alpha}))=-D_1^2(1{\otimes}{\alpha})$. Extending $\nabla_2$ to a degree $1$ derivation $D_2$ of ${{\cal A}}_X$, zero on ${\Omega}^\bullet$, this can be rewritten as $$[D_0,D_2]+D_1^2=0.$$ Assume that ${{\cal O}}$-linear maps $\nabla_i:{\Omega}^1\to {\Omega}^1{\otimes}T^i({\Omega}^1)$ are defined for $2\le i\le n$ and that the corresponding degree $1$ derivations $D_i$ of ${{\cal A}}_X$ (zero on ${\Omega}^\bullet$) satisfy $$\label{D-m-eq}
\sum_{i=0}^m [D_i,D_{m-i}]=0$$ for all $m\le n$. We are going to construct an ${{\cal O}}$-linear map $\nabla_{n+1}$ such that the corresponding degree $1$ derivation $D_{n+1}$ satisfies the equation \[D-m-eq\] for $m=n+1$, which can be rewritten as $$\label{D-n+1-eq}
[D_0,D_{n+1}]=-\frac{1}{2}\sum_{i=1}^n [D_i,D_{n+1-i}].$$ By Lemma \[graded-Lie-lem\], we have $$\sum_{i=1}^n [D_0,[D_i,D_{n+1-i}]]=0.$$ Note that the derivation $\sum_{i=1}^n[D_i,D_{n+1-i}]$ acts by zero on ${\Omega}^\bullet$, so it is ${{\cal O}}$-linear. Hence, by Corollary \[homotopy-cor\], we can set $$\nabla_{n+1}({\alpha})=-\frac{1}{2}\sum_{i=1}^n h_{n+1}([D_i,D_{n+1-i}](1{\otimes}{\alpha})).$$
The above recursive procedure gives a degree $1$ derivation $D=D_0+D_1+D_2+D_3+\ldots$ of ${{\cal A}}_X$ and the equations \[D-m-eq\] mean that $D^2=0$.
\(ii) Let us write $D=D_0+D_1+D_2+\ldots$, $D'=D'_0+D'_1+D'_2+\ldots$, where $D_0=D'_0=\tau$, $D_1$ and $D'_1$ correspond to some torsion-free connections, and $D_i$ (resp., $D'_i$) are derivations extending the ${{\cal O}}$-linear operators $\nabla_i$ (resp., $\nabla'_i$). We claim that if $D_i=D'_i$ for $i<n$ then there exists an automorphism $\phi$ of $\hat{T}_{{\cal O}}({\Omega}^1)$ with $$\phi({\alpha})={\alpha}+\phi_{n+1}({\alpha}),$$ where $\phi_{n+1}({\alpha})\in T^{n+1}({\Omega}^1)$, such that $$(({\operatorname{id}}{\otimes}\phi) D({\operatorname{id}}{\otimes}\phi)^{-1})_i=D'_i \text{ for } i\le n.$$ Indeed, for any $\phi$ as above we have $(({\operatorname{id}}{\otimes}\phi) D ({\operatorname{id}}{\otimes}\phi)^{-1})_i=D_i$ for $i<n$, while $$(({\operatorname{id}}{\otimes}\phi) D ({\operatorname{id}}{\otimes}\phi)^{-1})_n(1{\otimes}{\alpha})=D_n(1{\otimes}{\alpha})-D_0(1{\otimes}\phi_{n+1}({\alpha})).$$ Thus, we need to find $\phi_{n+1}$ such that $$D_n(1{\otimes}{\alpha})-D'_n(1{\otimes}{\alpha})=D_0(1{\otimes}\phi_{n+1}({\alpha})).$$ By Corollary \[homotopy-cor\], it is enough to check that $$D_0(D_n-D'_n)(1{\otimes}{\alpha})=0.$$ Since $D_i=D'_i$ for $i<n$, the equations \[D-m-eq\] for $D_i$ and $D'_i$ imply that $$[D_0,D_n-D'_n]=0.$$ Hence, $$D_0(D_n-D'_n)(1{\otimes}{\alpha})=-(D_n-D'_n)({\alpha}{\otimes}1)=0,$$ since $D_n$ and $D'_n$ have the same restriction to ${\Omega}^\bullet$ (zero if $n\neq 1$ and the de Rham differential for $n=1$).
\(iii) The condition that $\phi\in G_{n-1}$ preserves $D$ implies (by looking at the components in ${\Omega}^1{\otimes}T^{n-1}({\Omega}^1)$) that $\tau\phi_n=0$, i.e., $\phi_n$ factors through $U({{\cal L}ie}_+({\Omega}^1))_n{\subset}T^n({\Omega}^1)$. Thus, we have a natural homomorphism $$G_{n-1}\to \underline{{\operatorname{Hom}}}({\Omega}^1,U({{\cal L}ie}_+({\Omega}^1))_n): \phi\mapsto \phi_n$$ with the kernel $G_n$. It remains to prove the surjectivity of this map. In other words, for any given $\phi_n:{\Omega}^1\to T^n({\Omega}^1)$ such that $\tau\phi_n=0$, we have to construct an element $\phi\in G_{n-1}$ such that $\phi({\alpha})={\alpha}+\phi_n({\alpha})+\phi_{n+1}({\alpha})+\ldots$ for ${\alpha}\in{\Omega}^1$. Let us first consider an automorphism ${\widetilde}{\phi}$ of $\hat{T}_{{\cal O}}({\Omega}^1)$ defined by ${\widetilde}{\phi}({\alpha})={\alpha}+\phi_n({\alpha})$. Then the differential ${\widetilde}{D}={\widetilde}{\phi}D{\widetilde}{\phi}^{-1}$ has components $({\widetilde}{D})_i=D_i$ for $i<n$. Now the proof of (ii) shows that there exists an automorphism $\psi$ of $\hat{T}_{{\cal O}}({\Omega}^1)$ with $\psi({\alpha})={\alpha}+\psi_{n+1}({\alpha})+\psi_{n+2}({\alpha})+\ldots$, such that ${\widetilde}{D}=\psi D\psi^{-1}$. Then $\phi=\psi^{-1}{\widetilde}{\phi}$ is the required element of $G_{n-1}$.
Note that $D_1^2(1{\otimes}{\alpha})=R({\alpha})\in {\Omega}^2{\otimes}{\Omega}^1$, where $R$ is the curvature tensor. Hence, $\nabla_2({\alpha})=-h_2(R({\alpha}))$. It is easy to see that the identities $\tau(R({\alpha}))=D_0(D_1^2(1{\otimes}{\alpha}))=0$ and $\tau([D_1,D_2](1{\otimes}{\alpha}))=\tau(\nabla(\nabla_2)({\alpha}))=0$ correspond to the first and second Bianchi identities (in the particular case of a torsion-free connection).
\[D-homotopy-lem\] Let $(C^\bullet,d)$ be a complex in an abelian category, such that each $C^n$ is complete with respect to a decreasing filtration $(F^\bullet C^n)$. Assume that $d=d_0+d_{\ge 1}$, where $d_0$ is also a differential on $C^\bullet$, i.e. $d_0^2=0$, and $$d_{\ge 1}(F^i C^n){\subset}F^i C^{n+1}.$$ Let also $h$ be an operator of degree $-1$ on $C^\bullet$ such that $$h(F^i C^n){\subset}F^{i+1} C^{n-1}.$$ Finally assume that the operator $p_0={\operatorname{id}}-hd_0-d_0h$ satisfies $p_0 d_{\ge 1}=0$. Then ${\operatorname{id}}+hd_{\ge 1}$ is an invertible operator on $C^\bullet$ and $$d=({\operatorname{id}}+hd_{\ge 1})^{-1}d_0({\operatorname{id}}+hd_{\ge 1}).$$
[[*Proof*]{}]{}. Since $d_{\ge 1}$ preserves the filtration $F^\bullet C$ and $h$ raises the filtration index by $1$, we immediately see that ${\operatorname{id}}+hd_{\ge 1}$ is invertible (since $C^n$ is complete with respect to the filtration). Thus, it is enough to check the identity $$({\operatorname{id}}+hd_{\ge 1})d=d_0({\operatorname{id}}+hd_{\ge 1}).$$ We have $$\begin{aligned}
&({\operatorname{id}}+hd_{\ge 1})d=d+hd_{\ge 1}d=d-hd_0d=d_0+d_{\ge 1}-hd_0d_{\ge 1}=\\
&d_0+({\operatorname{id}}-hd_0)d_{\ge 1}=d_0+(p_0+d_0h)d_{\ge 1}=d_0+d_0hd_{\ge 1},\end{aligned}$$ as required.
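Spelling out the invertibility used in Lemma \[D-homotopy-lem\]: the inverse is given by the Neumann series $$({\operatorname{id}}+hd_{\ge 1})^{-1}=\sum_{k\ge 0}(-hd_{\ge 1})^k={\operatorname{id}}-hd_{\ge 1}+hd_{\ge 1}hd_{\ge 1}-\ldots,$$ which converges because each application of $hd_{\ge 1}$ raises the filtration index by $1$ and each $C^n$ is complete with respect to the filtration.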
\[D-homotopy-prop\] Let $D=D_0+D_1+D_2+\ldots$ be a differential on ${{\cal A}}_X$ as in Theorem \[dg-constr-thm\], and let $h=(h_n)$ be the homotopy from Corollary \[homotopy-cor\]. Set $D_{\ge 1}=D_1+D_2+\ldots$. Then ${\operatorname{id}}+hD_{\ge 1}$ is an invertible operator on ${{\cal A}}_X$ and $$D=({\operatorname{id}}+hD_{\ge 1})^{-1}D_0({\operatorname{id}}+hD_{\ge 1}).$$ Hence, the operator $$h_D=({\operatorname{id}}+hD_{\ge 1})^{-1}h({\operatorname{id}}+hD_{\ge 1}):{{\cal A}}_X\to{\Omega}^\bullet{\otimes}\hat{T}^{\ge 1}({\Omega}^1)$$ of degree $-1$ satisfies $h_DD+Dh_D={\operatorname{id}}$ on ${{\cal A}}_X^i$ for $i>0$.
[[*Proof*]{}]{}. We apply Lemma \[D-homotopy-lem\] to $d_0=D_0$, $d_{\ge 1}=D_{\ge 1}$, the filtration ${\Omega}^\bullet {\otimes}\hat{T}^{\ge i}({\Omega}^1)$ on ${{\cal A}}_X$, and the homotopy operator $h$ from Corollary \[homotopy-cor\]. Note that $p_0={\operatorname{id}}-hD_0-D_0h$ satisfies $p_0({{\cal A}}_X^{\ge 1})=0$ which implies that $p_0 D_{\ge 1}=0$.
\[coh-vanishing\] Let $D$ be an algebraic NC-connection on a smooth scheme $X$.
\(i) The complex of sheaves of abelian groups $({{\cal A}}_X, D)$ is homotopy equivalent to its $0$th cohomology. In particular, $\underline{H}^i({{\cal A}}_X,D)=0$ for $i>0$.
\(ii) We have an isomorphism of sheaves of abelian groups $$\label{ker-D-D0-eq}
({\operatorname{id}}+hD_{\ge 1}):{\operatorname{ker}}(D)\rTo{\sim} {\operatorname{ker}}(D_0)\simeq \hat{U}({{\cal L}ie}_+({\Omega}^1)),$$ where $\hat{U}$ denotes the completion of the universal enveloping algebra with respect to the augmentation ideal corresponding to the natural grading on ${{\cal L}ie}_+({\Omega}^1)$.
\(iii) For any $d\ge 1$ consider the map $$D^{(d)}: T({\Omega}^1)/T^{\ge d+1}({\Omega}^1)\to {\Omega}^1{\otimes}(T({\Omega}^1)/T^{\ge d}({\Omega}^1))$$ induced by $D$. Then the natural map $$K/(K\cap T^{\ge d+1}({\Omega}^1))\to {\operatorname{ker}}(D^{(d)}),$$ where $K={\operatorname{ker}}(D:{{\cal A}}^0_X\to {{\cal A}}^1_X)$, is an isomorphism.
[[*Proof*]{}]{}. Parts (i) and (ii) follow directly from Proposition \[D-homotopy-prop\]. For part (iii) we need to check that for any $x\in \hat{T}({\Omega}^1)$ such that $D(x)\in {\Omega}^1{\otimes}T^{\ge d}({\Omega}^1)$, one has $x\in K+T^{\ge d+1}({\Omega}^1)$. For this we use the operator $h_D$ from Proposition \[D-homotopy-prop\]. We have $$D(h_DDx-x)=(Dh_D+h_DD)Dx-Dx=0,$$ so $h_DDx-x\in K$. It remains to note that the homotopy $h$ from Corollary \[homotopy-cor\] maps ${\Omega}^\bullet{\otimes}T^{\ge d}({\Omega}^1)$ to ${\Omega}^\bullet{\otimes}T^{\ge d+1}({\Omega}^1)$, while ${\operatorname{id}}+hD_{\ge 1}$ preserves the filtration $({\Omega}^\bullet{\otimes}T^{\ge i}({\Omega}^1))$. Hence, $$h_DDx\in h_D({\Omega}^1{\otimes}T^{\ge d}({\Omega}^1)){\subset}T^{\ge d+1}({\Omega}^1),$$ and the decomposition $x=(x-h_DDx)+h_DDx$ has the required properties.
Let us fix a smooth scheme $X$ with an algebraic NC-connection $D$. Consider the sheaf of rings $$\label{O-NC-def-eq}
{{\cal O}}^{NC}={\underline}{H}^0({{\cal A}}_X,D)={\operatorname{ker}}(D:{{\cal A}}^0_X\to {{\cal A}}^1_X).$$ We have a natural projection $\pi:{{\cal A}}^0_X=\hat{T}_{{\cal O}}({\Omega}^1)\to {{\cal O}}$ sending tensors of degree $\ge 1$ to zero, and hence the induced homomorphism $\pi:{{\cal O}}^{NC}\to {{\cal O}}$. Note that by Theorem \[dg-constr-thm\](ii), the data $({{\cal O}}^{NC},\pi)$ does not depend on a choice of $D$ up to an isomorphism.
\[section-prop\] The homomorphism $\pi$ is surjective. Furthermore, it has a splitting ${\sigma}:{{\cal O}}\to{{\cal O}}^{NC}$ as a homomorphism of sheaves (not compatible with multiplication).
[[*Proof*]{}]{}. For $f\in{{\cal O}}$ we have $D(f)=df$. By Proposition \[D-homotopy-prop\], $${\sigma}(f)=f-h_D(df)\in{{\cal O}}^{NC}$$ gives the required splitting.

We want to prove that ${{\cal O}}^{NC}$ is an NC-smooth thickening of ${{\cal O}}$. First, we need to check that it is NC-complete. Consider the following filtration by two-sided ideals on ${{\cal O}}^{NC}$: $${{\cal O}}^{NC}_{\ge d}:={{\cal O}}^{NC}\cap \hat{T}^{\ge d}_{{\cal O}}({\Omega}^1) \text{ for } d\ge 1.$$
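Note also that the leading terms of the splitting ${\sigma}$ constructed in Proposition \[section-prop\] can be computed directly: since $D_{\ge 1}(df)=d(df)=0$ (the $D_i$ with $i\ge 2$ vanish on ${\Omega}^\bullet$) and $h(df)=1{\otimes}df$ (as follows from the homotopy identity of Corollary \[homotopy-cor\]), one gets $$h_D(df)\equiv df \ {\operatorname{mod}}\ \hat{T}^{\ge 2}({\Omega}^1), \quad\text{hence}\quad {\sigma}(f)=f-df+\ldots,$$ the expansion used repeatedly below.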
\[filtration-prop\] (i) One has ${{\cal O}}^{NC}_{\ge 1}={{\cal O}}^{NC}_{\ge 2}={\operatorname{ker}}(\pi)$.
\(ii) One has ${{\cal O}}^{NC}_{\ge n}=I_n({{\cal O}}^{NC})$, where $I_n(\cdot)$ denotes the $I$-filtration defined earlier.
\(iii) The projection ${{\cal O}}^{NC}_{\ge n}\to T^n({\Omega}^1)$ induces an isomorphism ${{\cal O}}^{NC}_{\ge n}/{{\cal O}}^{NC}_{\ge n+1}\simeq U({{\cal L}ie}_+({\Omega}^1))_n$.
[[*Proof*]{}]{}. (i) This follows from the fact that the component of the differential $${\Omega}^1\to {\Omega}^1{\otimes}\hat{T}({\Omega}^1)\to {\Omega}^1$$ is injective.
\(ii) For any local section $a\in{{\cal O}}^{NC}$ we have ${\operatorname{ad}}(a){{\cal O}}^{NC}{\subset}{{\cal O}}^{NC}_{\ge 2}$ and ${\operatorname{ad}}(a){{\cal O}}^{NC}_{\ge n}{\subset}{{\cal O}}^{NC}_{\ge n+1}$. Hence, $({{\cal O}}^{NC})_i^{{\operatorname{Lie}}}{\subset}{{\cal O}}^{NC}_{\ge i}$. This implies the inclusion $$I_n{{\cal O}}^{NC}{\subset}{{\cal O}}^{NC}_{\ge n}.$$ Since ${{\cal O}}^{NC}_{\ge n}$ is complete with respect to the filtration by ${{\cal O}}^{NC}_{\ge i}$ ($i>n$), it remains to check the surjectivity of the map $$I_n({{\cal O}}^{NC})\to{{\cal O}}^{NC}_{\ge n}/{{\cal O}}^{NC}_{\ge n+1}.$$ It follows from Proposition \[nc-dR-prop\] that the latter quotient is a subsheaf of $U({{\cal L}ie}_+({\Omega}^1))_n{\subset}T^n({\Omega}^1)$. Thus, it is enough to check that $U({{\cal L}ie}_+({\Omega}^1))_n$ is locally generated as an ${{\cal O}}$-module by elements of the form $$\begin{aligned}
\label{gen-elements-eq}
&[{\sigma}(f_1),[\ldots,[{\sigma}(f_{i_1-1}),{\sigma}(f_{i_1})]\ldots]]\cdot
[{\sigma}(f_{i_1+1}),[\ldots,[{\sigma}(f_{i_1+i_2-1}),{\sigma}(f_{i_1+i_2})]\ldots]]\cdot\ldots\cdot \nonumber \\
&[{\sigma}(f_{i_1+\ldots+i_{k-1}+1}),[\ldots,[{\sigma}(f_{n-1}),{\sigma}(f_n)]\ldots]]\in I_n({{\cal O}}^{NC}),
\end{aligned}$$ where $f_1,\ldots,f_n\in{{\cal O}}$ and ${\sigma}:{{\cal O}}\to{{\cal O}}^{NC}$ is the section defined in Proposition \[section-prop\]. Since ${\sigma}(f)=f-df+\ldots$, for any ${\alpha}={\alpha}_i+{\alpha}_{i+1}+\ldots\in {{\cal O}}^{NC}_{\ge i}$ we have $$[{\sigma}(f),{\alpha}]\equiv -df{\otimes}{\alpha}_i+{\alpha}_i{\otimes}df{\operatorname{mod}}{{\cal O}}^{NC}_{\ge i+2}.$$ Hence, the expression \[gen-elements-eq\] is in ${{\cal O}}^{NC}_{\ge n}$ with the leading term $$\begin{aligned}
&[df_1,[\ldots,[df_{i_1-1},df_{i_1}]\ldots]]\cdot
[df_{i_1+1},[\ldots,[df_{i_1+i_2-1},df_{i_1+i_2}]\ldots]]\cdot\ldots\cdot\\
&[df_{i_1+\ldots+i_{k-1}+1},[\ldots,[df_{n-1},df_n]\ldots]]\in U({{\cal L}ie}_+({\Omega}^1))_n.\end{aligned}$$ By definition, such elements generate $U({{\cal L}ie}_+({\Omega}^1))_n$.
\(iii) This follows from the proof in (ii).
${{\cal O}}^{NC}$ is NC-complete.
\[I-filtration-cor\] The map is an isomorphism for ${{\cal O}}^{NC}$.
[[*Proof*]{}]{}. The proof of Proposition \[filtration-prop\](ii) shows that the composition of the map with the isomorphism $${\operatorname{gr}}^n_I({{\cal O}}^{NC})\simeq {{\cal O}}^{NC}_{\ge n}/{{\cal O}}^{NC}_{\ge n+1}\rTo{\sim}U({{\cal L}ie}_+({\Omega}^1))_n$$ is the identity.
Now we can prove that ${{\cal O}}^{NC}$ is NC-smooth, thus finishing the proof of Theorem \[main-nc-thm\].
\[smoothness-thm\] The NC-thickening ${{\cal O}}^{NC}\to{{\cal O}}$ defined by \[O-NC-def-eq\] is NC-smooth.
[[*Proof*]{}]{}. By Lemma \[smooth-local-lem\], we can assume $X$ to be affine, so there exists an NC-smooth thickening ${{\cal O}}^{univ}\to{{\cal O}}$. Since ${{\cal O}}^{NC}$ is NC-complete, by universality we get a homomorphism of algebras $f:{{\cal O}}^{univ}\to{{\cal O}}^{NC}$ compatible with projections to ${{\cal O}}$ (cf. [@Kapranov (1.3.8), (1.6.2)]). Since both algebras are complete with respect to the $I$-filtration, to check that $f$ is an isomorphism it is enough to check that the induced maps $$f_n: I_n{{\cal O}}^{univ}/I_{n+1}{{\cal O}}^{univ}\to I_n{{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}$$ are isomorphisms. By Lemma \[homomorphism-lem\], we have a commutative diagram
$$\begin{diagram}
U({{\cal L}ie}_+({\Omega}^1))_n &\rTo{{\operatorname{id}}}& U({{\cal L}ie}_+({\Omega}^1))_n\\
\dTo& &\dTo\\
{\operatorname{gr}}^n_I{{\cal O}}^{univ} &\rTo{f_n}& {\operatorname{gr}}^n_I{{\cal O}}^{NC}
\end{diagram}$$
in which vertical arrows are surjective. By Corollary \[I-filtration-cor\], the right vertical arrow is an isomorphism. Hence, $f_n$ is an isomorphism as well.
\[I-filtration-gen-cor\] Let ${\widetilde}{{{\cal O}}}\to{{\cal O}}_X$ be an NC-thickening of a smooth commutative scheme $X$. Then the map is an isomorphism for ${\widetilde}{{{\cal O}}}$ if and only if $(X,{\widetilde}{{{\cal O}}})$ is NC-smooth.
[[*Proof*]{}]{}. “If". The question is local, so we can assume that $X$ is affine and admits a torsion-free connection. Then we have an isomorphism ${\widetilde}{{{\cal O}}}\simeq{{\cal O}}^{NC}$, since any two NC-smooth thickenings are isomorphic in the affine case (see [@Kapranov Thm. 1.6.1]). Now we can use Corollary \[I-filtration-cor\].
“Only if". By Lemma \[smooth-local-lem\], the question is local, so again we can assume that $X$ is affine and admits a torsion-free connection. Let ${{\cal O}}^{NC}\to{{\cal O}}$ be the NC-smooth thickening constructed above. By universality of the NC-smooth thickening in the affine case (cf. [@Kapranov (1.3.8), (1.6.2)]) there exists a homomorphism $f: {{\cal O}}^{NC}\to{\widetilde}{{{\cal O}}}$ of NC-thickenings. By assumption, $f$ induces an isomorphism $${\operatorname{gr}}^n_I{{\cal O}}^{NC}\to {\operatorname{gr}}^n_I{\widetilde}{{{\cal O}}}$$ for each $n$. Hence, $f$ is an isomorphism.
\[Aut-O-NC-prop\] For the NC-thickening ${{\cal O}}^{NC}\to {{\cal O}}$ defined by \[O-NC-def-eq\], let $\underline{{\operatorname{Aut}}}({{\cal O}}^{NC})$ denote the sheaf of groups of automorphisms of the extension ${{\cal O}}^{NC}\to{{\cal O}}$. Let also $G$, $G_n$ be the sheaves of groups defined in Theorem \[dg-constr-thm\](iii). Then the natural maps $$G\to \underline{{\operatorname{Aut}}}({{\cal O}}^{NC}),$$ $$G/G_n\to \underline{{\operatorname{Aut}}}({{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}),$$ where $n\ge 1$, are isomorphisms.
[[*Proof*]{}]{}. Recall that $G_n{\subset}G$ consists of automorphisms of $(\hat{T}_{{{\cal O}}}({\Omega}^1),D)$ which induce the identity on $\hat{T}({\Omega}^1)/\hat{T}^{\ge n+1}({\Omega}^1)$ (so $G=G_1$), and that we have natural identifications $$\label{G-quot-eq}
G_{n-1}/G_n\rTo{\sim}{\operatorname{Hom}}({\Omega}^1,U({{\cal L}ie}_+({\Omega}^1)_n))$$ (see Theorem \[dg-constr-thm\](iii)). Similarly, we can define for each $n\ge 1$ the subgroup ${\underline}{{\operatorname{Aut}}}_n({{\cal O}}^{NC}){\subset}{\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC})$ consisting of automorphisms that induce the identity on ${{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}$. By Proposition \[filtration-prop\](ii), the homomorphism $G\to{\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC})$ sends $G_n$ to ${\underline}{{\operatorname{Aut}}}_n({{\cal O}}^{NC})$. We observe that we have a natural exact sequence of groups $$1\to {\underline}{{\operatorname{Aut}}}_{n-1}({{\cal O}}^{NC})\to{\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC})\to{\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC}/I_n{{\cal O}}^{NC})\to 1.$$ Indeed, surjectivity here follows from the NC-smoothness of ${{\cal O}}^{NC}$. Note that $${{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}\to{{\cal O}}^{NC}/I_n{{\cal O}}^{NC}$$ is a central extension with the kernel $I_n{{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}$ which is generated by expressions . Therefore, any automorphism of ${{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}$, identical modulo $I_n{{\cal O}}^{NC}$, acts on $I_n{{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC}$ as the identity. Hence, by [@Kapranov Prop. 1.2.5] we get an exact sequence $$1\to {\operatorname{Hom}}({\Omega}^1,I_n{{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC})\to
{\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC}/I_{n+1}{{\cal O}}^{NC})\to{\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC}/I_n{{\cal O}}^{NC})\to 1$$ (surjectivity again follows from the NC-smoothness of ${{\cal O}}^{NC}$). Hence, using Corollary \[I-filtration-cor\] we obtain a natural identification $$\label{Aut-quot-eq}
{\underline}{{\operatorname{Aut}}}_{n-1}({{\cal O}}^{NC})/{\underline}{{\operatorname{Aut}}}_n({{\cal O}}^{NC})\rTo{\sim} {\operatorname{Hom}}({\Omega}^1,U({{\cal L}ie}_+({\Omega}^1)_n)).$$ We claim that the homomorphism $G_{n-1}/G_n\to {\underline}{{\operatorname{Aut}}}_{n-1}({{\cal O}}^{NC})/{\underline}{{\operatorname{Aut}}}_n({{\cal O}}^{NC})$ is compatible (up to a sign) with the identifications \[G-quot-eq\] and \[Aut-quot-eq\] (and hence, is an isomorphism). Note that this will imply that the homomorphism $G\to {\underline}{{\operatorname{Aut}}}({{\cal O}}^{NC})$ is an isomorphism. Given an automorphism $\phi\in G_{n-1}$ of $\hat{T}({\Omega}^1)$, for ${\alpha}\in{\Omega}^1$ we have $\phi({\alpha})={\alpha}+\phi_n({\alpha})+\ldots$, where $\phi_n:{\Omega}^1\to U({{\cal L}ie}_+({\Omega}^1)_n)$. We have to restrict $\phi$ to ${{\cal O}}^{NC}{\subset}\hat{T}({\Omega}^1)$ and see how it acts modulo $I_{n+1}{{\cal O}}^{NC}={{\cal O}}^{NC}_{\ge n+1}$. By definition, for a lifting ${\sigma}(f)=f-df+\ldots\in{{\cal O}}^{NC}$ of $f\in{{\cal O}}$ we have $\phi({\sigma}(f))={\sigma}(f)-\phi_n(df){\operatorname{mod}}{{\cal O}}^{NC}_{\ge n+1}$, which implies our claim.
Theorems \[dg-constr-thm\] and \[smoothness-thm\] give a construction of an NC-smooth thickening ${{\cal O}}^{NC}\to{{\cal O}}$ starting from a torsion-free connection on $X$. Such a connection always exists in the affine case as the following Lemma shows.
\[affine-torsion-free-lem\] Let $X$ be an affine smooth scheme. Then there exists a torsion-free connection on $X$.
[[*Proof*]{}]{}. Locally we can always find a basis of ${\Omega}^1$ consisting of closed forms (e.g., using local parameters), hence a torsion-free connection. Covering $X$ with open subsets and choosing torsion-free connections on them, we get a class in $H^1(X,\underline{{\operatorname{Hom}}}({\Omega}^1,S^2{\Omega}^1))$ which is the obstruction to the existence of a torsion-free connection on $X$. When $X$ is affine this class automatically vanishes.
Thus, we recover Kapranov’s result on the existence of NC-smooth thickenings for smooth affine varieties. In the non-affine case the NC-smooth thickenings obtained from torsion-free connections are rather special. The following result points out two simple restrictions satisfied by the NC-thickenings obtained in this way.
\[opposite-prop\] Let ${{\cal O}}^{NC}$ be an NC-smooth thickening associated with an algebraic NC-connection via \[O-NC-def-eq\].
\(i) The NC-thickening ${{\cal O}}^{NC}\to {{\cal O}}$ is isomorphic to its opposite.
\(ii) The $1$-smooth thickening ${{\cal O}}^{NC}/F^2{{\cal O}}^{NC}\to {{\cal O}}$ is isomorphic to the standard one (see Definition \[standard-1-thickening\]).
[[*Proof*]{}]{}. (i) We have a natural anti-involution $\iota$ of ${{\cal A}}_X$ which acts as the identity on the generators ${\Omega}^1{\subset}{\Omega}^\bullet{\subset}{{\cal A}}_X$ and ${\Omega}^1{\subset}\hat{T}_{{\cal O}}({\Omega}^1)$. We have some differential $D=\tau+D_1+\ldots$ on ${{\cal A}}_X$ such that ${{\cal O}}^{NC}={\operatorname{ker}}(D)$. Now set $D'=\iota D\iota$. This is again a differential of the same type since $\iota\tau\iota=\tau$. By Theorem \[dg-constr-thm\](ii), there is an isomorphism of algebras ${\operatorname{ker}}(D)\simeq{\operatorname{ker}}(D')$. But $\iota$ induces also an isomorphism of ${\operatorname{ker}}(D')$ with the opposite algebra to ${{\cal O}}^{NC}$.
\(ii) We have $${{\cal O}}^{NC}/F^2{{\cal O}}^{NC}={{\cal O}}^{NC}/I_3{{\cal O}}^{NC}={{\cal O}}^{NC}/{{\cal O}}^{NC}_{\ge 3}.$$ Set $${\widetilde}{{{\cal O}}}=\{(f,x,y)\in {{\cal O}}\oplus {\Omega}^1\oplus T^2({\Omega}^1) \ |\ df+\tau(x)=0, \nabla(x)+\tau(y)=0\}.$$ Corollary \[coh-vanishing\](iii) implies that the natural projection $${{\cal O}}^{NC}/{{\cal O}}^{NC}_{\ge 3}\to {\widetilde}{{{\cal O}}}$$ is an isomorphism. Note that the map $$\tau|_{S^2({\Omega}^1)}: S^2({\Omega}^1)\to {\operatorname{im}}(\tau){\subset}{\Omega}^1{\otimes}{\Omega}^1$$ is the multiplication by $2$. Thus, we have a natural section $${\sigma}:{{\cal O}}\to{\widetilde}{{{\cal O}}}: f\mapsto (f,-df,\frac{1}{2}\nabla(df)).$$ Now an easy computation shows that $${\sigma}(f){\sigma}(g)={\sigma}(fg)+\frac{1}{2}(df{\otimes}dg-dg{\otimes}df),$$ so ${\widetilde}{{{\cal O}}}$ is isomorphic to the standard $1$-smooth thickening of ${{\cal O}}_X$.
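For the reader’s convenience, the computation referred to above, carried out modulo $T^{\ge 3}({\Omega}^1)$, reads: $$\begin{aligned}
&{\sigma}(f){\sigma}(g)=fg-(f\,dg+g\,df)+\bigl(df{\otimes}dg+\tfrac{f}{2}\nabla(dg)+\tfrac{g}{2}\nabla(df)\bigr),\\
&{\sigma}(fg)=fg-(f\,dg+g\,df)+\tfrac{1}{2}\bigl(df{\otimes}dg+dg{\otimes}df\bigr)+\tfrac{f}{2}\nabla(dg)+\tfrac{g}{2}\nabla(df),\end{aligned}$$ using the Leibnitz rule $\nabla(f\,dg)=df{\otimes}dg+f\nabla(dg)$; subtracting gives the displayed identity.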
\[standard-thickening-def\] Let $X$ be a commutative algebraic group over a field of characteristic zero. Then $X$ has a canonical torsion-free flat connection, such that flat sections are exactly translation-invariant vector fields (the vanishing of torsion follows from the fact that translation-invariant forms are closed). We call the associated NC-smooth thickening of $X$ the [*standard NC-thickening of $X$*]{}. Note that since in this case the connection is flat, the corresponding NC-connection has just two terms $D=D_0+D_1$, where $D_0$ is the ${{\cal O}}$-linear derivation $\tau$ and $D_1$ is induced by the connection.
\[NC-homomorphism-prop\] Let $f:X\to Y$ be a homomorphism of commutative algebraic groups over a field of characteristic zero. Then it extends naturally to a morphism between the standard NC-smooth thickenings of $X$ and $Y$.
[[*Proof*]{}]{}. We have to extend the usual homomorphism $f^{-1}{{\cal O}}_Y\to {{\cal O}}_X$ to a homomorphism of the standard NC-smooth thickenings $f^{-1}{{\cal O}}_Y^{NC}\to {{\cal O}}_X^{NC}$. Such a homomorphism is obtained by passing to $0$th cohomology from the natural homomorphism of dg-algebras $$\label{NC-homomorphism-eq}
f^{-1}{{\cal A}}_Y^{NC}\to {{\cal A}}_X^{NC}$$ induced by the pull-back maps on functions and forms. We only have to check that \[NC-homomorphism-eq\] is compatible with the differentials $D=D_0+D_1$. For $D_0$-parts this is clear, so it is enough to prove the commutativity of the diagram
$$\begin{diagram}
f^{-1}{\Omega}^1_Y &\rTo{f^{-1}\nabla_Y}& f^{-1}({\Omega}^1_Y{\otimes}_{{{\cal O}}_Y}{\Omega}^1_Y)\\
\dTo& &\dTo\\
{\Omega}^1_X &\rTo{\nabla_X}& {\Omega}^1_X{\otimes}_{{{\cal O}}_X}{\Omega}^1_X
\end{diagram}$$
But this follows immediately from the fact that the pull-backs of translation-invariant $1$-forms on $Y$ (which are $\nabla_Y$-horizontal) are translation-invariant $1$-forms on $X$.
Let ${{\cal O}}^{NC}$ be the standard NC-thickening of a commutative algebraic group $X$. Then for each $i$ we have $$H^i(X,{{\cal O}}^{NC})\simeq H^i(X,U({{\cal L}ie}_+({\Omega}^1)))\simeq H^i(X,{{\cal O}}){\otimes}\hat{U}({{\cal L}ie}_+({\Omega}^1|_e)),$$ where ${\Omega}^1|_e$ is the fiber of ${\Omega}^1$ at the neutral element $e\in X$. In particular, if $X$ is an abelian variety then the algebra of global sections of ${{\cal O}}^{NC}$ is isomorphic to $\hat{U}({{\cal L}ie}_+({\Omega}^1|_e))$.
[[*Proof*]{}]{}. This follows from the isomorphism \[ker-D-D0-eq\].
Next, we are going to show that all NC-smooth thickenings are obtained by a twisted version of our construction.
\[twisting-def\] For a smooth scheme $X$ we define the category ${{\cal C}}_X$ as follows. An object of ${{\cal C}}_X$, called a *twisted algebraic NC-connection*, consists of the data $({{\cal T}},{{\cal J}},\varphi,D)$, where ${{\cal T}}$ is a sheaf of ${{\cal O}}_X$-algebras equipped with a two-sided ideal ${{\cal J}}$, $$\varphi:\bigoplus_{n\ge 0}{{\cal J}}^n/{{\cal J}}^{n+1}\to T({\Omega}^1)$$ is an isomorphism of graded algebras, and $D$ is a differential of degree $1$ on $${{\cal A}}_{{\cal T}}:={\Omega}^\bullet{\otimes}_{{\cal O}}{{\cal T}}$$ such that $D$ restricts to the de Rham differential on ${\Omega}^\bullet$, and such that the composition $${{\cal J}}{\hookrightarrow}{{\cal T}}\rTo{D} {\Omega}^1{\otimes}_{{\cal O}}{{\cal T}}\to {\Omega}^1{\otimes}_{{\cal O}}{{\cal T}}/{{\cal J}}\simeq {\Omega}^1$$ is equal to the composition of the projection ${{\cal J}}\to{{\cal J}}/{{\cal J}}^2$ with the isomorphism $\varphi_1:{{\cal J}}/{{\cal J}}^2\to{\Omega}^1$. In addition, we require that $({{\cal T}},{{\cal J}})$ is locally isomorphic to $(\hat{T}_{{\cal O}}({\Omega}^1),\hat{T}^{\ge 1}_{{\cal O}}({\Omega}^1))$ (in particular, ${{\cal T}}$ is complete with respect to the filtration $({{\cal J}}^n)$). Morphisms between two such data are given by homomorphisms $f:{{\cal T}}\to {{\cal T}}'$ of ${{\cal O}}$-algebras, sending ${{\cal J}}{\subset}{{\cal T}}$ to ${{\cal J}}'{\subset}{{\cal T}}'$, inducing the identity on $T({\Omega}^1)$ via the isomorphisms $\varphi$ and $\varphi'$, and such that the induced homomorphism ${\operatorname{id}}{\otimes}f:{\Omega}^\bullet{\otimes}_{{\cal O}}{{\cal T}}\to {\Omega}^\bullet{\otimes}_{{\cal O}}{{\cal T}}'$ is compatible with the differentials $D$.
\[twisted-dg-thm\] For any twisted algebraic NC-connection one has ${\underline}{H}^i({{\cal A}}_{{\cal T}},D)=0$ for $i>0$, and ${\underline}{H}^0({{\cal A}}_{{\cal T}},D)$ is an NC-smooth thickening of $X$ (with respect to its natural projection to ${{\cal T}}/{{\cal J}}\simeq{{\cal O}}$). The functor $({{\cal T}},{{\cal J}},\varphi,D)\mapsto {\underline}{H}^0({{\cal A}}_{{\cal T}})$ gives an equivalence of categories $${{\cal C}}_X\rTo{\sim} Th_X,$$ where $Th_X$ is the category of NC-smooth thickenings of $X$.
[[*Proof*]{}]{}. The first two assertions are local, so they follow from Corollary \[coh-vanishing\](i) and Theorem \[smoothness-thm\]. Since $U\mapsto {{\cal C}}_U$ and $U\mapsto Th_U$ are both gerbes (for $Th_U$ this follows from [@Kapranov Thm. 1.6.1]), it is enough to prove that our functor induces an isomorphism on sheaves of automorphisms. Thus, for given data $({{\cal T}},{{\cal J}},\varphi,D)$ and the corresponding NC-smooth thickening $${{\cal O}}^{NC}={\operatorname{ker}}(D:{{\cal T}}\to{\Omega}^1{\otimes}_{{\cal O}}{{\cal T}})$$ we have to check that the natural map $$G\to \underline{{\operatorname{Aut}}}({{\cal O}}^{NC})$$ is an isomorphism, where $G$ is the sheaf of groups of automorphisms of $({{\cal T}},{{\cal J}},\varphi,D)$ and $\underline{{\operatorname{Aut}}}({{\cal O}}^{NC})$ is the sheaf of groups of automorphisms of the extension ${{\cal O}}^{NC}\to{{\cal O}}$. This is a local question, so we can assume that ${{\cal T}}=\hat{T}({\Omega}^1)$, in which case $D$ is one of the differentials constructed in Theorem \[dg-constr-thm\]. It remains to apply Proposition \[Aut-O-NC-prop\].
The following technical notion will be useful for us.
\[algebraic-d-def\] An NC-thickening ${{\cal O}}_X^{(d)}\to{{\cal O}}_X$ of a smooth variety $X$ is called *$(d)_I$-smooth* if $I_{d+1}{{\cal O}}_X^{(d)}=0$ and the map for ${{\cal O}}_X^{(d)}$ is an isomorphism in degrees $\le d$.
There is also a suitable truncated notion of a twisted NC-connection.
Let $d\ge 1$. A *$d$-truncated twisted algebraic NC-connection* is given by the data $({{\cal T}}^{(d)},{{\cal J}},\varphi,D^{(d)})$, where ${{\cal T}}^{(d)}$ is a sheaf of ${{\cal O}}_X$-algebras ${{\cal T}}$ equipped with a two-sided ideal ${{\cal J}}$, such that ${{\cal J}}^{d+1}=0$, $$\varphi:\bigoplus_{n=0}^d{{\cal J}}^n/{{\cal J}}^{n+1}\to T({\Omega}^1)/T^{\ge d+1}({\Omega}^1)$$ is an isomorphism of graded algebras, and $D$ is a derivation $$D^{(d)}:{{\cal T}}^{(d)}\to{\Omega}^1{\otimes}_{{\cal O}}({{\cal T}}^{(d)}/{{\cal J}}^d)$$ with values in a ${{\cal T}}^{(d)}$-bimodule, such that $D^{(d)}(f)=df{\otimes}1$ for $f\in {{\cal O}}$ and such that the composition $${{\cal J}}{\hookrightarrow}{{\cal T}}^{(d)}\rTo{D} {\Omega}^1{\otimes}_{{\cal O}}({{\cal T}}^{(d)}/{{\cal J}}^d)\to {\Omega}^1{\otimes}_{{\cal O}}({{\cal T}}/{{\cal J}})\simeq {\Omega}^1$$ is equal to the composition of the projection ${{\cal J}}\to{{\cal J}}/{{\cal J}}^2$ with the isomorphism $\varphi_1:{{\cal J}}/{{\cal J}}^2\to{\Omega}^1$. In addition, we require that our data is locally isomorphic to the truncation of an algebraic NC-connection $D$, i.e., $\hat{{{\cal T}}}^{(d)}\simeq T({\Omega}^1)/T^{\ge d+1}({\Omega}^1)$, ${{\cal J}}$ is the image of $T^{\ge 1}({\Omega}^1)$ and $D^{(d)}$ is induced by $D$. Morphisms between two twisted NC-connections are given by homomorphisms $f:{{\cal T}}^{(d)}_1\to {{\cal T}}^{(d)}_2$ of ${{\cal O}}$-algebras compatible with all the data.
\[trunc-twisted-prop\] For any $d$-truncated twisted algebraic NC-connection the sheaf of rings ${\operatorname{ker}}(D^{(d)})$, with its natural projection to ${{\cal T}}^{(d)}/{{\cal J}}\simeq{{\cal O}}$, is a $(d)_I$-smooth thickening of $X$. This gives an equivalence $$\label{trunc-thickenings-class-eq}
{{\cal C}}^{(d)}_X\rTo{\sim} Th^{(d)_I}_X,$$ from the category of $d$-truncated twisted algebraic NC-connections on $X$ to the category $Th_X^{(d)_I}$ of $(d)_I$-smooth thickenings of $X$.
[[*Proof*]{}]{}. Note that both these categories are groupoids: for $Th_X^{(d)_I}$ this is deduced easily by induction in $d$ from the fact that the map for a $(d)_I$-smooth NC-thickening ${{\cal O}}_X^{(d)}$ is an isomorphism in degrees $\le d$. Furthermore, we claim that $$U\mapsto {{\cal C}}^{(d)}_U, \ Th^{(d)_I}_U$$ are gerbes on $X$. Indeed, for ${{\cal C}}^{(d)}_U$ this follows from the definition. For $(d)_I$-smooth NC-thickenings it is enough to check that every $(d)_I$-smooth thickening ${{\cal O}}_U^{(d)}$ over a smooth affine $U$ is of the form ${{\cal O}}_U^{NC}/I_{d+1}{{\cal O}}_U^{NC}$ for the (unique) NC-smooth thickening ${{\cal O}}_U^{NC}$ of $U$. By the universality of ${{\cal O}}_U^{NC}$ (see [@Kapranov Prop. 1.6.2]) we can construct a homomorphism ${{\cal O}}_U^{NC}\to {{\cal O}}_U^{(d)}$, identical on the abelianizations. It factors through a homomorphism ${{\cal O}}_U^{NC}/I_{d+1}{{\cal O}}_U^{NC}\to {{\cal O}}_U^{(d)}$. The fact that it is an isomorphism follows from Corollary \[I-filtration-gen-cor\] and the definition of $(d)_I$-smoothness. It remains to check that the functor \[trunc-thickenings-class-eq\] induces an isomorphism of the local automorphism groups. To this end we recall that we have a natural isomorphism $$G/G_d\rTo{\sim} {\underline}{{\operatorname{Aut}}}({{\cal O}}_X^{NC}/I_{d+1}{{\cal O}}_X^{NC}),$$ where $G,G_d$ are sheaves of groups introduced in Theorem \[dg-constr-thm\](iii) (see Proposition \[Aut-O-NC-prop\]). It remains to check that for the $d$-truncation $D^{(d)}$ of an algebraic NC-connection $D$ the natural map $$G/G_d\to {\underline}{{\operatorname{Aut}}}(T({\Omega}^1)/T^{\ge d+1}({\Omega}^1),{{\cal J}},\varphi,D^{(d)})$$ is an isomorphism. The injectivity follows from the definition of $G_d$. The induction in $d$ reduces the proof of surjectivity to checking that all automorphisms $\phi$ of $T({\Omega}^1)/T^{\ge d+1}({\Omega}^1)$ with $\phi({\alpha})={\alpha}+\phi_d({\alpha})$ (where $\phi_d({\alpha})\in T^d({\Omega}^1)$), compatible with $D^{(d)}$, are in the image of $G_{d-1}/G_d{\subset}G/G_d$ under the above map. But one immediately checks that the compatibility with $D^{(d)}$ implies that $\tau\phi_d({\alpha})=0$ in $T^{d-1}({\Omega}^1)$, i.e., $\phi_d$ factors through a map ${\Omega}^1\to U({{\cal L}ie}_+({\Omega}^1))_d$, so the assertion follows from Theorem \[dg-constr-thm\](iii).
\[filtrations-rem\] It is easy to see from the proof of the above Proposition that extensions of a fixed $(d)_I$-smooth thickening to a $(d+1)_I$-smooth thickening form a gerbe banded by ${\underline}{{\operatorname{Hom}}}({\Omega}^1_X,U({{\cal L}ie}_+({\Omega}^1_X))_{d+1})$ (see Lemma \[classification-lem\] below). This is analogous to Kapranov’s [@Kapranov Prop. 4.3.1] which deals with $(d+1)$-smooth thickenings extending a given $d$-smooth thickening. We’d like to point out that the results of sections (4.4)–(4.6) in [@Kapranov] hold only if one replaces the commutator filtration with our $I$-filtration. Indeed, for example, in [@Kapranov Prop. 4.4.2] one needs to use an embedding of the $d$-th associated graded quotient of the filtration on ${{\cal O}}_X^{NC}$ into $({\Omega}^1_X)^{{\otimes}d}$. This does not work for the commutator filtration (e.g., the $1$st graded quotient is ${\Omega}^2_X$), however, we do have such embeddings for the $I$-filtration (see Lemma \[homomorphism-lem\]). In particular, [@Kapranov Thm. 4.6.5] holds with $d$-smooth replaced by $(d)_I$-smooth. In [@Kapranov Sec. 4.6] second order thickenings should be replaced by $(3)_I$-smooth thickenings ($1$-smooth is the same as $(2)_I$-smooth).
Modules over NC-smooth thickenings {#module-sec}
==================================
Modules from connections {#module-conn-sec}
------------------------
The construction of Theorem \[dg-constr-thm\] has a natural analog for modules.
\[module-thm\] (i) Let $D=D_0+D_1+\ldots$ be an NC-connection on $X$. Let also $({{\cal F}},\nabla^{{{\cal F}}})$ be a quasicoherent sheaf on $X$ with a connection (not necessarily integrable). Then there exists a natural construction of a differential $D^{{\cal F}}$ of degree $1$ on the ${{\cal A}}_X$-module $${{\cal F}}\hat{{\otimes}}_{{{\cal O}}}{{\cal A}}_X:={\varprojlim}{{\cal F}}{\otimes}_{{\cal O}}{\Omega}^\bullet{\otimes}_{{\cal O}}(T({\Omega}^1)/T^{\ge n}({\Omega}^1)),$$ giving it a structure of dg-module over $({{\cal A}}_X,D)$ such that for $s\in{{\cal F}}$, $$D^{{\cal F}}(s{\otimes}1)=\nabla^{{\cal F}}(s) {\operatorname{mod}}{{\cal F}}{\otimes}{\Omega}^1{\otimes}\hat{T}^{\ge 1}({\Omega}^1).$$ We refer to such a differential $D^{{\cal F}}$ as a *module NC-connection* (*mNC-connection*).
\(ii) For any two mNC-connections $D^{{\cal F}}$ and ${\widetilde}{D}^{{\cal F}}$ on ${{\cal F}}\hat{{\otimes}}_{{{\cal O}}}{{\cal A}}_X$ there exists an ${{\cal A}}_X$-linear automorphism $\Phi$ of ${{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X$ of degree $0$ inducing identity modulo ${{\cal F}}\hat{{\otimes}}_{{\cal O}}{\Omega}^\bullet{\otimes}_{{\cal O}}\hat{T}^{\ge 1}({\Omega}^1)$, such that $${\widetilde}{D}^{{\cal F}}=\Phi D^{{\cal F}}\Phi^{-1}.$$
\(iii) For a pair of sheaves with connection $({{\cal F}},\nabla^{{\cal F}})$, $({{\cal G}},\nabla^{{\cal G}})$, let $D^{{\cal F}}$ and $D^{{\cal G}}$ be mNC-connections compatible with $\nabla^{{\cal F}}$ and $\nabla^{{\cal G}}$. Consider the sheaf $${{\bf H}}({{\cal F}},{{\cal G}})={{\bf H}}(({{\cal F}},D^{{\cal F}}),({{\cal G}},D^{{\cal G}})):=
{\operatorname{Hom}}_{{{\cal A}}_X}(({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal F}}),({{\cal G}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal G}}))$$ consisting of homomorphisms $F$ of right ${{\cal A}}_X$-modules of degree zero such that $D^{{\cal G}}F=F D^{{\cal F}}$ (i.e., of closed homomorphisms of dg-modules over ${{\cal A}}_X$). It has a natural decreasing filtration with $${{\bf H}}({{\cal F}},{{\cal G}})_n=\{F\in{{\bf H}}({{\cal F}},{{\cal G}}) \ |\ F(s{\otimes}1)\in {{\cal G}}{\otimes}\hat{T}^{\ge n}({\Omega}^1) \text{ for }s\in{{\cal F}}\}.$$ Then for each $n\ge 0$ there is a natural isomorphism $$\label{H-graded-pieces-eq}
{{\bf H}}({{\cal F}},{{\cal G}})_n/{{\bf H}}({{\cal F}},{{\cal G}})_{n+1}\simeq {\operatorname{Hom}}_{{\cal O}}({{\cal F}},{{\cal G}}{\otimes}U({{\cal L}ie}_+({\Omega}^1))_n).$$
\(iv) One has $\underline{H}^i({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X, D^{{\cal F}})=0$ for $i>0$.
[[*Proof*]{}]{}. (i) Let us set $\nabla^{{\cal F}}_1=\nabla^{{\cal F}}$. We are looking for a collection of ${{\cal O}}$-linear maps $$\nabla^{{\cal F}}_i:{{\cal F}}\to {{\cal F}}{\otimes}{\Omega}^1{\otimes}T^{i-1}({\Omega}^1),$$ $i\ge 2$, such that the map $$D^{{\cal F}}(s{\otimes}1)=\nabla^{{\cal F}}_1(s)+\nabla^{{\cal F}}_2(s)+\nabla^{{\cal F}}_3(s)+\ldots,$$ extended to an endomorphism of ${{\cal F}}\hat{{\otimes}}_{{{\cal O}}}{{\cal A}}_X$ by the Leibnitz rule, satisfies $(D^{{\cal F}})^2=0$. We can write this extension in the form $$D^{{\cal F}}=D^{{\cal F}}_0+D^{{\cal F}}_1+D^{{\cal F}}_2+D^{{\cal F}}_3+\ldots,$$ where $D^{{\cal F}}_0={\operatorname{id}}_{{\cal F}}{\otimes}D_0$ and for $i\ge 1$, $D^{{\cal F}}_i$ are endomorphisms of ${{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X$ of degree $1$, extending $\nabla^{{\cal F}}_i$ and satisfying the following versions of the Leibnitz rule: $$D^{{\cal F}}_i(xa)=D^{{\cal F}}_i(x)a+(-1)^{\deg(x)}x D_i(a).$$ Thus, the equation $(D^{{\cal F}})^2=0$ becomes a system of equations $$\label{V-D-eq}
[D^{{\cal F}}_0,D^{{\cal F}}_m]+\frac{1}{2}\sum_{i=1}^{m-1}[D^{{\cal F}}_i,D^{{\cal F}}_{m-i}]=0$$ for $m\ge 1$. For example, the first equation is $$[D^{{\cal F}}_0, D^{{\cal F}}_1]=0,$$ which follows from the fact that $[D^{{\cal F}}_0, D^{{\cal F}}_1]$ is ${{\cal A}}_X$-linear (since $[D_0,D_1]=0$) and vanishes on the elements $s{\otimes}1$, where $s\in {{\cal F}}$. Assume that $\nabla^{{\cal F}}_i$ with $i\le n$ are already constructed (where $n\ge 1$), so that the equations \[V-D-eq\] are satisfied for $m\le n$. By Lemma \[graded-Lie-lem\], we have $$\sum_{i=1}^n[D^{{\cal F}}_0,[D^{{\cal F}}_i, D^{{\cal F}}_{n+1-i}]]=0.$$ Hence, $(D^{{\cal F}}_0)\left(\sum_{i=1}^n [D^{{\cal F}}_i, D^{{\cal F}}_{n+1-i}](s{\otimes}1)\right)=0$, so setting $$\nabla^{{\cal F}}_{n+1}(s)=-\frac{1}{2}({\operatorname{id}}_{{\cal F}}{\otimes}h_{n+1})\left(\sum_{i=1}^n [D^{{\cal F}}_i, D^{{\cal F}}_{n+1-i}](s{\otimes}1)\right),$$ where $(h_i)$ are the homotopy maps from Corollary \[homotopy-cor\], we will get that $$[D^{{\cal F}}_0, D^{{\cal F}}_{n+1}](s{\otimes}1)=(D^{{\cal F}}_0)\nabla^{{\cal F}}_{n+1}(s)=-\frac{1}{2}\left(\sum_{i=1}^n [D^{{\cal F}}_i, D^{{\cal F}}_{n+1-i}](s{\otimes}1)\right).$$ To see that this implies the equality $$[D^{{\cal F}}_0, D^{{\cal F}}_{n+1}]=-\frac{1}{2}\sum_{i=1}^n [D^{{\cal F}}_i, D^{{\cal F}}_{n+1-i}]$$ we observe that the left-hand side satisfies the version of the Leibnitz rule with respect to the derivation $[D_0,D_{n+1}]$ of ${{\cal A}}_X$, while the right-hand side satisfies the similar rule with respect to $-\frac{1}{2}\sum_{i=1}^n [D_i, D_{n+1-i}]$. It remains to use the equality \[D-n+1-eq\].
\(ii) Assume we are given two mNC-connections $D^{{\cal F}}$ and ${\widetilde}{D}^{{\cal F}}$ as in (i) with $D^{{\cal F}}_i={\widetilde}{D}^{{\cal F}}_i$ for $i<n$ (where $D^{{\cal F}}_0={\widetilde}{D}^{{\cal F}}_0={\operatorname{id}}{\otimes}D_0$). Then the equations \[V-D-eq\] imply that $[{\operatorname{id}}{\otimes}D_0,D^{{\cal F}}_n-{\widetilde}{D}^{{\cal F}}_n]=0$, hence $({\operatorname{id}}{\otimes}D_0)(D^{{\cal F}}_n-{\widetilde}{D}^{{\cal F}}_n)(s{\otimes}1)=0$. Therefore, we can find $\phi_n:{{\cal F}}\to {{\cal F}}{\otimes}T^n({\Omega}^1)$ such that $(D^{{\cal F}}_n-{\widetilde}{D}^{{\cal F}}_n)(s{\otimes}1)=({\operatorname{id}}{\otimes}D_0)\phi_n(s)$ for $s\in {{\cal F}}$. Consider the automorphism $\Phi$ of ${{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X$ as a right ${{\cal A}}_X$-module given by $s{\otimes}1\mapsto s{\otimes}1+\phi_n(s)$. Then it is easy to check that $(\Phi D^{{\cal F}}\Phi^{-1})_i={\widetilde}{D}^{{\cal F}}_i$ for $i\le n$. It remains to note that any infinite composition of automorphisms $$\ldots\Phi_3\circ\Phi_2\circ\Phi_1$$ with $\Phi_i(s{\otimes}1)=s{\otimes}1{\operatorname{mod}}{{\cal F}}\hat{{\otimes}}_{{\cal O}}\hat{T}^{\ge i}({\Omega}^1)$ converges to an automorphism of ${{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X$.
\(iii) For $F\in{{\bf H}}({{\cal F}},{{\cal G}})_n$ we have $F(s{\otimes}1)=F_n(s){\operatorname{mod}}{{\cal G}}\hat{{\otimes}} \hat{T}^{\ge n+1}({\Omega}^1)$ for some ${{\cal O}}$-linear map $F_n:{{\cal F}}\to {{\cal G}}{\otimes}T^n({\Omega}^1)$. It is easy to see that the condition that $F$ commutes with the differential implies that $({\operatorname{id}}{\otimes}D_0)(F_n(s))=0$, so $F_n$ lands in $U({{\cal L}ie}_+({\Omega}^1))_n{\subset}T^n({\Omega}^1)$ (see Proposition \[nc-dR-prop\]). By definition $F\in {{\bf H}}({{\cal F}},{{\cal G}})_{n+1}$ if and only if $F_n=0$. Thus, it is enough to check that the map $${{\bf H}}({{\cal F}},{{\cal G}})_n\to {\operatorname{Hom}}({{\cal F}},{{\cal G}}{\otimes}U({{\cal L}ie}_+({\Omega}^1))_n): F\mapsto F_n$$ is surjective. In other words, we have to find $F_i:{{\cal F}}\to {{\cal G}}{\otimes}T^i({\Omega}^1)$ with $i\ge n+1$ such that $F(s{\otimes}1)=F_n(s)+F_{n+1}(s)+\ldots$ defines an ${{\cal A}}_X$-linear homomorphism in ${{\bf H}}({{\cal F}},{{\cal G}})$. Let us extend each $F_i$ to an ${{\cal A}}_X$-linear homomorphism from ${{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X$ to ${{\cal G}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X$. Then we can write the equations on $(F_i)$ in the form $$\sum_{i=0}^m[D_i,F_{m-i}]=0,$$ where $D_i$ appearing to the left (resp., to the right) of $F_j$ are $D^{{\cal G}}_i$ (resp., $D^{{\cal F}}_i$). If $F_i$ with $i\le n$ are already found then we can construct $F_{n+1}$ using Lemma \[graded-Lie-lem\].
\(iv) The proof is entirely parallel to that of Proposition \[D-homotopy-prop\]: we apply Lemma \[D-homotopy-lem\] to the decomposition $D^{{\cal F}}=D^{{\cal F}}_0+D^{{\cal F}}_{\ge 1}$, where $D^{{\cal F}}_{\ge 1}=D^{{\cal F}}_1+D^{{\cal F}}_2+\ldots$ and use the homotopy operator ${\operatorname{id}}_{{\cal F}}{\otimes}h$ for $D^{{\cal F}}_0={\operatorname{id}}_{{\cal F}}{\otimes}D_0$, where $h$ is the homotopy from Corollary \[homotopy-cor\].
Theorem \[module-thm\] gives a construction of right ${{\cal O}}^{NC}$-modules: namely, starting with any sheaf with connection we get such a module by taking $${\underline}{H}^0({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal F}})={\operatorname{ker}}(D^{{\cal F}})\cap {{\cal F}}\hat{{\otimes}}_{{\cal O}}\hat{T}({\Omega}^1)$$ equipped with a natural right action of ${{\cal O}}^{NC}={\operatorname{ker}}(D)\cap\hat{T}({\Omega}^1)$. Note that $D^{{\cal F}}$ is continuous with respect to the topology given by the filtration $({{\cal F}}{\otimes}_{{\cal O}}{\Omega}^\bullet\hat{{\otimes}}_{{\cal O}}\hat{T}^{\ge n}({\Omega}^1))$, so ${\underline}{H}^0({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal F}})$ is complete with respect to the induced topology.
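For example, taking ${{\cal F}}={{\cal O}}$ with the trivial connection (so that one may take $D^{{\cal F}}=D$), this construction returns $${\underline}{H}^0({{\cal O}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D)={\operatorname{ker}}(D)\cap\hat{T}({\Omega}^1)={{\cal O}}^{NC},$$ viewed as a right module over itself (cf. the proof of Corollary \[module-cor\](iii) below).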
\[module-cor\] (i) In the situation of Theorem \[module-thm\](i) set ${\widetilde}{{{\cal F}}}={\underline}{H}^0({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal F}})$. Then the natural map ${\widetilde}{{{\cal F}}}\to{{\cal F}}$ induced by the projection ${{\cal F}}\hat{{\otimes}}\hat{T}({\Omega}^1)\to{{\cal F}}$, is surjective. Furthermore, it induces an isomorphism $${\widetilde}{{{\cal F}}}/\overline{{\widetilde}{{{\cal F}}}{{\cal O}}^{NC}_{\ge 1}}\rTo{\sim} {{\cal F}},$$ where the bar means taking the closure with respect to the natural topology on ${\widetilde}{{{\cal F}}}$.
\(ii) Assume in addition that ${{\cal F}}$ is a vector bundle of rank $r$. Then ${\widetilde}{{{\cal F}}}$ is locally free of rank $r$ as a right ${{\cal O}}^{NC}$-module and $${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}\simeq {{\cal F}}.$$
\(iii) In the situation of Theorem \[module-thm\](iii) assume that ${{\cal F}}$ is a vector bundle. Then the natural map $${{\bf H}}(({{\cal F}},D^{{\cal F}}),({{\cal G}},D^{{\cal G}}))\to {\operatorname{Hom}}_{{{\cal O}}^{NC}}({\underline}{H}^0({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal F}}),
{\underline}{H}^0({{\cal G}}\hat{{\otimes}}_{{\cal O}}{{\cal A}}_X,D^{{\cal G}}))$$ is an isomorphism.
[[*Proof*]{}]{}. (i) Let us consider the natural decreasing filtration on ${\widetilde}{{{\cal F}}}$ given by $${\widetilde}{{{\cal F}}}_{\ge n}={\operatorname{ker}}(D^{{\cal F}})\cap {{\cal F}}\hat{{\otimes}}_{{\cal O}}\hat{T}^{\ge n}({\Omega}^1).$$ We have $$\label{K-H-eq}
{\widetilde}{{{\cal F}}}=\underline{{{\bf H}}}(({{\cal O}},D),({{\cal F}},D^{{\cal F}})),$$ where $\underline{{{\bf H}}}(\cdot,\cdot)$ is the sheafified version of ${{\bf H}}(\cdot,\cdot)$ (see Theorem \[module-thm\](iii)). Hence, from \[H-graded-pieces-eq\] we get natural isomorphisms $${\widetilde}{{{\cal F}}}_{\ge n}/{\widetilde}{{{\cal F}}}_{\ge n+1}\rTo{\sim} {{\cal F}}{\otimes}_{{\cal O}}U({{\cal L}ie}_+({\Omega}^1))_n.$$ In the case $n=0$ we get the required surjectivity. Next, we observe that the natural map $${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}^{NC}_{\ge 1}\to {\widetilde}{{{\cal F}}}_{\ge 1}$$ is compatible with filtrations: it sends ${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}^{NC}_{\ge n}$ to ${\widetilde}{{{\cal F}}}_{\ge n}$. Since the induced map on graded quotients $${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}^{NC}_{\ge n}/{{\cal O}}^{NC}_{\ge n+1}\to
{\widetilde}{{{\cal F}}}_{\ge n}/{\widetilde}{{{\cal F}}}_{\ge n+1}$$ is surjective, we deduce the surjectivity of the map $${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}^{NC}_{\ge 1}\to {\widetilde}{{{\cal F}}}_{\ge 1}/{\widetilde}{{{\cal F}}}_{\ge n}$$ for $n\ge 1$. In other words, we get that $${\widetilde}{{{\cal F}}}_{\ge 1}={\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}^{NC}_{\ge 1}+{\widetilde}{{{\cal F}}}_{\ge n}$$ for any $n\ge 1$, so ${\widetilde}{{{\cal F}}}_{\ge 1}$ is the closure of ${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}^{NC}_{\ge 1}$.
\(ii) This follows from the fact that ${\widetilde}{{{\cal F}}}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}}$ does not depend on a choice of connection (up to a canonical isomorphism). Hence, localizing we reduce to the case ${{\cal F}}={{\cal O}}^{\oplus n}$.
\(iii) It is enough to prove that the same map is an isomorphism locally, so we can assume that ${{\cal F}}={{\cal O}}$ with trivial connection. Then ${\widetilde}{{{\cal F}}}={{\cal O}}^{NC}$, so the assertion follows from .
The functor from the category of $D$-modules {#D-mod-sec}
--------------------------------------------
Let $X$ be a smooth variety with an NC-smooth thickening ${{\cal O}}_X^{NC}\to {{\cal O}}_X$ obtained from a twisted NC-connection $({{\cal T}},{{\cal J}},\varphi,D)$ as in Theorem \[twisted-dg-thm\], so that ${{\cal O}}_X^{NC}$ is the $0$th cohomology of the sheaf of dg-algebras $${{\cal A}}_{{{\cal T}},D}:=({{\cal A}}_{{\cal T}}={\Omega}^\bullet_X{\otimes}{{\cal T}},D).$$
For any scheme $S$ over $k$ we consider the induced relative NC-thickening $${{\cal O}}_{S\times X/S}^{NC}\to {{\cal O}}_{S\times X}$$ obtained as the tensor product $p_S^{-1}{{\cal O}}_S{\otimes}p_X^{-1}{{\cal O}}_X^{NC}$, where $p_S$ and $p_X$ are the projections of the product $S\times X$ onto its factors (thus, ${{\cal O}}_S$ is a central subring in ${{\cal O}}_{S\times X/S}^{NC}$). Note that ${{\cal O}}^{NC}_{S\times X/S}$ has as a dg-resolution the sheaf of dg-algebras $p_X^*{{\cal A}}_{{{\cal T}},D}$ on $S\times X$.
\[D-mod-defi\] Let ${{\cal F}}$ be a quasicoherent sheaf on $S\times X$ equipped with a relative flat connection $\nabla$ in the direction of $X$ (so $\nabla$ is ${{\cal O}}_S$-linear). We associate with ${{\cal F}}$ a right dg-module over $p_X^*{{\cal A}}_{{{\cal T}},D}$ by setting $$\label{K-F-nabla-eq}
K({{\cal F}},\nabla):=({{\cal F}}{\otimes}_{{\cal O}}p_X^*{\Omega}^\bullet)\hat{{\otimes}}_{p_X^*{\Omega}^\bullet} p_X^*{{\cal A}}_{{{\cal T}},D},$$ where ${{\cal F}}{\otimes}_{{\cal O}}p_X^*{\Omega}^\bullet$ is the de Rham complex of $\nabla$, which we view as a dg-module over $p_X^*{\Omega}^\bullet$. In other words, $$K({{\cal F}},\nabla)={{\cal F}}\hat{{\otimes}}_{{\cal O}}p_X^*{{\cal A}}_{{{\cal T}},D}$$ as a right $p_X^*{{\cal A}}_{{{\cal T}},D}$-module and the differential $D^{{\cal F}}$ on it is uniquely determined by $$\label{flat-D-F-eq}
D^{{\cal F}}(s{\otimes}1)=\nabla(s)\in {{\cal F}}{\otimes}_{{\cal O}}{\Omega}^1{\subset}K({{\cal F}},\nabla),$$ for $s\in{{\cal F}}$. Since ${\Omega}^\bullet$ is in the graded center of ${\Omega}^\bullet{\otimes}{{\cal T}}$, we also have a left action of $p_X^*{{\cal A}}_{{{\cal T}},D}$ on $K({{\cal F}},\nabla)$, so we can view it as a dg-bimodule over $p_X^*{{\cal A}}_{{{\cal T}},D}$. Set $${{\bf K}}({{\cal F}},\nabla)={\underline}{H}^0K({{\cal F}},\nabla).$$ This is a bimodule over ${{\cal O}}^{NC}_{S\times X/S}$, so we get a functor $$\label{D-mod-functor-eq}
{{\bf K}}={{\bf K}}_{S,X^{NC}}:D_{S\times X/S}-{\operatorname{mod}}_{qc}\to{{\cal O}}^{NC}_{S\times X/S}-{\operatorname{bimod}}_{/S},$$ where ${{\cal O}}^{NC}_{S\times X/S}-{\operatorname{bimod}}_{/S}$ is the category of ${{\cal O}}^{NC}_{S\times X/S}$-bimodules with central action of $p_S^{-1}{{\cal O}}_S$.
\[D-module-rem\] The results of Section \[module-conn-sec\] (which are valid for NC-thickenings constructed from global torsion-free connections) also have natural relative analogs. For example, in Theorem \[module-thm\] we can start with a quasicoherent sheaf ${{\cal F}}$ on $S\times X$ with a relative connection $\nabla$ (over $S$) and construct a dg-module over $p_X^*{{\cal A}}_X$, which does not depend on a connection up to a (noncanonical) isomorphism. Note that in the untwisted case ${{\cal T}}=\hat{T}({\Omega}^1)$ the dg-module obtained from a flat relative connection $\nabla$ on ${{\cal F}}$ fits into the framework of the relative version of Theorem \[module-thm\] (in the case of flat connection we don’t need to add higher corrections to the formula \[flat-D-F-eq\]).
\[D-module-prop\] Let $X$ be a smooth variety with a torsion free connection, and let $D$ be the corresponding NC-connection giving rise to a relative NC-smooth thickening ${{\cal O}}^{NC}_{S\times X/S}={\underline}{H}^0(p_X^*{{\cal A}}_X,D)$ over a scheme $S$.
\(i) Then for a quasicoherent sheaf ${{\cal F}}$ on $S\times X$ with a relative connection $\nabla$ (over $S$) the dg-module $K({{\cal F}},\nabla)$ and the ${{\cal O}}^{NC}_{S\times X/S}$-module ${{\bf K}}({{\cal F}},\nabla)$ do not depend on a choice of a relative flat connection $\nabla$ on ${{\cal F}}$ up to an isomorphism.
\(ii) If ${{\cal F}}$ is a vector bundle of rank $r$ on $S\times X$ with a relative flat connection then ${{\bf K}}({{\cal F}},\nabla)$ is locally free of rank $r$ as a right ${{\cal O}}^{NC}_{S\times X/S}$-module.
\(iii) If $L$ is a line bundle with a flat connection $\nabla$ then ${{\bf K}}(L,\nabla)$ is an invertible ${{\cal O}}^{NC}$-bimodule.
[[*Proof*]{}]{}. (i) By Remark \[D-module-rem\] this follows from Theorem \[module-thm\](ii).
\(ii) This follows from the relative analog of Corollary \[module-cor\](ii).
\(iii) This can be checked locally using Corollary \[module-cor\](iii).
The assertion of Proposition \[D-module-prop\](i) does not hold for the functor ${{\bf K}}$ associated with an arbitrary twisted NC-connection $({{\cal T}},{{\cal J}},\varphi,D)$: the module ${{\bf K}}({{\cal F}},\nabla)$ does depend on $\nabla$ in general (see Remark \[twisted-ab-var-conn\] below).
Note that for every $D$-module $(E,\nabla)$ we have a natural map $$\label{bimodule-map-eq}
{{\bf K}}(E,\nabla){\otimes}_{{{\cal O}}^{NC}} {{\cal O}}\to E$$ induced by the projection $K(E,\nabla)\to E$. Furthermore, this map is compatible with the ${{\cal O}}^{NC}$-bimodule structure on ${{\bf K}}(E,\nabla)$ and the central ${{\cal O}}$-bimodule structure on $E$ via the homomorphism ${{\cal O}}^{NC}\to {{\cal O}}$.
\[D-mod-prop\] (i) The functor \[D-mod-functor-eq\] is exact.
\(ii) If $(E,\nabla)$ is a vector bundle with a relative flat connection on $S\times X$ then the map \[bimodule-map-eq\] is an isomorphism.
[[*Proof*]{}]{}. (i) Since by the relative version of Theorem \[module-thm\](iv), $K({{\cal F}},\nabla^{{\cal F}})$ has no higher cohomology, it suffices to prove that the functor ${{\cal F}}\mapsto {{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal T}}$ is exact (on quasicoherent sheaves). But this is clear, since over a sufficiently small open affine $U$ we can identify ${{\cal T}}$ with $\hat{T}({\Omega}^1)$, and so $$({{\cal F}}\hat{{\otimes}}_{{\cal O}}{{\cal T}})(U)\simeq \prod_n ({{\cal F}}{\otimes}_{{\cal O}}T^n({\Omega}^1))(U).$$
\(ii) It suffices to check this locally, so we can use Corollary \[module-cor\](ii).
\[rel-conn-cd-lem\] Let $f:S\to S'$ be a morphism of schemes of finite type over $k$.
\(i) Assume $f$ is an affine morphism. Then we have a commutative diagram $$\label{NC-push-forward-cd}
\begin{diagram}
D_{S\times X/S}-{\operatorname{mod}}_{qc} &\rTo{{{\bf K}}_{S,X^{NC}}}& {{\cal O}}_{S\times X/S}^{NC}-{\operatorname{bimod}}_{/S} \\
\dTo{(f\times{\operatorname{id}})_*}&&\dTo{(f\times{\operatorname{id}})_*}\\
D_{S'\times X/S'}-{\operatorname{mod}}_{qc} &\rTo{{{\bf K}}_{S',X^{NC}}}& {{\cal O}}_{S'\times X/S'}^{NC}-{\operatorname{bimod}}_{/S'}
\end{diagram}$$ where the vertical arrows are the natural push-forward functors. If $S\to S'$ is an arbitrary morphism then there is a similar commutative diagram for the derived categories and derived push-forward functors.
\(ii) We have a commutative diagram $$\label{NC-pull-back-cd}
\begin{diagram}
D_{S\times X/S}-{\operatorname{mod}}^{lff} &\rTo{{{\bf K}}_{S,X^{NC}}}& {{\cal O}}_{S\times X/S}^{NC}-{\operatorname{bimod}}_{/S}^{lff} \\
\uTo{f^*}&&\uTo{f^*}\\
D_{S'\times X/S'}-{\operatorname{mod}}^{lff}_{qc} &\rTo{{{\bf K}}_{S',X^{NC}}}& {{\cal O}}_{S'\times X/S'}^{NC}-{\operatorname{bimod}}_{/S'}^{lff}
\end{diagram}$$ where the superscript $lff$ means “locally free of finite rank" (as right modules) and the vertical arrows are the pull-back functors given by $$f^*:{{\cal F}}\mapsto p_S^{-1}{{\cal O}}_S{\otimes}_{p_S^{-1}f^{-1}{{\cal O}}_{S'}}(f\times{\operatorname{id}}_X)^{-1}{{\cal F}}.$$
[[*Proof*]{}]{}. (i) The case of an affine morphism is straightforward. The case of a general morphism reduces to it by using a Čech resolution to calculate the push-forwards.
\(ii) We have a natural isomorphism of dg-bimodules over $p_X^*{{\cal A}}_{{{\cal T}},D}$ $$\label{K-F-F'-eq}
p_S^{-1}{{\cal O}}_S{\otimes}_{p_S^{-1}f^{-1}{{\cal O}}_{S'}}(f\times{\operatorname{id}}_X)^{-1}K({{\cal F}},\nabla)\rTo{\sim} K({{\cal F}}',f^*\nabla),$$ where ${{\cal F}}'=f^*{{\cal F}}$ is the pull-back defined above. Passing to the $0$th cohomology we obtain the required isomorphism of ${{\cal O}}^{NC}_{S\times X/S}$-bimodules.
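For instance, applying the diagram in (ii) to the inclusion $i_s:\{s\}{\hookrightarrow}S'$ of a point (i.e., taking $S=\{s\}$), we get for every vector bundle $E$ on $S'\times X$ with a relative flat connection $\nabla$ a natural isomorphism $$i_s^*\,{{\bf K}}_{S',X^{NC}}(E,\nabla)\simeq {{\bf K}}_{pt,X^{NC}}\big(E|_{\{s\}\times X},\nabla_s\big),$$ where $\nabla_s$ is the flat connection induced on $E|_{\{s\}\times X}$. In other words, the modules constructed from a family restrict to the modules constructed from its fibers.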
Extendability of vector bundles to an NC-smooth thickening {#extendability-sec}
----------------------------------------------------------
\[extension-defi\] Let ${\widetilde}{{{\cal O}}}\to {{\cal O}}_X$ be an NC-thickening, and let $E$ be a vector bundle on $X$. We say that a locally free right ${\widetilde}{{{\cal O}}}$-module ${\widetilde}{E}$ is an [*extension of $E$ to ${\widetilde}{{{\cal O}}}$*]{} if ${\widetilde}{E}{\otimes}_{{\widetilde}{{{\cal O}}}}{{\cal O}}\simeq E$. We say that ${\widetilde}{E}$ is a [*bimodule extension of $E$*]{} if in addition ${\widetilde}{E}$ has a compatible left ${\widetilde}{{{\cal O}}}$-module structure making it an ${\widetilde}{{{\cal O}}}$-bimodule (with $k$ in the center), and the map $${\widetilde}{{{\cal O}}}\to{\underline}{{\operatorname{End}}}({\widetilde}{E})\to {\underline}{{\operatorname{End}}}(E)$$ induced by the left action of ${\widetilde}{{{\cal O}}}$ on ${\widetilde}{E}$ factors through the standard embedding ${{\cal O}}\to{\underline}{{\operatorname{End}}}(E)$.
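For example, the trivial bundle $E={{\cal O}}_X^n$ always admits a bimodule extension, namely $${\widetilde}{E}={\widetilde}{{{\cal O}}}^n,\qquad {\widetilde}{E}{\otimes}_{{\widetilde}{{{\cal O}}}}{{\cal O}}\simeq {{\cal O}}^n=E,$$ since the left action of ${\widetilde}{{{\cal O}}}$ on ${\widetilde}{{{\cal O}}}^n$ induces on $E$ the scalar action through ${\widetilde}{{{\cal O}}}\to{{\cal O}}$, which is the standard embedding ${{\cal O}}\to{\underline}{{\operatorname{End}}}(E)$. Thus, extensions always exist locally, and the question is whether the local extensions can be glued.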
Given a vector bundle $E$ on $X$ and an NC-smooth thickening ${{\cal O}}^{NC}\to {{\cal O}}$, one can ask what the obstructions are to the existence of an extension (resp., bimodule extension) of $E$ to ${{\cal O}}^{NC}$. Our construction of ${{\cal O}}^{NC}$-modules from connections gives such extensions in the case of the NC-smooth thickening constructed from a torsion free connection on $X$ (see Corollary \[module-cor\](ii)). Also, it gives the following result for arbitrary NC-smooth thickenings.
Let ${\widetilde}{{{\cal O}}}\to{{\cal O}}$ be an NC-smooth thickening of $X$. Then for any vector bundle $E$ of rank $n$ on $X$ admitting a flat connection there exists a ${\widetilde}{{{\cal O}}}$-bimodule ${\widetilde}{E}$, locally free of rank $n$ as a left (resp., right) ${\widetilde}{{{\cal O}}}$-module such that ${\widetilde}{E}{\otimes}_{{\widetilde}{{{\cal O}}}}{{\cal O}}\simeq E$.
[[*Proof*]{}]{}. By Theorem \[twisted-dg-thm\], we can assume that ${\widetilde}{{{\cal O}}}$ arises from a twisted NC-connection $({{\cal T}},{{\cal J}},\varphi,D)$. Hence, the assertion follows from Proposition \[D-mod-prop\](ii).
Below we will give an obstruction to extending to $1$-smooth thickenings in terms of Atiyah classes. Recall (see [@Kapranov]) that $1$-smooth thickenings ${\widetilde}{{{\cal O}}}\to {{\cal O}}_X$ form a gerbe banded by $T{\otimes}{\Omega}^2$, where $T=T_X$, and that there is a standard $1$-smooth thickening (see Definition \[standard-1-thickening\]). Thus, isomorphism classes of $1$-smooth thickenings are classified by $H^1(X,T{\otimes}{\Omega}^2)$.
Recall (see [@Atiyah]) that for a vector bundle $E$ on $X$ the Atiyah class ${\alpha}_E\in H^1(X,{\Omega}^1{\otimes}\underline{{\operatorname{End}}}(E))$ is the class of the extension $$0\to E{\otimes}{\Omega}^1\to J^{\le 1}(E)\to E\to 0$$ where $J^{\le 1}(E)$ is the bundle of 1st jets in $E$. Equivalently, if $\nabla_i$ are local connections on $E$ defined over an open covering $(U_i)$ then $(\nabla_j-\nabla_i\in {\Gamma}(U_i\cap U_j,{\Omega}^1{\otimes}\underline{{\operatorname{End}}}(E)))$ is a Čech representative of ${\alpha}_E$.
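For instance, if $L$ is a line bundle with transition functions $(g_{ij})$ with respect to trivializations over $(U_i)$, then taking for $\nabla_i$ the connections corresponding to $d$ in these trivializations gives the representative $$\nabla_j-\nabla_i=-d\log(g_{ij}),$$ so that ${\alpha}_L$ coincides, up to sign, with the class of $c_1(L)$ in $H^1(X,{\Omega}^1)$.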
\[obs-prop\] Let ${\widetilde}{{{\cal O}}}\to{{\cal O}}$ be a $1$-smooth thickening of $X$ corresponding to the class $c({\widetilde}{{{\cal O}}})\in H^1(X,T{\otimes}{\Omega}^2)$, and let $E$ be a vector bundle on $X$.
\(i) $E$ extends to a locally free right ${\widetilde}{{{\cal O}}}$-module if and only if its Atiyah class ${\alpha}_E$ satisfies $$\label{alpha-c-alpha-eq}
{\alpha}_E\cup {\alpha}_E-{\langle}c({\widetilde}{{{\cal O}}}),{\alpha}_E{\rangle}=0$$ in $H^2(X,{\Omega}^2{\otimes}{\underline}{{\operatorname{End}}}(E))$.
\(ii) Let ${\widetilde}{{{\cal O}}'}\to{{\cal O}}$ be another $1$-smooth thickening of $X$. Then $E$ extends to a ${\widetilde}{{{\cal O}}'}-{\widetilde}{{{\cal O}}}$-bimodule, locally free over ${\widetilde}{{{\cal O}}}$, and such that the induced ${{\cal O}}-{{\cal O}}$-bimodule structure on $E$ is standard, if and only if $$\label{kappa-alpha-c-c-eq}
2(\kappa{\otimes}{\operatorname{id}}_{{\underline}{{\operatorname{End}}}(E)})({\alpha}_E)=\left(c({\widetilde}{{{\cal O}}})-c({\widetilde}{{{\cal O}}'})\right)\cdot {\operatorname{id}}_E$$ in $H^1(X,T{\otimes}{\Omega}^2{\otimes}{\underline}{{\operatorname{End}}}(E))$, where $$\label{kappa-map-eq}
\kappa:{\Omega}^1\to {\underline}{{\operatorname{Hom}}}({\Omega}^1,{\Omega}^2)\simeq T{\otimes}{\Omega}^2:x\mapsto (y\mapsto x{\wedge}y).$$
\(iii) Assume $\dim X>1$. Then $E$ extends to an ${\widetilde}{{{\cal O}}}$-bimodule if and only if ${\alpha}_E=0$.
[[*Proof*]{}]{}. (i) We have an exact sequence of sheaves of groups on $X$ $$0\to {\operatorname{Mat}}_n({\Omega}^2)\to {\operatorname{GL}}_n({\widetilde}{{{\cal O}}})\to {\operatorname{GL}}_n({{\cal O}})\to 1$$ where ${\operatorname{Mat}}_n({\Omega}^2)$ embeds as an abelian normal subgroup in ${\operatorname{GL}}_n({\widetilde}{{{\cal O}}})$ via $M\mapsto 1+M$. Let $(g_{ij}=\varphi_i^{-1}\varphi_j)$ be transition functions of $E$ with respect to some trivializations $\varphi_i:{{\cal O}}_{U_i}^n\to E|_{U_i}$ over open covering $(U_i)$. We can assume that $U_i$ are such that ${\widetilde}{{{\cal O}}}|_{U_i}$ is isomorphic to the standard $1$-smooth thickening of $U_i$, so over each $U_i$ we have a section ${\sigma}_i:{{\cal O}}_{U_i}\to{\widetilde}{{{\cal O}}}_{U_i}$ such that $${\sigma}_i(f){\sigma}_i(g)={\sigma}_i(fg)+df{\wedge}dg.$$ Note that $${\sigma}_j(f)-{\sigma}_i(f)={\langle}v_{ij},df{\rangle},$$ for some Cech $1$-cocycle $(v_{ij})$ with values in $T{\otimes}{\Omega}^2$, such that $(v_{ij})$ represents the class $c({\widetilde}{{{\cal O}}})$. The bundle $E$ extends to ${\widetilde}{{{\cal O}}}$ if and only if for some open covering we can lift some transition functions $(g_{ij})$ for $E$ to a Cech $1$-cocycle $({\widetilde}{g}_{ij})$ with values in ${\operatorname{GL}}_n({\widetilde}{{{\cal O}}})$. Let us write $${\widetilde}{g}_{ij}={\sigma}_i(g_{ij})+{\omega}_{ij},$$ where ${\omega}_{ij}\in{\operatorname{Mat}}_n({\Omega}^2)(U_{ij})$. Then the $1$-cocycle condition for ${\widetilde}{g}_{ij}$ is $$({\sigma}_i(g_{ij})+{\omega}_{ij})({\sigma}_j(g_{jk})+{\omega}_{jk})={\sigma}_i(g_{ik})+{\omega}_{ik}$$ over $U_{ijk}$. Since ${\sigma}_j(g_{jk})={\sigma}_i(g_{jk})+{\langle}v_{ij},dg_{jk}{\rangle}$ and $g_{ij}g_{jk}=g_{ik}$, this is equivalent to $$dg_{ij}{\wedge}dg_{jk}+g_{ij}{\langle}v_{ij},dg_{jk}{\rangle}+{\omega}_{ij}g_{jk}+g_{ij}{\omega}_{jk}={\omega}_{ik}.$$ Multiplying with $\varphi_i$ on the left and $\varphi_k^{-1}$ on the right we obtain the following equation in ${\Omega}^2{\otimes}{\underline}{{\operatorname{End}}}(E)|_{U_{ij}}$: $$\label{dg-cocycle-eq}
\varphi_i dg_{ij}{\wedge}dg_{jk} \varphi_k^{-1}+{\langle}v_{ij},\varphi_j dg_{jk}\varphi_k^{-1}{\rangle}=
\eta_{ik}-\eta_{ij}-\eta_{jk},$$ where $\eta_{ij}=\varphi_i{\omega}_{ij}\varphi_j^{-1}$. On the other hand, setting $$\nabla_i=\varphi_i\circ d\circ \varphi_i^{-1},$$ where $d$ is the natural connection on ${{\cal O}}^n$, we get connections on $E|_{U_i}$. Thus, the Atiyah class ${\alpha}_E$ is represented by the Cech $1$-cocycle $${\alpha}_{ij}=\nabla_j-\nabla_i=\varphi_i[g_{ij}\circ d\circ g_{ij}^{-1}-d]\varphi_i^{-1}=
-\varphi_i(dg_{ij})g_{ij}^{-1}\varphi_i^{-1}=-\varphi_i dg_{ij} \varphi_j^{-1}.$$ Hence, the class ${\alpha}_E\cup {\alpha}_E$ is represented by the Čech $2$-cocycle $${\alpha}_{ij}{\wedge}{\alpha}_{jk}=\varphi_i dg_{ij} \varphi_j^{-1}{\wedge}\varphi_j dg_{jk} \varphi_k^{-1}
=\varphi_i dg_{ij}{\wedge}dg_{jk} \varphi_k^{-1},$$ so the left-hand side of \[dg-cocycle-eq\] represents the class ${\alpha}_E\cup {\alpha}_E-{\langle}c({\widetilde}{{{\cal O}}}),{\alpha}_E{\rangle}$.
\(ii) We use the notation of part (i). Assume that $E$ is lifted to a right ${\widetilde}{{{\cal O}}}$-module, as in part (i). The additional structure that makes an extension of $E$ into a bimodule should be given by a collection of homomorphisms of sheaves of algebras $$A_i:{\widetilde}{{{\cal O}}'}|_{U_i}\to {\operatorname{Mat}}_n({\widetilde}{{{\cal O}}})|_{U_i}$$ such that $$\label{A-ij-bimodule-str-eq}
A_i({\widetilde}{f}){\widetilde}{g}_{ij}={\widetilde}{g}_{ij}A_j({\widetilde}{f})$$ for ${\widetilde}{f}\in {\widetilde}{{{\cal O}}'}_{U_{ij}}$. The compatibility with the central bimodule structure on $E$ in Definition \[extension-defi\] is that $A_i$ is compatible with the standard diagonal embedding ${{\cal O}}_{U_i}\to{\operatorname{Mat}}_n({{\cal O}}_{U_i}):f \mapsto f I$. This implies (by considering commutators) that the restriction of $A_i$ induces the similar diagonal embedding ${\Omega}^2\to{\operatorname{Mat}}_n({\Omega}^2)$. We can assume that the $1$-smooth thickenings ${\widetilde}{{{\cal O}}'}|_{U_i}$ are standard, so there exist sections ${\sigma}'_i:{{\cal O}}\to{\widetilde}{{{\cal O}}'}|_{U_i}$, such that $${\sigma}'_i(f){\sigma}'_i(g)={\sigma}'_i(fg)+df{\wedge}dg,$$ $${\sigma}'_j(f)-{\sigma}'_i(f)={\langle}v'_{ij},df{\rangle},$$ where $(v'_{ij})$ is the $1$-cocycle with values in $T{\otimes}{\Omega}^2$ representing $c({\widetilde}{{{\cal O}}'})$. Let us write for $f\in{{\cal O}}_{U_i}$ $$A_i({\sigma}'_i(f))={\sigma}_i(f)I+\eta_i(f),$$ where $\eta_i(f)\in{\operatorname{Mat}}_n({\Omega}^2_{U_i})$. Then the condition that $A_i$ is a homomorphism is equivalent to $\eta_i$ being a derivation with values in ${\operatorname{Mat}}_n({\Omega}^2)|_{U_i}$, so we can write $\eta_i(f)={\langle}w_i,df{\rangle}$, where $w_i\in T{\otimes}{\operatorname{Mat}}_n({\Omega}^2)|_{U_i}$. Note that $$A_j({\sigma}'_i(f))=A_j({\sigma}'_j(f)-{\langle}v'_{ij},df{\rangle})={\sigma}_j(f)I+\eta_j(f)-{\langle}v'_{ij},df{\rangle}I=
[{\sigma}_i(f)+{\langle}v_{ij}-v'_{ij},df{\rangle}]\cdot I+\eta_j(f).$$ Thus, the equation \[A-ij-bimodule-str-eq\] becomes $$({\sigma}_i(f)I+\eta_i(f))({\sigma}_i(g_{ij})+{\omega}_{ij})=({\sigma}_i(g_{ij})+{\omega}_{ij})([{\sigma}_i(f)+{\langle}v_{ij}-v'_{ij},df{\rangle}]\cdot I+\eta_j(f))$$ which is equivalent to $$2df{\wedge}dg_{ij}=g_{ij}{\langle}w_j+(v_{ij}-v'_{ij})I,df{\rangle}-{\langle}w_i,df{\rangle}g_{ij}.$$ We can rewrite this as an equality in ${\Omega}^2{\otimes}{\underline}{{\operatorname{End}}}(E)|_{U_{ij}}$: $$2df{\wedge}\varphi_i dg_{ij}\varphi_j^{-1}={\langle}u_j-u_i+(v_{ij}-v'_{ij}){\operatorname{id}}_E, df{\rangle},$$ where $u_i=\varphi_i w_i \varphi_i^{-1}$. This equality means that the $1$-cocycles representing $2(\kappa{\otimes}{\operatorname{id}})({\alpha}_E)$ and $c({\widetilde}{{{\cal O}}})-c({\widetilde}{{{\cal O}}'})$ differ by the coboundary $(u_j-u_i)$, and our assertion follows.
\(iii) For ${\widetilde}{{{\cal O}}'}={\widetilde}{{{\cal O}}}$ the equation \[kappa-alpha-c-c-eq\] becomes $$2(\kappa{\otimes}{\operatorname{id}})({\alpha}_E)=0.$$ It remains to observe that if $\dim X>1$ then $\kappa$ is an embedding of a direct summand, so $\kappa{\otimes}{\operatorname{id}}$ induces an embedding on $H^1$. Thus, the above equation is equivalent to ${\alpha}_E=0$.
It is easy to see that $c({\widetilde}{{{\cal O}}}^{op})=-c({\widetilde}{{{\cal O}}})$, so if we are considering extensions of $E$ to left ${\widetilde}{{{\cal O}}}$-modules then the equation \[alpha-c-alpha-eq\] should be replaced by $${\alpha}_E\cup {\alpha}_E+{\langle}c({\widetilde}{{{\cal O}}}),{\alpha}_E{\rangle}=0.$$ This condition differs slightly from the condition in [@Kapranov Rem. (4.6.9)]. The equivalence of the two conditions follows from the Bianchi identity for Atiyah classes proved in [@Kapranov2 Prop. (1.2.2)].
\[van-cor\] Let $X$ be a smooth variety. If a vector bundle $E$ on $X$ extends to the standard $1$-smooth thickening of $X$ then ${\operatorname{ch}}_i(E)=0$ in $H^i(X,{\Omega}^i)$ for $i>1$.
[[*Proof*]{}]{}. For the standard thickening $c({\widetilde}{{{\cal O}}})=0$, so the condition of Proposition \[obs-prop\](i) becomes ${\alpha}_E\cup {\alpha}_E=0$. Now we use the fact that the component ${\operatorname{ch}}_i(E)$ of the Chern character, up to a sign, is obtained from the Atiyah class ${\alpha}_E\in H^1(X,{\Omega}^1{\otimes}\underline{{\operatorname{End}}}(E))$ as the trace of $\frac{1}{i!}{\alpha}_E^{\cup i}\in H^i(X,{\Omega}^i{\otimes}\underline{{\operatorname{End}}}(E))$ (see [@Atiyah Sec. 5]).
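Explicitly, for $i\ge 2$ we have $${\alpha}_E^{\cup i}=({\alpha}_E\cup{\alpha}_E)\cup{\alpha}_E^{\cup(i-2)}=0,$$ so the single relation ${\alpha}_E\cup{\alpha}_E=0$ forces the simultaneous vanishing of all the components ${\operatorname{ch}}_i(E)$ with $i>1$, while ${\operatorname{ch}}_0(E)$ and ${\operatorname{ch}}_1(E)$ are not constrained.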
\[ample-bun-cor\] Let $X$ be a smooth projective variety of dimension $\ge 1$, and let $L$ be a line bundle on $X$, such that either $L$ or $L^{-1}$ is ample. Let ${\widetilde}{{{\cal O}}}\to{{\cal O}}_X$ be a $1$-smooth thickening of $X$ such that $c({\widetilde}{{{\cal O}}})=\kappa(\xi)$ for some $\xi\in H^1(X,{\Omega}^1)$, where $\kappa$ is the map \[kappa-map-eq\]. Then $L$ extends to a right ${\widetilde}{{{\cal O}}}$-module if and only if $\xi=c_1(L)$.
[[*Proof*]{}]{}. Since $${\langle}\kappa({\alpha}), c_1(L){\rangle}={\alpha}\cup c_1(L)\in H^2(X,{\Omega}^2),$$ the equation \[alpha-c-alpha-eq\] gives $$(c_1(L)-\xi)\cup c_1(L)=0.$$ By the Lefschetz theorem, this implies that $c_1(L)-\xi=0$.
Note that the assumption $c({\widetilde}{{{\cal O}}})=\kappa(\xi)$ in the above Corollary is always satisfied for surfaces, since in this case the map \[kappa-map-eq\] is an isomorphism.
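Indeed, for a surface this can be seen already on the level of sheaves: in local coordinates $x_1,x_2$, with $({\partial}_1,{\partial}_2)$ the frame of $T$ dual to $(dx_1,dx_2)$, the definition gives $$\kappa(dx_1)={\partial}_2{\otimes}(dx_1{\wedge}dx_2),\qquad \kappa(dx_2)=-{\partial}_1{\otimes}(dx_1{\wedge}dx_2),$$ so $\kappa:{\Omega}^1\to T{\otimes}{\Omega}^2$ is an isomorphism of sheaves, and hence induces an isomorphism on $H^1$.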
Kapranov constructed an NC-smooth thickening ${{\cal O}}^{NC}_{{\operatorname{left}}}$ of the projective space ${{\Bbb P}}^n$, such that the line bundle ${{\cal O}}(1)$ extends to a right ${{\cal O}}^{NC}_{{\operatorname{left}}}$-module. One can check that the map $$\kappa:H^1({{\Bbb P}}^n,{\Omega}^1)\to H^1({{\Bbb P}}^n,T{\otimes}{\Omega}^2)$$ is an isomorphism. Hence, by Corollary \[ample-bun-cor\], we obtain $$c({{\cal O}}^{NC,\le 1}_{{\operatorname{left}}})=\kappa(c_1({{\cal O}}(1))).$$
In the case of line bundles on abelian varieties and the standard NC-smooth thickening we can give a complete answer to the extension problem.
\[line-bundle-thm\] Let $L$ be a line bundle on an abelian variety $A$, where $\dim A>1$.
\(i) $L$ extends to a (right) line bundle on the standard NC-smooth thickening $A^{NC}$ if and only if $c_1(L)^2=0$ in $H^2(A,{\Omega}^2)$.
\(ii) $L$ extends to an invertible ${{\cal O}}^{NC}_A$-bimodule if and only if $c_1(L)=0$ in $H^1(A,{\Omega}^1)$.
[[*Proof*]{}]{}. (i) The “only if" part follows from Corollary \[van-cor\]. Conversely, assume $c_1(L)^2=0$. This means that the map $\eta: T_{A,0}\to H^1(A,{{\cal O}})$ given by $c_1(L)$ satisfies $\wedge^2\eta=0$, i.e., the image of $\eta$ is at most $1$-dimensional. But $\eta$ is the tangent map to the homomorphism $\phi_L:A\to\hat{A}$ associated with $L$. Hence, either $\phi_L=0$ or $\phi_L$ factors through an elliptic curve. In the former case $L$ is in ${\operatorname{Pic}}^0(A)$, so it extends to the NC-thickening. In the latter case we have a codimension $1$ abelian subvariety $B{\subset}A$ such that $L|_B$ is in ${\operatorname{Pic}}^0(B)$. Thus, for some $L_0\in{\operatorname{Pic}}^0(A)$ the restriction of $L{\otimes}L_0^{-1}$ to $B$ is trivial. This implies that $L{\otimes}L_0^{-1}=f^*M$, where $f:A\to A/B$ is the corresponding morphism to an elliptic curve and $M$ is a line bundle on $A/B$. Since any line bundle in ${\operatorname{Pic}}^0(A)$ admits a flat connection, it lifts to an invertible ${{\cal O}}^{NC}$-bimodule. Thus, it is enough to deal with pull-backs of line bundles from elliptic curves. In fact, the pull-back of any vector bundle from an elliptic curve can be extended to the thickening because in this situation the commutative sheaf of functions on the elliptic curve embeds into ${{\cal O}}^{NC}$ on the abelian variety, since all homomorphisms between abelian varieties always extend to NC-thickenings (see Proposition \[NC-homomorphism-prop\]).
\(ii) The “if" part follows from Proposition \[D-mod-prop\](ii) since any line bundle in ${\operatorname{Pic}}^0(A)$ admits a flat connection. The “only if" part follows from Proposition \[obs-prop\](ii).
\[nonstandard-NC-cor\] Let $f:A\to E$ be a surjective morphism from an abelian variety to an elliptic curve, and let $L_E$ be a line bundle on $E$ of nonzero degree. Then $f^*L_E$ extends to a right line bundle ${{\cal L}}$ on the standard NC-smooth thickening of $A$ and ${\underline}{{\operatorname{End}}}({{\cal L}})$ is an NC-smooth thickening of $A$, which is not isomorphic to the standard one.
[[*Proof*]{}]{}. The algebra ${\underline}{{\operatorname{End}}}({{\cal L}})$ is locally isomorphic to ${{\cal O}}^{NC}_A$ (the standard NC-smooth thickening of ${{\cal O}}_A$), so it is an NC-smooth thickening of ${{\cal O}}_A$. If it were isomorphic to ${{\cal O}}^{NC}_A$, we would get a ${{\cal O}}^{NC}_A$-bimodule structure on ${{\cal L}}$. But by Theorem \[line-bundle-thm\](ii), this is impossible since $c_1(f^*L_E)\neq 0$.
Let ${{\cal O}}^{NC}\to {{\cal O}}$ be an NC-smooth thickening constructed using a torsion-free connection on $X$ (see Sec. \[NC-constr-sec\]). By Proposition \[obs-prop\](ii), if a vector bundle $E$ on $X$ admits a bimodule extension to ${{\cal O}}^{NC}$ then its Atiyah class ${\alpha}_E$ vanishes. In the other direction, we know by Corollary \[module-cor\](ii), that if ${\alpha}_E=0$ i.e., $E$ has a connection, then $E$ admits an extension to ${{\cal O}}^{NC}$, but not necessarily as a bimodule. Also, we know that bundles with flat connection admit bimodule extensions (see Proposition \[obs-prop\](ii)). It seems plausible that extendability of $E$ as a bimodule to the $2$-smooth extension ${{\cal O}}^{NC}/F^3{{\cal O}}^{NC}$ should imply the existence of a flat connection on $E$.
NC-Jacobian and the NC-Fourier-Mukai transform {#Jac-FM-sec}
==============================================
NC-thickening of a versal family of line bundles {#moduli-sec}
------------------------------------------------
Let $Z$ be a complete variety, $L$ a line bundle on $Z\times B$, where $B$ is a smooth algebraic variety. Following [@Kapranov Sec. 5.4] let us make the following assumptions:
\(a) the Kodaira-Spencer map $\kappa:{{\cal T}}_B\to H^1(Z,{{\cal O}}){\otimes}{{\cal O}}_B$ is an isomorphism,
\(b) $H^2(Z,{{\cal O}})=0$.
In addition let us fix a point $p_0\in Z$ and make one more assumption:
\(c) the line bundle $L|_{p_0\times B}$ is trivial.
Let us fix a trivialization $s_0:{{\cal O}}_B\to L|_{p_0\times B}$ and consider the functor $h^{rig}=h^{rig}_B$ on the category of NC-nilpotent algebras ${{\cal N}}$, where $h^{rig}({\Lambda})$ is the set of isomorphism classes of the following data:
\(1) a line bundle ${{\cal L}}$ on $Z\times{\operatorname{Spec}}({\Lambda})$ together with a trivialization $s:{{\cal O}}\to {{\cal L}}|_{p_0\times{\operatorname{Spec}}({\Lambda})}$,
\(2) a morphism $f:{\operatorname{Spec}}({\Lambda}^0_{ab})\to B$, where ${\Lambda}^0_{ab}$ is the quotient of ${\Lambda}_{ab}$ by the ideal of nilpotent elements,
\(3) an isomorphism $\varphi:{{\cal O}}_{Z\times{\operatorname{Spec}}({\Lambda}^0_{ab})}\otimes{{\cal L}}\stackrel{\sim}{\to} ({\operatorname{id}}\times f)^*L$, compatible with the trivializations over $p_0$.
Here by a line bundle on an NC-scheme we mean a sheaf of left ${{\cal O}}$-modules, locally isomorphic to ${{\cal O}}$.
The following Lemma implies that an isomorphism between two data $({{\cal L}},s)$ as in (1) is unique if it exists (and similarly for the isomorphism in (3)).
\[rig-lem\] Let ${{\cal L}}$ be a line bundle on $Z\times{\operatorname{Spec}}({\Lambda})$. Then any endomorphism of ${{\cal L}}$ on $Z\times{\operatorname{Spec}}({\Lambda})$ is uniquely determined by its restriction to $p_0\times{\operatorname{Spec}}({\Lambda})$.
[[*Proof*]{}]{}. Let $0=J_n{\subset}J_{n-1}{\subset}\ldots{\subset}J_1{\subset}J_0={\Lambda}$ be a filtration by ideals such that ${\Lambda}/J_1={\Lambda}^0_{ab}$ and $J_i/J_{i+1}$ are central bimodules over ${\Lambda}/J_1$. Consider the induced filtration of ${{\cal L}}$ by $J_i{{\cal L}}$ and set ${{\cal L}}^0={{\cal L}}/J_1{{\cal L}}$. Note that the natural map $$J_i/J_{i+1}{\otimes}_{{\Lambda}^0_{ab}} {{\cal L}}^0\to J_i{{\cal L}}/J_{i+1}{{\cal L}}$$ is an isomorphism (by local triviality of ${{\cal L}}$). Now suppose we have an endomorphism $f$ of ${{\cal L}}$ on $Z\times{\operatorname{Spec}}({\Lambda})$ which is zero over $p_0$. Since $f$ is compatible with the filtration $(J_i{{\cal L}})$, it induces an endomorphism of ${{\cal L}}^0$. But ${\operatorname{End}}({{\cal L}}^0)={\Lambda}^0_{ab}$ and the condition that $f$ vanishes over $p_0$ implies that $f$ induces the zero endomorphism of ${{\cal L}}^0$. Hence, $f$ factors through $J_1{{\cal L}}{\subset}{{\cal L}}$. Next, assume $f$ factors through $J_i{{\cal L}}$ for some $i\ge 1$ and consider the induced map $${{\cal L}}^0={{\cal L}}/J_1{{\cal L}}\to J_i{{\cal L}}/J_{i+1}{{\cal L}}\simeq (J_i/J_{i+1}){\otimes}_{{\Lambda}^0_{ab}} {{\cal L}}^0.$$ As we have seen above such maps correspond to elements of $J_i/J_{i+1}$, so again the condition that the restriction of $f$ to $p_0$ vanishes implies that $f$ factors through $J_{i+1}{{\cal L}}$.
Let $h(\Lambda)$ be the set of isomorphism classes of the same data as for $h^{rig}$, except for a trivialization over $p_0$ (this is the functor considered in [@Kapranov]). We have a natural map $$\label{forget-rig-map}
h^{rig}(\Lambda)\to h(\Lambda).$$
\[h-rig-h-lem\] The map \[forget-rig-map\] is surjective. It is an isomorphism for commutative $\Lambda$.
[[*Proof*]{}]{}. First, let us check that any vector bundle over ${\operatorname{Spec}}({\Lambda})$, which becomes trivial on ${\operatorname{Spec}}({\Lambda}^0_{ab})$, is itself trivial. Since ${\Lambda}$ is obtained from ${\Lambda}^0_{ab}$ by successive central extensions, it suffices to check that if a finitely generated projective module $P$ over ${\Lambda}'$, where ${\Lambda}'$ is a central extension of ${\Lambda}$ by $I$, is such that $P/IP$ is a free ${\Lambda}$-module then $P$ itself is free. Lifting generators of $P/IP$ to $P$ we get a morphism $({\Lambda}')^n\to P$ Using the condition $I^2=0$, we derive that it is an isomorphism. Thus, if we have a line bundle ${{\cal L}}$ over $Z\times{\operatorname{Spec}}({\Lambda})$ together with an isomorphism $\varphi:{{\cal L}}|_{Z\times{\operatorname{Spec}}({\Lambda}^0_{ab})}\to ({\operatorname{id}}\times f)^*L$ then the restriction ${{\cal L}}|_{p_0\times{\operatorname{Spec}}({\Lambda})}$ is a trivial bundle over ${\operatorname{Spec}}({\Lambda})$. Let us choose its trivialization. It induces a trivialization of ${{\cal L}}_{p_0\times{\operatorname{Spec}}({\Lambda}^0_{ab})}$, which differs from the one coming from that of $L|_{p_0\times B}$ by some invertible element of ${\Lambda}^0_{ab}$. Since such an element can be lifted to an invertible element of ${\Lambda}$, we can choose a trivialization of ${{\cal L}}|_{p_0\times{\operatorname{Spec}}({\Lambda})}$ inducing a given trivialization on $p_0\times{\operatorname{Spec}}({\Lambda}^0_{ab})$.
Now suppose that $\Lambda$ is commutative. Any two trivializations of ${{\cal L}}|_{p_0\times {\operatorname{Spec}}({\Lambda})}$ differ by an invertible element of ${\Lambda}$. Since such an element gives an automorphism of ${{\cal L}}$, we see that different choices of trivialization lead to isomorphic data.
\[h-rig-repr-thm\] The functor $h^{rig}$ is represented by an NC-smooth thickening $B^{NC}$ of $B$. Furthermore, there is a universal line bundle $L^{NC}$ on $Z\times B^{NC}$, trivialized over $p_0\times B^{NC}$ and extending the line bundle $L$ on $Z\times B$.
[[*Proof*]{}]{}. First, let us prove formal smoothness of the functors $h$ and $h^{rig}$. Let $$0\to I\to{\Lambda}'\to {\Lambda}\to 0$$ be a central extension of NC-nilpotent algebras. Then we have an exact sequence $$1\to {{\cal I}}\to {{\cal O}}_{Z\times{\operatorname{Spec}}({\Lambda}')}^*\to {{\cal O}}_{Z\times{\operatorname{Spec}}({\Lambda})}^*\to 1$$ of sheaves of groups on $Z\times{\operatorname{Spec}}({\Lambda}')$, where we view $Z\times{\operatorname{Spec}}({\Lambda})$ as a closed NC-subscheme of $Z\times{\operatorname{Spec}}({\Lambda}')$ with the corresponding sheaf of ideals ${{\cal I}}\simeq{\widetilde}{I}\boxtimes{{\cal O}}_Z$, where ${\widetilde}{I}$ is the quasicoherent sheaf on ${\operatorname{Spec}}({\Lambda}_{ab})$ associated with the ${\Lambda}_{ab}$-module $I$. We have $$H^2(Z\times{\operatorname{Spec}}({\Lambda}_{ab}),{{\cal I}})\simeq I{\otimes}H^2(Z,{{\cal O}})=0$$ since $H^2(Z,{{\cal O}})=0$ by assumption. This implies surjectivity of the map of nonabelian cohomology $$H^1(Z,({\Lambda}'{\otimes}{{\cal O}}_Z)^*)\to H^1(Z,({\Lambda}{\otimes}{{\cal O}}_Z)^*),$$ hence the map $h({\Lambda}')\to h({\Lambda})$ is surjective, i.e., $h$ is formally smooth. An element $x\in h^{rig}({\Lambda})$ induces an element $y\in h({\Lambda})$ which can be lifted to $y'\in h({\Lambda}')$ and then to $x'\in h^{rig}({\Lambda}')$. However, the element $x'$ does not necessarily lift $x$: the induced rigidification over ${\Lambda}$ may differ from the original one by some invertible element of ${\Lambda}$. Lifting it to an invertible element of ${\Lambda}'$ and changing $x'$ accordingly, we get a lifting of $x$ to $h^{rig}({\Lambda}')$.
The fact that on commutative algebras $h$ (and hence, by Lemma \[h-rig-h-lem\], $h^{rig}$) is isomorphic to $h_B$ was proved in [@Kapranov Prop. 5.4.3(a)].
Finally, representability for $h^{rig}$ is proved along the lines of [@Kapranov Sec. 5.4] (with a slight modification, see Remark \[repr-rem\] below). By [@Kapranov Thm. 2.3.5], we have to check that for any Cartesian diagram of algebras in ${{\cal N}}$
$$\begin{diagram}
{\Lambda}_{12} &\rTo& {\Lambda}_1\\
\dTo& &\dTo{p_1}\\
{\Lambda}_2 &\rTo& {\Lambda}
\end{diagram}$$
such that $p_1$ is a central extension, the natural map $$\label{NC-descent-map}
h^{rig}({\Lambda}_{12})\to h^{rig}({\Lambda}_1)\times_{h^{rig}({\Lambda})} h^{rig}({\Lambda}_2)$$ is an isomorphism. To this end we are going to construct the inverse map. Namely, assume we are given for $i=1,2$ the data $({{\cal L}}_i, f_i,\varphi_i)\in h^{rig}({\Lambda}_i)$ that induce the same element of $h^{rig}({\Lambda})$. Let us view ${{\cal L}}_i$, for $i=1,2$, as locally free modules of rank $1$ over ${{\cal O}}_Z\otimes_k{\Lambda}_i$. Then by Lemma \[rig-lem\], we have a unique isomorphism $${{\cal L}}_1{\otimes}_{{\Lambda}_1}{\Lambda}\simeq {{\cal L}}_2{\otimes}_{{\Lambda}_2}{\Lambda}$$ of ${{\cal O}}_Z\otimes_k{\Lambda}$-modules, compatible with trivializations over $p_0$. Thus, setting ${{\cal L}}={{\cal L}}_i{\otimes}_{{\Lambda}_i}{\Lambda}$, we can define a locally free ${{\cal O}}_Z\otimes{\Lambda}_{12}$-module of rank $1$ $${{\cal L}}_{12}:={{\cal L}}_1\times_{{{\cal L}}} {{\cal L}}_2.$$
\[repr-rem\] In [@Kapranov Sec. 5.4] it is claimed that the functor $h$ is representable. However, the proof of [@Kapranov Prop. 5.4.3] has a gap: a choice in the gluing procedure makes the proposed inverse map ill-defined. This problem does not appear for $h^{rig}$ (due to Lemma \[rig-lem\]). Note that since $h$ is formally smooth and $h^{rig}$ and $h$ restrict to the same functor on commutative schemes, it follows that $h$ is representable if and only if the map $h^{rig}\to h$ is an isomorphism. This is equivalent to the statement that for an NC-nilpotent algebra ${\Lambda}$, and a line bundle ${{\cal L}}$ over $Z\times{\operatorname{Spec}}({\Lambda})$, such that the restriction ${{\cal L}}|_{Z\times{\operatorname{Spec}}({\Lambda}^0_{ab})}$ is induced via some map $f:{\operatorname{Spec}}({\Lambda}^0_{ab})\to B$ by our family of line bundles over $B$, the map $${\operatorname{Aut}}({{\cal L}})\to {\operatorname{Aut}}({{\cal L}}|_{p_0\times{\operatorname{Spec}}({\Lambda})})$$ is an isomorphism (note that by Lemma \[rig-lem\], this map is injective). In the case of commutative algebras this statement follows immediately from the identification of endomorphisms of a line bundle over $Z\times{\operatorname{Spec}}({\Lambda})$ with ${\Lambda}$.
NC-thickening of the Jacobian
-----------------------------
Let $C$ be a smooth projective curve, $p_0\in C$ a point. Let ${{\cal P}}_C$ be the Poincaré line bundle on $C\times J$, where $J=J(C)$ is the Jacobian of $C$, trivialized over $p_0\times J$. This family of line bundles over $J$ satisfies the assumptions (a),(b),(c) from Section \[moduli-sec\], so we have the corresponding functor $h^{rig}_J$ of NC-families on the category ${{\cal N}}$ of NC-nilpotent algebras. On the other hand, we define the functor $H^{rig}$ on NC-complete schemes by letting $H^{rig}(S)$ be the set of isomorphism classes of pairs $({{\cal L}},s)$, where ${{\cal L}}$ is a line bundle on $C\times S$ together with a trivialization $s:{{\cal O}}\to {{\cal L}}|_{p_0\times S}$.
\[h-rig-H-lem\] The natural map of functors $h^{rig}_J\to H^{rig}|_{{{\cal N}}}$ is an isomorphism.
[[*Proof*]{}]{}. This follows immediately from the universality of the Poincaré line bundle: for any NC-nilpotent algebra ${\Lambda}$ and $({{\cal L}},s)\in H^{rig}({\operatorname{Spec}}({\Lambda}))$ the line bundle ${{\cal O}}_{C\times{\operatorname{Spec}}({\Lambda}_{ab})}{\otimes}{{\cal L}}$ on $C\times{\operatorname{Spec}}({\Lambda}_{ab})$ corresponds to a morphism $f:{\operatorname{Spec}}({\Lambda}_{ab})\to J$, which allows us to construct an element of $h^{rig}_J({\operatorname{Spec}}({\Lambda}))$.
\[Jac-mod-cor\] The functor $H^{rig}$ is represented by an NC-smooth thickening $J^{NC,mod}$ of $J$. We denote by ${{\cal P}}^{NC,mod}$ the corresponding universal family, which is a line bundle on $C\times J^{NC,mod}$ trivialized over $p_0\times J^{NC,mod}$.
[[*Proof*]{}]{}. First, by Theorem \[h-rig-repr-thm\] and by Lemma \[h-rig-H-lem\] we know the representability of $H^{rig}|_{{\cal N}}$, so $$H^{rig}|_{{{\cal N}}}\simeq h_{J^{NC,mod}}|_{{\cal N}}$$ for some NC-smooth NC-thickening $J^{NC,mod}$ of $J$. We claim that this isomorphism of functors extends uniqely to an isomorphism of functors $H^{rig}$ and $h_{J^{NC,mod}}$ on the category of all NC-schemes. Indeed, this easily reduces to the case of affine NC-schemes. By Lemma \[NC-mor-lem\], it is enough to check that the natural map $$H^{rig}(R)\to {\varprojlim}H^{rig} (R/F^nR)$$ is an isomorphism for any NC-complete algebra $R$. For this it suffices to prove that for any NC-scheme $X$ the category of line bundles on $X$ is equivalent to the category of collections $(L_n,f_n)$, where $L_n$ is a line bundle on $X^{\le n}$ and $f_n$ is an isomorphism $L_{n+1}|_{X^{\le n}}\simeq L_n$. Since ${{\cal O}}_X^*={\varprojlim}{{\cal O}}_{X^{\le n}}^*$, the natural restriction functor is fully faithful, so it suffices to check that any collection $(L_n,f_n)$ is locally trivial. Indeed, assume that $X$ is affine and $L_0\simeq{{\cal O}}_X$. Note that for each $n$, we have an exact sequence $$0\to (F^{n+1}{{\cal O}}_X)L_{n+1}\to L_{n+1}\to L_n\to 0$$ of sheaves on $X$, where $$(F^{n+1}{{\cal O}}_X)L_{n+1}\simeq (F^{n+1}{{\cal O}}_X/F^{n+2}{{\cal O}}_X){\otimes}_{{{\cal O}}_{X^{\le 0}}}L_0$$ is a quasicoherent sheaf of ${{\cal O}}_{X^{\le 0}}$-modules. Hence, the map $H^0(X,L_{n+1})\to H^0(X,L_n)$ is surjective. Using this we can lift a trivialization ${{\cal O}}_{X^{\le 0}}\rTo{\sim} L_0$ to a compatible system of maps ${{\cal O}}_{X^{\le n}}\to L_n$, which will automatically be isomorphisms.
We call the NC-smooth thickening $J^{NC,mod}$ of the Jacobian $J$, constructed above, the [*modular NC-thickening of $J$*]{}, and call the universal line bundle ${{\cal P}}^{NC,mod}$ on $C\times J^{NC,mod}$ the [*NC-Poincaré bundle*]{}.
Below we will show that the modular NC-thickening of $J$ is isomorphic to the standard NC-thickening of $J$ as an abelian variety (see Corollary \[J-identification-cor\]).
NC-Poincaré bundle and NC-Fourier-Mukai transform {#NC-Poincare-sec}
-------------------------------------------------
Recall that for an abelian variety $A$ there is a universal extension $A^\natural\to \hat{A}$ of $\hat{A}$ by a vector space, namely, $A^\natural$ is the moduli space of line bundles with flat connection on $A$. Then the line bundle ${{\cal P}}^\natural$ on $A^\natural\times A$ obtained as the pull-back of ${{\cal P}}$, has a relative flat connection $\nabla^\natural$ (in the direction of $A$, i.e., relative over $A^\natural$). This is used in [@Laumon] to construct the Fourier-Mukai transform (in fact, an equivalence) $$F^D: D({\operatorname{Qcoh}}(A^\natural))\to D(D_A-{\operatorname{mod}}_{qc}):E\mapsto Rp_{2*}({{\cal P}}^\natural{\otimes}p_1^*E).$$ Applying the above functor to ${{\cal P}}^\natural$ we obtain a right ${{\cal O}}^{NC}_{A^\natural\times A/A^\natural}$-module ${{\bf K}}_{A^\natural, A}({{\cal P}}^\natural,\nabla^\natural)$, locally free of rank $1$. Hence, we can define the corresponding Fourier-Mukai type functor $$\label{FNC-natural-eq}
F^{NC,\natural}:D({\operatorname{Qcoh}}(A^\natural))\to D({\operatorname{mod}}-{{\cal O}}^{NC}_A):
E\mapsto Rp_{2*}({{\bf K}}_{A^\natural,A}({{\cal P}}^\natural,\nabla^\natural){\otimes}_{p_1^{-1}{{\cal O}}_{A^\natural}} p_1^{-1}E).$$
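For example, if $x\in A^\natural$ is a point corresponding to a pair $(L,\nabla)$ of a line bundle with a flat connection on $A$, then combining the definition with Lemma \[rel-conn-cd-lem\](ii) one finds $$F^{NC,\natural}({{\cal O}}_x)\simeq {{\bf K}}_{pt,A}(L,\nabla),$$ which is a line bundle on $A^{NC}$ extending $L$ (see Propositions \[D-module-prop\](ii) and \[D-mod-prop\](ii)).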
\[F-NC-natural-prop\] One has a commutative triangle of functors
$$\begin{diagram}
D({\operatorname{Qcoh}}(A^\natural))&\rTo{F^D}& D(D_A-{\operatorname{mod}}_{qc})\\
&\rdTo{F^{NC,\natural}}&\dTo_{{{\bf K}}_{pt,A}}\\
&&D({\operatorname{mod}}-{{\cal O}}^{NC}_A)
\end{diagram}$$
[[*Proof*]{}]{}. For $E\in D({\operatorname{Qcoh}}(A^\natural))$ we consider ${{\cal P}}^\natural{\otimes}_{p_1^{-1}{{\cal O}}_{A^\natural}} p_1^{-1}E$ equipped with the relative connection over $A^\natural$. Then we can proceed in two ways: either apply the functor ${{\bf K}}_{A^\natural,A}$ and then take push-forward to $A$, or first take push-forward and then apply the functor ${{\bf K}}_{pt,A}$. The first procedure gives $F^{NC,\natural}(E)$, while the second gives ${{\bf K}}_{pt,A}(F^D(E))$. It remains to use Lemma \[rel-conn-cd-lem\](i) for the projection $A^\natural\to pt$.
If $U{\subset}\hat{A}$ is an open affine then there exists a section $U\to A^\natural$, hence a flat connection on ${{\cal P}}|_{U\times A}$, relative to $U$. Applying the functor ${{\bf K}}_{U,A}$ we get a line bundle on $U\times A^{NC}$. Thus, for an affine open covering $(U_i)$ of $\hat{A}$ we obtain line bundles ${\widetilde}{{{\cal P}}}_{U_i}$ on $U_i\times A^{NC}$ extending ${{\cal P}}|_{U_i\times A}$. On double intersections we will have isomorphisms of line bundles (by Proposition \[D-module-prop\](i)) but they will not be compatible on triple intersections. Thus, we obtain a twisted line bundle on the NC-scheme $\hat{A}\times A^{NC}$ (in particular, we get a canonical gerbe on this NC-scheme).
[**Construction**]{}. Assume that we have a map $f:C\to\hat{A}$ where $C$ is a curve. Pulling back the Poincaré line bundle from $\hat{A}\times A$ we obtain a line bundle ${{\cal P}}(f)=(f\times{\operatorname{id}})^*{{\cal P}}$ on $C\times A$. We can cover $C$ by two open affine subsets $U_1$ and $U_2$, and choose liftings ${\sigma}_1:U_1\to A^\natural, {\sigma}_2:U_2\to A^\natural$. Hence, we get the induced relative connections $\nabla_1, \nabla_2$ on ${{\cal P}}(f)|_{U_1\times A}$ and ${{\cal P}}(f)|_{U_2\times A}$, respectively. By Proposition \[D-module-prop\](i), the corresponding line bundles over the relative NC-thickenings $U_1\times A^{NC}$ and $U_2\times A^{NC}$ are isomorphic over the intersection $U_1\cap U_2$. Hence, choosing such an isomorphism $\varphi$ and gluing them we obtain a line bundle ${{\cal P}}^{NC}_{{\sigma}_1,{\sigma}_2,\varphi}$ on the relative NC-thickening $C\times A^{NC}$, which extends the line bundle ${{\cal P}}(f)$.
Let us assume in addition that $f(p_0)=0$ for some fixed point $p_0\in C$. We can choose $U_1{\subset}C$ to be an open neighborhood of $p_0$ and require that a lifting ${\sigma}_1:U_1\to A^\natural$ satisfies ${\sigma}_1(p_0)=0$, where we use the vector space structure on the fiber of $A^\natural\to \hat{A}$ over zero. Let us also set $U_2=C\setminus p_0$ and choose any lifting ${\sigma}_2:U_2\to A^\natural$. Then the above construction gives an extension ${{\cal P}}(f)^{NC}_{{\sigma}_1,{\sigma}_2,\varphi}$ of ${{\cal P}}(f)$ to the relative NC-thickening $C\times A^{NC}$, equipped with a trivialization over $p_0\times A^{NC}$.
\[J-morphism-prop\] There exists a morphism of NC-schemes $\phi:A^{NC}\to J^{NC,mod}$, where $J^{NC,mod}$ is the modular NC-thickening of the Jacobian $J=J(C)$, and an isomorphism $${{\cal P}}(f)^{NC}_{{\sigma}_1,{\sigma}_2,\varphi}\simeq ({\operatorname{id}}_C\times\phi)^*{{\cal P}}^{NC,mod}$$ of line bundles on $C\times A^{NC}$, compatible with trivializations over $p_0\times A^{NC}$ and with the morphism of abelian varieties $A\to J$ associated with $f$.
[[*Proof*]{}]{}. This follows immediately from Corollary \[Jac-mod-cor\].
\[J-identification-cor\] Let $f: C\to J$ be the standard embedding of a curve into its Jacobian, such that $f(p_0)=0$. Applying the above construction to some liftings $U_i\to J^\natural$, where $C=U_1\cup U_2$ is an open covering, we get an extension ${\widetilde}{{{\cal P}}^{NC}}$ of the Poincaré line bundle ${{\cal P}}$ on $C\times J$ to $C\times J^{NC}$ (trivialized over $p_0$), where $J^{NC}$ is the standard NC-smooth thickening of $J$ as an abelian variety. There exists an isomorphism $$\phi:J^{NC}\rTo{\sim} J^{NC,mod}$$ of NC-thickenings of $J$ and an isomorphism $${\widetilde}{{{\cal P}}}^{NC}\simeq ({\operatorname{id}}_C\times\phi)^*{{\cal P}}^{NC,{\operatorname{mod}}}$$ of line bundles on $C\times J^{NC}$, compatible with trivializations over $p_0\times J^{NC}$ and inducing the identity isomorphism of ${{\cal P}}$ over $C\times J$.
[[*Proof*]{}]{}. This follows from Proposition \[J-morphism-prop\] together with the fact that any morphism of NC-smooth thickenings of a given variety is an isomorphism (see [@Kapranov Prop. 4.3.1(a)]).
\[twisted-ab-var-conn\] Suppose a curve $C$ admits a nonconstant map to an elliptic curve $E$. Then we have the induced map $f:J(C)\to E$ and hence, a non-standard NC-thickening ${\widetilde}{J^{NC}}$ of $J=J(C)$, given by ${\underline}{{\operatorname{End}}}({{\cal L}})$, where ${{\cal L}}$ is a line bundle on the standard NC-smooth thickening $J^{NC}$, extending $f^*{{\cal O}}_E(p)$ for some point $p\in E$ (see Corollary \[nonstandard-NC-cor\]). If the Poincaré line bundle ${{\cal P}}$ on $C\times J$ extended to a line bundle on $C\times {\widetilde}{J^{NC}}$, trivialized over $p_0$, then arguing as in Corollary \[J-identification-cor\], we would get an isomorphism of ${\widetilde}{J^{NC}}$ with the standard NC-smooth thickening. Since we know that ${\widetilde}{J^{NC}}$ is non-standard, it follows that ${{\cal P}}$ cannot be extended to a line bundle on $C\times {\widetilde}{J^{NC}}$, trivialized over $p_0$. Let $C=U_1\cup U_2$ be an affine open covering of $C$, equipped with liftings ${\sigma}_i:U_i\to J^\natural$, where $p_0\not\in U_2$, $p_0\in U_1$ and ${\sigma}_1(p_0)=0$. Then as in the above Construction we get relative flat connections $\nabla_i$ on the restrictions ${{\cal P}}_C|_{U_i\times J}$ for $i=1,2$, and then line bundles ${{\bf K}}_{U_i,J}({{\cal P}}_C|_{U_i\times J},\nabla_i)$ on $U_i\times {\widetilde}{J^{NC}}$, where ${{\bf K}}_{U_1,J}({{\cal P}}_C|_{U_1\times J},\nabla_1)$ is trivialized over $p_0$. Since these do not glue into a global line bundle on $C\times{\widetilde}{J^{NC}}$, this implies that ${{\bf K}}_{U_{12},J}({{\cal P}}_C|_{U_{12}\times J},\nabla_1)$ and ${{\bf K}}_{U_{12},J}({{\cal P}}_C|_{U_{12}\times J},\nabla_2)$ are not isomorphic. Thus, the functor ${{\bf K}}_{U_{12},J}$ does depend on a choice of a connection for our non-standard thickening ${\widetilde}{J^{NC}}$ (for the standard thickening it does not, by Proposition \[D-module-prop\](i)).
We now turn to the study of the NC-Fourier-Mukai transform $$F^{NC}: D({\operatorname{Qcoh}}(C))\to D({\operatorname{mod}}-{{\cal O}}^{NC}_A):E\mapsto Rp_{2*}({{\cal P}}(f)^{NC}{\otimes}_{p_1^{-1}{{\cal O}}_C} p_1^{-1}E)$$ associated with a morphism $f:C\to \hat{A}$, where ${{\cal P}}(f)^{NC}={{\cal P}}(f)^{NC}_{{\sigma}_1,{\sigma}_2,\varphi}$ is the NC-extension of the line bundle ${{\cal P}}(f)$ on $C\times A$ constructed above. For a smooth variety $X$ we set $D^b(X)=D^b({\operatorname{Coh}}(X))$, and we let $F:D^b(C)\to D^b(A)$ denote the usual Fourier-Mukai functor associated with ${{\cal P}}(f)$. A priori the image of $F^{NC}$ is just in the derived category of ${{\cal O}}_{A^{NC}}$-modules. Let us define the subcategory of [*globally perfect*]{} objects $${\operatorname{Per}}^{gl}(A^{NC}){\subset}D({\operatorname{mod}}-{{\cal O}}^{NC}_A)$$ as the full subcategory consisting of objects represented globally by a finite complex of vector bundles on $A^{NC}$.
\[curve-Fourier-thm\] For every $E\in D^b(C)$ one has $F^{NC}(E)\in{\operatorname{Per}}^{gl}(A^{NC})$. Furthermore, if $E$ is a coherent sheaf on $C$ such that $F(E)$ is a vector bundle on $A$, where $F:D^b(C)\to D^b(A)$ is the usual Fourier-Mukai transform associated with ${{\cal P}}(f)$, then $F^{NC}(E)$ is a vector bundle on $A^{NC}$. We have a commutative diagram of functors $$\label{F-res-diagram}
\begin{diagram}
D^b(C)&\rTo{F^{NC}}& {\operatorname{Per}}^{gl}(A^{NC})\\
&\rdTo{F}&\dTo_{}\\
&&D^b(A)
\end{diagram}$$ where the vertical arrow is the natural restriction functor.
[[*Proof*]{}]{}. First, to prove that $F^{NC}(E)$ is globally perfect it is enough to check this for a set of objects $E$ generating $D^b(C)$ as a triangulated category. Since $F^{NC}({{\cal O}}_p)$, where $p\in C$, is a line bundle on $A^{NC}$, it is enough to consider the case when $E$ is a coherent sheaf on $C$ such that $F(E)$ is a vector bundle on $A$. Let $j_i:U_i{\hookrightarrow}C$ and $j_{12}:U_{12}{\hookrightarrow}C$ be the natural open embeddings. Recall that the restriction ${{\cal P}}(f)^{NC}|_{U_i\times A}$ is the ${{\cal O}}^{NC}_{U_i\times A/U_i}$-module associated with the relative flat connection on ${{\cal P}}(f)|_{U_i\times A}$ coming from the isomorphism $${{\cal P}}(f)|_{U_i\times A}\simeq ({\sigma}_i\times{\operatorname{id}}_A)^*{{\cal P}}^\natural$$ and the natural relative flat connection $\nabla^\natural$ on ${{\cal P}}^\natural$. Hence, by Lemma \[rel-conn-cd-lem\](ii), we get an isomorphism $${{\cal P}}(f)^{NC}|_{U_i\times A}\simeq ({\sigma}_i\times{\operatorname{id}}_A)^*{{\bf K}}_{A^\natural,A}({{\cal P}}^\natural,\nabla^\natural).$$ Thus, for a quasicoherent sheaf ${{\cal F}}$ on $U_i$ we have an isomorphism $$F^{NC}(j_{i*}({{\cal F}}))\simeq Rp_{2*}({{\cal P}}(f)^{NC}|_{U_i\times A}{\otimes}_{p_1^{-1}{{\cal O}}_{U_i}} p_1^{-1}{{\cal F}})\simeq
F^{NC,\natural}({\sigma}_{i*}{{\cal F}}),$$ where $F^{NC,\natural}$ is the functor \[FNC-natural-eq\]. Let us write for brevity ${{\bf K}}={{\bf K}}_{pt,A}$. Combining this isomorphism with Proposition \[F-NC-natural-prop\] we get $$\label{FNC-K-isom1}
F^{NC}(j_{i*}E|_{U_i})\simeq F^{NC,\natural}({\sigma}_{i*}E|_{U_i})\simeq
{{\bf K}}(F^D({\sigma}_{i*}E|_{U_i}))$$ for $i=1,2$, where $F^D({\sigma}_{i*}E|_{U_i})\simeq F(j_{i*}E|_{U_i})$ as an ${{\cal O}}_A$-module, but it is equipped with a connection coming from ${\sigma}_i$. Similarly, we have two isomorphisms $$\label{FNC-K-isom2}
F^{NC}(j_{12*}E|_{U_{12}})\simeq {{\bf K}}(F^D({\overline}{{\sigma}}_{i*}E|_{U_{12}})),$$ where for $i=1,2$, ${\overline}{{\sigma}}_i={\sigma}_i|_{U_{12}}:U_{12}\to A^\natural$. The isomorphisms \[FNC-K-isom1\], \[FNC-K-isom2\] fit into the commutative diagrams (for $i=1,2$)
$$\begin{diagram}
F^{NC}(j_{i*}(E|_{U_i}))&\rTo{\sim}& {{\bf K}}(F^D({\sigma}_{i*}E|_{U_i}))\\
\dTo& &\dTo\\
F^{NC}(j_{12*}E|_{U_{12}})&\rTo{\sim}& {{\bf K}}(F^D({\overline}{{\sigma}}_{i*}E|_{U_{12}}))
\end{diagram}$$
where the vertical arrows are induced by restrictions from $U_i$ to $U_{12}$. Note that we have isomorphisms of ${{\cal O}}_A$-modules $$F^D({\overline}{{\sigma}}_{1*}E|_{U_{12}})\simeq F^D({\overline}{{\sigma}}_{2*}E|_{U_{12}})\simeq F(j_{12*}E|_{U_{12}}),$$ however, the two $D$-module structures are different: one connection, $\nabla_1$, is induced by ${\sigma}_1$ and another, $\nabla_2$, by ${\sigma}_2$. Applying $F^{NC}$ to the Čech exact sequence $$0\to E\to j_{1*}E|_{U_1}\oplus j_{2*}E|_{U_2}\to j_{12*}E|_{U_{12}}\to 0$$ we get an exact triangle $$F^{NC}(E)\to {{\bf K}}({{\cal V}}_0)\rTo{{\alpha}} {{\bf K}}({{\cal V}}_1)\to F^{NC}(E)[1],$$ where ${{\cal V}}_0=F(j_{1*}E|_{U_1})\oplus F(j_{2*}E|_{U_2})$ and ${{\cal V}}_1=F(j_{12*}E|_{U_{12}})$ are viewed as $D$-modules on $A$ (for ${{\cal V}}_1$ we take the connection induced by ${\sigma}_1$), and ${\alpha}$ is the morphism coming from the above identifications \[FNC-K-isom1\], \[FNC-K-isom2\]. We can represent ${\alpha}$ as the composition $$\begin{aligned}
&{{\bf K}}(F(j_{1*}E|_{U_1}))\oplus {{\bf K}}(F(j_{2*}E|_{U_2}))\to\\
&{{\bf K}}(F(j_{12*}E|_{U_{12}}),\nabla_1)\oplus {{\bf K}}(F(j_{12*}E|_{U_{12}}),\nabla_2)\rTo{(-{\operatorname{id}},\varphi)}
{{\bf K}}(F(j_{12*}E|_{U_{12}}),\nabla_1)\end{aligned}$$ where $\varphi$ is induced by some isomorphism of ${{\cal A}}$-dg-modules $$K(F(j_{12*}E|_{U_{12}}),\nabla_2)\rTo{\sim} K(F(j_{12*}E|_{U_{12}}),\nabla_1).$$ Thus, ${\alpha}$ is the map on ${\underline}{H}^0$ induced by a closed morphism of degree zero of ${{\cal A}}$-dg-modules $$f:{{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}}=K(F(j_{1*}E|_{U_1})\oplus F(j_{2*}E|_{U_2}))\to
K(F(j_{12*}E|_{U_{12}}),\nabla_1)={{\cal V}}_1\hat{{\otimes}}_{{\cal O}}{{\cal A}}.$$ Hence, $F^{NC}(E)$ is realized by the complex $${\underline}{H}^0({{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}},d)\rTo{{\underline}{H}^0(f)} {\underline}{H}^0({{\cal V}}_1\hat{{\otimes}} {{\cal A}},d)$$ Note that the complex $${{\cal V}}_0 \rTo{f|_{{{\cal V}}_0}{\operatorname{mod}}F^1} {{\cal V}}_1$$ represents $F(E)$. In other words, the map ${{\cal V}}_0\to {{\cal V}}_1$ induced by $f{\operatorname{mod}}F^1+{{\cal A}}^1$ is surjective, and its kernel is $F(E)$. This immediately implies that $f$ is surjective. Next, we claim that ${\operatorname{ker}}(f)$ is locally isomorphic to $F(E)\hat{{\otimes}}_{{\cal O}}{{\cal A}}$, as an ${{\cal A}}$-module. Indeed, locally we can choose a splitting $s:{{\cal V}}_1\to{{\cal V}}_0$, so that the corresponding projection $\pi:{{\cal V}}_0\to F(E)$ fits into an exact sequence $$0\to {{\cal V}}_1\rTo{s} {{\cal V}}_0\rTo{\pi} F(E)\to 0.$$ Hence, we have an induced exact sequence of ${{\cal A}}$-modules $$0\to {{\cal V}}_1\hat{{\otimes}}_{{\cal O}}{{\cal A}}\rTo{s{\otimes}{\operatorname{id}}} {{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}}\rTo{\pi{\otimes}{\operatorname{id}}} F(E)\hat{{\otimes}}_{{\cal O}}{{\cal A}}\to 0.$$ The composed morphism of ${{\cal A}}$-modules $${{\cal V}}_1\hat{{\otimes}}_{{\cal O}}{{\cal A}}\rTo{s{\otimes}{\operatorname{id}}} {{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}}\rTo{f} {{\cal V}}_1\hat{{\otimes}}_{{\cal O}}{{\cal A}}$$ induces the identity modulo $F^1+{{\cal A}}^1$, hence, it is an isomorphism. By diagram chasing this implies that the composed morphism $${\operatorname{ker}}(f)\to {{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}}\rTo{\pi{\otimes}{\operatorname{id}}}F(E)\hat{{\otimes}}_{{\cal O}}{{\cal A}}$$ is an isomorphism. This proves our claim. Note that the differential $d$ on ${{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}}$ induces a differential $d$ on ${\operatorname{ker}}(f)$, so that we have an exact sequence of dg-modules over ${{\cal A}}$ $$\label{dg-mod-ex-seq}
0\to ({\operatorname{ker}}(f),d)\to ({{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}},d)\to {{\cal V}}_1\hat{{\otimes}}_{{\cal O}}{{\cal A}}\to 0.$$ By Theorem \[module-thm\](ii), for any dg-module structure on $F(E)\hat{{\otimes}}_{{\cal O}}{{\cal A}}$ the higher cohomology of the differential vanishes, while ${\underline}{H}^0$ is a vector bundle on $A^{NC}$. Hence, the same is true for the dg-module $({\operatorname{ker}}(f),d)$, and the exact sequence \[dg-mod-ex-seq\] leads to an exact sequence of ${{\cal O}}_A^{NC}$-modules $$0\to {\underline}{H}^0({\operatorname{ker}}(f),d)\to
{\underline}{H}^0({{\cal V}}_0\hat{{\otimes}}_{{\cal O}}{{\cal A}},d)\rTo{{\underline}{H}^0(f)} {\underline}{H}^0({{\cal V}}_1\hat{{\otimes}} {{\cal A}},d)\to 0.$$ Therefore, $F^{NC}(E)\simeq {\underline}{H}^0({\operatorname{ker}}(f),d)$ which is a vector bundle on $A^{NC}$.
Note that we have a natural isomorphism $${{\cal P}}(f)^{NC}{\otimes}_{{{\cal O}}^{NC}_{C\times A/C}}{{\cal O}}_{C\times A}\rTo{\sim} {{\cal P}}(f)$$ of ${{\cal O}}_{C\times A}$-modules. Hence, we obtain a natural map $$\label{F-res-map}
\begin{array}{l}
F^{NC}(E){\otimes}^{\Bbb L}_{{{\cal O}}^{NC}_A}{{\cal O}}\to
Rp_{2*}(p_1^{-1}E{\otimes}_{p_1^{-1}{{\cal O}}_C} {{\cal P}}(f)^{NC}{\otimes}_{{{\cal O}}^{NC}}{{\cal O}})
\rTo{\sim} Rp_{2*}(p_1^{-1}E{\otimes}_{p_1^{-1}{{\cal O}}_C}{{\cal P}}(f))\\
=F(E).
\end{array}$$ If $E$ is a coherent sheaf such that $F(E)$ is a vector bundle, i.e., $E$ is an [*${\operatorname{IT}}_0$-sheaf*]{} in the terminology of [@Mukai], then the above argument shows that the map \[F-res-map\] is an isomorphism. Since every coherent sheaf can be embedded into an ${\operatorname{IT}}_0$-sheaf, we can calculate $F^{NC}$ using resolutions by ${\operatorname{IT}}_0$-sheaves, and the commutativity of the diagram \[F-res-diagram\] follows.
The above Theorem immediately implies that the Picard bundles on the Jacobian extend to the standard NC-thickening.
Let $V$ be the vector bundle on the Jacobian $J=J(C)$ obtained as the Fourier transform of a coherent sheaf on $C$. Then $V$ extends to a vector bundle on the standard NC-thickening $J^{NC}$.
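For instance, if $E=L$ is a line bundle on $C$ of degree $d\ge 2g-1$, where $g$ is the genus of $C$, then $H^1(C,L{\otimes}\xi)=0$ for every $\xi\in{\operatorname{Pic}}^0(C)$, so $L$ is an ${\operatorname{IT}}_0$-sheaf and $F(L)$ is the classical Picard bundle on $J$, of rank $d+1-g$. By the above Corollary it extends to a vector bundle of the same rank on $J^{NC}$.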
Analytic NC manifolds {#NC-analytic-sec}
=====================
Analytic NC manifolds.
----------------------
\[analytic-nc-manifolds\]
Let $X$ be a complex manifold. We define the notion of a *holomorphic NC-connection* in the same way as an algebraic NC-connection (see Definition \[alge-nc-defi\]), replacing algebraic forms by holomorphic ones. The constructions of Theorem \[dg-constr-thm\] go through in the holomorphic setting. Thus, starting with a holomorphic torsion free connection $\nabla$, we obtain a holomorphic NC-connection $D$, which is a differential on $${{\cal A}}^{an}_X:=(\Omega_X^\bullet)^{an}\otimes_{{{\cal O}}_X^{an}} \hat{T}_{{{\cal O}}_X^{an}}({\Omega}^{1})^{an},$$ the analytic version of the sheaf ${{\cal A}}_X$ studied in Theorem \[dg-constr-thm\].
Let $U$ be an open subset of ${{\Bbb C}}^n$. Complex analytic coordinates $x_1,\ldots,x_n$ induce a trivialization $$({\Omega}^1_U)^{an} \cong {{\cal O}}_U^{an}dx_1\oplus\ldots\oplus {{\cal O}}_U^{an}dx_n$$ of the cotangent bundle. Let us set $e_i:= dx_i$. The above trivialization induces a *flat* torsion free connection $\nabla$ on $({\Omega}^1_U)^{an}$ such that $\nabla(e_i)=0$. We have the corresponding NC-connection $D=d_{\nabla,{\operatorname{id}}}$ (see Lemma \[torsion-free-lem\]).
In view of Theorem \[smoothness-thm\] we define local models of an analytic NC-smooth thickening of $U$ by $${{\cal O}}_U^{nc}:= \underline{H}^0(D)={\operatorname{ker}}(D:({{\cal A}}^0_U)^{an}\to ({{\cal A}}^1_U)^{an}).$$ This local model ${{\cal O}}_U^{nc}$ can also be described explicitly in the manner of Theorem \[local-constr-thm\]. Indeed, a local section of ${{\cal O}}_U^{nc}$ can be uniquely written as an infinite series $\sum_\lambda [[f_{\lambda}(e)]] M_{\lambda}$ where $M_{\lambda}$ are monomials in the Lie words of degree $\ge 2$ in $e_i$, $f_{\lambda}$ are holomorphic functions on $U$, and $[[f_{\lambda}(e)]]$ is an infinite series of the form $$[[f_{\lambda}(e)]] = \sum (-1)^{i_1+\ldots i_n} (\partial_1)^{i_1}\ldots(\partial_n)^{i_n}f_{\lambda}e_1^{i_1}\cdots e_n^{i_n}$$ where $\partial_i=\partial/\partial x_i$. As before, we use the grading on $U({{\cal L}ie}_+(e_1,\ldots,e_n))$ induced by the grading on the free Lie algebra, where $\deg(e_i)=1$. We define $\deg(M_{\lambda})$ by viewing a monomial $M_{\lambda}$ as an element of $U({{\cal L}ie}_+(e_1,\ldots,e_n))$ and using the above grading. The $I$-filtration on ${{\cal O}}_U^{nc}$ (see ) can be described explicitly as follows: $I_d{{\cal O}}_U^{nc}$ consists of formal series $\sum_\lambda [[f_{\lambda}(e)]] M_{\lambda}$ such that $\deg(M_{\lambda})\ge d$. Locally the corresponding quotient admits a trivialization $${{\cal O}}_U^{nc}/I_{d+1}{{\cal O}}_U^{nc} \cong {{\cal O}}_U^{an}\otimes_{{\Bbb C}}\big(U({{\cal L}ie}_+(e_1,\ldots,e_n))_0\oplus \ldots \oplus U({{\cal L}ie}_+(e_1,\ldots,e_n))_d\big)$$ as sheaves of abelian groups over $U$. The product structure, written in this trivialization, is described in [@Kapranov Proposition (3.4.3)]. Again we let ${\operatorname{Aut}}({{\cal O}}^{nc}/I_{d+1}{{\cal O}}^{nc})$ be automorphisms that are compatible with the projection map $\pi: {{\cal O}}^{nc}/I_{d+1}{{\cal O}}^{nc} \rightarrow {{\cal O}}^{an}$.
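For instance, taking $f_{\lambda}=x_j$ in the above formula, all partial derivatives of order $\ge 2$ vanish, so $$[[x_j]]=x_j-e_j,$$ while $[[c]]=c$ for a constant $c\in{{\Bbb C}}$. The elements $[[x_j]]$ appear again in the proof of Lemma \[surj-lem\] below.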
\[analytic-def\] An *analytic NC manifold* is a ringed space $(X,{{\cal O}}^{nc})$ which is locally of the form $(U,{{\cal O}}_U^{nc})$.
\[surj-lem\] The natural homomorphism $\underline{{\operatorname{Aut}}}({{\cal O}}^{nc}/I_{d+1}{{\cal O}}^{nc}) \rightarrow \underline{{\operatorname{Aut}}}({{\cal O}}^{nc}/I_{d}{{\cal O}}^{nc})$ is surjective.
[[*Proof*]{}]{}. The question is local, so we can work with the explicit local model described above. In this case, for any automorphism $\phi\in {\operatorname{Aut}}({{\cal O}}^{nc}/I_{d}{{\cal O}}^{nc})$, let us write $$\phi([[x_i]])=[[x_i]]+\sum_{d-1\geq \deg(M_{\lambda})\geq 2} [[f^i_{\lambda}(e)]]M_{\lambda}.$$ We define an ${{\cal O}}^{an}$-linear automorphism $\tilde{\phi}$ of $\hat{T}_{{\cal O}}^{an}(({\Omega}^1)^{an})=\hat{T}_{{\cal O}}^{an}({{\cal O}}^{an}e_1\oplus\ldots\oplus{{\cal O}}^{an}e_n)$ which acts on the generators by $$\tilde{\phi}: e_i \mapsto e_i-\sum_{d-1\geq \deg(M_{\lambda})\geq 2} [[f^i_{\lambda}(e)]]M_{\lambda}.$$ Since the element $\sum_{d-1\geq \deg(M_{\lambda})\geq 2} [[f^i_{\lambda}(e)]]M_{\lambda}$ is $D$-closed, this automorphism commutes with the differential $D$. It is also straightforward to verify that $\tilde{\phi}$ induces an automorphism on ${\operatorname{ker}}(D)$ that extends the automorphism $\phi$. This shows that the composition $${\operatorname{Aut}}({{\cal O}}^{nc}) \rightarrow {\operatorname{Aut}}({{\cal O}}^{nc}/I_{d+1}{{\cal O}}^{nc}) \rightarrow {\operatorname{Aut}}({{\cal O}}^{nc}/I_d{{\cal O}}^{nc})$$ is surjective. Hence, the second arrow is also surjective.
For the proof of Theorem \[main-an-thm\] we will need the following analytic analog of the notion of a $(d)_I$-smooth thickening.
\[analytic-d-def\] An *analytic $(d)_I$-manifold* is a ringed space $(X,{{\cal O}}_X^{(d)})$ which is locally of the form $(U,{{\cal O}}^{nc}/I_{d+1}{{\cal O}}^{nc})$.
By an *algebraic $(d)_I$-manifold* we mean a $(d)_I$-smooth thickening of a smooth algebraic variety. Let $(X,{{\cal O}}_X^{(d)})$ be an algebraic (resp., analytic) $(d)_I$-manifold. For an open subset $U\subset X$ we define a category $Th^{(d+1)}({{\cal O}}_X^{(d)},U)$ whose objects are $(d+1)_I$-manifold thickenings of ${{\cal O}}_X^{(d)}|_U$, and morphisms are morphisms of ringed spaces that are identical on ${{\cal O}}_X^{(d)}|_U$.
\[classification-lem\] Let $(X,{{\cal O}}_X^{(d)})$ be an algebraic (resp., analytic) $(d)_I$-manifold. The association $U\mapsto Th^{(d+1)}({{\cal O}}_X^{(d)},U)$ defines a gerbe in the Zariski (resp., classical) topology of $X$ banded by $${\underline}{{\operatorname{Hom}}}(\Omega_X^1, (U{{\cal L}ie}_+(\Omega_X^1))_{d+1}).$$
[[*Proof*]{}]{}. The fact that all morphisms are isomorphisms is checked by induction in $d$ using standard facts about central extensions (see [@Kapranov Prop. 1.2.6]). The local triviality in the algebraic case was checked in Proposition \[trunc-twisted-prop\] (in the analytic case the local triviality is a part of the definition). It remains to observe that by [@Kapranov Prop. 1.2.5], an automorphism of ${{\cal O}}_U^{(d+1)}\in Th^{(d+1)}({{\cal O}}_X^{(d)},U)$ trivial on ${{\cal O}}_U^{(d)}$ corresponds to a derivation of ${{\cal O}}_U$ with values in $$I_{d+1}{{\cal O}}_U^{(d+1)}\simeq (U{{\cal L}ie}_+(\Omega_U^1))_{d+1}.$$
Analytification functors.
-------------------------
Let $(X,{{\cal O}}^{NC})$ be an algebraic NC manifold, i.e., an NC-smooth thickening of a smooth algebraic variety over ${{\Bbb C}}$. We can associate with $(X,{{\cal O}}^{NC})$ an analytic NC manifold $(X^{an},{{\cal O}}^{nc})$ as follows. The underlying analytic manifold $X^{an}$ is simply the analytification of $X$. To construct ${{\cal O}}^{nc}$ we observe that by Theorem \[twisted-dg-thm\], we have a global dg resolution $({{\cal A}}_{{\cal T}},D)$ of ${{\cal O}}^{NC}$ that is locally (in the Zariski topology) of the form $({{\cal A}}_U,D)$ considered above. Now we simply apply the analytification functor to ${{\cal A}}_{{\cal T}}$ and set $${{\cal O}}^{nc}:={\underline}{H}^0({{\cal A}}_{{\cal T}}^{an},D).$$ We refer to the resulting analytic NC manifold $(X^{an},{{\cal O}}^{nc})$ as the *analytification of* $(X,{{\cal O}}^{NC})$.
The analytification of an algebraic $(d)_I$-manifold is defined similarly using Proposition \[trunc-twisted-prop\].
[*Proof of Theorem \[main-an-thm\]*]{}. It suffices to prove that the analytification functor $$\label{d-I-gerbes-map}
Th^{(d)_I}_X\to Th_{X^{an}}^{(d)_I}$$ between the gerbes of algebraic and analytic $(d)_I$-manifold thickenings of $X$ and $X^{an}$, is an equivalence. Let us prove by induction in $d$ that for an algebraic $(d)_I$-manifold ${{\cal O}}_X^{(d)}$ the map $${\operatorname{Aut}}(X, {{\cal O}}_X^{(d)}) \to {\operatorname{Aut}}(X^{an}, ({{\cal O}}_X^{(d)})^{an})$$ is an isomorphism (where we consider automorphisms trivial on the abelianization). We use the short exact sequence $$0 {\rightarrow}\underline{{\operatorname{Hom}}}\big(\Omega_X^1, U{{\cal L}ie}_+(\Omega_X^1)_{d}\big) {\rightarrow}\underline{{\operatorname{Aut}}}( {{\cal O}}^{NC}/I_{d+1}{{\cal O}}^{NC}){\rightarrow}\underline{{\operatorname{Aut}}}({{\cal O}}^{NC}/I_{d}{{\cal O}}^{NC}){\rightarrow}0,$$ which gives the corresponding long exact sequence $$\mbox{ \footnotesize $0{\rightarrow}{\operatorname{Hom}}\big(\Omega_X^1, U{{\cal L}ie}_+(\Omega_X^1)_{d}\big) {\rightarrow}{\operatorname{Aut}}( {{\cal O}}^{NC}/I_{d+1}{{\cal O}}^{NC}){\rightarrow}{\operatorname{Aut}}({{\cal O}}^{NC}/I_{d}{{\cal O}}^{NC}) {\rightarrow}{\operatorname{Ext}}^1\big(\Omega_X^1, U{{\cal L}ie}_+(\Omega_X^1)_{d+1}\big){\rightarrow}\ldots$}.$$ Lemma \[surj-lem\] implies that we also have a similar long exact sequence in the analytic setting. Now our claim follows by the GAGA theorem and the five lemma. The fact that (\[d-I-gerbes-map\]) is essentially surjective is also deduced by induction in $d$, using Lemma \[classification-lem\] and the GAGA theorem for the cohomology of the coherent sheaf ${\underline}{{\operatorname{Hom}}}(\Omega_X^1, (U{{\cal L}ie}_+(\Omega_X^1))_{d+1})$ and its analytification.
In the remaining subsections, we will construct examples of NC manifolds using analytic methods.
$L_\infty$ spaces in Kähler geometry.
-------------------------------------
To explain how Kähler geometry might enter into constructions of analytic NC manifolds, we first recall an analogous construction of Costello (see [@Costello]) in the commutative case: the construction of an *$L_\infty$ space* associated to a Kähler manifold $(X,g)$. In fact Costello’s construction works for any complex manifold. However, in the Kähler situation we can write down explicit formulas for the $L_\infty$-structure in terms of the curvature tensor of a Kähler metric $g$, following a beautiful paper of Kapranov [@Kapranov2].
Consider the infinite dimensional vector bundle ${\mathsf{Jet}}_X$, the infinite holomorphic jet bundle of $X$. The bundle ${\mathsf{Jet}}_X$ admits a decreasing filtration whose associated graded are symmetric powers of $\Omega^1_X$. This filtration almost never splits, thus in general we do not have an isomorphism $${\mathsf{Jet}}_X\cong \hat{S}_{{{\cal O}}_X} \Omega^1_X.$$ However, Kapranov [@Kapranov] observed that we can write the Dolbeault complex of ${\mathsf{Jet}}_X$ in terms of the Dolbeault complex of $\hat{S}_{{{\cal O}}_X} \Omega^1_X$ endowed with a perturbed differential. This perturbed differential can be explicitly written down using the Levi-Civita connection $\nabla$ of $g$ and its associated curvature tensor $R$, which we briefly recall in the following paragraph.
To avoid confusion, we will denote by $\Omega^{1,0}_X$ the sheaf of smooth sections of the holomorphic vector bundle $\Omega^1_X$. The curvature operator may be written as a morphism $$R: T_X^{1,0}\otimes T_X^{1,0} {\rightarrow}\Omega^{0,1}_X\otimes T_X^{1,0}.$$ Due to the Kähler condition, the operator $R$ factors through $$R: {\operatorname{Sym}}^2 T_X^{1,0} {\rightarrow}\Omega^{0,1}_X\otimes T_X^{1,0}.$$ For each $n\geq 2$ we define an operator $R_n\in \Omega^{0,1}_X\big({\operatorname{Hom}}({\operatorname{Sym}}^2T_X^{1,0}{\otimes}(T_X^{1,0})^{{\otimes}(n-2)}, T_X^{1,0})\big)$ inductively by putting $$R_2:= R, \mbox{\;\; and \;\;} R_{n+1}:=\nabla R_{n}.$$ Kapranov showed that $R_n$ is a totally symmetric operator, i.e. it is an operator $${\operatorname{Sym}}^n T_X^{1,0} {\rightarrow}\Omega^{0,1}_X {\otimes}T_X^{1,0}.$$ Dualizing, we obtain a morphism $$R_n^\vee: \Omega^{1,0}_X {\rightarrow}\Omega^{0,1}_X\otimes {\operatorname{Sym}}^n \Omega^{1,0}_X.$$ Now the perturbed differential $D$ on the Dolbeault complex $\Omega^{0,*}_X(\hat{S} \Omega^1_X)$ is defined to be the unique derivation of this algebra extending the $\overline{\partial}$ operator, and acting on $\Omega^{1,0}$ by the infinite sum $\overline{\partial}+\sum_{n\geq 2} R_n^\vee$. The following theorem is due to Kapranov [@Kapranov2 Theorem 2.8.2].
We have $(\overline{\partial}+\sum_{n\geq 2} R_n^\vee)^2=0$. Furthermore, there is an isomorphism of commutative differential graded algebras $$\Omega^{0,*}({\mathsf{Jet}}_X) \cong \big(\Omega^{0,*}(\hat{S}(\Omega^1_X)), \overline{\partial}+\sum_{n\geq 2} R_n^\vee\big).$$
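As a simple consistency check (an illustration on our part, not used in what follows): if the metric $g$ is flat, then $R_2=R=0$ and, by the recursion, $R_n=0$ for all $n\geq 2$, so the perturbed differential reduces to $\overline{\partial}$ and the theorem gives a holomorphic isomorphism $${\mathsf{Jet}}_X\cong \hat{S}_{{{\cal O}}_X}\Omega^1_X,$$ as one expects, for instance, for a complex torus with a flat metric, where the translation-invariant trivializations split the jet filtration.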
It is well-known that ${\mathsf{Jet}}_X$ has a flat holomorphic connection. Furthermore the canonical embedding $$i: {{\cal O}}_X\hookrightarrow \Omega^*({\mathsf{Jet}}_X)=\Omega^*{\otimes}_{{{\cal O}}_X} {\mathsf{Jet}}_X,$$ sending a holomorphic function to the associated constant jets, is a quasi-isomorphism of differential graded commutative algebras. Applying Kapranov’s Theorem to replace ${\mathsf{Jet}}_X$ by $\big(\Omega^{0,*}(\hat{S}(\Omega^1_X)), \overline{\partial}+\sum_{n\geq 2} R_n^\vee\big)$, we obtain an explicit resolution of ${{\cal O}}_X$ by a quasi-free commutative differential graded algebra over the smooth de Rham algebra $\Lambda_X:=\oplus_{p,q} \Omega_X^{p,q}$.
More precisely the underlying algebra is simply $\Lambda_X\otimes_{C^\infty_X} \hat{S}_{C^\infty}(\Omega_X^{1,0})$. The differential on this algebra is the unique derivation which extends the de Rham differential on $\Lambda_X$, and which acts on the space $\Omega_X^{1,0}$ by the infinite sum $d_\tau+ \overline{\partial}+\nabla+ \sum_{n\geq 2} R_n^\vee$ (as depicted below).
$$d_\tau+ \overline{\partial}+\nabla+ \sum_{n\geq 2} R_n^\vee:\;\; \Omega_X^{1,0}\;\longrightarrow\; \Lambda^1\;\oplus\;\Lambda^1{\otimes}\Omega_X^{1,0}\;\oplus\;\Lambda^1{\otimes}{\operatorname{Sym}}^2\Omega_X^{1,0}\;\oplus\;\cdots$$
Here $d_\tau(1\otimes \alpha):=\alpha\otimes 1$ using the fact that $\Omega_X^{1,0}$ is a summand of $\Lambda^1$, and $\nabla$ is the $(1,0)$-type connection associated to the Kähler metric $g$. There is a quasi-isomorphism of commutative differential graded algebras $$\label{resolution-eq}
{{\cal O}}_X \cong \big(\Lambda_X\otimes_{C^\infty_X} \hat{S}_{C^\infty}(\Omega_X^{1,0}),d_\tau+ \overline{\partial}+\nabla+ \sum_{n\geq 2} R_n^\vee\big).$$ This resolution, referred to as the *$L_\infty$ space* associated to $X$, plays an important role in the recent work of Costello on Witten genus [@Costello].
$A_\infty$ spaces and NC thickenings.
-------------------------------------
Now we are going to establish an NC analog of (\[resolution-eq\]) (see Proposition \[if-part-prop\] below). Let $X$ be a complex manifold. The NC analog of the differential appearing in (\[resolution-eq\]) is given by the notion of an *NC-connection* ${{\cal D}}$, which is a differential on the algebra $$\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C_X^\infty}(\Omega_X^{1,0})$$ extending the de Rham differential on $\Lambda_X$, and acting on the space of generators $\Omega_X^{1,0}$ by an operator of the form $$d_\tau+\overline{\partial}+\nabla+A^\vee_2+A^\vee_3+\ldots,$$ where $\nabla$ is a $(1,0)$-type connection on the bundle $\Omega_X^{1,0}$ (see Definition \[nc-connection-def\]). For $k\ge 2$ we split the $C^\infty_X$-linear morphism $A^\vee_k: \Omega_X^{1,0} {\rightarrow}\Lambda^1_X{\otimes}(\Omega_X^{1,0})^{{\otimes}k}$ as a sum $$A^\vee_k:= E_k+ F_k$$ where $E_k: \Omega_X^{1,0} {\rightarrow}\Lambda^{1,0}_X{\otimes}(\Omega_X^{1,0})^{{\otimes}k}$ and $F_k:\Omega_X^{1,0} {\rightarrow}\Lambda^{0,1}_X{\otimes}(\Omega_X^{1,0})^{{\otimes}k}$. Thus, the operator ${{\cal D}}$ is of the form $${{\cal D}}^{1,0}+{{\cal D}}^{0,1}:=(d_\tau+\nabla+E_2 +E_3+\ldots)+(\overline{\partial} + F_2 +F_3+\ldots).$$ The integrability condition ${{\cal D}}^2=0$ then implies that $$({{\cal D}}^{1,0})^2=[{{\cal D}}^{1,0},{{\cal D}}^{0,1}]=({{\cal D}}^{0,1})^2=0.$$ Thus, the complex $\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big)$ is in fact a bicomplex whose degree $(p,q)$-component is $$\Lambda^{p,q}{\otimes}_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}).$$ The following proposition describes the local structure of the cohomology of ${{\cal D}}^{0,1}$.
\[nc-jet-prop\] Let $U$ be a Stein open subset of $X$. Then there exists an algebra automorphism $\phi$ of $\hat{T}_{C^\infty}(\Omega_U^{1,0})$ of the form $$\phi(\alpha)=\alpha+\phi_2(\alpha)+\phi_3(\alpha)+\ldots$$ for $\alpha\in \Omega_U^{1,0}$, and $\phi_n: \Omega_U^{1,0}{\rightarrow}(\Omega_U^{1,0})^{{\otimes}n}$ are $C^\infty_U$-linear morphisms, such that on the Dolbeault complex $\Lambda_U^{0,*}{\otimes}_{C^\infty_U}\hat{T}_{C_U^\infty}(\Omega_U^{1,0})$ we have the identity $$({\operatorname{id}}{\otimes}\phi)^{-1}(\overline{\partial})({\operatorname{id}}{\otimes}\phi)= {{\cal D}}^{0,1}.$$ This implies that $\underline{H}^i\big(\Lambda_U^{0,*}{\otimes}_{C^\infty_U}\hat{T}_{C^\infty}(\Omega_U^{1,0}),{{\cal D}}^{0,1}\big) =0$ if $i\geq 1$, and $\underline{H}^0\big(\Lambda_U^{0,*}{\otimes}_{C^\infty_U}\hat{T}_{C^\infty}(\Omega_U^{1,0}),{{\cal D}}^{0,1}\big)$ is isomorphic to $\hat{T}_{{{\cal O}}_U}(\Omega_U)$ where $\Omega_U$ denotes the space of holomorphic one forms.
[[*Proof*]{}]{}. We construct the $\phi_n$ inductively as follows. For the construction of $\phi_2$, we use the fact that ${{\cal D}}^{0,1}=\overline{\partial} + F_2 +F_3+\ldots$ squares to zero. This implies in particular that $\overline{\partial} F_2=0$. Over a Stein manifold, $\overline{\partial}$-closedness implies $\overline{\partial}$-exactness, so we can choose $\phi_2$ with $\overline{\partial} \phi_2=F_2$. Assume that we have constructed $\phi_k$ for $k\leq n$ such that $$({\operatorname{id}}{\otimes}\phi_{\leq n})\circ {{\cal D}}^{0,1}\circ ({\operatorname{id}}{\otimes}\phi_{\leq n})^{-1}=\overline{\partial} \pmod{\hat{T}^{\geq n+1}}.$$ This implies that $$({\operatorname{id}}{\otimes}\phi_{\leq n})\circ {{\cal D}}^{0,1}\circ ({\operatorname{id}}{\otimes}\phi_{\leq n})^{-1}=\overline{\partial}+G_{n+1}\pmod{\hat{T}^{\geq n+2}}$$ for some morphism $G_{n+1}: \Omega_U^{1,0}{\rightarrow}\Lambda_U^{0,1}{\otimes}(\Omega_U^{1,0})^{{\otimes}(n+1)}$. The integrability $({{\cal D}}^{0,1})^2=0$ implies that $\overline{\partial} G_{n+1}=0$. Since we are over a Stein manifold, there exists $\phi_{n+1}: \Omega_U^{1,0}{\rightarrow}(\Omega_U^{1,0})^{{\otimes}(n+1)}$ such that $\overline{\partial}(\phi_{n+1})= G_{n+1}$. We then check that $$({\operatorname{id}}{\otimes}\phi_{\leq n+1})\circ {{\cal D}}^{0,1}\circ ({\operatorname{id}}{\otimes}\phi_{\leq n+1})^{-1}=\overline{\partial} \pmod{\hat{T}^{\geq n+2}}.$$ The proposition is proved.
In view of the commutative situation, the cohomology sheaf $\underline{H}^0\big(\Lambda_X^{0,*}{\otimes}_{C^\infty_X}\hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}^{0,1}\big)$ should be thought of as a generalization of the notion of jet bundle in the NC setting.
Observe that the natural projection $\underline{H}^0\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big) {\rightarrow}C^\infty_X$ to smooth functions lands in fact inside holomorphic functions. So there is a morphism of algebras $$\underline{H}^0\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big) {\rightarrow}{{\cal O}}_X.$$
\[if-part-prop\] Let ${{\cal D}}$ be an NC-connection on a complex manifold $X$. Then we have $\underline{H}^i\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big)=0$ for $i\geq 1$. Furthermore, the sheaf of algebras $\underline{H}^0\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big)$ defines an analytic NC manifold structure on $X$ with respect to the projection $\underline{H}^0\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big) {\rightarrow}{{\cal O}}_X$.
[[*Proof*]{}]{}. Both statements are local, so it is enough to prove them over a Stein open subset $U$ of $X$. As we observed before, the complex $\big(\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0}),{{\cal D}}\big)$ is a bicomplex graded by the $(p,q)$-degree of forms in $\Lambda_X$. In particular, this is a bounded complex in the first quadrant whose cohomology can be computed via the spectral sequence with the $E^1$-page given by the cohomology of ${{\cal D}}^{0,1}=\overline{\partial}+F_2+\ldots$. By Proposition \[nc-jet-prop\], there exists an automorphism $\phi$ of $\hat{T}_{C^\infty}({\Omega}^{1,0}_U)$ with $\phi({\alpha})={\alpha}+\phi_2({\alpha})+\phi_3({\alpha})+\ldots$ such that $({\operatorname{id}}{\otimes}\phi){{\cal D}}^{0,1}({\operatorname{id}}{\otimes}\phi)^{-1}={\overline{\partial}}$. Hence, ${\operatorname{id}}{\otimes}\phi$ induces an isomorphism of the cohomology of ${{\cal D}}^{0,1}$ with $\Omega_U^*{\otimes}_{{{\cal O}}_U}\hat{T}(\Omega^1_U)$. In particular, the spectral sequence collapses, and the total cohomology is isomorphic to the cohomology of $D=({\operatorname{id}}{\otimes}\phi){{\cal D}}^{1,0}({\operatorname{id}}{\otimes}\phi)^{-1}$ acting on $\Omega_U^*{\otimes}_{{{\cal O}}_U}\hat{T}(\Omega^1_U)$. Note that $D$ is a derivation. We claim that in fact $D$ is a holomorphic NC-connection. Indeed, for $f\in{{\cal O}}_U$ we have $$D(f)=({\operatorname{id}}{\otimes}\phi){{\cal D}}^{1,0}(f)=({\operatorname{id}}{\otimes}\phi)(df{\otimes}1)=df{\otimes}1.$$ Also, for $\alpha\in \Omega^1_U$ we have $$D(1{\otimes}{\alpha})=({\operatorname{id}}{\otimes}\phi)({\alpha}{\otimes}1+\ldots)={\alpha}{\otimes}1+\ldots,$$ where the dots denote the terms in $\Omega_U^*{\otimes}_{{{\cal O}}_U}\hat{T}^{\ge 1}(\Omega^1_U)$. Hence, $D$ is a holomorphic NC-connection, and the assertion now follows from the holomorphic version of Theorem \[main-nc-thm\].
Let $X$ be a complex manifold. Then $X$ admits an analytic NC thickening if and only if there exists an NC-connection on $X$.
[[*Proof*]{}]{}. The “if” part follows from Proposition \[if-part-prop\]. Conversely, assume that we are given an analytic NC thickening. Then by Theorem \[twisted-dg-thm\] [^3] there exists a quadruple $({{\cal T}},{{\cal J}},\varphi, D)$ in the sense of Definition \[twisting-def\]. Since in the smooth category exact sequences of vector bundles always split, there exists an isomorphism between ${{\cal T}}$ and its associated graded: $$\psi:C^\infty({{\cal T}}) \cong \prod_{n\geq 0} C^\infty({{\cal J}}^n/{{\cal J}}^{n+1}).$$ Composing with the isomorphism $\varphi$, we get an isomorphism $$\varphi\circ\psi: C^\infty({{\cal T}}) \cong \hat{T} (\Omega^{1,0}_X).$$ Observe that the first short exact sequence associated to the ${{\cal J}}$-adic filtration on ${{\cal T}}$ $$0{\rightarrow}{{\cal J}}/{{\cal J}}^2 {\rightarrow}{{\cal T}}/{{\cal J}}^2 {\rightarrow}{{\cal T}}/{{\cal J}}{\rightarrow}0$$ always splits, since ${{\cal T}}/{{\cal J}}\cong {{\cal O}}$ and ${{\cal T}}$ is an ${{\cal O}}$-algebra. Thus, we may choose $\psi$ so that $\varphi\circ\psi$ is a *holomorphic* isomorphism $$(\varphi\circ\psi)^{\leq 1}: C^\infty({{\cal T}}/{{\cal J}}^2) \cong \hat{T}^{\leq 1} (\Omega^{1,0}_X).$$ Now the connection operator $D$ on ${{\cal T}}$ induces a connection ${{\cal D}}$ on $\hat{T}(\Omega^{1,0}_X)$ via the isomorphism $\varphi\circ \psi$. The component $$\Omega^{1,0}_X {\rightarrow}\Lambda_X^{0,1}{\otimes}\Omega^{1,0}_X$$ of ${{\cal D}}$ is simply $\overline{\partial}$ due to the holomorphicity of $(\varphi\circ\psi)^{\leq 1}$. Thus, we conclude that ${{\cal D}}$ is an NC-connection as described in Definition \[nc-connection-def\].
Analytic NC manifolds from Kähler manifolds with constant holomorphic sectional curvature.
--------------------------------------------------------------------------------------------
We are going to show that the above construction of NC-smooth thickenings is applicable to Kähler manifolds with constant holomorphic sectional curvature. First, let us recall the definition of such manifolds. Since the Kähler form $g$ is a nondegenerate $(1,1)$-form, it induces an operator (which is in fact an isomorphism) $$\rho: T^{1,0}_X {\rightarrow}\Omega_X^{0,1}.$$ Then $X$ is said to have constant holomorphic sectional curvature if the curvature operator $R: {\operatorname{Sym}}^2T^{1,0}_X {\rightarrow}\Omega_X^{0,1}{\otimes}T^{1,0}_X$ can be written as $$R(V,W) = c\rho(V)\otimes W +c\rho(W){\otimes}V$$ for some constant $c\in {{\Bbb R}}$. If we define an operator $A: T^{1,0}_X\otimes T^{1,0}_X {\rightarrow}\Omega^{0,1}_X{\otimes}T^{1,0}_X$ by $$A(V,W):= c\rho(V)\otimes W,$$ the curvature operator $R$ is then the symmetrization of the operator $A$.
We define an operator associated to the metric $g$ by $${{\cal D}}_g:=d_\tau+\overline{\partial}+\nabla+A^\vee$$ on the algebra $\Lambda_X\otimes_{C^\infty_X} \hat{T}_{C^\infty}(\Omega_X^{1,0})$ as the unique derivation extending the de Rham differential and acting on $\Omega_X^{1,0}$ by the diagram
$${{\cal D}}_g=d_\tau+\overline{\partial}+\nabla+A^\vee:\;\; \Omega_X^{1,0}\;\longrightarrow\; \Lambda^1\;\oplus\;\Lambda^1{\otimes}\Omega_X^{1,0}\;\oplus\;\Lambda^1{\otimes}T^2\Omega_X^{1,0}.$$
Here $d_\tau(1\otimes \alpha):=\alpha\otimes 1$ is the same as in the previous subsection. The second operator $\overline{\partial}+\nabla$ is the Levi-Civita connection associated to the metric $g$. The last operator $$A^\vee: \Omega_X^{1,0} {\rightarrow}\Lambda^1\otimes T^2\Omega_X^{1,0},$$ which is dual to $A$ (over the de Rham algebra $\Lambda$), is explicitly given by left multiplication by $c\tau(g)$ where $\tau(g)$ is the Kähler form $g$ considered as an element in $\Lambda^1\otimes \Omega_X^{1,0}$. Observe that the operator $D_g$ defined here is of the form defined in Definition \[nc-connection-def\]. Next we prove that it squares to zero, which shows that $D_g$ is an NC-connection.
\[nc-kahler-prop\] Let $(X,g)$ be a Kähler manifold with constant holomorphic sectional curvature. Then we have $D_g^2=0$.
[[*Proof*]{}]{}. We need to check that $(d_\tau+\overline{\partial}+\nabla+A^\vee)^2=0$. First it is clear that $d_\tau^2=0$. We then note that $[d_\tau,\overline{\partial}+\nabla]=0$ due to the Kähler condition, which implies the torsion-freeness of $\overline{\partial}+\nabla$. The identity $$(\overline{\partial}+\nabla)^2=-[d_\tau, A^\vee]$$ holds because the curvature $R$ is the symmetrization of $A$. Next, the equation $$[\overline{\partial}+\nabla, A^\vee ]=0$$ holds since the connection $\overline{\partial}+\nabla$ preserves the Kähler metric, and $A^\vee$ is given by multiplication by $c\tau( g )$. Finally, we need to show that $$[A^\vee, A^\vee]=0,$$ which is equivalent to the associativity of its dual $A$. The operator $A$ is associative since we have $$A(A(U,V),W)=c\rho(A(U,V))\otimes W=c^2\rho(U)\rho(V)\otimes W;$$ and $$A(U,A(V,W))= c\rho(U)\otimes A(V,W)= c^2\rho(U)\rho(V)\otimes W.$$ The proposition is proved.
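As a degenerate illustration of this proof (a remark on our part, not needed later): when $c=0$ the operator $A^\vee$ vanishes, and the computation above reduces to $$D_g^2=d_\tau^2+[d_\tau,\overline{\partial}+\nabla]+(\overline{\partial}+\nabla)^2=0,$$ using only the torsion-freeness and the flatness of the Levi-Civita connection; this is the situation of case **(A.)** in the classification recalled below.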
In the definition of $A^\vee$ one could equally well use right multiplication by $\tau( g )$.
Let $(X,g)$ be a Kähler manifold with constant holomorphic sectional curvature. Then $X$ admits an analytic NC thickening.
[[*Proof*]{}]{}. This follows from the previous proposition and Proposition \[if-part-prop\].
Kähler manifolds with constant holomorphic sectional curvature are classified:
**(A.)** In the case $c=0$ such a manifold is a quotient of $({{\Bbb C}}^n,\omega_0=\sqrt{-1}/2\sum_i dz_i\wedge d\overline{z}_i)$ by a discrete subgroup of its isometry group. For example, complex tori are such quotients.
**(B.)** If $c>0$, these are complex projective spaces.
**(C.)** If $c<0$, we get complex hyperbolic manifolds, which are quotients of the upper half space $H^n$, endowed with its hyperbolic metric, by discrete subgroups of its isometry group. An interesting example is the moduli space of cubic surfaces (see [@ACT]).
Analytic vector bundles over NC manifolds {#analytic-vb-sec}
=========================================
NC algebraic geometry and analytic geometry {#GAGA-sec}
-------------------------------------------
We first define the analytification of NC vector bundles. Let $X$ be a smooth variety over ${{\Bbb C}}$, equipped with an algebraic NC-thickening ${{\cal O}}_X^{NC}$. Let $E$ be an algebraic vector bundle over $X$ of rank $n$, i.e. a locally free ${{\cal O}}_X$-module of rank $n$ in the Zariski topology of $X$. Let $E^{NC}$ be an algebraic NC thickening of $E$, i.e. a locally free ${{\cal O}}_X^{NC}$-module of rank $n$ whose abelianization is $E$. The embedding of algebraic sections to analytic sections is a morphism of sheaves in the Zariski topology of $X$ $${{\cal O}}_X^{NC} \hookrightarrow {{\cal O}}_X^{nc}.$$ We define the analytification of $E^{NC}$ by $$E^{nc}=(E^{NC})^{an}:={{\cal O}}_X^{nc}{\otimes}_{{{\cal O}}_X^{NC}} E^{NC}.$$ Note that $E^{nc}$ is a locally free ${{\cal O}}^{nc}$-module with respect to Zariski topology (and hence with respect to the classical topology). It is clear that the abelianization of $E^{nc}$ is $E^{an}$.
Next we compare the algebraic ${\operatorname{Ext}}$ groups with the analytic ones, where in the analytic setting we consider ${\operatorname{Ext}}$’s in the category of sheaves of ${{\cal O}}^{nc}$-modules with respect to the classical topology.
\[ext-thm\] Let $X$ be a smooth projective scheme, and let $X^{NC}$ be a smooth NC thickening of $X$. Let $E^{NC}$ and $F^{NC}$ be two vector bundles on $X^{NC}$ of finite rank. Then there is a natural isomorphism $$\label{Ext-map-eq}
{\operatorname{Ext}}^*(E^{NC},F^{NC}) {\rightarrow}{\operatorname{Ext}}^*(E^{nc}, F^{nc}),$$ where the ${\operatorname{Ext}}$-groups are computed in the category of ${{\cal O}}_X^{NC}$- and ${{\cal O}}_X^{nc}$-modules, respectively.
[[*Proof*]{}]{}. We have natural isomorphisms (see [@G-HA Thm. 4.2.1]) $${\operatorname{Ext}}^i(E^{NC},F^{NC})\simeq H^i(X, \underline{{\operatorname{Hom}}}_{{{\cal O}}^{NC}} (E^{NC},F^{NC})),$$ $${\operatorname{Ext}}^i(E^{nc},F^{nc})\simeq H^i(X^{an}, \underline{{\operatorname{Hom}}}_{{{\cal O}}^{nc}} (E^{nc},F^{nc}))$$ Since the classical topology is finer than the Zariski topology, we have a natural map $$H^i(X, \underline{{\operatorname{Hom}}}_{{{\cal O}}^{NC}} (E^{NC},F^{NC}))\to H^i(X, \underline{{\operatorname{Hom}}}_{{{\cal O}}^{nc}} (E^{nc},F^{nc}))
\to H^i(X^{an}, \underline{{\operatorname{Hom}}}_{{{\cal O}}^{nc}} (E^{nc},F^{nc})),$$ where in the middle we consider cohomology with respect to the Zariski topology, hence we get a map (\[Ext-map-eq\]). Let $d\in {{\Bbb Z}}_{\geq 0}$ be a nonnegative integer. We first prove the statement for vector bundles over $X^{(d)}$ by induction on $d$. The case $d=0$ is the usual GAGA theorem (see [@GAGA]). In the following we set $G_{d+1}:=(U{{\cal L}ie}_+(\Omega_X^1))_{d+1}$. By Theorem \[module-thm\](iii), we have a short exact sequence $$0 {\rightarrow}G_{d+1}{\otimes}\underline{{\operatorname{Hom}}}(E,F) {\rightarrow}\underline{{\operatorname{Hom}}}(E^{(d+1)},F^{(d+1)}) {\rightarrow}\underline{{\operatorname{Hom}}}(E^{(d)},F^{(d)}) {\rightarrow}0,$$ and in the analytic setting we have $$0 {\rightarrow}G^{an}_{d+1}{\otimes}\underline{{\operatorname{Hom}}}(E^{an},F^{an}) {\rightarrow}\underline{{\operatorname{Hom}}}((E^{(d+1)})^{an},(F^{(d+1)})^{an}) {\rightarrow}\underline{{\operatorname{Hom}}}((E^{(d)})^{an},(F^{(d)})^{an}) {\rightarrow}0.$$ The analytification map induces a morphism of the associated long exact sequences of cohomology
$$\minCDarrowwidth15pt\begin{CD}
{\operatorname{Ext}}^{i-1,(d)} @>>> H^i(G_{d+1}{\otimes}\underline{{\operatorname{Hom}}}(E,F)) @>>> {\operatorname{Ext}}^{i,(d+1)} @>>> {\operatorname{Ext}}^{i,(d)} @>>> H^{i+1}(G_{d+1}{\otimes}\underline{{\operatorname{Hom}}}(E,F))\\
@VVV @VVV @VVV @VVV @VVV\\
{\operatorname{Ext}}_{an}^{i-1,(d)} @>>> H^i(G^{an}_{d+1}{\otimes}\underline{{\operatorname{Hom}}}(E^{an},F^{an})) @>>> {\operatorname{Ext}}_{an}^{i,(d+1)} @>>> {\operatorname{Ext}}_{an}^{i,(d)} @>>> H^{i+1}(G^{an}_{d+1}{\otimes}\underline{{\operatorname{Hom}}}(E^{an},F^{an}))
\end{CD}$$
where ${\operatorname{Ext}}^{i,(d)}:={\operatorname{Ext}}^i(E^{(d)},F^{(d)})$ (resp., ${\operatorname{Ext}}_{an}^{i,(d)}={\operatorname{Ext}}^i((E^{(d)})^{an},(F^{(d)})^{an})$). The induction assumption together with the usual GAGA implies that all the vertical arrows except for the middle one are isomorphisms. Hence, by the five lemma, the middle vertical arrow is also an isomorphism. To prove the theorem, we observe that $$\underline{{\operatorname{Hom}}}(E^{NC},F^{NC})= {\varprojlim}\underline{{\operatorname{Hom}}}(E^{(d)},F^{(d)}).$$ Since the projective system $(H^0(U,\underline{{\operatorname{Hom}}}((E^{(d)})^{an},(F^{(d)})^{an})))$ satisfies the Mittag-Leffler condition for every open affine subset $U$, as in the proof of [@KS Lemma 1.1.6], we get the short exact sequences $$0{\rightarrow}R^1{\varprojlim}{\operatorname{Ext}}^{i-1}(E^{(d)},F^{(d)}) {\rightarrow}{\operatorname{Ext}}^i(E^{NC},F^{NC}) {\rightarrow}{\varprojlim}{\operatorname{Ext}}^i(E^{(d)},F^{(d)}) {\rightarrow}0.$$ Similarly, in the analytic case we have $$0{\rightarrow}R^1{\varprojlim}{\operatorname{Ext}}^{i-1}((E^{(d)})^{an},(F^{(d)})^{an}) {\rightarrow}{\operatorname{Ext}}^i(E^{nc},F^{nc}) {\rightarrow}{\varprojlim}{\operatorname{Ext}}^i((E^{(d)})^{an},(F^{(d)})^{an}) {\rightarrow}0.$$ As was shown above, the analytification morphism induces isomorphisms on the leftmost and the rightmost groups. Thus, the groups in the middle are also isomorphic.
Let us denote by ${\operatorname{Per}}^{gl}(X^{NC})$ the global perfect derived category of finite complexes of locally free right ${{\cal O}}_X^{NC}$-modules of finite rank, and let ${\operatorname{Per}}^{gl}(X^{nc})$ be the similar category for ${{\cal O}}_X^{nc}$ (and the classical topology of $X^{an}$). Theorem \[ext-thm\] above shows that the analytification functor $${\operatorname{An}}: {\operatorname{Per}}^{gl}(X^{NC}) {\rightarrow}{\operatorname{Per}}^{gl}(X^{nc})$$ is fully-faithful. In the next theorem we will prove that ${\operatorname{An}}$ is also essentially surjective. Hence, the functor ${\operatorname{An}}$ is an equivalence of categories as claimed in Theorem \[nc-gaga-thm\].
\[nc-bundle-thm\] Let $(X,{{\cal O}}^{NC})$ be an algebraic NC-smooth thickening of a smooth projective variety $X$, and let $V$ be a locally free ${{\cal O}}^{nc}$-module of finite rank over its analytification $X^{an}$. Then there exists a locally free ${{\cal O}}^{NC}$-module $E^{NC}$ over $X$ such that its analytification $E^{nc}$ is isomorphic to $V$.
[[*Proof*]{}]{}. Since $V={\varprojlim}V^{(d)}$, it is enough to construct a projective system of locally free ${{\cal O}}^{(d)}$-modules $E^{(d)}$ whose analytification gives $(V^{(d)})$ (then we can set $E={\varprojlim}E^{(d)}$). We will construct the ${{\cal O}}^{(d)}$-modules $E^{(d)}$ inductively using the $I$-filtration. The existence of $E^{(0)}$ follows from the usual GAGA theorem. Assume that there exists a locally free ${{\cal O}}^{(d)}$-module $E^{(d)}$ whose analytification is isomorphic to $V^{(d)}$. We would like to construct a locally free ${{\cal O}}^{(d+1)}$-module $E^{(d+1)}$ whose analytification is isomorphic to $V^{(d+1)}$. For this consider the following exact sequence of sheaves of (nonabelian) groups in the Zariski topology of $X$: $$0\rightarrow {\operatorname{Mat}}_{r} \big((U{{\cal L}ie}_+(\Omega_X^1))_{d+1}\big)\rightarrow GL_r({{\cal O}}^{(d+1)}) \rightarrow GL_r({{\cal O}}^{(d)})\rightarrow 0.$$ According to the general formalism of nonabelian cohomology (see e.g., [@Serre]), the locally free ${{\cal O}}^{(d)}$-module $E^{(d)}$ corresponds to a class $c\in H^1(X,GL_r({{\cal O}}^{(d)}))$, and there is a naturally defined obstruction $${\delta}(c)\in H^2\big(X,(U{{\cal L}ie}_+(\Omega_X^1))_{d+1}{\otimes}{\underline}{{\operatorname{End}}}(E)\big),$$ such that ${\delta}(c)=0$ if and only if $c$ lifts to a class in $H^1(X,GL_r({{\cal O}}^{(d+1)}))$ (note that here the sheaf ${\underline}{{\operatorname{End}}}(E)$ appears as the twisted version of the sheaf ${\operatorname{Mat}}_{r}({{\cal O}}_X)$). Furthermore, such liftings of $c$ form a homogeneous space for the natural action of $H^1\big(X,(U{{\cal L}ie}_+(\Omega_X^1))_{d+1}{\otimes}{\underline}{{\operatorname{End}}}(E)\big)$. Now we observe that by the usual GAGA, the analytification map induces isomorphisms $$H^i\big(X,(U{{\cal L}ie}_+(\Omega_X^1))_{d+1}{\otimes}{\underline}{{\operatorname{End}}}(E)\big)\to
H^i\big(X^{an},(U{{\cal L}ie}_+(\Omega_X^1)^{an})_{d+1}{\otimes}{\underline}{{\operatorname{End}}}(E^{an})\big)$$ for $i=1,2$. Since the analytification map is compatible with the above cohomological constructions, we derive the existence of $E^{(d+1)}$ with the desired properties.
(of Theorem \[nc-gaga-thm\]). In the context of Theorem \[nc-bundle-thm\], the pair consisting of a locally free ${{\cal O}}^{NC}$-module $E^{NC}$ together with an isomorphism $E^{nc}\to V$ is unique up to a unique isomorphism.
Analytic NC-Jacobian and $A_\infty$-structures {#ainf-sec}
==============================================
NC thickening of moduli space of line bundles.
----------------------------------------------
Let $Z$ be a compact complex manifold, and let $L$ be a holomorphic line bundle over $Z\times B$, where $B$ is another complex manifold. We assume that
($\star$) the Kodaira-Spencer map $\kappa: T_B \to H^1(Z,{{\cal O}}){\otimes}{{\cal O}}_B$ associated with the family $L$ is an isomorphism.
Set $E=\bigoplus_{i\ge 0}E^i:=H^\bullet(Z,{{\cal O}})$. We choose a Hermitian structure on $Z$, and use Hodge theory to obtain harmonic representatives of $E$ in the differential graded algebra $(\Omega^{0,*},{\overline{\partial}})$. Moreover, we have maps
$$\begin{diagram}
E &\pile{\rTo{i}\\ \lTo_p} & \Omega^{0,*}
\end{diagram}$$
such that $p\circ i={\operatorname{id}}$ and $i\circ p= {\operatorname{id}}+{\overline{\partial}}h+h {\overline{\partial}}$, where $h: \Omega^{0,*} \to \Omega^{0,*}$ is a homotopy operator. Using Kontsevich-Soibelman’s tree formula (see [@KoSo]) we get a minimal $A_\infty$-algebra structure on $E$.
Let us consider the (trivial) graded vector bundle over $B$, $${{\cal E}}=\bigoplus_{i\ge 0}{{\cal E}}^i:=E\otimes_{{\Bbb C}}{{\cal O}}_B.$$ The $A_\infty$-structure on $E$ induces by extension of scalars an $A_\infty$-algebra structure on ${{\cal E}}$, i.e., we have ${{\cal O}}_B$-linear morphisms $$m_k: {{\cal E}}^{\otimes k} {\rightarrow}{{\cal E}}$$ that satisfy the $A_\infty$-identities (with $m_1=0$).
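For instance (a standard consequence spelled out here only for orientation): since $m_1=0$, the $A_\infty$-identity with three inputs contains no surviving $m_1$- or $m_3$-terms and reduces to the associativity of $m_2$, $$m_2(m_2\otimes {\operatorname{id}})=m_2({\operatorname{id}}\otimes m_2),$$ so $({{\cal E}},m_2)$ is a sheaf of graded associative ${{\cal O}}_B$-algebras; here $m_2$ is the product induced on cohomology, i.e., the cup product on $H^\bullet(Z,{{\cal O}})$ extended ${{\cal O}}_B$-linearly.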
The bundle ${{\cal E}}$, being canonically trivialized, has a flat connection $\nabla$, and the maps $m_k$ are $\nabla$-horizontal. Let us denote by $m_i{\otimes}{\operatorname{id}}$ the operations on the holomorphic de Rham complex of ${{\cal E}}$, $${{\cal E}}_\Omega:={{\cal E}}\otimes_{{{\cal O}}_B}\Omega^*_B,$$ obtained from $m_i$ by extending scalars from ${{\cal O}}_B$ to $\Omega^*_B$. Let us set $$\tilde{m}_1=\nabla,\;\;\;\; \tilde{m}_k= m_k\otimes {\operatorname{id}}\mbox{\; for $k\geq 2$},$$ where in the first formula we extend $\nabla$ to a differential on the de Rham complex of ${{\cal E}}$ in the standard way. This is almost the $A_\infty$-structure on ${{\cal E}}_\Omega$ we want, but not quite: we are going to add an $m_0$ term.
For this we observe that there is a canonical element $\omega\in {{\cal E}}^1\otimes \Omega^1_B$. Indeed, by Assumption ($\star$), the bundle ${{\cal E}}^1=H^1(Z,{{\cal O}})\otimes_{{\Bbb C}}{{\cal O}}_B$ is canonically isomorphic to the tangent bundle of $B$. We let $\omega$ be the element corresponding to ${\operatorname{id}}\in T_B{\otimes}\Omega^1_B={\operatorname{End}}(T_B)$. In local Kuranishi coordinates (see for example [@Fukaya Section 1.5, 1.6]) $y_1,\ldots, y_n$ on $B$ corresponding to a basis $f_1,\ldots, f_n$ of $H^1(Z,{{\cal O}})$, the Kodaira-Spencer map is given by $$f_i \mapsto \partial/\partial y_i.$$ Thus, in these coordinates we have $\omega=\sum_{i=1}^n f_i{\otimes}dy_i$. Note that since $f_i$ are $\nabla$-horizontal, we have $$\label{nabla-omega-eq}
\nabla(\omega)=0.$$ Now we add a curvature term to the $A_\infty$-structure $\tilde{m}_k$ on ${{\cal E}}_\Omega$ by putting $\tilde{m}_0=\omega$.
The maps $\left\{\tilde{m}_k\right\}_{k=0}^{\infty}$ form a curved $A_\infty$-structure on the de Rham complex ${{\cal E}}_\Omega$, with respect to the total grading on ${{\cal E}}_\Omega$.
[[*Proof*]{}]{}. For each $N\geq 0$, we need to prove the $A_\infty$-identity $$\label{A-infty}
\sum_{i+l+j=N, k=i+j, l\geq 0} (-1)^{i+lj}
\tilde{m}_{k+1}({\operatorname{id}}^i\otimes \tilde{m}_l \otimes {\operatorname{id}}^j)=0.$$ Since the maps $\tilde{m}_n$ for $n\ge 2$ are $\nabla$-horizontal, using the $A_\infty$-identity for $(m_n)$ we obtain $$\sum_{i+l+j=N, k=i+j, l\geq 1} (-1)^{i+lj}
\tilde{m}_{k+1}({\operatorname{id}}^i\otimes \tilde{m}_l \otimes {\operatorname{id}}^j)=0.$$ To prove (\[A-infty\]) it remains to check the identity $$\sum_{i+j=k} (-1)^i \tilde{m}_{k+1}({\operatorname{id}}^i{\otimes}\omega{\otimes}{\operatorname{id}}^j)=0.$$ In the case $k=0$ this identity follows from the equation (\[nabla-omega-eq\]). To deal with the cases $k\geq 1$ we use the fact that the differential graded algebra $\Omega^{0,*}$ is (super)commutative. Thus, by Cheng-Getzler’s $C_\infty$ transfer theorem [@CG Theorem 12] the structure maps $m_k$ obtained by Kontsevich-Soibelman’s tree formula in fact form a $C_\infty$-structure, i.e., an $A_\infty$-structure that vanishes on the image of the shuffle product. Now we observe that the operator $\sum \tilde{m}_{k+1}({\operatorname{id}}^i{\otimes}\omega{\otimes}{\operatorname{id}}^j)$, when applied to an element $\alpha_1\otimes\cdots{\otimes}\alpha_k$, is equal (up to a sign) to $$\tilde{m}_{k+1}\big((\alpha_1{\otimes}\cdots{\otimes}\alpha_k)\bullet\omega\big),$$ where $\bullet$ denotes the shuffle product map. So the above sum vanishes by Cheng-Getzler’s theorem.
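To make the last step concrete, consider the simplest instance (an illustration on our part, with signs suppressed as above): for $k=1$ and a single input $\alpha$, the sum $\sum_{i+j=1}(-1)^i \tilde{m}_{2}({\operatorname{id}}^i{\otimes}\omega{\otimes}{\operatorname{id}}^j)(\alpha)$ is, up to sign, $$\tilde{m}_{2}(\alpha\bullet\omega),\qquad \alpha\bullet\omega=\alpha\otimes\omega\pm\omega\otimes\alpha,$$ and this vanishes because, by the $C_\infty$-property just recalled, $m_2$ vanishes on the image of the shuffle product.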
In the case when $Z$ is Kähler, by [@DGMS], the dg-algebra $({\Omega}^{0,*},{\overline{\partial}})$ is formal. Hence, in this case we can assume that $m_k=0$ for $k>2$, so ${{\cal E}}_\Omega$ is a curved dg-algebra.
Thus, ${{\cal E}}_\Omega$ is a (curved) $A_\infty$-algebra with structure maps $\left\{\tilde{m}_k\right\}_{k=0}^{\infty}$. We view ${{\cal E}}_\Omega$ as a strictly unital $A_\infty$-algebra over $\Omega^*_B$ via the canonical embedding $$\Omega^*_B\to {{\cal E}}_\Omega=H^*(Z,{{\cal O}})\otimes_{{\Bbb C}}\Omega^*_B$$ defined by $\alpha \mapsto {\operatorname{id}}_{{\cal O}}\otimes \alpha$.
The canonical projection ${{\cal E}}_\Omega \to H^0(Z,{{\cal O}})\otimes \Omega_B^*=\Omega_B^*$ is an augmentation of ${{\cal E}}_\Omega$ as an $A_\infty$-algebra over $\Omega^*_B$.
[[*Proof*]{}]{}. This follows from the fact that the $A_\infty$-algebra $E=H^*(Z,{{\cal O}})$ is augmented, and that the curvature $\omega$ maps to zero under this projection map.
Let us denote by $\overline{E}:=\oplus_{k\geq 1}H^k(Z,{{\cal O}})$ the augmentation ideal. Similarly, we set $\overline{{{\cal E}}}:= \overline{E}\otimes_{{\Bbb C}}{{\cal O}}$.
Let ${{\cal B}}{{\cal E}}_\Omega$ be the Bar construction of the augmented $A_\infty$-algebra ${{\cal E}}_\Omega$ over $\Omega^*_B$. This is a differential graded coalgebra over $\Omega_B^*$. We define $({{\cal E}}_\Omega)^!$, the Koszul dual algebra of ${{\cal E}}_\Omega$ to be the dual of ${{\cal B}}{{\cal E}}_\Omega$, i.e. we have $$({{\cal E}}_\Omega)^!:= ({{\cal B}}{{\cal E}}_\Omega)^\vee=\hat{T}_\Omega (\Omega\otimes_{{\Bbb C}}\overline{E}^\vee[-1]),$$ where $\Omega=\Omega^*_B$.
We are going to describe the sheaf of algebras $\underline{H}^0\big( ({{\cal E}}_\Omega)^!\big)$ over $B$. In particular, we will see that if $H^2(Z,{{\cal O}})=0$ then the algebra $\underline{H}^0\big( ({{\cal E}}_\Omega)^!\big)$ is a smooth NC thickening of $B$. We first exhibit the differential graded algebra $({{\cal E}}_\Omega)^!$ as a double complex in the second quadrant. For this observe that the complex $({{\cal E}}_\Omega)^!$ is bigraded by ${{\Bbb Z}}^{\leq 0}\times {{\Bbb N}}$ where the first grading is induced from that of $E$, and the second grading is simply the degree of differential forms on $B$. More precisely, an element $\alpha\in H^i(Z,{{\cal O}})^\vee\otimes \Omega_B^j$ has bidegree $(1-i,j)$, so that the bigraded components of $({{\cal E}}_\Omega)^!$ are $${{\cal A}}^{i,j}:= \bigoplus_{k-(r_1+\ldots+r_k)=i, \;\; r_1,\ldots,r_k\geq 2} \big(\hat{T} {\otimes}({{\cal E}}^{r_1})^\vee {\otimes}\cdots {\otimes}({{\cal E}}^{r_k})^\vee{\otimes}\hat{T}\big)\otimes \Omega^{j},$$ where $\hat{T}=\hat{T}_{{\cal O}}(({{\cal E}}^1)^\vee)$. The complex $({{\cal E}}_\Omega)^!$ may be depicted in the $(i,j)$-plane as follows.
$$\minCDarrowwidth15pt\begin{CD}
@AAA @A D_\nabla AA @. @A D_\nabla AA @A D_\nabla AA @.\\
\cdots @>d_m>> {{\cal A}}^{i,j} @>>>\cdots @>>> {{\cal A}}^{-1,j} @>d_m>> {{\cal A}}^{0,j} @>>> 0\\
@AAA @A D_\nabla AA @. @A D_\nabla AA @A D_\nabla AA @.\\
\vdots @>d_m>> \vdots @>>> \cdots @>>> \vdots @>d_m>> \vdots @>>> 0\\
@AAA @A D_\nabla AA @. @A D_\nabla AA @A D_\nabla AA @.\\
\cdots @>d_m>> {{\cal A}}^{i,0} @>>> \cdots @>>> {{\cal A}}^{-1,0} @>d_m>> {{\cal A}}^{0,0} @>>> 0\\
\end{CD}$$ Here the vertical differential $D_\nabla$ combines the connection operator $\nabla$ with the Koszul dual differential $d_\omega$ defined by the curvature term $\omega$, while the horizontal differential $d_m$ is the operator defined by the $A_\infty$-structure on $E$.
Since the double complex ${{\cal A}}^{*,*}$ is bounded vertically by the dimension of $B$, we can calculate the cohomology of the total complex using the spectral sequence associated to the vertical filtration of ${{\cal A}}^{*,*}$. Let us first consider the rightmost vertical complex $$\big( {{\cal A}}^{0,*}, D_\nabla \big).$$ We claim that its cohomology is concentrated in degree $0$. Indeed, note that ${{\cal A}}^{0,*}=\Omega^*\otimes \hat{T}_{{\cal O}}(({{\cal E}}^1)^\vee)$. By Assumption ($\star$), the dual of the Kodaira-Spencer map is an isomorphism $({{\cal E}}^1)^\vee\cong \Omega^1_B$. Moreover, under this identification the operator $d_\omega$ is the $\Omega^*$-linear derivation acting on generators by $1\otimes \alpha \mapsto \alpha {\otimes}1$. By results in Section \[connection-constr-sec\], we know this complex has cohomology concentrated in degree $0$, which is by definition a smooth NC thickening of $B$. As before we denote this $0$th cohomology by ${{\cal O}}^{nc}$.
Now consider the vertical complex ${{\cal A}}^{i,*}$ with $i<0$. Note that the operator $d_\omega$ acts on $({{\cal E}}^{r})^\vee$ by zero if $r\geq 2$. Hence, the vertical cohomology of $\nabla+d_\omega$ is also concentrated in degree zero, and is equal to $$\bigoplus_{k-(r_1+\ldots+r_k)=i, \;\; r_1,\ldots,r_k\geq 2} {{\cal O}}^{nc} {\otimes}_{{\Bbb C}}(E^{r_1})^\vee {\otimes}_{{\Bbb C}}\cdots {\otimes}_{{\Bbb C}}(E^{r_k})^\vee{\otimes}_{{\Bbb C}}{{\cal O}}^{nc}$$ at degree $(i,0)$. This implies that the first page of the spectral sequence is of the form $$\minCDarrowwidth15pt\begin{CD}
\cdots \bigoplus {{\cal O}}^{nc} {\otimes}_{{\Bbb C}}(E^{r_1})^\vee {\otimes}_{{\Bbb C}}\cdots {\otimes}_{{\Bbb C}}(E^{r_k})^\vee{\otimes}_{{\Bbb C}}{{\cal O}}^{nc} @> d_m>> \cdots @>d_m>> {{\cal O}}^{nc}{\otimes}(E^2)^\vee {\otimes}{{\cal O}}^{nc} @>>> {{\cal O}}^{nc}.
\end{CD}$$ Therefore, the spectral sequence degenerates on the next page. In particular, we get the following result.
Let $R$ be the image of $1\otimes (E^2)^\vee \otimes 1$ in ${{\cal O}}^{nc}$ under the morphism $d_m$. Then we have $$\underline{H}^0\big(({{\cal E}}_\Omega)^!\big) \cong {{\cal O}}^{nc}/\langle R \rangle$$ where $\langle R \rangle$ is the two-sided ideal generated by $R$.
**A.** Assume the algebra $H^*(Z,{{\cal O}})$ is the exterior algebra in $H^1(Z,{{\cal O}})$, and the higher products $m_k$ vanish (e.g., this is true if $Z$ is a complex torus). Then $R$ is generated by elements of the form $e_i\otimes e_j-e_j\otimes e_i=[e_i,e_j]$, where $e_1,\ldots, e_n$ is a basis of $(H^1(Z,{{\cal O}}))^\vee$. Hence, we found that $$\underline{H}^0\big(({{\cal E}}_\Omega)^!\big) \cong {{\cal O}}^{nc}_{ab}={{\cal O}}_B.$$
**B.** Assume that $H^2(Z,{{\cal O}})=0$. Then $R=0$, so we get $$\underline{H}^0\big(({{\cal E}}_\Omega)^!\big)={{\cal O}}^{nc}.$$
In general the sheaf of algebras $\underline{H}^0\big(({{\cal E}}_\Omega)^!\big)$ might be quite complicated. Nevertheless, we record the following elementary property.
The ideal $\langle R\rangle$ is contained in $F^1{{\cal O}}^{nc}$. Thus, ${\underline}{H}^0\big(({{\cal E}}_\Omega)^!\big)$ is an NC-thickening of $B$, where the morphism $\underline{H}^0\big(({{\cal E}}_\Omega)^!\big)\twoheadrightarrow {{\cal O}}_B$ is induced by the projection ${{\cal O}}^{nc}\twoheadrightarrow {{\cal O}}_B$.
[[*Proof*]{}]{}. Let $r=d_m(t)$ for some $t\in (E^2)^\vee$. Then we have $r\in\hat{T}^{\geq 2} \big((E^1)^\vee\big)$ since $d_m$ is dual to the $A_\infty$-structure maps $m_2, m_3,\ldots$ which all have two or more inputs. Now the assertion follows from Proposition \[filtration-prop\](i).
Analytic construction of the NC Poincaré bundle.
------------------------------------------------
In this subsection we specialize to the case when $Z=C$ is a smooth curve, and $B=J(C)$ (or simply $J$) is the Jacobian variety of $C$. Let ${{\cal P}}$ be a universal line bundle over $C\times J$. We will give an analytic construction of an NC-thickening of ${{\cal P}}$, and compare it with the algebraic thickening constructed in Section \[NC-Poincare-sec\].
For a point $L\in J(C)$, we first give an analytic description of the universal line bundle ${{\cal P}}|_{C\times V}$ for some analytic neighborhood $V$ of $L$. The line bundle $L$ over $C$ has a resolution by a two term complex of the form $$R_L:=\Omega_C^{0,0} \stackrel{\overline{\partial}+ a_L}{\longrightarrow} \Omega_C^{0,1}$$ for some harmonic element $a_L\in {\Omega}_C^{0,1}$.
A local family of deformations of $L$ can be described as follows. We choose a basis $f_1,\ldots, f_n$ of the vector space $H^1(C,{{\cal O}})$, and let $y_1,\ldots, y_n$ be the dual linear coordinates on this vector space. We consider $H^1(C,{{\cal O}})$ as a subspace of $\Omega_C^{0,1}$ consisting of harmonic forms. Let $V$ be a small neighborhood of the origin in $H^1(C,{{\cal O}})$. Let us consider the following family of two-term complexes parametrized by $(y_1,\ldots, y_n)\in V$: $$\begin{CD}
R_{L(y)}:=\Omega_C^{0,0} @>\overline{\partial}+ a_L+\sum_i y_i f_i>> \Omega_C^{0,1}.
\end{CD}$$ Note that $R_{L(y)}$ is a resolution of a deformation $L(y)$ of $L$. To describe the bundle ${{\cal P}}|_{C\times V}$ we use the relative Dolbeault complex $$\begin{CD}
R_{{{\cal P}}|_{C\times V}}:= \Omega_\pi^{0,0} @>\overline{\partial}+ a_L+\sum_i y_i f_i>> \Omega_\pi^{0,1}
\end{CD}$$ where ${\Omega}^{0,*}_\pi$ are differential forms in $C$-direction, holomorphic in $V$-direction: $$\Omega^{0,*}_\pi:= \left\{ s\in q^*\Omega_C^{0,*} | \overline{\partial}_{y_i} s=0, \forall i=1,\ldots, n. \right\}$$ where $\pi: C\times V {\rightarrow}V$ and $q: C\times V{\rightarrow}C$ are the projections. Note that $\Omega^{0,*}_\pi$ is a sheaf of ${{\cal O}}_{C\times V}$-modules, and that it has a relative holomorphic flat connection $\nabla$ in the $V$-direction, i.e., an operator $$\nabla: \Omega^{0,*}_\pi {\rightarrow}\Omega^{0,*}_\pi\otimes \pi^*\Omega^1_V$$ satisfying the Leibniz rule and such that $\nabla^2=0$. However, $\nabla$ does not commute with the differential $\overline{\partial}+ a_L+\sum_i y_i f_i$ on $\Omega^{0,*}_\pi$, so it does not induce a relative flat connection on ${{\cal P}}|_{C\times V}$.
Note that the connection $\nabla$ does commute with $\overline{\partial}+a_L$, and that the complex $$\begin{CD}
R_{q^*L}:= \Omega_\pi^{0,0} @>\overline{\partial}+a_L>> \Omega_\pi^{0,1}
\end{CD}$$ is the Dolbeault resolution of the line bundle $q^*L$. The induced connection in $V$-direction on $q^*L$ coincides with the obvious one.
The relative flat connection $\nabla$, by the construction in subsection \[D-mod-sec\], induces a partial analytic $NC$-smooth thickening of $C\times V$ in the $V$-direction. By definition this partial thickening is $\underline{H}^0$ of a differential graded algebra $\pi^*{{\cal A}}_V$ where ${{\cal A}}_V=\Omega^*_V\otimes_{{{\cal O}}_V}\hat{T}(\Omega^1_V)$. Its differential is the (analytic) NC-connection $D_\nabla$ associated to the flat connection $\nabla$. Viewing $R_{q^*L}$ as a complex of $\pi^{-1}D_V$-modules, by the construction of Definition \[D-mod-defi\], we get a structure of a right dg-module over $\pi^*{{\cal A}}_V$ on the completed tensor product $$R_{q^*L}\hat{\otimes} \pi^*{{\cal A}}_V=R_{q^*L} \hat{\otimes}_{{{\cal O}}_{C\times V}} \pi^*\big(\Omega^*_V\otimes_{{{\cal O}}_V}\hat{T}(\Omega^1_V)\big)$$ giving a thickening of $q^*L$. To obtain a thickening of ${{\cal P}}|_{C\times V}$, we replace the tensor product differential $\overline{\partial}+a_L+D_\nabla$ with $$\label{deformed-differential-eq}
\overline{\partial}+ a_L+D_\nabla+\sum_i (y_i-e_i) f_i,$$ where $e_i:=dy_i$ are horizontal sections of $\Omega^1_V{\subset}\hat{T}(\Omega^1_V)$, and the sum $\sum_i (y_i-e_i) f_i$ acts by left multiplication. Let us denote by $$R_{{{\cal P}}|_{C\times V}}\hat{{\otimes}}\pi^*{{\cal A}}_V$$ this complex with the differential given by (\[deformed-differential-eq\]).
The complex $R_{{{\cal P}}|_{C\times V}}\hat{{\otimes}}\pi^*{{\cal A}}_V$ is a right dg-module over $\Omega^{0,*}_\pi\hat{{\otimes}}\pi^*{{\cal A}}_V$.
[[*Proof*]{}]{}. This follows from the fact that $$\label{MC-element-eq}
{\alpha}=a_L+\sum_i (y_i-e_i) f_i$$ is a Maurer-Cartan element for the dg-algebra $$(\Omega^{0,*}_\pi\hat{{\otimes}}\pi^*{{\cal A}}_V, {\overline}{{\partial}}+D_\nabla).$$
\[vanishing-rem\] Theorem \[coh-vanishing\] in the relative setting implies that $\underline{H}^i(\pi^*{{\cal A}}_V)=0$ for $i>0$. Let us set ${{\cal O}}_{C\times V^{nc}}:= \underline{H}^0(\pi^*{{\cal A}}_V)$. This is an NC-thickening of ${{\cal O}}_{C\times V}$ in the $V$-direction.
\[analytic-constr-prop\] The dg-module $R_{{{\cal P}}\mid_{C\times V}}\hat{{\otimes}}\pi^*{{\cal A}}_V$ is locally trivial of rank one over the dg-algebra $\Omega^{0,*}_\pi\hat{{\otimes}}\pi^*{{\cal A}}_V$.
[[*Proof*]{}]{}. By definition, this dg-module is the rank one twisted complex associated with the Maurer-Cartan element ${\alpha}$ given by (\[MC-element-eq\]) of the dg-algebra $A:=\Omega^{0,*}_\pi\hat{{\otimes}}\pi^*{{\cal A}}_V$ whose differential is $\overline{\partial}+D_\nabla$. Thus, it is enough to prove that the Maurer-Cartan element $\alpha$ is locally gauge equivalent to zero. Let $U\subset C$ be a Stein open subset. We will prove that the Maurer-Cartan moduli space of the dg algebra $\Gamma(U\times V, A)$ is just a one-point set. Thus every Maurer-Cartan element of it is gauge equivalent to zero.
For this, we make use of the following well-known result. See for example [@Getzler Theorem 2.1].
\[mc-lem\] Let $h=F^0 h\supset F^1 h \supset F^2 h \supset \ldots$ and $g=F^0g \supset F^1g\supset F^2g\supset \ldots$ be filtered dg Lie algebras (that is, $dF^ih\subset F^ih$ and $[F^ih,F^jh]\subset F^{i+j+1}h$, and likewise for $g$) such that $h$ and $g$ are complete with respect to the filtrations. Let $f: h{\rightarrow}g$ be a morphism of filtered dg Lie algebras which induces weak equivalences of the associated chain complexes $${\operatorname{gr}}^i f: F^ih/F^{i+1}h {\rightarrow}F^ig/F^{i+1}g.$$ Then $f$ induces a bijection between the associated Maurer-Cartan moduli spaces.
Note that for a dg algebra $A$, by definition, its Maurer-Cartan moduli space is the same as that of the dg Lie algebra $A^{{\operatorname{Lie}}}$. We apply the above Lemma to the case $h=\Gamma(U\times V, {{\cal O}}_{C\times V^{nc}})^{{\operatorname{Lie}}}$ and $g=\Gamma(U\times V, A)^{{\operatorname{Lie}}}$, both endowed with the $F$-filtration. One checks easily that the $F$-filtration defines a filtered dg Lie algebra structure on $h$ and $g$. The morphism $f: h{\rightarrow}g$ is the canonical embedding. Moreover, by Remark \[vanishing-rem\], $f$ satisfies the property required in Lemma \[mc-lem\]. It follows that the Maurer-Cartan moduli space of $g$ is a point, since the degree-one part of $h$ is zero.
The sheaf $\underline{H}^0(R_{{{\cal P}}|_{C\times V}}\hat{{\otimes}}\pi^*{{\cal A}}_V)$ is a locally free ${{\cal O}}_{C\times V^{nc}}$-module of rank one. Moreover, it is an NC thickening of the line bundle ${{\cal P}}|_{C\times V}$.
[[*Proof*]{}]{}. The first assertion follows immediately from the previous proposition. For the second, we observe that the Maurer-Cartan element ${\alpha}=a_L+\sum_i (y_i-e_i) f_i$ maps down to $a_L+\sum_i y_i f_i$ under the canonical projection map $$\Omega^{0,*}_\pi\hat{{\otimes}}\pi^*{{\cal A}}_V{\rightarrow}\Omega^{0,*}_\pi.$$ Twisting $\Omega^{0,*}_\pi$ by the latter Maurer-Cartan element defines the line bundle ${{\cal P}}|_{C\times V}$.
The construction of an NC thickening of ${{\cal P}}|_{C\times V}$ can be made global over $C\times J$.
[**Construction**]{}. Recall that the Jacobian $J$ is isomorphic to $H^1({{\cal O}})/H^1(C,{{\Bbb Z}})$. Since we have chosen coordinates $y_1,\ldots,y_n$ on $H^1({{\cal O}})$, we can cover $J$ by open subsets $V_{\alpha}$ with coordinates on them that only differ by translations.
Let us first recall the gluing data of the usual Poincaré bundle over $C\times J$. Locally over $C\times V_{\alpha}$, with affine coordinates $y_i^\alpha$ centered at the point $L_\alpha$, the Poincaré bundle is isomorphic to $\underline{H}^0$ of the complex $$\big(\Omega^{0,*}_\pi, \overline{\partial}+a_{L_\alpha}+ \sum_i y^\alpha_i f_i\big).$$ Over the intersection $C\times (V_{\alpha}\cap V_\beta)$ there is a transition isomorphism $$\phi_{\alpha\beta}: \big(\Omega^{0,*}_\pi, \overline{\partial}+a_{L_\alpha}+ \sum_i y^\alpha_i f_i\big) {\rightarrow}\big(\Omega^{0,*}_\pi, \overline{\partial}+a_{L_\beta}+ \sum_i y^\beta_i f_i\big).$$ Since the two coordinates $y^\alpha$ and $y^\beta$ only differ by translation, the function $\phi_{\alpha\beta}$ is a function pulled back from $C$. The collection of transition isomorphisms $\left\{ \phi_{\alpha\beta} \right\}$ satisfies the cocycle condition.
In the NC case, recall that over $C\times V_\alpha$ we have the complex $$\big(\Omega_\pi^{0,*}\hat{{\otimes}}\pi^*{{\cal A}}_{V_\alpha}, \overline{\partial}+ a_{L_\alpha}+\nabla+d_\omega+\sum_i (y^\alpha_i-e_i) f_i\big).$$ Similarly on $C\times V_\beta$, the differential is $$\overline{\partial}+ a_{L_\beta}+\nabla+d_\omega+\sum_i (y^\beta_i-e_i) f_i.$$ The transition functions $\phi_{\alpha\beta}$ of the Poincaré bundle lift to $\phi_{\alpha\beta}{\otimes}{\operatorname{id}}$, which still satisfy the cocycle condition. Thus, the local dg-modules $R_{{{\cal P}}|_{C\times V_\alpha}}\hat{{\otimes}}\pi^*{{\cal A}}_{V_\alpha}$ can be pasted together to obtain a global dg $\pi^*{{\cal A}}_J$-module which we denote by $R_{{\cal P}}\hat{{\otimes}}\pi^*{{\cal A}}_J$. Taking $\underline{H}^0$ gives an NC thickening of the Poincaré bundle ${{\cal P}}$ over $C\times J$, which we denote by ${{\cal P}}^{nc}$.
Let $p\in C$ be a fixed point on $C$, and assume that ${{\cal P}}$ is trivialized over $p\times J$. Then ${{\cal P}}^{nc}$ is also naturally trivialized over $p\times J^{nc}$.
[[*Proof*]{}]{}. We defined the transition functions of ${{\cal P}}^{nc}$ to be simply $\left\{\phi_{\alpha\beta}{\otimes}{\operatorname{id}}\right\}$. It follows that if we start with the Poincaré bundle ${{\cal P}}$ which is trivialized over $p\times J$, then so will be ${{\cal P}}^{nc}$.
\[analytic-Poincare-id-prop\] There exists an automorphism $\phi: J^{nc} {\rightarrow}J^{nc}$, trivial on the abelianization, such that the analytic Poincaré bundle ${{\cal P}}^{nc}$ on $C\times J^{nc}$ constructed above is isomorphic to the analytification $({\operatorname{id}}_C\times\phi)^*{{\cal P}}^{NC, an}$, where ${{\cal P}}^{NC}$ is constructed in Section \[NC-Poincare-sec\].
[[*Proof*]{}]{}. Indeed, this follows from the universality of ${{\cal P}}^{NC}$ combined with the results of Section \[GAGA-sec\].
NC integral transforms.
-----------------------
By Proposition \[analytic-Poincare-id-prop\], we can rewrite (up to an automorphism of $J^{nc}$) the NC-Fourier-Mukai transform $F^{NC}$ from Section \[NC-Poincare-sec\] as the functor associated with the kernel $P^{nc}$ over $C\times J^{nc}$, $$\Phi_{P^{nc}}: D^b(C) {\rightarrow}{{\cal D}}(J^{nc}),$$ where for a ringed space we set ${{\cal D}}(X)=D({\operatorname{mod}}-{{\cal O}}_X)$. Explicitly, we have $$\Phi_{P^{nc}}(E):= R\pi_*\big(q^*E\otimes {{\cal P}}^{nc}\big).$$
Let us fix a line bundle $L\in J$. Consider the formal neighborhood of $L$ in $J$, which can be identified with the formal polydisc ${\Delta}={\operatorname{Spf}}{{\Bbb C}}[[y_1,\ldots, y_n]]$, using the affine coordinates $(y_i)$. The construction of the functor $\Phi_{P^{nc}}$ can be modified by replacing the NC manifold $J^{nc}$ with a formal one, ${\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}$ (where we take the formal spectrum of an NC-complete algebra in the sense introduced by Kapranov [@Kapranov Def. 2.2.2]). Namely, the formal scheme ${\Delta}={\operatorname{Spf}}{{\Bbb C}}[[y_1,\ldots,y_n]]$ has a torsion-free flat connection such that $\nabla(\partial/\partial y_i)=0$. This gives a differential graded resolution ${{\cal A}}_{{\Delta}}$ of the algebra ${{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}$ with differential $D_\nabla$. It remains to construct a kernel ${{\cal P}}^{nc}_L$ over $C\times {\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots,y_n}{\rangle\rangle}$. As before, we denote by $q: C\times {\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}{\rightarrow}C$ and $\pi:C\times {\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}{\rightarrow}{\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}$ the natural projections. Similarly to the construction of ${{\cal P}}^{nc}$, we define ${{\cal P}}_L^{nc}$ as $\underline{H}^0$ of the complex $$\Omega^{0,*}_\pi\hat{{\otimes}}\pi^*{{\cal A}}_{{\Delta}}$$ endowed with the differential $\overline{\partial}+a_L+D_\nabla+\sum_i(y_i-e_i)f_i$. The kernel ${{\cal P}}_L^{nc}$ induces a Fourier-Mukai functor $$\Phi_{L}: D^b(C) {\rightarrow}{{\cal D}}({\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}).$$
In [@P-BN Sec. 2.2-2.4] the first named author constructed another functor $${{\cal F}}_L: D^b(C) {\rightarrow}{{\cal D}}({\operatorname{Spf}}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle})$$ using the $A_\infty$-structure on $D^b(C)$. More precisely, ${{\cal F}}_L(E)=(H^*(C,E{\otimes}L){\otimes}{{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}, d)$ where the differential $d$ is constructed using the structure of an $A_{\infty}$-module on $H^*(C,E{\otimes}L)={\operatorname{Ext}}^*(E^\vee,L)$ over the $A_\infty$-algebra ${\operatorname{Ext}}^*_C(L,L)$, by passing to the (completed) Koszul dual (see [@P-BN (2.1.2)]). Since ${\operatorname{Ext}}^2_C(L,L)=0$, this $A_\infty$-algebra has trivial products[^4], so its Koszul dual is just ${{\Bbb C}}{\langle\langle}{y_1,\ldots, y_n}{\rangle\rangle}$.
Let $E\in D^b(C)$ be a bounded complex of locally free sheaves. Then there is a quasi-isomorphism $${{\cal F}}_L(E) {\rightarrow}\Phi_L(E).$$
[[*Proof*]{}]{}. To compute $R\pi_*$ we use the relative Dolbeault resolution of $q^*E\otimes {{\cal P}}_L^{nc}$. Thus, $\Phi_L(E)$ is represented by the complex $$\pi_*(q^*E{\otimes}{\Omega}^{0,*}_\pi\hat{{\otimes}} \pi^*{{\cal A}}_{{\Delta}})=\pi_*(q^*E\otimes\Omega^{0,*}_\pi) \hat{{\otimes}} {{\cal A}}_{{\Delta}}.$$ This complex is endowed with the differential $d_E+\overline{\partial}+a_L+D_\nabla+\sum_i(y_i-e_i)f_i$. We separate this differential into two parts by putting $$Q:=d_E+\overline{\partial}+a_L+D_\nabla, \mbox{\;\; and \;\;} \delta:=\sum_i(y_i-e_i)f_i.$$ We view $\delta$ as a perturbation added to the differential $Q$, and we are going to apply the Homological Perturbation Lemma (see [@HK (1.1)]) to a certain homotopy retraction of $Q$. Note that the differential $Q$ consists of two parts: the first part $d_E+\overline{\partial}+a_L$ acting on $\pi_*(q^*E\otimes\Omega^{0,*}_\pi)$, and the second part $D_\nabla$ acting on ${{\cal A}}_{{\Delta}}$. The complex $\pi_*(q^*E\otimes\Omega^{0,*}_\pi)$, as a vector space, is equal to $$\Omega^{0,*}(E){\otimes}_{{\Bbb C}}{{\Bbb C}}[[y_1,\ldots,y_n]].$$ The differential $d_E+\overline{\partial}+a_L$ only acts on $\Omega^{0,*}(E)$ and the corresponding cohomology is $H^*(C,E{\otimes}L)$. Thus, the cohomology of the complex $$\big( \Omega^{0,*}(E){\otimes}_{{\Bbb C}}{{\Bbb C}}[[y_1,\ldots,y_n]], d_E+\overline{\partial}+a_L\big)$$ is equal to $H^*(C,E{\otimes}L)\otimes_{{{\Bbb C}}} {{\Bbb C}}[[ y_1,\ldots, y_n]]$. Choosing Hermitian metrics and using harmonic representatives we obtain a homotopy retraction
$$\begin{diagram}
H^*(C,E{\otimes}L) &\pile{\rTo{i}\\ \lTo_p} & \big(\Omega^{0,*}(E), d_E+\overline{\partial}+a_L\big)
\end{diagram}$$
such that $p\circ i={\operatorname{id}}$ and $i\circ p= {\operatorname{id}}+(d_E+\overline{\partial}+a_L)h+h(d_E+\overline{\partial}+a_L)$ where $h:\Omega^{0,*}(E){\rightarrow}\Omega^{0,*}(E)$ is a homotopy. Extending the coefficients to ${{\Bbb C}}[[ y_1,\ldots, y_n]]$ yields a homotopy retraction whose morphisms we still denote by $i$, $p$, and $h$:
$$\begin{diagram}
H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\Bbb C}}[[ y_1,\ldots, y_n]] &\pile{\rTo{i}\\ \lTo_p} & \Omega^{0,*}(E)\otimes_{{\Bbb C}}{{\Bbb C}}[[y_1,\ldots,y_n]]=\pi_*(q^*E{\otimes}{\Omega}^{0,*}_\pi)
\end{diagram}$$
Moreover, since the maps $i$, $p$ and $h$ are obtained by extending coefficients to $ {{\Bbb C}}[[ y_1,\ldots, y_n]]$, these are morphisms of $D$-modules (in variables $y_1,\ldots,y_n$), where the left-hand side is equipped with the flat connection so that elements in $H^*(C,E{\otimes}L)\otimes 1$ are horizontal, and the $D$-module structure on the right-hand side is defined similarly. Thus, applying the construction in Theorem \[module-thm\], the maps $i$, $p$ and $h$ further induce a homotopy retraction between the corresponding dg-modules over ${{\cal A}}_{{\Delta}}$, $$\label{filtered-hom-retract}
\begin{diagram}
H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\cal A}}_{{\Delta}} &\pile{\rTo{i}\\ \lTo_p} & \pi_*(q^*E{\otimes}{\Omega}^{0,*}_\pi)\hat{{\otimes}} {{\cal A}}_{{\Delta}}.
\end{diagram}$$ The differential on the left-hand side is $D_\nabla$, while the differential on the right-hand side is exactly $Q=d_E+\overline{\partial}+a_L+D_\nabla$. Let us equip ${{\cal A}}_{{\Delta}}={\Omega}^\bullet_{{{\Bbb C}}[[y_1,\ldots,y_n]]/{{\Bbb C}}}{\langle\langle}e_1,\ldots,e_n{\rangle\rangle}$ with the decreasing filtration associated with the grading given by $\deg(y_i)=\deg(dy_i)=\deg(e_i)=1$. Then \[filtered-hom-retract\] is a filtered homotopy retraction to which we can apply the Homological Perturbation Lemma: adding the perturbation $\delta=\sum_i(y_i-e_i)f_i$ to $Q$ yields a perturbed differential on $H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\cal A}}_{{\Delta}}$ given by $$D_\nabla+ p \delta i +p\delta h\delta i +p\delta h\delta h\delta i+\ldots.$$ Moreover, the Homological Perturbation Lemma also yields a perturbation $i'$ of the morphism $i$ so that it is still a quasi-isomorphism $$\big(H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\cal A}}_{{\Delta}}, D_\nabla+ p \delta i +\ldots\big) \stackrel{i'}{{\rightarrow}} \big(\pi_*(q^*E{\otimes}{\Omega}^{0,*}_\pi)\hat{{\otimes}} {{\cal A}}_{{\Delta}},Q+\delta\big)$$ between the perturbed complexes. To prove our assertion we rewrite the infinite series $p \delta i +p\delta h\delta i +p\delta h\delta h\delta i+\ldots$ in terms of the structure of an $A_\infty$-module over ${\operatorname{Ext}}^*(L,L)$ on $H^*(C,E{\otimes}L)={\operatorname{Ext}}^*(E^\vee,L)$, as in the definition of ${{\cal F}}_L(E)$ (see [@P-BN Section 2.4]). Here it is crucial that we use Kontsevich-Soibelman’s tree formula to form these $A_\infty$-structures, since, as was shown in [@Markl], this tree formula agrees with the ordinary homological perturbation formula. This implies an identification of operators acting on $H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\cal A}}_{{\Delta}}$: $$\label{series-b-nc-eq}
p \delta i +p\delta h\delta i +p\delta h\delta h\delta i+\ldots = \rho_1(b^{nc})+\rho_2(b^{nc},b^{nc})+\ldots$$ where $$\rho_k: {\operatorname{Ext}}^*(L,L)^{{\otimes}k}\to {\operatorname{End}}(H^*(C,E{\otimes}L))$$ are the operators giving the $A_\infty$-module structure on $H^*(C,E{\otimes}L)$, extended to $H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\cal A}}_{{\Delta}}$ by ${{\cal A}}_{{\Delta}}$-linearity, and $b^{nc}:=\sum_i(y_i-e_i) f_i$. Recall (see Sec. \[NC-affine-space-sec\]) that the cohomology of $({{\cal A}}_{\Delta}, D_\nabla)$ is concentrated in degree zero and is given by the subalgebra $$K:={{\Bbb C}}{\langle\langle}y_1-e_1,\ldots,y_n-e_n{\rangle\rangle}{\subset}{{\Bbb C}}[[y_1,\ldots,y_n]]{\langle\langle}e_1,\ldots,e_n {\rangle\rangle}.$$ It is easy to see (using the filtration introduced above) that the embedding of a subcomplex $$\big(H^*(C,E{\otimes}L){\otimes}K, p \delta i +p\delta h\delta i +p\delta h\delta h\delta i+\ldots\big) {\hookrightarrow}\big(H^*(C,E{\otimes}L)\otimes_{{\Bbb C}}{{\cal A}}_{{\Delta}}, D_\nabla+ p \delta i +\ldots\big)$$ is a quasi-isomorphism. It remains to observe that the right-hand side of \[series-b-nc-eq\] gives the same differential on $H^*(C,E{\otimes}L){\otimes}{{\Bbb C}}{\langle\langle}y_1-e_1,\ldots,y_n-e_n{\rangle\rangle}$ as in the definition of ${{\cal F}}_L(E)$.
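For the reader’s convenience we record, as a sketch in the notation above, the closed form of the perturbed data produced by the Homological Perturbation Lemma (this is only the standard statement, cf. [@HK]): the perturbed differential and the perturbed quasi-isomorphism are $$D'=D_\nabla+p\,\delta\,(1-h\delta)^{-1}i=D_\nabla+p\delta i+p\delta h\delta i+\ldots, \qquad i'=(1-h\delta)^{-1}i=i+h\delta i+h\delta h\delta i+\ldots,$$ where the geometric series make sense because $\delta=\sum_i(y_i-e_i)f_i$ strictly increases the filtration degree introduced above.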
[99]{} D. Allcock, J. Carlson, D. Toledo, [*Complex Hyperbolic Structure for Moduli of Cubic Surfaces*]{}, C. R. Acad. Sci. Paris Sér. I, 326 (1998), 49–54. D. Arinkin, J. Bloch, T. Pantev, [*-Quantizations of Fourier-Mukai transforms*]{}, arXiv:1101.0626 M. Atiyah, [*Complex analytic connections in fibre bundles*]{}, Trans. AMS 85 (1957), 181–207. V. Baranovsky, V. Ginzburg, J. Pecharich, [*Deformation of line bundles on coisotropic subvarieties*]{}, Quarterly Journal of Mathematics, Volume 63, Number 3, 10 September 2012, pp. 525-537(13) O. Ben-Bassat, J. Bloch, T. Pantev, [*Non-commutative tori and Fourier-Mukai duality*]{}, Compositio Math. 143, 423–475, 2007. X.-Z. Cheng, E. Getzler, [*Transferring homotopy commutative algebraic structures*]{}, Journal of Pure and Applied Algebra, 212 (2008), 2535–2542. G. Cortiñas, [De Rham and infinitesimal cohomology in Kapranov’s model for noncommutative algebraic geometry]{}, Compositio Math. 136 (2003), 171–208. G. Cortiñas, [*The structure of smooth algebras in Kapranov’s framework for noncommutative geometry*]{}, J. Algebra 281 (2004), 679–694. K. Costello, [*A geometric construction of Witten genus, II*]{}, arXiv:1112.0816. J. Cuntz, D. Quillen, [*Algebra extensions and nonsingularity*]{}, J. AMS 8 (1995), 251-289. P. Deligne, P. Griffiths, J. Morgan, D. Sullivan, [*Real homotopy theory of Kähler manifolds*]{}, Invent. Math. 29 (1975), 245–274. B. V. Fedosov, [*A simple geometrical construction of deformation quantization*]{}, J. Diff. Geom. 40 (1994), 213–238. K. Fukaya, [*Deformation theory, Homological Algebra, and Mirror symmetry*]{}, Geometry and physics of branes Como 2001, 121–209, IOP, Bristol, 2003. E. Getzler, [*A Darboux theorem for Hamiltonian operators in the formal calculus of variations*]{}, Duke Math. J. 111 (2002), 535–560. A. Grothendieck, [*Sur quelques points d’algèbre homologique*]{}, Tohoku Math. J. 9 (1957), 119–221. J. Huebschmann, T. Kadeishvili, [*Small models for chain algebras*]{}, Math. Z. 207 (1991), 245–280. M. Kapranov, [*Noncommutative geometry based on commutator expansions*]{}, J. Reine Angew. Math. 505 (1998), 73–118. M. Kapranov, [*Rozansky-Witten invariants via Atiyah classes*]{}, Compositio Math. 115 (1999), 71–113. M. Kashiwara, P. Shapira, [*Deformation quantization modules*]{}, arXiv:1003.3304. M. Kontsevich, Y. Soibelman, [*Homological mirror symmetry and torus fibration*]{}, in [*Symplectic geometry and mirror symmetry (Seoul, 2000)*]{}, 203–263, World Sci. Publishing, River Edge, NJ, 2001. I. Kriz, J. P. May, [*Operads, algebras, modules, and motives*]{}, Astérisque (1995), no. 233, iv+145pp. G. Laumon, [*Transformation de Fourier generalisée*]{}, preprint alg-geom/9603004. L. Le Bruyn, G. Van de Weyer, [*Formal structures and representation spaces*]{}, J. Algebra 247 (2002), 616–635. M. Markl, [*Transferring $A_\infty$ (strongly homotopy associative) structures*]{}, Rend. Circ. Mat. Palermo (2) Suppl 79 (2006), 139–151. S. Mukai, [*Duality between $D(X)$ and $D(\hat{X})$ with its application to Picard sheaves*]{}, Nagoya Math. J. 81 (1981), 153–175. J. Pecharich, [*Deformations of vector bundles on coisotropic subvarieties via the Atiyah class*]{}, arXiv:1010.3671. A. Polishchuk, [*$A_\infty$-structures, Brill-Noether loci and the Fourier-Mukai transform*]{}, Compositio Math. 140 (2004), 459–481. J.-P. Serre, [*Géométrie algébrique et géométrie analytique*]{}, Ann. Inst. Fourier, Grenoble 6 (1955–1956), 1–42. J.-P. 
Serre, [*Galois cohomology*]{}, Springer Monographs in Mathematics, Translated from the French by Patrick Ion, Berlin, New York.
[^1]: A.P. is partially supported by the NSF grant DMS-1001364
[^2]: This grading is different from the one considered in [@Kapranov], cf. Remark \[filtrations-rem\].
[^3]: More precisely, we use the analytic version of Theorem \[twisted-dg-thm\] which is proved similarly.
[^4]: Even though ${\operatorname{Ext}}^*(L,L)$ has trivial $A_\infty$-algebra structure, $A_\infty$-modules over it are not the same as ordinary modules over an associative algebra.
|
---
abstract: 'We show how to train a quantum network of pairwise interacting qubits such that its evolution implements a target quantum algorithm into a given network subset. Our strategy is inspired by supervised learning and is designed to help the physical construction of a quantum computer which operates with minimal external classical control.'
author:
- |
Leonardo Banchi$^1$, Nicola Pancotti$^2$ and Sougato Bose$^1$ [^1]\
1- Department of Physics and Astronomy, University College London,\
Gower Street, London WC1E 6BT, United Kingdom\
2- Max-Planck-Institut für Quantenoptik,\
Hans-Kopfermann-Stra[ß]{}e 1, 85748 Garching, Germany
title: 'Supervised quantum gate “teaching” for quantum hardware design'
---
Quantum networks for computation
================================
A quantum computer is a device which uses peculiar quantum effects, such as superposition and entanglement, to process information [@nielsen2010quantum]. Although current quantum computers operate on only a few quantum bits ([*qubits*]{}) [@ladd2010quantum; @coffey2014incremental], it is known that large-scale quantum devices can run certain algorithms exponentially faster than their classical or probabilistic counterparts [@nielsen2010quantum].
Recently there have been many proposals to speed up machine learning strategies using quantum devices [@wittek2014quantum]. Most of these proposals either use quantum algorithms to achieve faster learning [@rebentrost2014quantum] or exploit quantum fluctuations to escape from local minima in training the Boltzmann machine [@denil2011toward]. In this paper we consider a different perspective. Since the actual development of a general quantum computer is still in its infancy, rather than focusing on the advantages of quantum devices for machine learning we show how machine learning can help the construction of a quantum computer.
To keep the discussion realistic, we focus on a superconducting quantum computing architecture [@mariantoni2011implementing; @chen2014qubit] where each qubit is realized with a superconducting circuit cooled at low temperature. The pairwise coupling between two qubits is introduced by connecting them via a capacitor (or an inductor) whose strength can be tuned by design. Given the flexibility in wiring the different qubits, it is then possible to build a quantum network with tunable couplings.
In the next sections we introduce basic aspects of the physical simulation of quantum operations to set up the formalism and then we propose our strategy which is inspired by supervised learning.
Physical implementation of quantum gates
----------------------------------------
From the mathematical point of view each qubit is described by a two-dimensional Hilbert space $\mathbb{C}^2$, while the Hilbert space of $N$ qubits is given by the tensor product $\mathbb{C}^2\otimes\mathbb{C}^2\dots=\mathbb{C}^{2^N}\equiv
\mathcal{H}_N$. The possible quantum states, like the state $00101$ for a classical 5-bit register, correspond to vectors of unit norm in the Hilbert space. An arbitrary operation, namely a quantum [*gate*]{}, corresponds to a unitary matrix $U$ acting on $\mathcal{H}_{N}$. In more physical terms, $U$ is the solution of the Schrödinger equation $i \frac{\partial U}{\partial t} = HU$, where $i$ is the imaginary unit, $t$ represents time, and $H$ is the Hamiltonian, a Hermitian $2^N\times
2^N$ matrix which describes the physical interactions between the qubits. If the qubits are unmodulated, namely there is no external time-dependent control so $H$ is independent of $t$, then after a certain time $t$ the operation on the qubits is given by $U=e^{-itH}$, where $e^{(\cdot)}$ denotes the matrix exponential. In principle, for any given operation $U$ there are some corresponding interactions modeled by a Hamiltonian $H$ so that $U=e^{-itH}$. However, in physical implementations of quantum computers [@ladd2010quantum] the range of possible Hamiltonians is severely limited, thus drastically restricting the range of achievable quantum operations without external control. This problem is typically solved by switching on and off different interactions in time so the final operation is the product $U_1U_2\dots$ where $U_n=e^{-it_n H_n}$, with $H_n$ a sequence of interaction Hamiltonians, each one switched on for a time $t_n$. As in classical computation, there is a minimal set of gates $\{U_n\}$ which enable [*universal*]{} quantum computation simply by concatenating at different times gates from this set [@barenco1995elementary]. Indeed, most quantum algorithms are nothing but a known sequence of universal operations. The implementation of this sequence, however, requires an outstanding experimental ability to perfectly switch on and off different physical couplings at given times. Possible errors or imperfections in this sequential process accumulate in time and may affect the outcome, if not tackled with error correcting codes [@nielsen2010quantum].
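As a simple numerical illustration of the relation $U=e^{-itH}$ for an unmodulated pair of qubits (our own sketch in Python with NumPy/SciPy, not taken from the cited references; the exchange coupling and the iSWAP-type target below are example choices), one can check that a constant $XX+YY$ interaction implements a two-qubit entangling gate after a fixed waiting time:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Unmodulated two-qubit exchange coupling H = (J/2)(XX + YY)
J = 1.0
H = 0.5 * J * (np.kron(sx, sx) + np.kron(sy, sy))

# Waiting for a time t implements the gate U = exp(-i t H)
t = np.pi / (2 * J)
U = expm(-1j * t * H)

# iSWAP-type target (in the phase convention of exp(-i pi/4 (XX+YY)))
target = np.array([[1, 0, 0, 0],
                   [0, 0, -1j, 0],
                   [0, -1j, 0, 0],
                   [0, 0, 0, 1]], dtype=complex)

# Gate fidelity |Tr(V^dag U)|^2 / d^2 -> approximately 1
print(abs(np.trace(target.conj().T @ U))**2 / 16)
```

Here the only “control” is the waiting time $t$, which is precisely the spirit of the hardware-level implementation discussed next.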
Given this experimental difficulty, it is worth asking whether the unitary $U$ which results from a recurring sub-sequence of the algorithm can be implemented directly in [*hardware*]{}. Indeed, in the next section we propose a different strategy which exploits auxiliary qubits to implement quantum operations with physical interactions and no external control. This strategy could potentially allow an experimentalist to create a quantum device which, by simply “waiting” for the natural dynamics of the network, is able to implement transformations like the Quantum Fourier Transform which are ubiquitous in quantum algorithms, and may even provide an alternative paradigm for general quantum computation. Our method shares similar goals with a recent proposal by Childs [@childs2013universal], but it is completely different because it uses weighted networks which allow us to significantly lower the number of ancillary qubits required for the operation.
Engineered unmodulated networks for computation
-----------------------------------------------
Our aim is to implement a quantum operation (a unitary[^2] matrix $U$) on an $N$-qubit register exploiting the physical interactions available for that hardware architecture, and avoiding the use of external control fields. Since a generic $2^N\times 2^N$ gate $U$ is impossible to realize with $N$ qubits and pairwise interactions only, as described before, we consider a larger network of $N'>N$ qubits and we engineer the strength of the pairwise interactions between them to implement the operation, when possible.
This problem shares some similarities with supervised learning in artificial neural networks although, as we clarify in the following, it is also very different in some aspects. In supervised learning, given a training set $\{I_k, O_k\}_{k=1,\dots,M}$ the goal is to find a functional approximation $O_k=f(I_k)$ which is also able to predict the output corresponding to unknown inputs missing from the training set. Even when there is no prior knowledge of $f$, it is possible to approximate the input/output relations with a neural network composed of input, output and hidden layers [@bishop2006pattern]. The learning procedure then consists in finding the optimal weights between nodes of different layers such that the desired input/output relation is reconstructed.
![Example of a quantum network composed of register and ancillary qubits. Each vertex is a qubit and each edge represents pairwise interactions. The target operation $U$ is implemented only on the register qubits. []{data-label="fig:qnet"}](network.eps){width="70.00000%"}
On the other hand, in our problem the functional relation between inputs and outputs, namely the gate $U$, is already known in advance. However, the corresponding Hamiltonian may contain simultaneous interactions between $3$ or more qubits, which are unlikely to appear in physical implementations. To simulate $U$ using pairwise interactions only, we then consider a larger network as in Fig. \[fig:qnet\] with ancillary qubits playing the role of the hidden layers, and we train the weights, namely the physical couplings, such that the dynamics of the network reproduces $U$ in a given subset of qubits. To simplify the implementation we assume that the input and output registers are made of the same physical qubits, although this assumption may be removed. In the next section we show how to formally model the training procedure.
Supervised gate “teaching”
===========================
In supervised learning the training set is composed of data whose input-to-output map $f$ is not known. On the other hand, we know the gate $U$ and we want to find the physical couplings, i.e. the weights of the network in Fig. \[fig:qnet\], such that the quantum network evolution implements $U$ in the register. Because of this difference we named our strategy “teaching” rather than learning. In our case we can build an arbitrarily large training set by choosing random input states[^3] ${\left|{\psi_j}\right\rangle}\in\mathcal{H}_N$ and finding the corresponding output states $\mathcal{H}_N\ni{\left|{\psi_j'}\right\rangle}=U{\left|{\psi_j}\right\rangle}$, so the generated $M$-dimensional ($M$ being variable) training set is $$\mathcal T = \{ ({\left|{\psi_j}\right\rangle}, U{\left|{\psi_j}\right\rangle}) \quad : \quad j=1,\dots,M\}~.
\label{training}$$ In principle the optimal inputs may depend on the target gate $U$. However, for simplicity and generality, in \[training\] each ${\left|{\psi_j}\right\rangle}$ is sampled from the Haar measure [@banchi2015quantum].
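A minimal sketch (our own illustration, in Python) of how such a training set can be generated numerically: a Haar-random pure state is obtained by normalising a vector of independent complex Gaussian entries, and the pairs of Eq. \[training\] then follow by applying the target gate $U$.

```python
import numpy as np

def haar_state(dim, rng):
    """Haar-random pure state: normalise a vector of complex Gaussian entries."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def training_pairs(U, M, seed=0):
    """Generate M pairs (|psi_j>, U|psi_j>), i.e. the training set T."""
    rng = np.random.default_rng(seed)
    dim = U.shape[0]
    return [(psi, U @ psi) for psi in (haar_state(dim, rng) for _ in range(M))]
```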
The quality of the implementation of the target gate $U$ into the dynamics of the quantum network can be measured by defining a [*cost*]{} function which, for any input state ${\left|{\psi_j}\right\rangle}\in\mathcal T$, measures the distance between the output state of the evolution and the expected output $U{\left|{\psi_j}\right\rangle}$. The similarity between two quantum states ${\left|{\psi}\right\rangle}$ and ${\left|{\phi}\right\rangle}$ is measured by the fidelity[^4] $|{\langle{\psi}|{\phi}\rangle}|^2$. Because of the Cauchy-Schwarz inequality, it is $0\le|{\langle{\psi}|{\phi}\rangle}|^2\le1$ and the upper value $|{\langle{\psi}|{\phi}\rangle}|=1$ is obtained only when ${\left|{\psi}\right\rangle}={\left|{\phi}\right\rangle}$. However, because of entanglement between the register and the ancillary qubits the state of the register is not exactly known, but it can be in different states ${\left|{\phi_j}\right\rangle}$ with probability $p_j$. Such a [*mixed*]{} state is mathematically described by the matrix [@nielsen2010quantum] $\rho=\sum_j p_j {\left|{\phi_j}\right\rangle}{\langle{\phi_j}|}$ and the fidelity between $\rho$ and $\psi$ becomes ${\langle{\psi}|}\rho{\left|{\psi}\right\rangle}\equiv\sum_j
p_j|{\langle{\psi}|{\phi_j}\rangle}|^2$.
Given an initial state ${\left|{\psi}\right\rangle}$ let us call $\mathcal E_w[{\psi}]$ the state of the register after the evolution generated by the interaction Hamiltonian $H(w)$ which depends on the weights $w$. This state can be constructed by (i) initializing the $N$ register qubits in the state ${\left|{\psi}\right\rangle}$; (ii) setting the remaining $N'-N$ ancillary qubits in a (fixed) state ${\left|{\alpha}\right\rangle}$; (iii) switching on the evolution described by $H(w)$ for a certain time $t$ – without loss of generality we set $t=1$, since $e^{-itH(w)}= e^{-iH(tw)}$; (iv) observing the state of the register[^5]. The goal is then to find the optimal weights $w$ (if they exist) such that the state $\mathcal E_w[\psi_j]$ is equal to the target state $U{\left|{\psi_j}\right\rangle}$ for each ${\left|{\psi_j}\right\rangle}\in\mathcal T$. Mathematically this corresponds to the maximization of the average fidelity $$F(w) = \frac{1}{M}\sum_{{\left|{\psi_j}\right\rangle}\in \mathcal T} {\langle{\psi_j}|}U^\dagger\mathcal E_w[\psi_j]
U{\left|{\psi_j}\right\rangle}~.
\label{fidelity}$$ The function $F(w)$ measures how well, on average, the dynamics of the network reproduces the target quantum gate $U$ in the register. It is non-convex in general and may have many local maxima with $F<1$. However, an optimal configuration $\tilde w$ is interesting for practical purposes only if the error $\epsilon=1-F(\tilde w)$ is smaller than the desired threshold [@Martinis2015] (say $10^{-3}$ – $10^{-4}$), so in the following we say that a solution exists if an optimal $\tilde w$ is found with $\epsilon<10^{-3}$. There are no known theoretical tools to establish in advance whether this high-fidelity solution $\tilde w$ can exist, so one has to rely on numerical methods. In the next section we discuss a simple algorithm for finding $\tilde w$.
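The quantities entering Eq. \[fidelity\] are straightforward to evaluate numerically. The sketch below (our own illustrative code; the isotropic $XX+YY+ZZ$ pairwise couplings, the ancilla state ${\left|{\alpha}\right\rangle}={\left|{0\ldots0}\right\rangle}$, and the convention that the register occupies the first $N$ qubits are assumptions made only for concreteness) computes $\mathcal E_w[\psi]$ and the average fidelity, using the fact that for the pure network state ${\left|{\eta}\right\rangle}$ one has ${\langle{\psi_t}|}\rho{\left|{\psi_t}\right\rangle}=\|M^\dagger{\left|{\psi_t}\right\rangle}\|^2$, where $M$ is ${\left|{\eta}\right\rangle}$ reshaped into a register-by-ancilla matrix.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site(op, i, j, n):
    """Embed op (x) op on qubits i and j (i != j) of an n-qubit network."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[i], mats[j] = op, op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(w, n):
    """H(w): isotropic pairwise couplings, w is a dict {(i, j): w_ij}."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for (i, j), wij in w.items():
        H += wij * (two_site(sx, i, j, n) + two_site(sy, i, j, n) + two_site(sz, i, j, n))
    return H

def evolve_register(w, psi, n_reg, n_tot):
    """E_w[psi]: evolve |psi>|0...0> with exp(-i H(w)) (t = 1) and reshape the
    network state into a (register x ancilla) matrix M, so rho = M M^dagger."""
    anc = np.zeros(2**(n_tot - n_reg), dtype=complex)
    anc[0] = 1.0
    eta = expm(-1j * hamiltonian(w, n_tot)) @ np.kron(psi, anc)
    return eta.reshape(2**n_reg, -1)

def avg_fidelity(w, pairs, n_reg, n_tot):
    """F(w) of Eq. [fidelity], averaged over training pairs (psi, U psi)."""
    total = 0.0
    for psi, target in pairs:
        M = evolve_register(w, psi, n_reg, n_tot)
        total += np.linalg.norm(M.conj().T @ target)**2   # <target| rho |target>
    return total / len(pairs)
```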
A simple algorithm for network optimization
-------------------------------------------
The explicit form of the fidelity function ($1-F$ is a cost function) as a sum over the training set allows us to use the stochastic gradient descent training algorithm, which is widely used in training artificial neural networks within the backpropagation algorithm. However, since our training set is variable and can be sampled from the Haar distribution of pure states, we propose an adapted version of the stochastic gradient descent with online training:
Choose the initial weights $w$ (e.g. at random); choose an initial learning rate $\kappa$; generate a random ${\left|{\psi}\right\rangle}$ from the Haar measure; update the weights as $$\begin{aligned}
w\to w + \kappa\nabla_{w}{\langle{\psi}|} U^\dagger
\mathcal E_w \left[\psi\right]U{\left|{\psi}\right\rangle};
\label{e.grad}
\end{aligned}$$ decrease $\kappa$ (see below);
In the above algorithm, the weights are updated $L$ times before changing the state. The parameter $L$ defines the number of deterministic steps in the learning procedure and it can be set to the minimum value $1$, so that after each iteration the state is changed, or to higher values. On the other hand, the learning rate $\kappa$ has to decrease [@spall2005introduction] in an optimal way to ensure convergence, a common choice being $\kappa\propto s^{-1/2}$ where $s$ is the step counter. On physical grounds, as we discussed extensively in Ref. [@banchi2015quantum], the stochastic fluctuations given by choosing random quantum states at different steps enable the training procedure to escape from local maxima when the weights are far from the optimal point $\tilde w$.
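A minimal sketch of the resulting training loop (again our own illustration: the analytic gradient of Eq. \[e.grad\] is replaced here by a finite-difference estimate, and the values of $L$, the learning-rate schedule $\kappa\propto s^{-1/2}$ and the number of steps are example choices), reusing `haar_state` and `avg_fidelity` from the sketches above:

```python
import numpy as np

def grad_fidelity(w, psi, target, n_reg, n_tot, eps=1e-6):
    """Central finite-difference estimate of the gradient of the single-state fidelity."""
    grad = {}
    for key in w:
        wp, wm = dict(w), dict(w)
        wp[key] += eps
        wm[key] -= eps
        fp = avg_fidelity(wp, [(psi, target)], n_reg, n_tot)
        fm = avg_fidelity(wm, [(psi, target)], n_reg, n_tot)
        grad[key] = (fp - fm) / (2 * eps)
    return grad

def train(U, w, n_reg, n_tot, steps=1000, L=1, kappa0=0.5, seed=0):
    """Stochastic gradient ascent on F(w), one Haar-random input per outer step."""
    rng = np.random.default_rng(seed)
    for s in range(1, steps + 1):
        kappa = kappa0 / np.sqrt(s)        # decreasing learning rate
        psi = haar_state(2**n_reg, rng)    # fresh Haar-random input
        target = U @ psi                   # desired output U|psi>
        for _ in range(L):                 # L deterministic updates per state
            grad = grad_fidelity(w, psi, target, n_reg, n_tot)
            for key in w:
                w[key] += kappa * grad[key]
    return w
```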
Using the above algorithm, in Ref. [@banchi2015quantum] we considered pairwise interactions described by physically reasonable Hamiltonians and we found different quantum network configurations which implement different quantum operations, such as the quantum analogues of the Toffoli and Fredkin gates.
Concluding remarks
==================
We are proposing an alternative strategy for the physical implementation of quantum operations which avoids time modulation and sophisticated control pulses. This strategy consists in enlarging the number of qubits and engineering the unmodulated pairwise interactions between them so that the desired operation is implemented in the register subset (see Fig. \[fig:qnet\]) by the natural physical evolution. Inspired by the analogy with the training of artificial neural networks, we then propose a simple algorithm that is suitable for finding few-qubit networks which implement some important quantum gates. In the long term, by finding more efficient training algorithms suitable for larger spaces, our strategy could potentially provide an alternative paradigm for computation where some quantum algorithms and/or many-qubit gates are obtained by simply “waiting” for the natural dynamics of a suitably designed network.
[10]{}
M. A. Nielsen and I. L. Chuang. . Cambridge University Press, 2000.
T. D. Ladd et al. Quantum computers. , 464(7285):45–53, 2010.
V. C. Coffey. The incremental quest for quantum computing. , 48(6):36–41, 2014.
P. Wittek. . Elsevier, Oxford, 2014.
P. Rebentrost, M. Mohseni, and S. Lloyd. Quantum support vector machine for big data classification. , 113(13):130503, 2014.
M. Denil and N. De Freitas. Toward the implementation of a quantum [RBM]{}. In [*NIPS Deep Learning and Unsupervised Feature Learning Workshop*]{}, 2011.
M. Mariantoni et al. Implementing the quantum von [N]{}eumann architecture with superconducting circuits. , 334(6052):61–65, 2011.
Y. Chen et al. Qubit architecture with high coherence and fast tunable coupling. , 113(22):220502, 2014.
A. Barenco et al. Elementary gates for quantum computation. , 52(5):3457, 1995.
A. M. Childs, D. Gosset, and Z. Webb. Universal computation by multiparticle quantum walk. , 339(6121):791–794, 2013.
C. M. Bishop. . Springer, 2006.
L. Banchi, N. Pancotti, S. Bose. Quantum gate learning in qubit networks: Toffoli gate without time-dependent control. , 2:16019, Jul 2016.
J. M. Martinis. Qubit metrology for building a fault-tolerant quantum computer. , 1:15005, 2015.
J. C. Spall. , volume 65. John Wiley & Sons, 2005.
[^1]: L.B. and S.B. acknowledge the financial support by the ERC under Starting Grant 308253 PACOMANEDIA.
[^2]: We consider a quantum gate, but our formalism can be easily extended to more general quantum channels [@nielsen2010quantum].
[^3]: In the “bra and ket” notation [@nielsen2010quantum] ${\left|{\psi}\right\rangle}$ refers to a normalized vector.
[^4]: The “bra” ${\langle{\psi}|}$ corresponds to the Hermitian conjugate ${\left|{\psi}\right\rangle}^\dagger$ so ${\langle{\psi}|{\phi}\rangle}$ is the inner product between the vectors ${\left|{\psi}\right\rangle}$ and ${\left|{\phi}\right\rangle}$.
[^5]: Formally, steps (i) and (ii) correspond to the preparation of the state ${\left|{\eta_0}\right\rangle}={\left|{\psi}\right\rangle}\otimes{\left|{\alpha}\right\rangle}$. Step (iii) gives the state ${\left|{\eta_t}\right\rangle}=e^{-itH(w)}{\left|{\eta_0}\right\rangle}$. While step (iv) produces a mixed state since the dynamics of a reduced system is obtained with the [*partial trace*]{} [@nielsen2010quantum], so $\mathcal E_w[\psi] = {\rm Tr}_{\rm ancilla} {\left|{\eta_t}\right\rangle}{\langle{\eta_t}|}$.
|
---
abstract: 'We performed elastic neutron scattering and magnetization measurements on Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ and FeTe$_{0.7}$Se$_{0.3}$. Short-range incommensurate magnetic order is observed in both samples. In the former sample with higher Fe content, a broad magnetic peak appears around (0.46,0,0.5) at low temperature, while in FeTe$_{0.7}$Se$_{0.3}$ the broad magnetic peak is found to be closer to the antiferromagnetic (AFM) wave-vector (0.5,0,0.5). The incommensurate peaks are only observed on one side of the AFM wave-vector for both samples, which can be modeled in terms of an imbalance of ferromagnetic/antiferromagnetic correlations between nearest-neighbor spins. We also find that with higher Se (and lower Fe) concentration, the magnetic order becomes weaker while the superconducting temperature and volume increase.'
author:
- Jinsheng Wen
- Guangyong Xu
- Zhijun Xu
- Zhi Wei Lin
- Qiang Li
- 'W. Ratcliff'
- Genda Gu
- 'J. M. Tranquada'
title: 'Short-range incommensurate magnetic order near the superconducting phase boundary in Fe$_{1+\delta}$Te$_{1-x}$Se$_x$'
---
Since the recent discovery of Fe-based superconductors with high critical temperatures ($T_c$), [@hosono_2; @hosono4; @chen-2008-453; @ren-2008-25] extensive research has been carried out to study the magnetic structures in these materials, [@cruz; @qiu:257002; @huang:257003] as magnetic fluctuations are expected to play an important role in producing the unconventional superconductivity. [@ma:033111; @dong-2008-83; @mazin:057003] It is now well established that in LaFeAsO-(1:1:1:1) (Refs. ) and BaFe$_2$As$_2$-(1:2:2) (Refs. ) type compounds, the long-range magnetic order is suppressed with doping, while the superconductivity appears above a certain doping value. While there are some rare cases where superconductivity appears sharply after magnetic order disappears, [@luetkens-2008] in most systems short-range magnetic order coexists with superconductivity over some range of doping. [@zhao; @drew-2009; @rotter-2008-47; @chen-2009-85; @fang-2009b; @chu:014506]
In the more recently discovered system Fe$_{1+\delta}$Te$_{1-x}$Se$_x$ (1:1),[@hsu-2008; @yeh-2008; @fang-2008-78] it is found that: i) long-range magnetic order is present in non-superconducting Fe$_{1+\delta}$Te, [@fruchart-1975; @li-2009-79; @bao-2008] but only short-range magnetic order survives in superconducting samples with 33% (Ref. ) and 40% Se (Ref. ); ii) the observed magnetic order has a different propagation wave-vector from that of the other Fe-based systems. To describe the ordering, we consider a tetragonal unit cell containing two Fe atoms per plane, and specify wave vectors in reciprocal lattice units (r.l.u.) of $(a^*, b^*,
c^*)=(2\pi/a,2\pi/b,2\pi/c)$; the unit cell is rotated $45^\circ$ in the $a$–$b$ plane from that used for 1:1:1:1 and 1:2:2 systems. [@li-2009-79] In the latter systems, the spin-density-wave (SDW) order is commensurate, with propagation wave-vector (0.5,0.5,0.5), generally attributed to nesting of the Fermi surface. [@li-2009-79; @zhao:140504; @maier:020514; @yin:047001] In the 1:1 system, the SDW order propagates along (0.5,0,0.5), and can be either commensurate, or incommensurate, depending on the Fe content. [@li-2009-79; @bao-2008] Calculations using the local spin density approximation for hypothetical stoichiometric FeTe yield a commensurate magnetic ground state consistent with that seen experimentally[@ma-2009; @johannes-2009]; however, the (0.5,0.5,0.5) SDW order is calculated to have the lowest energy for FeSe.[@ma-2009]
To address the evolution of the magnetic correlations with Se concentration, we have performed elastic neutron scattering and magnetization measurements on high quality single crystals with different Fe and Se contents. We show that there is short-range incommensurate magnetic order in both Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ and FeTe$_{0.7}$Se$_{0.3}$ at low temperature. Broad magnetic peaks appear at positions slightly displaced from the antiferromagnetic (AFM) wave-vector $(0.5,0,0.5)$ in both samples when cooled below $\alt40$ K. The peak intensity increases with further cooling and persists into the superconducting phase. The magnetic peak intensity drops with more Se and less Fe content, and with strengthening superconductivity.
Single crystals with nice (001) cleavage planes were grown by a unidirectional solidification method with nominal compositions of Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ and FeTe$_{0.7}$Se$_{0.3}$ and respective masses of 4.7 and 7.2 g. Neutron scattering experiments were carried out on the triple-axis spectrometer BT-9 located at the NIST Center for Neutron Research. The scattering plane $(H0L)$ is defined by two vectors \[100\] and \[001\] in tetragonal notation. The lattice constants for both samples are $a=b=3.80(8)$ Å, and $c=6.14(7)$ Å.
![(Color online) (a) ZFC magnetization, and (b) background subtracted magnetic peak intensity measured along \[100\] (normalized to the sample mass) as a function of temperature for Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$, and FeTe$_{0.7}$Se$_{0.3}$. Error bars indicate one standard deviation assuming Poisson statistics. Lines through data are guides for the eyes.[]{data-label="fig:1"}](fig1.ps){width="0.8\linewidth"}
The bulk magnetization was characterized using a superconducting quantum interference device (SQUID) magnetometer. In the magnetization measurements, each sample was oriented so that the (001) plane was parallel to the magnetic field. The zero-field-cooling (ZFC) magnetization [*vs.*]{} temperature for each sample is shown in Fig. \[fig:1\](a), where one can see that the 25% Se sample only shows a trace of superconductivity, while the 30% Se sample clearly has a $T_c\sim13$ K. We estimate that the superconducting volume fraction for the latter sample is $\sim1$%. The inset of Fig. 1(a) shows that the paramagnetic magnetization grows on cooling, and is greater in the sample with less Se (and more Fe). The paramagnetic response does not follow simple Curie-Weiss behavior, so it is not possible to make a meaningful estimate of effective magnetic moments. For the 25% Se sample, there is a shoulder at $\sim60$ K which could be due to 2–3% of Fe$_{1+\delta}$Te as a second phase, which has a magnetic phase transition temperature of $\sim$65 K. [@li-2009-79]
In our elastic neutron scattering measurements, each sample was aligned on the (200) and (001) nuclear Bragg peaks with an accuracy and reproducibility in longitudinal wave vector of better than 0.005 r.l.u.. For the magnetic peaks, linear scans were performed along \[100\] and \[001\] directions at various temperatures. The temperature dependence of the peak intensity is summarized in Fig. \[fig:1\](b), and representative scans are shown in Fig. \[fig:2\]. No net peak intensity is observed at 60 K, but a weak magnetic peak appears at slightly lower temperature, growing in intensity with further cooling. For Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$, the magnetic structure is clearly incommensurate, and the peak position is determined to be ($0.5-\epsilon,0,0.5$), with $\epsilon=0.04$. From Fig. \[fig:2\](a), we did not observe a peak at ($0.5+\epsilon,0,0.5$). For FeTe$_{0.7}$Se$_{0.3}$, the magnetic peak center is at (0.48,0,0.5), although this differs from the commensurate position by less than the peak width. Our observations are qualitatively consistent with the previous result[@bao-2008] for Fe$_{1.08}$Te$_{0.67}$Se$_{0.33}$, where the magnetic peak is at (0.438,0,0.5); it appears that both the Fe and Se concentrations impact the ordering wave-vector. We have also searched for SDW order around (0.5,0.5,0.5) in the $(HHL)$ zone, but no evidence of magnetic peaks was found.
At 5 K, the peak width for the Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ \[100\] scan is 0.10 r.l.u., which corresponds to a correlation length of 6.1(1) Å. The width along \[001\] is 0.20 r.l.u., giving a correlation length of 4.9(1) Å. As can be seen from Fig. \[fig:2\], the peaks for FeTe$_{0.7}$Se$_{0.3}$ along \[100\] and \[001\] are broader than their counterparts for Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$, and the correlation lengths are determined to be 3.8(1) Å along \[100\] and 3.3(1) Å along \[001\]. Also, from Fig. \[fig:1\](b), one can see that the magnetic peak intensity for Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ is always higher than that for FeTe$_{0.7}$Se$_{0.3}$. Although the SDW order is short-ranged in both compounds, and starts at around the same temperature, $\sim40$ K, the order is apparently stronger in the 25% Se sample.
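For reference, the quoted correlation lengths follow from the fitted peak widths if the widths $\Delta$ (in r.l.u.) are read as Lorentzian half-widths and converted to inverse lengths (this is our reconstruction of the conversion, which is not spelled out in the text): $$\xi = \frac{1}{\Delta\, d^*} = \frac{d}{2\pi\Delta}\,,$$ with $d$ the lattice constant along the scan direction; e.g. $\Delta=0.10$ r.l.u. along \[100\] with $a\simeq3.81$ Å gives $\xi\simeq6.1$ Å, and $\Delta=0.20$ r.l.u. along \[001\] with $c\simeq6.15$ Å gives $\xi\simeq4.9$ Å.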
![(Color online) Short-range magnetic order in Fe$_{1+\delta}$Te$_{1-x}$Se$_x$. The left and right columns show the magnetic peak profiles for Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ and FeTe$_{0.7}$Se$_{0.3}$, respectively. Top and bottom rows are scans along \[100\] and \[001\] respectively. (a), (b), and (c) are data taken at various temperatures. For the 30% Se sample, there is a temperature-independent spurious peak in the \[001\] scans, so in (d) we only plot 5 K data with the 60-K scan subtracted. All data are taken with 1 minute counting time and then normalized to the sample mass. Error bars represent the square root of the total counts. The lines are fits to the data using Lorentzian functions.[]{data-label="fig:2"}](fig2.ps){width="0.9\linewidth"}
![(a) Inset shows the commensurate magnetic unit cell within a single layer of Fe$_{1+\delta}$Te, with spin arrangements in $a$-$b$ plane; solid line shows the calculated scattered intensity assuming uniform exponential decay of spin correlations. (b) Dashed line shows the magnetic structure factor $|F|^2$ and solid line shows calculated intensity for exponential decay of correlations between ferromagnetic spin pairs (inset). (c) Same as (b) but for exponential decay of correlations between antiferromagnetic spin pairs.[]{data-label="fig:3"}](fig3.ps){width="0.8\linewidth"}
The magnetic structure of the parent compound Fe$_{1+\delta}$Te can be described by the schematic diagram in the inset of Fig. \[fig:3\](a), which is adopted from Refs. . Here the magnetic structure consists of two spin sublattices. The spins in both sublattices are found to be aligned along $b$-axis. Within each sublattice, the spins have an antiferromagnetic alignment along $a$ and $c$-axes, and ferromagnetic along the $b$-axis. The spins have a small out-of-plane component, but here, for simplicity, we are only considering the components in the $a$–$b$ plane. With low excess Fe, [@li-2009-79] this configuration gives rise to magnetic Bragg peaks at the commensurate AFM wave-vector (0.5,0,0.5). The extra Fe is considered to reside in the interstitial sites of the Te/Se atoms. [@bao-2008] With more excess Fe, the ordering wave-vector becomes incommensurate, which can be explained by a modulation of the ordered moment size and orientation, propagating along the $a$-axis. [@bao-2008] The connection between excess Fe and the transition from commensurate to incommensurate order has been modeled theoretically. [@fang-2009]
With Se doping, the magnetic order is depressed and becomes short ranged. It is intriguing that magnetic order can survive without a lowering of the lattice symmetry from tetragonal, although perhaps there are local symmetry reductions on the scale of the magnetic correlation length. The incommensurability is also interesting. A uniform sinusoidal modulation of the spin directions or magnitudes will give incommensurate peaks at $(0.5\pm\epsilon,0,0.5)$, whereas we see a peak only on the $-\epsilon$ side. One can model this with phase shifted modulations on the two sublattices, but the modulation length required to describe the incommensurability is much greater than the correlation length.
We have found that a simple description of the incommensurability can be obtained when the decay of correlations between ferromagnetic nearest-neighbor spins is different from that of antiferromagnetic spin neighbors. We will consider correlations only along the modulation direction within an $a$–$b$ plane, and assume that they are independent of correlations in the orthogonal directions. Let us break the spin system into perfectly correlated nearest-neighbor pairs, with exponential decay of the spin correlations from one pair to the next along the $a$-axis. The neutron scattering intensity can then be expressed as [@guinier-1994] $$\label{eq1}
I \propto |F|^2\frac{1-p^2}{1+p^2-2p\cos(2\pi h)},$$ where $F$ is the structure factor for the selected pair of spins, $h$ is the wave-vector component along the $a$-axis, and $$\label{eq2}
p = -e^{-a/\xi};$$ $p$ is the correlation function between neighboring pairs, where the negative sign indicates that the inter-pair correlation is antiferromagnetic; and $\xi$ is the correlation length. (In all cases discussed below, we set $\xi=a$.)
Let us first consider the case of ferromagnetic spin pairs with exponentially decaying correlations between pairs, as illustrated in Fig. \[fig:3\](b). The structure factor for this case corresponds to $$\label{eq3}
|F|^2=4\cos^2({\textstyle\frac12}\pi h),$$ as indicated by the dashed line in Fig. \[fig:3\](b). Plugging this into Eq. (\[eq1\]) gives the solid line shown in Fig. \[fig:3\](b). Note that the calculated peaks are incommensurate, with the peak near $h=0.5$ shifted to lower $h$. Alternatively, we can start with an antiferromagnetic spin pair, in which case $$\label{eq4}
|F|^2=4\sin^2({\textstyle\frac12}\pi h).$$ This yields the result shown in Fig. \[fig:3\](c), with the peaks shifted in the opposite direction. If the decay of correlations is identical for ferromagnetic and antiferromagnetic nearest neighbors, then we can average over these two cases, obtaining $|F|^2=2$; the resulting commensurate peaks are shown in Fig. \[fig:3\](a).
Our experimental results look similar to Fig. \[fig:3\](b). This suggests that the ferromagnetic correlations are stronger than the antiferromagnetic ones. For the model illustrated in Fig. \[fig:3\](b), the incommensurability grows as the correlation length gets shorter. The trend in our two samples does not follow this relationship; however, one could describe a more general relationship between the ferromagnetic and antiferromagnetic correlations by taking a weighted average of Eqs. (\[eq3\]) and (\[eq4\]).
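A short numerical sketch (our own illustration in Python; the mixing parameter $c$ below, interpolating between Eqs. (\[eq3\]) and (\[eq4\]), is a hypothetical knob introduced only to display the trend, not a fitted quantity) makes the one-sided shift of the peak explicit:

```python
import numpy as np

def intensity(h, xi_over_a=1.0, c=1.0):
    """Eq. (1) lineshape with |F|^2 a weighted mix of Eq. (3) (FM pairs, weight c)
    and Eq. (4) (AFM pairs, weight 1-c); c = 0.5 reproduces the commensurate case."""
    p = -np.exp(-1.0 / xi_over_a)   # Eq. (2), with xi measured in units of a
    F2 = 4 * (c * np.cos(0.5 * np.pi * h)**2 + (1 - c) * np.sin(0.5 * np.pi * h)**2)
    return F2 * (1 - p**2) / (1 + p**2 - 2 * p * np.cos(2 * np.pi * h))

h = np.linspace(0.0, 1.0, 2001)
for c in (1.0, 0.5, 0.0):
    print(c, h[np.argmax(intensity(h, xi_over_a=1.0, c=c))])
# peak below h = 0.5 for c = 1, at h = 0.5 for c = 0.5, above h = 0.5 for c = 0
```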
In summary, we have observed short-range magnetic order in Fe$_{1.07}$Te$_{0.75}$Se$_{0.25}$ and FeTe$_{0.7}$Se$_{0.3}$. In both samples, the magnetic order is incommensurate and only observed on one side of the commensurate wave-vector (0.5,0,0.5), which is likely a result of the imbalance of ferromagnetic/antiferromagnetic correlations between neighboring spins. The parent compound Fe$_{1+\delta}$Te is not superconducting, [@bao-2008; @li-2009-79] and the optimally doped sample with 50% Se has no static magnetic order [@qiu-2009; @wen-2009]. Our samples have Se content lying in the middle, where we see that with larger Se doping, the SDW order becomes weaker, while the superconductivity is enhanced. This could imply the coexistence and competition between SDW order and superconductivity in this system, similar to other Fe-based [@dong-2008-83; @zhao; @drew-2009; @rotter-2008-47; @chen-2009-85] and cuprate superconductors [@prl67202; @moodenbaugh; @lakefield]. Interestingly, in the Fe$_{1+\delta}$Te$_{1-x}$Se$_x$ system, the SDW order and superconductivity can be tuned not only by doping Se, but also by adjusting the Fe content. [@zhang:012506; @fang-2008-78; @mcqueen:014522] It has been reported that the excess Fe acts as a magnetic electron donor, [@zhang:012506] suppresses the superconductivity, and induces a weakly localized electronic state. [@liu] Our results are completely consistent with these results—with less Fe and more Se, the SDW order is weaker; with more excess Fe and less Se, superconductivity is weaker, but to really distinguish the role of Fe and Se, samples only varying one element are certainly required for future work. We also note that recent studies of superconducting FeTe$_{0.6}$Se$_{0.4}$ (Ref. ) and FeTe$_{0.5}$Se$_{0.5}$ (Ref. ) show evidence of a spin gap and resonance peak at the wave vector $(0.5,0.5,L)$. It should be interesting to study how the magnetic correlations evolve with Se concentration between 30% and 40%.
The work at Brookhaven National Laboratory was supported by the Office of Science, U.S. Department of Energy, under Contract No.DE-AC02-98CH10886.
[39]{}
, , , , ****, ().
, , , , , , ****, ().
, , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , ****, ().
, ****, ().
, , , , , , , , , , , ****, ().
, , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , .
, , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , ****, ().
, , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , ****, ().
, ****, ().
, , , , , , ****, ().
, , , , , ****, ().
, ****, ().
, , , ****, ().
, ** (, , ), Chap. .
, , , , , , , , , , , ****, ().
, , , , , , , .
, , , ****, ().
, , , , , ****, ().
, , , , , , , , , , , ****, ().
, , , ****, ().
, , , , , , , , , , , ****, ().
, , , , , , , , , , , .
, , , , , , , , , , , .
|
[**Generalised T-duality and Integrable Deformations** ]{}
[**Daniel C. Thompson**]{}$^1$\
[*${}^1$ Theoretische Natuurkunde, Vrije Universiteit Brussel,\
& The International Solvay Institutes,\
Pleinlaan 2, B-1050 Brussels, Belgium.*]{}\
[[email protected]]{}
**Abstract**
We review some recent developments in the construction of integrable $\eta$- and $\lambda$-deformations of the $AdS_5 \times S^5$ superstring. We highlight their link with Poisson-Lie T-duality. Proceedings of a talk at “The String Theory Universe, $21^{st}$ European String Workshop and $3^{rd}$ COST MP1210 Meeting”.
Introduction
============
Integrability has been a game changer in the context of AdS/CFT over the past decade and gives previously unfeasible calculational power. The formidable technical machinery of integrability can sometimes obscure the true message – integrability is really a statement that a physical system is simple. In holography the classic example is that the dimensions of operators of ${\cal N}=4$ Super Yang-Mills (SYM) can be mapped to energy levels of an integrable spin chain [@Minahan:2002ve]. That integrability emerges in this way in a 4d gauge theory is remarkable.
One might wonder whether this integrability was just a consequence of the maximally (super)symmetric setting of ${\cal N}=4$ SYM. A very natural question we are led to ask is whether we can relax some of the assumptions of (super)symmetry whilst preserving integrability. One reason for such hope is that even QCD itself exhibits some integrability in certain limits [@Faddeev:1994zg; @Lipatov:1994xy]. ${\cal N}=4$ SYM admits a class of marginal deformations known as (real) $\beta$-deformations, which modify the superpotential and preserve only the minimal amount of (conformal) supersymmetry, yet preserve integrability. From the holographic perspective the geometry describing these $\beta$-deformations can be obtained by the application of $T$-dualities in TsT transformations [@Lunin:2005jy]. Remarkably the string worldsheet in the $\beta$-deformed spacetime remains integrable [@Frolov:2005dj].
The past two years have seen the development of two new classes of integrable deformations to the $AdS_5\times S^5$ superstring called $\eta$- [@Delduc:2013fga; @Delduc:2013qra] and $\lambda$-deformations [@Sfetsos:2013wia; @Hollowood:2014qma]. At first sight these deformations are even more severe than the $\beta$-deformation; they preserve [*no*]{} supersymmetries, $\eta$-deformations preserve only some $U(1)$ isometries and $\lambda$-deformed spacetimes have no isometries at all. However, it has been conjectured that the deformed theories actually realise the missing symmetries in the sense of a quantum group. The S-matrix of the $\eta$ and $\lambda$ deformed $AdS_5\times S^5$ superstring is expected to be the unique quantum group deformed S-matrix [@Hoare:2011wr] with the quantum group parameter real in the case of $\eta$-deformations [@Delduc:2013qra] and a root of unity in the case of $\lambda$ [@Hollowood:2014qma].
These deformations provide a new angle on the rich interplay of holography, integrability and T-duality. We will see that though we derive the $\eta$ and $\lambda$ deformed theories two very different ways, they are related by a combination of analytic continuation and generalised T-dualities. The type of T-duality involved is known as a Poisson-Lie T-duality [@Klimcik:1995ux] and provides a notion of T-duality in a target space without isometries.
In this short talk (and proceedings) working with the entire $AdS_5 \times S^5$ superstring in a super-coset formulation would be rather unedifying so for pedagogical purpose we illustrate the developments in a simpler scenario. For the $\eta$-deformation we will consider a deformation of the Principal Chiral Model (PCM) of bosonic strings on a group space first introduced by Klimcik [@Klimcik:2002zj] by the name of Yang-Baxter deformations. For the $\lambda$-deformation we will deform a bosonic Wess-Zumino-Witten (WZW) model as in [@Sfetsos:2013wia].
$\eta$-Deformed Principal Chiral Model
======================================
Let us begin with a toy model; the PCM on the three-sphere. This is a theory of maps $g:\Sigma \rightarrow SU(2)$ with an action, $$\label{eq:PCM}
S_{PCM}[g]= \frac{\kappa^2}{4\pi} \int_\Sigma {\textrm{d}}^2 \sigma {\textrm{Tr}} \left( g^{-1} \partial_+ g g^{-1} \partial_- g \right)\, .$$ Bosonic PCMs such as this are not conformal but can appear as a subsector of a full string theory. Moreover, they are interesting in their own right since, as asymptotically free two-dimensional QFTs, they serve as good toy models to understand features of QCD.
The case at hand, Eq. \[eq:PCM\], possesses a global $SU(2)_L\times SU(2)_R$ symmetry and its dynamics are classically integrable; the equations of motion and Bianchi identities for the currents, $J^a = {\textrm{Tr}} T^a g^{-1}{\textrm{d}} g$, can be written as a flatness condition for a Lax connection, $$\label{eq:Lax}
L(z) = \frac{1}{1-z^2} J + \frac{z}{1-z^2} \star J \, , \quad {\textrm{d}}L - L\wedge L = 0 \, .$$ From this Lax connection an $\infty$ tower of non-local conserved charges can be generated from $T(z) = P \exp \int d\sigma L_\sigma$.
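To see how Eq. \[eq:Lax\] encodes the dynamics it is easiest to work in light-cone components (this is the standard check; our sign conventions are chosen for convenience and may differ from those implicit in Eq. \[eq:Lax\]). Writing $L_\pm = J_\pm/(1\mp z)$, the flatness condition becomes $$\partial_+ L_- - \partial_- L_+ +[L_+,L_-] = \frac{1}{1-z^2}\Big(\partial_+ J_- - \partial_- J_+ +[J_+,J_-]\Big) - \frac{z}{1-z^2}\Big(\partial_+ J_- + \partial_- J_+\Big) = 0\,.$$ The $z$-independent bracket vanishes identically by the Maurer-Cartan identity for $J=g^{-1}{\textrm{d}}g$, while the term linear in $z$ is the equation of motion of Eq. \[eq:PCM\], $\partial_+ J_- + \partial_- J_+=0$; demanding flatness for all $z$ is therefore equivalent to the PCM dynamics.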
It has been known since the work of Cherednik [@Cherednik:1981df] that this theory admits an integrable deformation, $$\label{eq:PCMdef}
S_C[g]= \frac{\kappa^2}{4\pi} \int_\Sigma {\textrm{d}}^2 \sigma {\textrm{Tr}} \left( g^{-1} \partial_+ g g^{-1} \partial_- g \right) + C J_+^3 J_-^3 \, .$$ This action defines a non-linear $\sigma$-model into a squashed three-sphere target space and whilst still integrable, the global symmetries are broken down to $SU(2)_L \times U(1)_R$. What is rather amazing [@Kawaguchi:2011pf] is that the broken symmetry is recovered from the non-local charges in the form of a semi-classical realisation of the quantum group ${\cal U}_q(\frak{sl}_2)$ which begins with, $$\label{eq:QG}
\{ Q^+_R , Q_R^- \}_{P.B.} = - i \frac{q^{Q^3_R} - q^{-Q^3_R} }{q - q^{-1}} \, , \quad q = \textrm{exp}\left(\frac{\sqrt{C}}{1+ C} \right) \, .$$
This idea can be generalised to the PCM on arbitrary groups, $G$, following the work of Klimcik [@Klimcik:2002zj]. A central rôle is played by a ${\cal R}$-matrix that solves a [*modified*]{} Yang-Baxter equation, $$\label{eq:mYB}
[{\cal R} A, {\cal R} B] - {\cal R}([{\cal R} A,B]+[A,{\cal R} B] ) = [A, B] \, ,$$ which should hold for all $A,B \in \frak{g}$ and where the modification is the presence of the term on the right hand side. For compact bosonic groups such a solution is unique; e.g. for the case of $\frak{su}(2)$ one has ${\cal R}: ( \sigma^1, \sigma^2, \sigma^3) \mapsto (- \sigma^2 , \sigma^1, 0)$. Armed with this we define the $\eta$-deformed PCM [@Klimcik:2002zj], $$\label{eq:Seta}
S_\eta[g] = \frac{1}{2\pi t} \int_\Sigma {\textrm{d}}^2 \sigma {\textrm{Tr}} \left( g^{-1} \partial_+ g, \frac{1}{1-\eta {\cal R}} g^{-1} \partial_- g \right) \, .$$ This theory is integrable and its Lax connection can be explicitly written down. As with the $SU(2)$ example above the global $G_R$ action is broken but is recovered through the Poisson algebra of non-local charges as a quantum group ${\cal U}_q(\frak{g})$ with $q =\textrm{exp}(\eta t \pi)$ [@Delduc:2013fga].
The work of Delduc [*et al*]{} extends this construction to first accommodate symmetric cosets [@Delduc:2013fga] and eventually semi-symmetric spaces (i.e. super-cosets) [@Delduc:2013qra] and in particular to the $PSU(2,2|4)$ case relevant for the $AdS_5\times S^5$ superstring. To gain some flavour for the $(AdS_5\times S^5)_\eta$ deformed spacetime we note that the isometries are broken down to a $U(1)^6$ and that supersymmetry is not preserved.
Based on the semi-classical quantum group symmetries of this theory it is conjectured [@Delduc:2013qra] that this provides a Lagrangian description for the quantum group deformation of the $AdS_5\times S^5$ S-matrix [@Hoare:2011wr] in the case where the quantum group parameter is real. A substantial piece of evidence in support of this has been the perturbative matching in the large tension limit [@Arutyunov:2013ega].
There remain a few important unresolved issues the most pressing of which is to establish more firmly the status of the $\eta$-deformed $PSU(2,2|4)$ superstring as a true CFT. As a first step one would like to show that the metric of the $(AdS_5\times S^5)_\eta$ deformed spacetime can be embedded into a full type IIB supergravity background. However attempts in this direction have thus far been somewhat inconclusive and indeed rather puzzling [@Lunin:2014tsa; @Arutyunov:2015qva].
[*[Note Added]{}*]{}: Recently further considerations of the supergravity interpretation have been provided in [@Arutyunov:2015mqj], in which it is suggested that the “background” of the $\eta$-model is [*not*]{} a type IIB supergravity solution; however, the 2d worldsheet is UV finite and scale invariant. Surprisingly, the background does solve some rather natural modified supergravity equations. The meaning of this, and whether the $\eta$-model really defines a critical string theory, is a focus of current efforts.
$\lambda$-Deformed WZW Model
============================
Now we turn to what will at first seem like a completely different construction making use of an idea introduced by Sfetsos [@Sfetsos:2013wia]. This simple recipe to construct integrable models goes as follows:
1. [**Double**]{}– First we shall double the degrees of freedom by combining two integrable theories, namely the PCM with radius $\kappa$ as defined in Eq. \[eq:PCM\] and the WZW model at level $k$.
2. [**Gauge**]{}– To reduce back to the original number of degrees of freedom we gauge the $G_L$ action in the PCM and the anomaly free diagonal action of $G$ in the WZW with a common gauge field. Though the gauge fields are non-propagating in two-dimensions this ties the two integrable systems together.
3. [**Fix**]{}– We now make a gauge fixing choice such that all that is left of the gauged PCM is a quadratic term in the gauge fields.
4. [**Integrate Out**]{}– Now the gauge fields can be integrated out to result in a deformation of the WZW model.
This procedure, which somewhat resembles a Buscher T-dualisation, defines the $\lambda$-deformed theory with action, $$\label{eq:Slambda}
S_\lambda[g]= S_{WZW}[g] + \frac{k}{2\pi}\int {\textrm{Tr}} \left( g^{-1} \partial_+ g, \frac{1}{\lambda^{-1} - \textrm{Ad}_g } \partial_- g g^{-1} \right) \, ,$$ where the parameter $\lambda$ measures the ratio of the PCM radius to WZW level, $$\lambda = \frac{k}{\kappa^{2}+ k } \ .$$ The result remains integrable!
To understand this $\lambda$-deformed model let us first consider the limit of $\lambda\rightarrow 0$, which yields a deformation of a WZW model by a marginally relevant current bilinear, $$S_{\lambda} |_{\lambda \rightarrow 0} \approx k S_{WZW} + \frac{k}{\pi}\int \lambda J_{+}^{a} J_{-}^{a} + {\cal O}( \lambda^{2} ) \ .$$ On the other hand, upon taking the limit $\lambda \rightarrow 1$ (which must be done with some care) one finds that the gauged WZW model degenerates into a Lagrange multiplier enforcing a flat connection. Integrating out the gauge fields is then exactly the Buscher T-duality procedure applied to a non-Abelian group of isometries. So in this limit–as was shown in [@Sfetsos:2013wia]–we recover the non-Abelian T-dual of the PCM, $$S_{\lambda} |_{\lambda \rightarrow 1} \approx \frac{1}{\pi} \int \partial_{+} X^{a} (\delta_{ab} + f_{ab}{}^{c}X_{c})^{-1} \partial_{-} X^{b} + {\cal O}(k^{-1}) \ .$$ This result was my initial interest in the subject; one can think of the $\lambda$-deformation as giving some sort of ‘regularised’ version of non-Abelian T-duality.
It was rather quickly realised how to generalise these $\lambda$-deformations to symmetric cosets [@Hollowood:2014rla] and semi-symmetric super-cosets (for which this gauging procedure gives the correct bosonic sector but needs some amendment in the fermionic directions) [@Hollowood:2014qma]. For the case based on the $PSU(2,2|4)$ super-coset, it was conjectured [@Hollowood:2014qma] that, like $\eta$, this $\lambda$-deformation also provides a Lagrangian realisation of a quantum group deformed S-matrix, but with the parameter $q$ this time a root of unity $q = e^{i \frac{\pi}{k} }$. Indeed, this is supported by the underlying structure of non-local charges in these models [@Itsios:2014vfa],[@Hollowood:2015dpa].
As with the case of $\eta$ deformations we should like to understand if these theories are conformal (at least at one-loop). The first observation is that for a compact group the $\lambda$ deformation is a marginally relevant operator, whereas for a non-compact group one has a marginally irrelevant operator. Thus if we combine a $\lambda$ deformation for a compact group with that of a non-compact group one has some hope that the two may provide equal and opposite cancelling contributions to the dilaton $\beta$-function. This was shown in [@Sfetsos:2014cea] for some low-dimensional examples. Further it was shown that these $\lambda$-deformations can be given embeddings as complete ten-dimensional type IIB supergravity solutions for cases relevant to $AdS_3\times S^3 \times T^4$ [@Sfetsos:2014cea] and $AdS_5 \times S^5$ [@Demulder:2015lva]. In this approach a simple form for the RR sector fields is “boot-strapped” from just knowledge of the bosonic NS sector using techniques developed for non-Abelian T-duality in [@Sfetsos:2010uq]. A more direct approach is to compute from first principles the one-loop $\beta$ function for the coupling $\lambda$. This was done for the bosonic case in [@Itsios:2014lca] and for the supercoset case in [@Appadu:2015nfa]. In fact the answer is very simple, $$\mu \frac{\mathrm{d}\lambda }{ \mathrm{d} \mu } = \frac{c_2( G)}{k} \lambda \, ,$$ where we note $c_2(G)$, the quadratic Casimir in the adjoint, vanishes for $PSU(2,2|4)$.
A further recent line of development in this direction has been the construction of multi-parameter integrable $\lambda$-deformations [@Sfetsos:2015nya] in which $\lambda$ entering into the action Eq. is promoted to a matrix $\lambda_{ab}$.
The $\eta$-$\lambda$ connection
===============================
Now let's put $\eta$ and $\lambda$ together. Though their Lagrangian constructions are quite different, they are actually closely related via a generalisation of T-duality called Poisson-Lie (PL) T-duality, introduced in [@Klimcik:1995ux] and used in the present context in [@Vicedo:2015pna; @Hoare:2015gda; @Sfetsos:2015nya; @Klimcik:2015gba].
To explain this, remember that the $\eta$-deformations broke the $G_{R}$ symmetry. So the corresponding currents ${\cal J}_{a}$–which are dressed versions of the Maurer-Cartan forms–are no longer conserved. However, the algebraic structure is rather special and ensures that these currents obey a modified conservation law of the form, $$d\star {\cal J}_{a} = \tilde{f}^{bc}{}_{a} {\cal J}_{b} \wedge {\cal J}_{c} \, .$$
If the right-hand side were zero, we could simply perform T-duality by gauging these currents. It turns out that even with this non-zero right-hand side there is still a notion of T-duality, introduced in the late 1990s by Klimcik and Severa [@Klimcik:1995ux]. What makes it work is that the $\tilde{f}$ are very special; they are structure constants for a second Lie algebra $\tilde{\frak{g}}$ built using the Yang-Baxter ${\cal R}$-matrix to define a bracket, $$[ A , B]_{{\cal R}} = [{\cal R} A , B]- [A,{\cal R} B]\, .$$ Mathematically speaking, the sum $ \frak{g} \oplus \tilde{\frak{g}} \cong \frak{g}^{\mathbb{C}}$ defines a Drinfel’d Double equipped with an inner product given in an obvious notation by, $$\langle T_a , T_b \rangle = \langle \tilde T^a , \tilde T^b \rangle= 0 \ , \quad \langle T_a , \tilde T^b \rangle = \delta_a^b\, .$$
PL T-duality is an equivalence between two $\sigma$-models, $$\begin{aligned}
&S[g] =\frac{1}{2\pi t\,\eta} \int \mathrm{d}^2 \sigma J_+^T (M - \Pi )^{-1} J_-\,, \quad g \in \frak{g} \, , \nonumber \\
&\tilde{S}[\tilde{g}] =\frac{1}{2\pi t\,\eta} \int \mathrm{d}^2 \sigma \tilde{J}_+^T (M^{-1} - \tilde\Pi )^{-1} \tilde{J}_-\, , \quad \tilde g \in \frak{\tilde g} \, .\nonumber
\end{aligned}$$ where $M$ can be an arbitrary constant matrix and the group theoretic matrix $\Pi$ is defined via, $$a_a{}^b = \langle g^{-1} T_a g , \tilde{T}^b \rangle \ , \quad b^{ab} = \langle g^{-1} \tilde T^a g , \tilde{T}^b \rangle \ , \quad \Pi = b^T a\, ,$$ with a similar definition for $\tilde\Pi$.
The $\eta$-deformation takes exactly the form of the first of these PL dual pairs with the identification $ M= \eta^{-1} - {\cal R} $ and hence can be T-dualised in this fashion. The result is not quite the $\lambda$-theory; it needs to be supplemented by an analytic continuation of some of the Euler angles defining the group element and also a relation between the deformation parameters[^1] and tensions of the models, $$\eta \rightarrow \frac{i (1- \lambda)}{(1+\lambda)}, \quad t \rightarrow \frac{(1+\lambda)}{ k(1-\lambda)} \ .$$ Performing the combination of PL duality, analytic continuation and this identification of parameters maps the $\eta$-theory of Eq.  exactly to the $\lambda$-theory of Eq. . Notice that acting on the parameter $q$ we have, $$q = e^{\eta t \pi} \leftrightarrow q = e^{\frac{i\pi}{k}} \, ,$$ showing that $q$ real indeed gets mapped to $q$ a root of unity.
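As a quick consistency check of the quoted map, note that the combination $\eta t$ entering $q$ simplifies under the stated substitutions: $$\eta\, t \;\rightarrow\; \frac{i (1- \lambda)}{(1+\lambda)} \cdot \frac{(1+\lambda)}{ k(1-\lambda)} = \frac{i}{k} \ , \qquad q = e^{\eta t \pi} \;\rightarrow\; e^{\frac{i\pi}{k}} \, ,$$ which is precisely the statement that a real quantum-group parameter is mapped to a root of unity.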
Outlook
=======
To summarise, we’ve seen two exciting new classes of integrable models, the $\eta$- and the $\lambda$-deformations. They are connected by generalised T-dualities. Their symmetries include quantum groups, with $q$ real and $q$ a root of unity respectively. These approaches extend to superstrings in $AdS_{5}\times S^{5}$ and are opening a new window onto integrable deformations in the AdS/CFT correspondence.\
There are many nice angles still to study:
- What are the implications of Poisson-Lie duality for Double Field Theory?
- Are these true consistent CFTs?
- Can this be extended to new scenarios e.g. $AdS_{4}\times CP^{3}$?
- Most important, and most difficult: what are the implications and interpretation of these deformations on the gauge theory side?\
[*[I would like to thank the organisers and supporters of the conference and my collaborators on this topic K. Sfetsos and K. Siampos. Due to limitations of space in these proceedings this is a far from exhaustive bibliography and we apologise to the authors of the many excellent papers that are omitted. ]{}*]{}
[0]{}
<span style="font-variant:small-caps;">J.A. Minahan</span> , and <span style="font-variant:small-caps;">K. Zarembo</span>, [*The Bethe ansatz for N=4 superYang-Mills*]{}, **0303**, 013 (2003). [](http://arxiv.org/abs/hep-th/0212208)
<span style="font-variant:small-caps;">L.D. Faddeev</span>, and <span style="font-variant:small-caps;">G.P. Korchemsky</span>, [*High-energy QCD as a completely integrable model*]{}, **B342**, 311 (1995). [](http://arxiv.org/abs/hep-th/9404173).
<span style="font-variant:small-caps;">L.N. Lipatov</span> [*Asymptotic behavior of multicolor QCD at high energies in connection with exactly solvable spin models*]{}, **59**, 596 (1994).
<span style="font-variant:small-caps;">O. Lunin</span>, and <span style="font-variant:small-caps;">J.M. Maldacena</span>, [*Deforming field theories with $U(1) \times U(1)$ global symmetry and their gravity duals*]{}, **0505**, 033 (2005). [](http://arxiv.org/abs/hep-th/0502086).
<span style="font-variant:small-caps;">S. Frolov</span>, [*Lax pair for strings in Lunin-Maldacena background*]{}, **0505**, 069 (2005). [](http://arxiv.org/abs/hep-th/0503201).
<span style="font-variant:small-caps;">F. Delduc</span>, <span style="font-variant:small-caps;">M. Magro</span>, and <span style="font-variant:small-caps;">B. Vicedo</span>, [*On classical $q$-deformations of integrable sigma-models*]{}, **1311**, 192 (2013). [](http://arxiv.org/abs/1308.3581)
<span style="font-variant:small-caps;">F. Delduc</span>, <span style="font-variant:small-caps;">M. Magro</span>, and <span style="font-variant:small-caps;">B. Vicedo</span>, [*An integrable deformation of the $AdS_5 \times S^5$ superstring action*]{}, **112**, 5 (2014). [](http://arxiv.org/abs/1309.5850)
<span style="font-variant:small-caps;">K. Sfetsos</span>, [*Integrable interpolations: From exact CFTs to non-Abelian T-duals*]{}, **B880**, 225 (2014). [](http://arxiv.org/abs/1312.4560)
<span style="font-variant:small-caps;">T.J. Hollowood</span>, <span style="font-variant:small-caps;">J.L. Miramontes</span> and <span style="font-variant:small-caps;">D.M. Schmidtt</span>, [*An Integrable Deformation of the $AdS_5 \times S^5$ Superstring*]{}, **47** 49 (2014) [](http://arxiv.org/abs/1409.1538) <span style="font-variant:small-caps;">B. Hoare</span>, <span style="font-variant:small-caps;">T. J. Hollowood</span> and <span style="font-variant:small-caps;">J. L. Miramontes</span>, [*$q$-Deformation of the $AdS_5 x S^5$ Superstring S-matrix and its Relativistic Limit*]{}, **1203** 015 (2012) [](http://arxiv.org/abs/1112.4485)
<span style="font-variant:small-caps;">C. Klimcik</span> and <span style="font-variant:small-caps;">P. Severa</span>, [*Dual nonAbelian duality and the Drinfeld double*]{}, Phys. Lett. B [**351**]{} (1995) 455 [](http://arxiv.org/abs/hep-th/9502122).
<span style="font-variant:small-caps;">C. Klimcik</span>, [*Yang-Baxter sigma models and dS/AdS T duality*]{}, **0212**, 051 (2002). [](http://arxiv.org/abs/hep-th/0210095) <span style="font-variant:small-caps;">I. V. Cherednik</span>, [*Relativistically Invariant Quasiclassical Limits of Integrable Two-dimensional Quantum Models*]{}, **47**, 422 (1981).
<span style="font-variant:small-caps;">I. Kawaguchi</span>, and <span style="font-variant:small-caps;">K. Yoshida</span>, [*Hybrid classical integrability in squashed sigma models*]{}, **705** 251 (2011) [](http://arxiv.org/abs/1107.3662)
<span style="font-variant:small-caps;">G. Arutyunov</span>, <span style="font-variant:small-caps;">R. Borsato</span>, and <span style="font-variant:small-caps;">S. Frolov</span>, [*S-matrix for strings on $\eta$-deformed AdS5 x S5*]{}, **1404** 002 (2014) [](http://arxiv.org/abs/1312.3542) <span style="font-variant:small-caps;">O. Lunin</span>, <span style="font-variant:small-caps;">R. Roiban</span>, and <span style="font-variant:small-caps;">A. A. Tseytlin</span>, [*Supergravity backgrounds for deformations of AdS$_{n} \times S^n$ supercoset string models*]{}, **891** 106 (2015) [](http://arxiv.org/abs/1411.1066)
<span style="font-variant:small-caps;">G. Arutyunov</span>, <span style="font-variant:small-caps;">R. Borsato</span>, and <span style="font-variant:small-caps;">S. Frolov</span>, [*Puzzles of eta-deformed $AdS_5 \times S^5$*]{}, [](http://arxiv.org/abs/1507.04239) <span style="font-variant:small-caps;">T.J. Hollowood</span>, <span style="font-variant:small-caps;">J.L. Miramontes</span> and <span style="font-variant:small-caps;">D.M. Schmidtt</span>, [*Integrable Deformations of Strings on Symmetric Spaces*]{}, **1411** 009 (2014) [](http://arxiv.org/abs/1407.2840)
<span style="font-variant:small-caps;">G. Itsios</span>, <span style="font-variant:small-caps;">K. Sfetsos</span>, <span style="font-variant:small-caps;">K. Siampos</span>, and <span style="font-variant:small-caps;">A. Torrielli</span>, [*The classical Yang-Baxter equation and the associated Yangian symmetry of gauged WZW-type theories*]{}, 64 (2014) [](http://arxiv.org/abs/1409.0554) <span style="font-variant:small-caps;">T.J. Hollowood</span>, <span style="font-variant:small-caps;">J.L. Miramontes</span> and <span style="font-variant:small-caps;">D.M. Schmidtt</span>, [*S-Matrices and Quantum Group Symmetry of k-Deformed Sigma Models*]{}, [](http://arxiv.org/abs/1506.06601) <span style="font-variant:small-caps;">K. Sfetsos</span>, and <span style="font-variant:small-caps;">D. C. Thompson</span>, [*Spacetimes for $\lambda$-deformations*]{}, 164 (2014) [](http://arxiv.org/abs/1410.1886)
<span style="font-variant:small-caps;">S. Demulder</span>, <span style="font-variant:small-caps;">K. Sfetsos</span>, and <span style="font-variant:small-caps;">D. C. Thompson</span>, [*Integrable $\lambda$-deformations: Squashing Coset CFTs and $AdS_5\times S^5$*]{}, 019 (2015) [](http://arxiv.org/abs/1504.02781)
<span style="font-variant:small-caps;">K. Sfetsos</span>, and <span style="font-variant:small-caps;">D. C. Thompson</span>, [*On non-abelian T-dual geometries with Ramond fluxes*]{}, 21 (2011) [](http://arxiv.org/abs/1012.1320) <span style="font-variant:small-caps;">G. Itsios</span>, <span style="font-variant:small-caps;">K. Sfetsos</span>, and <span style="font-variant:small-caps;">K. Siampos</span>, [*The all-loop non-Abelian Thirring model and its RG flow*]{}, **733** 265 (2014) [](http://arxiv.org/abs/1404.3748) <span style="font-variant:small-caps;">C. Appadu</span>, and <span style="font-variant:small-caps;">T. J. Hollowood</span>, [*$\beta$ Function of k Deformed $AdS_5 \times S^5$ String Theory*]{}, JHEP [**1511**]{} (2015) 095 [](http://arxiv.org/abs/1507.05420) <span style="font-variant:small-caps;">K. Sfetsos</span>, <span style="font-variant:small-caps;">K. Siampos</span> and <span style="font-variant:small-caps;">D. C. Thompson</span>, [*Generalised integrable $\eta$- and $\lambda$-deformations and their relation*]{}, **899** 489 (2015) [](http://arxiv.org/abs/1506.05784)
<span style="font-variant:small-caps;">B. Vicedo</span>, [*Deformed integrable $\sigma$-models, classical R-matrices and classical exchange algebra on Drinfel’d doubles*]{}, J. Phys. A [**48**]{} (2015) 35, 355203 [](http://arxiv.org/abs/1504.06303) <span style="font-variant:small-caps;">B. Hoare</span> and <span style="font-variant:small-caps;">A. A. Tseytlin</span>, [*On integrable deformations of superstring sigma models related to $AdS_n \times S^n$ supercosets*]{}, Nucl. Phys. B [**897**]{} (2015) 448 [](http://arxiv.org/abs/1504.07213)
<span style="font-variant:small-caps;">C. Klimcik</span>, [*$\eta$ and $\lambda$ deformations as ${\cal E}$-models*]{}, Nucl. Phys. B [**900**]{} (2015) 259 [](http://arxiv.org/abs/1508.05832) <span style="font-variant:small-caps;">G. Arutyunov et al</span>. [*Scale invariance of the $\eta$-deformed $AdS_5 \times S^5$ superstring, T-duality and modified type II equations*]{}, [](http://arxiv.org/abs/1511.05795)
[^1]: In comparison to other instances in the literature one needs to take some care, here we have inherited definitions of $\lambda$ and $\eta$ that are most natural from bosonic $\sigma$-model point of view but in the context of superstrings different definitions $\lambda_s$ and $\eta_s$ are more natural and sometimes used. These are related by $\lambda = \lambda_s^2$ and $\eta = \frac{2 \eta_s}{1-\eta_s^2}$.
---
abstract: 'In cake-cutting, strategy-proofness is a very costly requirement in terms of fairness: for $n=2$ it implies a dictatorial allocation, whereas for $n\geq 3$ it requires that one agent receives no cake. We show that a weaker version of this property recently suggested by Troyan and Morrill, called [*not-obvious manipulability*]{}, is compatible with the strong fairness property of [*proportionality*]{}, which guarantees that each agent receives $1/n$ of the cake. Both properties are satisfied by the [*leftmost leaves*]{} mechanism, an adaptation of the Dubins–Spanier moving knife procedure. Most other classical proportional mechanisms in the literature are obviously manipulable, including the original moving knife mechanism. Not-obvious manipulability explains why leftmost leaves is manipulated less often in practice than other proportional mechanisms.'
address:
- 'Queen’s Management School, Queen’s University Belfast, UK.'
- 'Center for European Economic Research, Mannheim, Germany.'
- 'Department of Computer Science, Ariel University, Israel.'
author:
- 'Josué Ortega[^1]'
- 'Erel Segal-Halevi'
bibliography:
- 'bibliog.bib'
title: 'Obvious Manipulations in Cake-Cutting'
---
cake-cutting ,not-obvious manipulability ,maximin strategy-proofness ,leftmost leaves ,prior-free mechanism design.\
[*JEL Codes:*]{} D63, D82.
Introduction {#sec:introduction}
============
The division of a single good among several agents who value different parts of it distinctly is one of the oldest fair division problems, going as far back as the division of land between Abram and Lot (Genesis 13). Since its formalization as the cake-cutting problem [@steinhaus1948], this research question has inspired a large interdisciplinary literature which has proposed mechanisms that produce fair allocations without giving agents incentives to misrepresent their preferences over the cake. Unfortunately, [@branzei2015dictatorship] show that any deterministic strategy-proof mechanism yields a dictatorial allocation for $n=2$, and that one agent must receive no cake at all for $n\geq 3$, documenting a strong tension between fairness and incentive properties.
Nevertheless, recent results in applied mechanism design have shown that, even if mechanisms can be manipulated in theory, they are not always manipulated in practice. Some manipulations are more likely to be observed than others, particularly those which are salient or require less computation. Based on this observation, [@troyan2019obvious] have proposed a weaker version of strategy-proofness for direct mechanisms, called [*not-obvious manipulability*]{} (NOM). They define a manipulation as obvious if it yields a higher utility than truth-telling in either the best- or worst-case scenarios. A mechanism is NOM if it admits no obvious manipulation. Their notion of NOM is a compelling one, since it does not require prior beliefs about other agents’ types, and compares mechanisms only based on two scenarios which are particularly salient and which require less cognitive effort to compute. They show that NOM accurately predicts the level of manipulability that different mechanisms experience in practice in school choice and auctions.
In this paper, we provide a natural extension of NOM to indirect mechanisms, and show that the stark conflict between fairness and truth-telling in cake-cutting disappears if we weaken strategy-proofness to NOM. In particular, NOM is compatible with the strong fairness property of [*proportionality*]{}, which guarantees each agent $1/n$ of the cake. Both properties are satisfied by a discrete adaptation of the moving knife mechanism [@dubins1961], in which all agents cut the cake and the agent with the smallest cut receives all the cake to the left of his cut and leaves. This procedure is also procedurally fair and easy to implement in practice. NOM is violated by most other classical proportional mechanisms, even by the original Dubins-Spanier procedure, which shows that theoretically equivalent mechanisms may have different “obvious” incentive properties for boundedly rational agents. NOM partially explains why leftmost leaves is manipulated less frequently than other cake-cutting mechanisms in practice [@kyropoulou2019fair].
Related Literature
==================
The cake-cutting problem has been studied for decades, given its numerous applications to the division of land, inheritances, and cloud computing [@brams1996; @moulin2004]. Most of the cake-cutting literature studies indirect revelation mechanisms, in which agents are asked to reveal their valuation over the cake via specific messages such as cuts and evaluations. In particular, the computer science literature focuses on the so called *Robertson–Webb mechanisms* [@robertson1998], in which agents can only use two types of messages: either an agent *cuts* a piece of the cake having a specific value, or he *evaluates* an existing piece by revealing his utility for it. Most well-known mechanisms in the literature, such as cut and choose, can be expressed as a combination of these two operations.
The main result relating fairness and incentive properties in cake-cutting was put forward by [@branzei2015dictatorship]. They show that with three or more agents, every strategy-proof Robertson–Webb mechanism assigns no cake to at least one agent. Their result builds on a weaker result by [@kurokawa2013cut] regarding mechanisms that have a bounded number of messages.
To allow for some degree of fairness and truth-telling, the literature has considered four research avenues.
The first one is to use *randomized mechanisms*. For example, [@mossel2010] show that truth-telling in expectation and approximate proportionality can be obtained with probabilistic mechanisms. In this paper we consider deterministic mechanisms only.
The second is to restrict the set of possible valuations over the cake. In this line, [@chen2013] provide a complex deterministic mechanism that is both strategy-proof and proportional for a restricted class of utilities called *piecewise linear*. Their mechanism may waste pieces of cake which remain unassigned, something that never occurs with the proportional and NOM mechanism we identify.
A third research avenue studies allocation mechanisms in which an agent can only increase his utility by $\epsilon$ by misrepresenting his preferences, compared to the utility he obtains from truth-telling. [@menon2017deterministic] and [@kyropoulou2019fair] obtain bounds on the amount of extra utility that agents can guarantee by lying in proportional cake-cutting mechanisms.
In this paper, we explore a fourth way to escape the conflict between fairness and truth-telling, which is relaxing the strategy-proofness definition to the one of NOM. NOM is a stronger version of an existing weak truth-telling property in the cake-cutting literature called *maximin strategy-proofness*, proposed by [@brams2006; @brams2008proportional]. We discuss the relationship between these two concepts in detail in the next section.
Model
=====
Outcomes, agents and types
--------------------------
In a general mechanism design problem, there is a set of possible outcomes $\mathcal X$, a set of agents $N$, and a set of possible agent-types $\Theta= \Pi_{i \in N} \Theta_i$. In the particular case of cake cutting, there is an interval $[0,1]$ called the *cake*. A union of subintervals of $[0,1]$ is called a *piece* of the cake. The outcome-set $\mathcal X$ is the set of *allocations* — ordered partitions of $[0,1]$ into $n$ disjoint pieces. $X\in \mathcal X$ denotes an arbitrary allocation and $X_i$ denotes the piece allocated to agent $i$.
The type-set $\Theta$ is the set of possible *utility functions*, where a utility function is a non-atomic measure — an additive function that maps each piece to a non-negative number. The utility of the entire cake is normalized to $1$. The utility of agent $i$ with type $\theta_i$ is denoted by $u_i$. That $u_i$ is non-atomic allows us to ignore the boundaries of intervals. Another implication of nonatomicity is that $u_i$ is divisible, i.e. for every subinterval $[x, y]$ and $0\leq \lambda \leq 1$, there exists a point $z \in [x, y]$ such that $u_i([x, z];\theta_{i}) = \lambda \, u_i([x, y];\theta_{i})$. [^2]
Extensive forms and mechanisms
------------------------------
An *extensive form* is an arborescence $A$ that consists of a set of labelled nodes $H$ and a set of directed edges $E$.[^3] The root node is $h_0$. Each terminal node is labelled with an allocation of the cake. Each non-terminal node $h$ is labelled with a non-empty subset of agents $N(h)$ who have to answer a query about their type. $N(h)$ is said to be the set of players *active at $h$*.
In a general extensive form, the query may be arbitrary, for example asking agents to fully reveal their type. In a *Robertson–Webb extensive form*, only two types of queries are allowed:
1. *Cut query*: the query cut$(i;x,\alpha)$ asks agent $i$ the minimum point $y \in [0; 1]$ such that $u_i([x,y];\theta_i)=\alpha$; where $\alpha \in [0,1]$ and $x$ is an existing cut point or 0 or 1. The point $y$ becomes a cut point.
2. *Eval query*: the query eval$(i; x, y)$ asks agent $i$ for his value for the interval $[x, y]$, that is, eval$(i; x, y) = u_i([x, y];\theta_i)$, where $x,y$ are previously made cut points or 0 or 1.
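To make the two query types concrete, here is a small illustrative sketch of our own (not part of the formal model): a piecewise-constant valuation that answers cut and eval queries. The class name and interface are ours; the instance at the bottom mimics the valuation of the agent called Blue in the proof of Theorem \[thm:om\] below.

```python
import numpy as np

class PiecewiseConstantValuation:
    """A valuation on [0,1] with piecewise-constant density, normalised so eval(0,1) == 1."""

    def __init__(self, breakpoints, densities):
        self.x = np.asarray(breakpoints, float)      # e.g. [0, 0.1, 0.4, 0.6, 0.9, 1]
        self.d = np.asarray(densities, float)        # density on each subinterval
        self.d /= np.sum(self.d * np.diff(self.x))   # normalise the total value to 1

    def eval(self, x, y):
        """Answer eval(i; x, y): the value of the interval [x, y]."""
        lo = np.clip(self.x[:-1], x, y)
        hi = np.clip(self.x[1:], x, y)
        return float(np.sum(self.d * (hi - lo)))

    def cut(self, x, alpha):
        """Answer cut(i; x, alpha): the smallest y such that the value of [x, y] equals alpha."""
        remaining = alpha
        for a, b, d in zip(self.x[:-1], self.x[1:], self.d):
            a = max(a, x)
            if b <= a:
                continue
            piece = d * (b - a)
            if piece >= remaining:
                return a + (remaining / d if d > 0 else 0.0)
            remaining -= piece
        return 1.0  # alpha exceeds the value of [x, 1]; clamp at the right end

# Blue values [0,0.1], [0.4,0.6] and [0.9,1] uniformly; his truthful half-value cut is at 0.5.
blue = PiecewiseConstantValuation([0, 0.1, 0.4, 0.6, 0.9, 1], [1, 0, 1, 0, 1])
assert abs(blue.cut(0, 0.5) - 0.5) < 1e-9 and abs(blue.eval(0, 0.4) - 0.25) < 1e-9
```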
All agents must reply to the queries and their answers must be dynamically consistent, meaning that there must be a possible type for which such a sequence of answers is truthful.[^4] However, agents’ answers may be untruthful. For every possible combination of agents’ answers to the queries at node $h$, there is an edge from $h$ to another node $h'$. Thus, answers to queries and edges have a one-to-one relationship. Note that, since the valuations are real numbers, there are uncountably many edges emanating from each node. Figure \[fig:extensiveform\] illustrates a small subset of an extensive form $A$.
*Figure \[fig:extensiveform\]: a fragment of an extensive form $A$. The root node carries the query cut$(1;0,0.5)$; among its (uncountably many) outgoing edges, the answer $0.6$ leads to a node with the query eval$(2;0,0.6)$, whose answers $0.2$ and $0.8$ lead to terminal nodes labelled with the allocations $[0,0.6),[0.6,1]$ and $[0.6,1],[0,0.6)$ respectively.*
We make the following two standard informational assumptions [@moore1988subgame]. First, at each node $h$ all agents know the entire history of the play. Second, if more than one agent is active at node $h$, they answer their corresponding queries simultaneously.
The indirect mechanism $M$ corresponding to extensive form $A$ takes as input the answers to each query in $A$ and returns the allocation obtained at the corresponding terminal node. That is, the input to the mechanism consists of a path from the root node to a terminal node labelled with an allocation. At node $h$, the edge corresponding to a truthful answer by all agents in $N(h)$ with a type profile $\theta$ is denoted by $e^h(\theta)$, and $E(\theta)=\{e \in E : e = e^h(\theta) \text{ for some } h \in H\}$. That is, $E(\theta)$ is the set of all edges corresponding to truthful reports. The set $E(\theta)$ is an edge cover, i.e., a set of edges incident to every node in $A$.[^5] An alternative edge cover in which all agents except $i$ answer all queries truthfully, and agent $i$ answers the queries as if he were of type $\theta_i'$, is denoted by $E(\theta_i',\theta_{-i})$. By our assumption of dynamically consistent answers, each possible set of untruthful answers by each agent is associated with a possible type in $\Theta_i$. $M_i(E(\theta_i,\theta_{-i}))$ denotes agent $i$’s individual allocation in mechanism $M$ when agents’ answers correspond to the set of edges $E(\theta_i,\theta_{-i})$. A special case of our definition is a direct revelation mechanism, when there is a unique non-terminal node — the root $h_0$ — in which $N(h_0)=N$ and the query for each agent is to fully reveal his type.
A mechanism $M$ is called [*proportional*]{} if it guarantees to a truthful agent a utility of $1/n$ regardless of what every other agent reports, i.e. $u_i(M_i(E(\theta_i,\theta'_{-i});\theta_i))\geq 1/n$ for all $i \in N$, for all $\theta_i \in \Theta_i$, and all $\theta'_{-i} \in \Theta_{-i}$.
Manipulations
-------------
A *manipulation* by agent $i$ is a set of (partially) untruthful answers that satisfy the consistency requirement, i.e., correspond to some type $\theta_i' \neq \theta_i$.
A manipulation is called *profitable* (for mechanism $M$ and type $\theta_i$) if $u_i(M_i(E(\theta'_i,\theta_{-i}));\theta_i) > u_i(M_i(E(\theta_i,\theta_{-i}));\theta_i)$. If some type of some agent has a profitable manipulation, then mechanism $M$ is called *manipulable*.
A mechanism is called [*strategy-proof*]{} (SP) if it is not manipulable, that is, $u_i(M_i(E(\theta_i,\theta_{-i}));\theta_i) \geq u_i(M_i(E(\theta'_i,\theta_{-i}));\theta_i)$ for all $i \in N$, for all $\theta_i, \theta'_i \in \Theta_i$, and all $\theta_{-i} \in \Theta_{-i}$. This definition of SP is very demanding: it requires a truthful report from *every* agent, *every* time he is asked to answer a query.[^6] Even if just one type of an agent has an incentive to give a non-truthful answer to a single query, the mechanism is no longer strategy-proof.
@troyan2019obvious suggest a weaker version of SP for direct-revelation mechanisms, which only compares the best and worst case scenarios from both truthful and untruthful behavior. We extend their definition to indirect mechanisms as follows. A mechanism $M$ is [*not-obviously manipulable*]{} (NOM) if, for any profitable manipulation of agent $i$, corresponding to pretending being a type $\theta_i'$, the following two conditions hold:[^7] $$\begin{aligned}
\label{eq:nom-inf}
\inf_{\theta_{-i}} u_i(M_i(E(\theta'_i,\theta_{-i}));\theta_i) \leq \inf_{\theta_{-i}} u_i(M_i(E(\theta_i,\theta_{-i}));\theta_i)
\\
\label{eq:nom-sup}
\sup_{\theta_{-i}} u_i(M_i(E(\theta'_i,\theta_{-i}));\theta_i) \leq \sup_{\theta_{-i}} u_i(M_i(E(\theta_i,\theta_{-i}));\theta_i)\end{aligned}$$
If any of the previous two conditions do not hold for some $\theta'_i$, then $\theta'_i$ is said to be an *obvious manipulation* for agent $i$ with type $\theta_i$; and the mechanism $M$ is *obviously manipulable*. In other words, a manipulation is obvious if it either makes the agent better off in the worst-case, or if it makes him better off in the best-case. This is a strengthening of the *maximin strategy-proofness* defined by [@brams2006; @brams2008proportional], who only impose condition (1). Brams, Jones and Klamler write: [*“We assume that players try to maximize the minimum-value pieces (maximin pieces) that they can guarantee for themselves, regardless of what the other players do. In this sense, the players are risk-averse and never strategically announce false measures if it does not guarantee them more-valued pieces"*]{}.
Both relaxations of strategy-proofness have the advantages that agents do not require beliefs about other agents’ actions, and that comparing best- and worst-case scenarios requires less cognitive effort than comparing expected values using an arbitrary distribution over agents’ types. However, maximin strategy-proofness is a mild property that is satisfied by a very large class of mechanisms.[^8] On the other hand, NOM is a property that most classical proportional mechanisms in the literature fail, with the [*leftmost leaves*]{} mechanism being a remarkable exception (Theorem \[thm:leftmost-leaves\]).
Before we proceed to present some examples, we clarify a difference between the notion of indirect mechanisms that we use (standard in the computer science literature) and the definition in economics [@moore1988subgame]. In economics, each non-terminal node is labelled not with a query, but with an action. Thus, to define leftmost leaves one could just specify that, at each period $t$, an agent’s possible actions are to cut a cake at any point, and the one who provides the smallest cut exits with the leftmost part. There is no question where to cut: agents simply cut the cake wherever they want. But, if one does not specify the query asked to each player, it is not [at all]{} clear what truth-telling behavior is. Where should an agent with uniform valuation over the cake cut when dividing a cake against 3 agents if he is not asked a specific question? The computer science definition emphasizes that the mechanism designer can make a mechanism hard to manipulate by cleverly choosing which queries to ask at each node in the extensive form. Indeed, if we slightly modify the queries asked in *leftmost leaves*, the mechanism becomes obviously manipulable, as we show after the proof of Theorem \[thm:leftmost-leaves\].
Results
=======
We first show, using an example, that many classic cake-cutting procedures are obviously manipulable.
\[thm:om\] The cake-cutting mechanisms known as cut-and-choose, cut-middle, Banach-Knaster last diminisher and Dubins-Spanier moving knife are all obviously manipulable.
For simplicity, consider a cake-cutting problem with piecewise uniform valuations, i.e. agents either like or dislike certain intervals, and each desirable interval of the same length has the same value. One agent, called Blue, has valuations as in Figure \[fig:example1\].
*Figure \[fig:example1\]: Blue's valuation of the cake — he values only the intervals $[0,0.1]$, $[0.4,0.6]$ and $[0.9,1]$, each uniformly — together with two panels, (a) and (b), that mark cut points used in the examples of the proof.*
In the well-known [*cut-and-choose*]{} mechanism, Blue (the cutter) is asked $\text{cut}(\text{blue};0,1/2)$. Truthful behavior requires him to cut at $0.5$, guaranteeing a utility of $0.5$ in all cases. If Blue chooses a profitable manipulation instead, say to cut at $0.4$, the best case is that the other agent chooses the left piece of the cake, leaving to Blue a utility of $0.75$. Thus, in inequality , the supremum at the left-hand side is at least $0.75$ while the supremum at the right-hand side is $0.5$. Cut-and-choose is therefore obviously manipulable.
In the [*cut-middle*]{} mechanism, both agents are asked $\text{cut}(i;0,1/2) = x_i$ simultaneously, and the cake is divided at $\frac{x_1+x_2}{2}$, with each agent obtaining the part of the cake which contains his cut. If Blue is truthful and cuts at $0.5$, the best case is that the other agent cuts at $\epsilon$ (or $1-\epsilon$) and thus the cut point becomes $\frac{0.5+\epsilon}{2}\approx 0.25$, so Blue receives $0.75$ utility. Nevertheless, if Blue chooses a profitable manipulation such as cutting at $\epsilon$, the best that could happen is that the other agent cuts at $\eta < \epsilon$, and thus Blue receives a utility almost equal to $1$. In inequality , the supremum at the left-hand side is $1$ and the supremum at the right-hand side is $0.75$.
We conclude that cut-middle, too, is obviously manipulable.
Cut-and-choose and cut-middle are mechanisms for dividing a cake among two agents. We now turn to mechanisms that can be used to divide cake among two or more agents.
First, consider the [*Banach-Knaster last diminisher*]{} mechanism [@steinhaus1948], in which agents are assigned a fixed order. The first agent is asked the point $x_1=\text{cut}(1;0,1/n)$, and is tentatively assigned the piece $[0, x_1]$. Then, the second agent has an option to “diminish” this piece: he is asked the point $x_2=\text{cut}(2;0,1/n)$, and if $x_2<x_1$, then the previous tentative assignment is revoked, and agent 2 is now tentatively assigned the piece $[0, x_2]$. This goes on up to agent $n$. Then, the tentative assignment becomes final: some agent $k$ receives the piece $[0,x_k]$ and the other agents recursively divide the remaining cake $[x_k,1]$.
This procedure is obviously manipulable even with two agents. For example, consider Figure \[fig:example1\](a). Suppose agent 1 answers truthfully $x_1 = 0.5$. If agent 2 answers truthfully $x_2 = 0.2$, then he gets a value of exactly $0.5$. A profitable manipulation for agent 2 is to answer $x_2' = x_1 - \epsilon$: it yields a utility of $1$. This is true in both the best- and worst-case scenarios, which are the same in this case.
We turn to the [*Dubins-Spanier moving knife*]{} [@dubins1961], in which a knife moves from 0 to 1 until the first agent stops it, at a point $x$ for which $u_i([0,x];\theta_i)=0.5$ in our two-agent example; that agent receives the interval $[0,x]$. The best that can happen to a truthful agent is that the other agent cuts the cake at $\epsilon$, so that he receives all the remaining cake, which gives him almost 1 utility. However, once the moving knife has reached point $x$, a truthful agent should stop the knife, implying that he gets $0.5$ with certainty, whereas a profitable manipulation would yield a higher utility in the best-case scenario (one example is to stop the knife at $1-\epsilon$). Thus, the Dubins-Spanier moving knife is also obviously manipulable.
We now present a slight variation of the Dubins-Spanier moving knife mechanism, in which, in each period, all agents are asked the cut query simultaneously rather than sequentially. We call this variant **leftmost leaves**. The leftmost leaves mechanism works as follows:
- In period $1$ all agents are asked $\text{cut}(i;0,1/n)$. The agent who cuts the cake at the smallest point (denoted $x^1$) leaves with the interval $[0,x^1]$. In case of a tie, the agent with the smallest index of all those who cut at $x^1$ leaves with $[0,x^1]$.
- In period 2, every remaining agent $i$ is asked $\text{cut}(i;x^1,\frac{u_i([x^{1},1];\theta_i)}{n-1})$. The agent who submits the smallest point (denoted $x^2$) leaves with the interval $(x^1,x^2]$. In case of a tie, the agent with the smallest index of all those who cut at $x^2$ leaves with $(x^1,x^2]$.
- In period $t$, all remaining agents are asked $\text{cut}(i;x^{t-1},\frac{u_i([x^{t-1},1];\theta_i)}{n-t+1})$. The procedure continues until only one agent remains. That agent receives $(x^{n-1},1]$.
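The description above translates directly into code. The following is a minimal, array-based sketch of our own (not an implementation from the paper): valuations are given as value densities on a grid of $[0,1]$, cut queries are answered by inverting the cumulative value, and ties are broken by the smallest index, as specified above.

```python
import numpy as np

def leftmost_leaves(densities, grid):
    """densities: list of arrays, each a value density on the grid points of [0,1].
    Returns a dict mapping each agent to the interval (left, right) he receives."""
    dx = grid[1] - grid[0]
    cum = [np.cumsum(d * dx) for d in densities]       # value of [0, t] for each agent

    def value(i, a, b):                                # u_i([a, b])
        return np.interp(b, grid, cum[i]) - np.interp(a, grid, cum[i])

    def cut(i, a, alpha):                              # smallest y with u_i([a, y]) = alpha
        target = np.interp(a, grid, cum[i]) + alpha
        return grid[min(np.searchsorted(cum[i], target), len(grid) - 1)]

    remaining, left, allocation = list(range(len(densities))), 0.0, {}
    while len(remaining) > 1:
        k = len(remaining)
        cuts = [cut(i, left, value(i, left, 1.0) / k) for i in remaining]
        j = int(np.argmin(cuts))                       # ties go to the smallest index
        allocation[remaining.pop(j)] = (left, cuts[j])
        left = cuts[j]
    allocation[remaining[0]] = (left, 1.0)
    return allocation

# Three agents with identical uniform valuations receive [0,1/3], (1/3,2/3], (2/3,1]
# (up to grid error), in index order because of the tie-breaking rule.
grid = np.linspace(0.0, 1.0, 10001)
print(leftmost_leaves([np.ones_like(grid)] * 3, grid))
```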
Despite leftmost leaves being equivalent (in a sense in which we describe below) to the Dubins-Spanier moving knife mechanism, they differ in terms of obvious manipulability.
\[thm:leftmost-leaves\] The leftmost leaves mechanism is proportional and not-obviously-manipulable.
Before presenting the proof, we present a few remarks about this result and its possible extensions.
First, note that leftmost leaves is theoretically equivalent to the Dubins-Spanier moving knife mechanism, in that when applied to truthful agents with the same types, both mechanisms always suggest the same allocation. How can mechanisms that are theoretically equivalent, such as the two we just presented, rank differently in terms of incentives? This idea goes back to [@li2017obviously], who shows that two equivalent mechanisms, such as the ascending auction and the second price auction, in which bidding truthfully is a weakly dominant strategy, are different in terms of incentive properties for boundedly rational agents. The intuition in both results is similar: both in the second price auction and in leftmost leaves, agents have no restriction in the prior about their opponents’ types when they reveal their type through either their bids or their cuts; whereas in both the ascending auction and the moving-knife procedure, the fact that the knife or the clock has reached some point tells the agents something about their opponents’ types, and thus modifies what to expect in the best- and worst-case scenarios.
Second, the leftmost leaves mechanism satisfies several other desiderata that make it a good candidate to divide goods in practice. It is not only proportional and NOM, but also procedurally fair (up to tie-breaks) [@crawford1977; @nicolo2008], since agents’ identities do not affect the allocation produced. It also generates an assignment of a connected piece of cake for each agent, a desirable property for applications such as the division of land [@segal2017fair].
Third, a follow-up question to Theorem \[thm:leftmost-leaves\] is whether leftmost leaves is the only proportional and NOM mechanism in cake-cutting. The answer is no, as leftmost leaves can be slightly modified in several ways retaining both properties. One of such modifications is to start cutting the cake from the left instead of from the right.[^9] Another less trivial one is a modification of the protocol of [@even1984], which works exactly the same as leftmost leaves for $n=2$ but requires fewer queries for larger values of $n$ (the proof is an almost verbatim copy of the proof of Theorem \[thm:leftmost-leaves\], so we omit it).[^10] Obtaining a characterization of all NOM and proportional mechanisms remains an interesting, albeit challenging, open question.
Proof of Theorem \[thm:leftmost-leaves\]
========================================
We denote the leftmost leaves mechanism by ${M^{ll}}$. We show that ${M^{ll}}$ is not-obviously manipulable. Since ${M^{ll}}$ is an anonymous mechanism in which the identity of the agents does not play a role, it suffices to check conditions and for one agent only, denoted by $i$.
First, we show that no manipulation yields a higher utility in the worst-case scenario. We use the following lemma.
\[lem:inf\] At period $t$, $$\inf_{\theta_{-i}} u_i({M^{ll}}(E(\theta_i,\theta_{-i}));\theta_i) = \frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$$
If $x_i^t$ is the smallest cut at period $t$, then the result is immediate. Otherwise, $$u_i([x^{t-1},x^t];\theta_i)\leq \frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$$
A direct implication is that the remainder of the cake $[x^t,1]$ must be worth at least $\frac{n-t}{n-t+1}$ of $u_i((x^{t-1},1);\theta_i)$, i.e. $$u_i([x^t,1];\theta_i) \geq \frac{n-t}{n-t+1} u_i([x^{t-1},1];\theta_i)$$
Dividing both sides by $n-t$ $$\frac{u_i([x^t,1];\theta_i)}{n-t} \geq \frac{ u_i([x^{t-1},1];\theta_i)}{n-t+1}$$ Note that the left-hand side of the previous expression is the utility that the truthful agent $i$ would receive if he cut the cake at the smallest point in period $t+1$. If his cut was not the smallest at period $t+1$, a recursive formulation shows that he would receive a share of the cake that he values even more in period $t+2$. Thus, the worst that can happen to a truthful agent in period $t$ is to obtain a utility of $\frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$.
Setting $t=1$ in Lemma \[lem:inf\] shows that leftmost leaves is proportional:
$$\inf_{\theta_{-i}} u_i({M^{ll}}(E(\theta_i,\theta_{-i}));\theta_i) = \frac{u_i([0,1];\theta_i)}{n}$$
Now we show that any manipulation at period $t$, $\hat x_i^t \neq x_i^t$, yields a utility weakly smaller than $\frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$ in the worst-case scenario.
If $\hat x_i^t < x_i^t$, in the worst-case scenario $\hat x_i^t$ would be the smallest cut, and agent $i$ would therefore receive the allocation $[x^{t-1},\hat x_i^t]$, which by construction yields a weakly lower utility than the allocation $[x^{t-1},x^t_i]$.
In the other case, if $\hat x_i^t > x_i^t$, in the worst-case scenario the smallest cut in period $t$, $x^t$, would be such that $x_i^t< x^t<\hat x_i^t$. Thus, the rest of the cake $[x^t,1]$ is such that $$u_i([x^t,1];\theta_i) < \frac{n-t}{n-t+1} u_i([x^{t-1},1];\theta_i)$$
In period $t+1$, agent $i$ cannot choose a manipulation $\hat x_i^{t+1}\leq x_i^{t+1}$, as per the previous step he would receive a smaller utility, so agent $i$ must choose a manipulation $\hat x_i^{t+1}>x_i^{t+1}$.
But in the worst-case scenario, the smallest cut in period $t+1$ is $x^{t+1}$ such that $x_i^{t+1} < x^{t+1} < \hat x_i^{t+1}$, so that the rest of the cake $[x^{t+1},1]$ is worth less than $$u_i([x^{t+1},1];\theta_i) < \frac{n-t-1}{n-t} u_i([x^{t},1];\theta_i)$$
Following this argument, we see that agent $i$’s only alternative is to receive the last piece of cake $[x^{n-1},1]$, which is, by construction $$u_i([x^{n-1},1];\theta_i) < u_i([x^{n-2},x_i^{n-1}];\theta_i) < \ldots < u_i([x^{t-1},x_i^{t}];\theta_i) =\frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$$
This concludes the proof that no manipulation yields a higher utility than truth-telling in the worst-case scenario, so inequality is satisfied.
Next, we show that no manipulation yields a higher utility in the best-case scenario. We use the following lemma.
At period $t$, $$\sup_{\theta_{-i}} \, u_i({M^{ll}}(E(\theta_i,\theta_{-i}));\theta_i) = u_i([x^{t-1},1];\theta_i)$$ That is, agent $i$, by being truthful in period $t$, can expect (in the best-case scenario) to obtain the whole cake available in period $t$.
Let us remember that a truthful agent $i$ cuts the cake at the point $x_i^t$ such that $u_i([x^{t-1},x_i^t];\theta_i)=\frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$. In the best-case scenario, the smallest cut at period $t$ is $x^t<x_i^t$ with $x^t=x^{t-1}+\epsilon$. This guarantees that $u_i([x^{t},1];\theta_i) \geq u_i([x^{t-1},1];\theta_i) - \epsilon'$ for an arbitrarily small $\epsilon'$, i.e. essentially no cake that is valuable to agent $i$ has been allocated. That we can find such a small $\epsilon$ comes directly from the standard assumption that the utilities are divisible.
The best-case scenario is that all further smallest cake cuts $x^{t+1},x^{t+2},\ldots$ are epsilon increments of $x^t$, such that in the end agent $i$ receives $X_i=[x^{n-1},1]$, which gives him a utility arbitrarily close to $u_i([x^{t-1},1];\theta_i)$, i.e. the utility of eating all of the cake still available at period $t$.
Since this is the maximum utility attainable, inequality in the NOM definition is satisfied. This concludes the proof that no manipulation gives a higher utility to a truthful agent in the best-case scenario.
We conclude that no manipulation is better than truth-telling in either the best or the worst-case scenario, thus no manipulation is obvious and leftmost leaves is NOM.
A minor modification of leftmost leaves, sometimes used in the literature [@procaccia2016], destroys the NOM of the procedure. In this modification, at each period $t$, agents are asked $\text{cut}(i ; x^{t-1},1/n)$.
To see that this variant is obviously manipulable, consider the case of an agent who has uniform preferences over the whole $[0,1]$ interval and has to cut the cake against 4 other agents. Her first cut must be $x^1_i=0.2$, which guarantees a utility of 0.2. However, suppose the lowest cut (submitted by someone else) was instead $x^1=0.1$. In period $t=2$, the remaining cake is $[0.1,1]$ and if agent $i$ is truthful, she should cut the cake at $x_i^2=0.3$ so that $u_i([0.1,0.3];\theta_i)=1/5=0.2$. However, there is a manipulation that is guaranteed to yield a higher utility in the worst-case scenario. If she cuts the cake instead at $\hat x_i^2=0.325$ (i.e. the point at which $u_i([0.1,\hat x_i^2];\theta_i)=\frac{1}{4}\,u_i([0.1,1];\theta_i)=0.225$), in the worst-case scenario, in which her cut is the lowest, she would guarantee herself a utility of $0.225 > 0.2$. If her cut was not the lowest, by continuing to cut the cake at the point $\hat x_i^t$ such that $u_i([x^{t-1},\hat x_i^t];\theta_i)=\frac{u_i([x^{t-1},1];\theta_i)}{n-t+1}$ for all subsequent $t$, she could make sure to receive a utility of at least $0.225$ (Lemma \[lem:inf\]), which is larger than the worst-case scenario utility received by being truthful, 0.2. We conclude that the modified version of leftmost leaves is obviously manipulable, and thus the choice of which query to make at each step of the mechanism is crucial to make the leftmost leaves procedure not-obviously manipulable.
Direct-revelation mechanisms {#sec:envyfreeness}
============================
While leftmost leaves is proportional and connected, it does not satisfy other desirable properties such as *envy-freeness* (no agent prefers the piece of cake received by someone else over his own piece) and *Pareto-optimality* (no other allocation is better for one agent and not worse for the others).
In the Robertson-Webb model, we could not yet find NOM mechanisms satisfying these properties. For example, the classic mechanism of Selfridge-Conway for three agents (see [@brams1996] for a detailed description) is envy-free, but it is obviously manipulable. In this mechanism, the first agent cuts the cake into three pieces of equal worth. A truthful agent knows that one of those pieces will never belong to him, and thus he can achieve a maximum utility of at most $2/3 \approx 0.67$. However, a lying agent can cut the cake into one piece of value $1-\epsilon$ and two pieces of almost no value at all. In the best-case scenario, he will keep the most valued piece entirely, showing that the Selfridge-Conway procedure is obviously manipulable.
Interestingly, NOM is easier to achieve in the direct-revelation model.
\[lem:direct-inf\] Every direct-revelation mechanism that always returns proportional allocations satisfies inequality .
By proportionality, a truthful agent always receives a utility of at least $1/n$. Consider now an untruthful agent $i$ who reports a type $\theta_i'\neq \theta_i$ (equivalently, reports a utility function $u_i'\neq u_i$). Consider the case when all other $n-1$ agents have a utility of $u_i$ (the true utility of agent $i$). A proportional mechanism must give each of these $n-1$ agents a piece with a value, by the function $u_i$, of at least $1/n$. Hence, the piece remaining for agent $i$ has a value, by the function $u_i$, of at most $1/n$. Hence, in inequality , the infimum is at most $1/n$ at the left and at least $1/n$ at the right, and the inequality holds.
\[lem:direct-sup\] Every direct-revelation mechanism that always returns Pareto-optimal allocations satisfies inequality .
Consider the case when agent $i$ is truthful, all other $n-1$ agents assign a positive value only to a tiny fraction of the cake, and assign a value of $0$ to the rest of the cake. A Pareto-optimal mechanism must assign almost all the cake to agent $i$. Hence, in inequality , the supremum in the right-hand side equals $1$ and the inequality holds.
\[thm:nash\] There exists a NOM direct-revelation mechanism that finds envy-free and Pareto-optimal allocations.
The *Nash-optimal mechanism* is a direct-revelation mechanism that, given $n$ utility functions, selects an allocation that maximizes the product of utilities. Such an allocation is known to be Pareto-optimal and envy-free [@segal2019monotonicity], hence it is also proportional. By Lemmas \[lem:direct-inf\] and \[lem:direct-sup\], the mechanism is therefore NOM.
When the utility functions are *piecewise-constant*, the Nash-optimal mechanism can be computed by an efficient algorithm described by [@aziz2014cake].
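For concreteness, here is a small convex-programming sketch of our own (it is not the algorithm of [@aziz2014cake]): a Nash-optimal allocation for piecewise-constant valuations is obtained by maximizing the sum of log-utilities over fractional assignments of the constant-density intervals.

```python
import cvxpy as cp
import numpy as np

def nash_optimal(values):
    """values[i, j] = agent i's value for all of interval j (each row sums to 1).
    Returns x[i, j], the fraction of interval j assigned to agent i."""
    n, m = values.shape
    x = cp.Variable((n, m), nonneg=True)
    utilities = cp.sum(cp.multiply(values, x), axis=1)
    problem = cp.Problem(cp.Maximize(cp.sum(cp.log(utilities))),   # Nash welfare
                         [cp.sum(x, axis=0) == 1])                 # the whole cake is assigned
    problem.solve()
    return x.value

# Two agents, three constant-density intervals: each keeps "his" interval and the
# contested middle interval is split equally, so both agents obtain utility 0.9.
print(np.round(nash_optimal(np.array([[0.8, 0.2, 0.0],
                                      [0.0, 0.2, 0.8]])), 3))
```

Because the objective is concave and the constraints are linear, any off-the-shelf exponential-cone solver handles this problem; in the example above the resulting allocation is also envy-free.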
In contrast to the leftmost leaves rule, the Nash-optimal rule may return disconnected pieces. Moreover, it is known that *any* Pareto-optimal envy-free rule may have to return disconnected pieces (see Example 5.1 in @segal2018resource). Since Pareto-optimality is crucial in the proof of Theorem \[thm:nash\], it remains an open question whether there exists a NOM mechanism that is both envy-free and connected.
A related question is whether there exists an algorithm that finds proportional, Pareto-optimal and connected allocations. If such an algorithm exists, then by Lemmas \[lem:direct-inf\] and \[lem:direct-sup\], it is NOM.
Nevertheless, NOM and approximate envy-freeness are satisfied by a direct revelation implementation of the Simmons-Su mechanism [@edward1999rental], based on Sperner’s lemma.[^11]
Conclusion and Experimental Evidence
====================================
Although it is impossible to cut a cake in a strategy-proof manner that is not completely unfair to some agent, we can divide a cake in a fair, proportional way that cannot be obviously manipulated using an easily implementable mechanism called leftmost leaves.
Troyan and Morrill’s notion of NOM not only allows us to escape the tradeoff between fairness and incentives in cake-cutting, but also helps us to better understand real-life behavior when dividing a heterogeneous good. In the first lab experiment comparing cake-cutting mechanisms, [@kyropoulou2019fair] report truthful behavior in NOM cake-cutting mechanisms in 44% of the cases, whereas the respective number for OM ones is 31% (the difference is statistically significant with a p-value of 0.0000). In particular, leftmost leaves was significantly less manipulated than Banach-Knaster last diminisher when agents played against 2 opponents (difference of 29 percentage points, p-value of 0.0000) and 3 opponents (difference of 16 percentage points, p-value of 0.0000). The Even-Paz modification of leftmost leaves (which is NOM) was also significantly less manipulated than Banach-Knaster (difference of 35 percentage points, p-value of 0.0000).
Although in general NOM gives us testable predictions that map relatively well to observed behavior, it is intriguing that the Selfridge-Conway procedure also reports high rates of truth-telling, comparable to those of NOM mechanisms. Explaining this puzzling phenomenon remains an open problem which we leave for future research.
References {#references .unnumbered}
==========
[^1]: We are indebted to Thayer Morrill, Hervé Moulin and Gabriel Ziegler for their helpful comments and to Sarah Fox, Jacopo Gambato and Fabian Spühler for proof-reading this paper. Any errors are our own.
[^2]: Both $u_i$ and $\theta_i$ are equivalent — the type of an agent is the agent’s utility, so our notation is slightly redundant. Nevertheless, we use it to make the comparison with [@troyan2019obvious] straightforward.
[^3]: An arborescence is a directed, rooted tree in which all edges point away from the root.
[^4]: This is a standard requirement [@branzei2015dictatorship]. For example, if agent $i$ is asked the query eval$(i; 0.3, 1)$ and replies a value of $0$, then if later asked the query cut$(i;0,0.5)$ his answer must be in the interval $[0,0.3)$.
[^5]: Edge covers represent a complete contingent plan of answers.
[^6]: Strategy-proofness is a notion more commonly used for direct-revelation mechanisms. This extension to indirect mechanisms follows the definition of @kurokawa2013cut.
[^7]: They present this definition using maximum and minimum, which may not exist with a continuous cake; instead we consider the supremum and infimum.
[^8]: [@chen2013] call maximin SP a “strikingly weak notion of truthfulness”.
[^9]: *Rightmost leaves* is the only mechanism to divide cake among two agents that is weakly Pareto optimal, proportional and resource monotonic under specific restrictions on agents’ utilities [@segal2018resource].
[^10]: The “leftmost leaves Even-Paz” mechanism works as follows (for clarity let $n$ be a power of 2). Given a cake $[y,z]$, all agents choose cuts $x_i$ such that $u_i([y,x_i];\theta_i)=u_i([y, z];\theta_i)/2$. Let $x^*$ be the median of the $n$ cuts (the $n/2$-th smallest cut). Then the procedure breaks the cake-cutting problem into two: all agents who chose cuts $x_i \leq x^*$ are to divide the cake $[y,x^*)$, whereas all agents who chose cuts above $x^*$ are to divide the cake $[x^*,z]$. Each half is divided recursively among the $n/2$ partners assigned to it. When the procedure is called with a singleton set of agents $\{i\}$ and an interval $I$, it assigns $X_i = I$. For example, if $n=4$, agents cut the cake into two pieces of equal value and the cake is cut at the second smallest cut. Then the two agents with the smallest (largest) cuts play *leftmost leaves* on the left (right) side of the cake.
[^11]: Envy-freeness is quite difficult to satisfy in itself by mechanisms with a bounded number of queries. [See [@aziz2016discrete] for the most recent developments in this issue.]{}
---
abstract: 'Bounding the best achievable error probability for binary classification problems is relevant to many applications including machine learning, signal processing, and information theory. Many bounds on the Bayes binary classification error rate depend on information divergences between the pair of class distributions. Recently, the Henze-Penrose (HP) divergence has been proposed for bounding classification error probability. We consider the problem of empirically estimating the HP-divergence from random samples. We derive a bound on the convergence rate for the Friedman-Rafsky (FR) estimator of the HP-divergence, which is related to a multivariate runs statistic for testing between two distributions. The FR estimator is derived from a multicolored Euclidean minimal spanning tree (MST) that spans the merged samples. We obtain a concentration inequality for the Friedman-Rafsky estimator of the Henze-Penrose divergence. We validate our results experimentally and illustrate their application to real datasets.'
author:
- |
Salimeh Yasaei Sekeh, Member, IEEE, Morteza Noshad, Kevin R. Moon,\
Alfred O. Hero, Fellow, IEEE \
\
$^{1}$Department of EECS, University of Michigan, Ann Arbor, MI, U.S.A\
$^{2}$ Departments of Genetics and Applied Math,Yale University, New Haven, CT, U.S.A
- |
Salimeh Yasaei Sekeh, Morteza Noshad, Kevin R. Moon, ,\
and Alfred O. Hero, [^1][^2]
bibliography:
- 'refs.bib'
title: Convergence Rates for Empirical Estimation of Binary Classification Bounds
---
Classification, Bayes error rate, Henze-Penrose divergence, Friedman-Rafsky test statistic, convergence rates, bias and variance trade-off, concentration bounds, minimal spanning trees.
Conclusion
==========
We derived a bound on the MSE convergence rate for the Friedman-Rafsky estimator of the Henze-Penrose divergence assuming the densities are sufficiently smooth. We employed a partitioning strategy to derive the bias rate which depends on the number of partitions, the sample size $m+n$, the Hölder smoothness parameter $\eta$, and the dimension $d$. However by using the optimal partition number, we derived the MSE convergence rate only in terms of $m+n$, $\eta$, and $d$. We validated our proposed MSE convergence rate using simulations and illustrated the approach for the meta-learning problem of estimating the HP-divergence for three real-world data sets. We also provided concentration bounds around the median and mean of the estimator. These bounds explicitly provide the rate that the FR statistic approaches its median/mean with high probability, not only as a function of the number of samples, $m$, $n$, but also in terms of the dimension of the space $d$. By using these results we explored the asymptotic behavior of a variance-like rate in terms of $m$, $n$, and $d$.
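As a concrete illustration of the statistic whose convergence we analyzed, the following is a brief sketch of our own, assuming the normalization $\widehat{D} = 1 - R_{m,n}(m+n)/(2mn)$ for the FR estimator, of how the estimate is computed from the Euclidean MST of the pooled samples:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix

def fr_hp_divergence(X, Y):
    """Friedman-Rafsky estimate of the Henze-Penrose divergence between samples X and Y."""
    m, n = len(X), len(Y)
    Z = np.vstack([X, Y])
    labels = np.concatenate([np.zeros(m), np.ones(n)])
    mst = minimum_spanning_tree(distance_matrix(Z, Z)).tocoo()   # Euclidean MST of pooled sample
    R = np.sum(labels[mst.row] != labels[mst.col])               # dichotomous (cross-sample) edges
    return 1.0 - R * (m + n) / (2.0 * m * n)

# Well-separated Gaussians give an estimate near 1; identical distributions give one near 0.
rng = np.random.default_rng(0)
print(fr_hp_divergence(rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))))
print(fr_hp_divergence(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2))))
```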
\[sec:refs\]
[^1]: S. Yasaei Sekeh, M. Noshad and A. Hero are with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA e-mail: {salimehy,noshad,hero}$@$umich.edu
[^2]: K. R. Moon is with the Departments of Genetics and Applied Math, Yale University, New Haven, CT, USA e-mail: kevin.moon$@$yale.edu
---
abstract: 'A critique of the singularity theorems of Penrose, Hawking, and Geroch is given. It is pointed out that a gravitationally collapsing black hole acts as an ultrahigh energy particle accelerator that can accelerate particles to energies inconceivable in any terrestrial particle accelerator, and that when the energy $E$ of the particles comprising matter in a black hole is $\sim 10^{2} GeV$ or more, or equivalently, the temperature $T$ is $\sim 10^{15} K$ or more, the entire matter in the black hole is converted into quark-gluon plasma permeated by leptons. As quarks and leptons are fermions, it is emphasized that the collapse of a black hole to a space-time singularity is inhibited by Pauli’s exclusion principle. It is also suggested that ultimately a black hole may end up either as a stable quark star, or as a pulsating quark star which may be a source of gravitational radiation, or it may simply explode with a mini bang of a sort.'
author:
- 'R. K. Thakur'
date: 'Received: date / Accepted: date'
title: 'Can a Black Hole Collapse to a Space-time Singularity? '
---
Introduction
============
When all the thermonuclear sources of energy of a star are exhausted, the core of the star begins to contract gravitationally because, practically, there is no radiation pressure to arrest the contraction, the pressure of matter being inadequate for this purpose. If the mass of the core is less than the Chandrasekhar limit ($\sim 1.44 \msol$), the contraction stops once the density of matter in the core satisfies $\rho > 2 \times 10^{6} \gcmcui$; at this stage the pressure of the relativistically degenerate electron gas in the core is enough to withstand the force of gravitation. When this happens, the core becomes a stable white dwarf. However, when the mass of the core is greater than the Chandrasekhar limit, the pressure of the relativistically degenerate electron gas is no longer sufficient to arrest the gravitational contraction; the core continues to contract and becomes denser and denser, and when the density reaches the value $\rho \sim 10^{7}\gcmcui$, the process of neutronization sets in; electrons and protons in the core begin to combine into neutrons through the reaction
$p + e^{-} \rightarrow n + \nu_{e}$
The electron neutrinos $\nu_{e}$ so produced escape from the core of the star. The gravitational contraction continues and eventually, when the density of the core reaches the value $\rho \sim 10^{14} \gcmcui$, the core consists almost entirely of neutrons. If the mass of the core is less than the Oppenheimer-Volkoff limit ($\sim 3\msol$), then at this stage the contraction stops; the pressure of the degenerate neutron gas is enough to withstand the gravitational force. When this happens, the core becomes a stable neutron star. Of course, enough electrons and protons must remain in the neutron star so that Pauli’s exclusion principle prevents neutron beta decay
$n \rightarrow p + e^{-} + \overline \nu_{e}$
where $\overline \nu_{e}$ is the electron antineutrino (Weinberg 1972a). This requirement sets a lower limit $\sim 0.2 \msol$ on the mass of a stable neutron star.\
If, however, after the end of the thermonuclear evolution, the mass of the core of a star is greater than the Chandrasekhar and Oppenheimer-Volkoff limit, the star may eject enough matter so that the mass of the core drops below the Chandrasekhar and Oppenheimer-Volkoff limit as a result of which it may settle as a stable white dwarf or a stable neutron star. If not, the core will gravitationally collapse and end up as a black hole.
As is well known, the event horizon of a black hole of mass $M$ is a spherical surface located at a distance $r = r_{g} = 2GM/c^{2}$ from the centre, where $G$ is Newton’s gravitational constant and $c$ the speed of light in vacuum; $r_{g}$ is called gravitational radius or Schwarzschild radius. An external observer cannot observe anything that is happening inside the event horizon, nothing, not even light or any other electromagnetic signal can escape outside the event horizon from inside. However, anything that enters the event horizon from outside is swallowed by the black hole; it can never escape outside the event horizon again.
Attempts have been made, using the general theory of relativity (GTR), to understand what happens inside a black hole. In so doing, various simplifying assumptions have been made. In the simplest treatment (Oppenheimer and Snyder 1939; Weinberg 1972b) a black hole is considered to be a ball of dust with negligible pressure, uniform density $\rho = \rho(t)$, and at rest at $t=0$. These assumptions lead to the unique solution of the Einstein field equations, and in the comoving co-ordinates the metric inside the black hole is given by
$$\begin{aligned}
ds^2 = dt^2 -R^2(t)\bbs \frac{dr^2}{1-k\,r^2} + r^2 d\theta^2 + r^2\sin^2\theta\,d\phi^2\ebs\end{aligned}$$
in units in which speed of light in vacuum, c=1, and where $k$ is a constant. The requirement of energy conservation implies that $\rho(t)R^3(t)$ remains constant. On normalizing the radial co-ordinate $r$ so that
$$\begin{aligned}
R(0) = 1\end{aligned}$$
one gets
$$\begin{aligned}
\rho(t) = \rho(0)R^{-3}(t)\end{aligned}$$
The fluid is assumed to be at rest at $t=0$, so
$$\begin{aligned}
\dot{R}(0) = 0\end{aligned}$$
Consequently, the field equations give
$$\begin{aligned}
k = \frac{8\pi\,G}{3} \rho(0)\end{aligned}$$
Finally, the solution of the field equations is given by the parametric equations of a cycloid:
$$\begin{aligned}
\nonumber
t = \bb \frac{\psi + \sin\,\psi}{2\sqrt{k}} \eb \\
R = \frac{1}{2} \bb 1 + \cos\,\psi \eb\end{aligned}$$
From equation $(6)$ it is obvious that when $\psi = \pi$, i.e., when
$$\begin{aligned}
t = t_{s} = \frac{\pi}{2\sqrt{k}} = \frac{\pi}{2} \bb \frac{3}{8\pi\,G \rho(0)} \eb^{1/2}\end{aligned}$$
a space-time singularity occurs; the scale factor $R(t)$ vanishes. In other words, a black hole of uniform initial density $\rho(0)$ and zero pressure collapses from rest to a point in $3$-subspace, i.e., to a $3$-subspace of infinite curvature and zero proper volume, in a finite time $t_{s}$; the collapsed state being a state of infinite proper energy density. The same result is obtained in the Newtonian collapse of a ball of dust under the same set of assumptions (Narlikar 1978).
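For orientation, the free-fall collapse time $t_{s}$ is easy to evaluate numerically; the short sketch below (with arbitrarily chosen initial densities, purely for illustration) shows how quickly the idealized dust ball reaches the singularity.

```python
import math

G = 6.674e-8          # Newton's constant in cgs units (cm^3 g^-1 s^-2)

def collapse_time(rho0):
    """t_s = (pi/2) * sqrt(3 / (8*pi*G*rho0)), with rho0 in g/cm^3."""
    return 0.5 * math.pi * math.sqrt(3.0 / (8.0 * math.pi * G * rho0))

# illustrative initial densities (g/cm^3): white-dwarf-like and neutron-star-like
for rho0 in (1e6, 1e14):
    print(f"rho(0) = {rho0:.0e} g/cm^3  ->  t_s = {collapse_time(rho0):.2e} s")
```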
Although the black hole collapses completely to a point at a finite co-ordinate time $t=t_{s}$, any electromagnetic signal coming to an observer on the earth from the surface of the collapsing star before it crosses its event horizon will be delayed by its gravitational field, so an observer on the earth will not see the star suddenly vanish. Actually, the collapse to the Schwarzschild radius $r_{g}$ appears to an outside observer to take an infinite time, and the collapse to $R=0$ is not at all observable from outside the event horizon.
The internal dynamics of a non-idealized, real black hole is very complex. Even in the case of a spherically symmetric collapsing black hole with non-zero pressure the details of the interior dynamics are not well understood, though major advances in the understanding of the interior dynamics are now being made by means of numerical computations and analytic analyses. But in these computations and analyses no new features have emerged beyond those that occur in the simple uniform-density, free-fall collapse considered above (Misner, Thorne, and Wheeler 1973). However, using topological methods, Penrose (1965, 1969), Hawking (1966a, 1966b, 1967a, 1967b), Hawking and Penrose (1970), and Geroch (1966, 1967, 1968) have proved a number of singularity theorems purporting that if an object contracts to dimensions smaller than $r_{g}$, and if other reasonable conditions - namely, validity of the GTR, positivity of energy, ubiquity of matter and causality - are satisfied, its collapse to a singularity is inevitable.
A critique of the singularity theorems
======================================
As mentioned above, the singularity theorems are based, inter alia, on the assumption that the GTR is universally valid. But the question is: has the validity of the GTR been established experimentally in the case of strong fields? Actually, the GTR has been experimentally verified only in the limiting case of weak fields; it has not been experimentally validated in the case of strong fields. Moreover, it has been demonstrated that when curvatures exceed the critical value $C_{g} = 1/L_{g}^4$, where $L_{g} = \bb \hbar\,G/c^{3} \eb^{1/2} = 1.6 \times 10^{-33} \cm$, corresponding to the critical density $\rho_{g} = 5 \times 10^{93} \gcmcui$, the GTR is no longer valid; quantum effects must enter the picture (Zeldovich and Novikov 1971). Therefore, it is clear that the GTR breaks down before a gravitationally collapsing object collapses to a singularity. Consequently, the conclusion based on the GTR that, in comoving co-ordinates, any gravitationally collapsing object in general, and a black hole in particular, collapses to a point in 3-space need not be held sacrosanct; as a matter of fact it may not be correct at all.
Furthermore, while arriving at the singularity theorems attention has mostly been focused on the space-time geometry and geometrodynamics; matter has been tacitly treated as a classical entity. However, as will be shown later, this is not justified; quantum mechanical behavior of matter at high energies and high densities must be taken into account. Even if we regard matter as a classical entity of a sort, it can be easily seen that the collapse of a black hole to a space-time singularity is inhibited by Pauli’s exclusion principle. As mentioned earlier, a collapsing black hole consists, almost entirely, of neutrons apart from traces of protons and electrons; and neutrons as well as protons and electrons are fermions; they obey Pauli’s exclusion principle. If a black hole collapses to a point in 3-space, all the neutrons in the black hole would be squeezed into just two quantum states available at that point, one for spin up and the other for spin down neutron. This would violate Pauli’s exclusion principle, according to which not more than one fermion of a given species can occupy any quantum state. So would be the case with the protons and the electrons in the black hole. Consequently, a black hole cannot collapse to a space-time singularity in contravention to Pauli’s exclusion principle.
Besides, another valid question is: what happens to a black hole after $t > t_{s}$, i.e., after it has collapsed to a point in 3-space to a state of infinite proper energy density, if at all such a collapse occurs? Will it remain frozen forever at that point? If yes, then uncertainties in the position co-ordinates of each of the particles - namely, neutrons, protons, and electrons - comprising the black hole would be zero. Consequently, according to Heisenberg’s uncertainty principle, uncertainties in the momentum co-ordinates of each of the particles would be infinite. However, it is physically inconceivable how particles of infinite momentum and energy would remain frozen forever at a point. From this consideration also, the collapse of a black hole to a singularity appears to be quite unlikely.
Earlier, it was suggested by the author that the very strong ’hard-core’ repulsive interaction between nucleons, which has a range $l_{c} \sim 0.4 \times 10^{-13} \cm $, might set a limit on the gravitational collapse of a black hole and avert its collapse to a singularity (Thakur 1983). The existence of this hard-core interaction was pointed out by Jastrow (1951) after the analysis of the data from high energy nucleon-nucleon scattering experiments. It has been shown that this very strong short range repulsive interaction arises due to the exchange of isoscalar vector mesons $\omega$ and $\phi$ between two nucleons (Scotti and Wong 1965). Phenomenologically, that part of the nucleon-nucleon potential which corresponds to the repulsive hard core interaction may be taken as
$$\begin{aligned}
V_{c}(r) = \infty~~~~~~~~~for~~~ r < l_{c}\end{aligned}$$
where $r$ is the distance between the two interacting nucleons. Taking this into account, the author concluded that no spherical object of mass M could collapse to a sphere of radius smaller than $R_{min} =
1.68 \times 10^{-6} M^{1/3} \cm$, or of the density greater than $\rho_{max} = 5.0 \times 10^{16} \gcmcui$. It was also pointed out that an object of mass smaller than $M_{c} \sim 1.21 \times 10^{33}
\gm$ could not cross the event horizon and become a black hole; the only course left to an object of mass smaller than $M_{c}$ was to reach equilibrium as either a white dwarf or a neutron star. One may hesitate to regard these conclusions as reliable because they rest on the hard-core repulsive interaction (8) between nucleons, which was arrived at phenomenologically by high energy nuclear physicists to account for the high energy nucleon-nucleon scattering data; it must be noted, however, that, as mentioned above, the existence of the hard-core interaction has also been demonstrated theoretically by Scotti and Wong in 1965. Moreover, it is interesting to note that the upper limit $M_{c} \sim 1.21 \times 10^{33} \g = 0.69 \msol$ on the masses of objects that cannot gravitationally collapse to form black holes is of the same order of magnitude as the Chandrasekhar and the Oppenheimer-Volkoff limits.
Even if we disregard the role of the hard-core, short-range repulsive interaction in arresting the collapse of a black hole to a space-time singularity in comoving co-ordinates, it must be noted that, unlike leptons, which appear to be point-like particles - the experimental upper bound on their radii being $10^{-16} \cm$ (Barber 1979) - nucleons have finite dimensions. It has been experimentally demonstrated that the radius $r_0$ of the proton is about $10^{-13} \cm$ (Hofstadter & McAllister 1955). Therefore, it is natural to assume that the radius $r_0$ of the neutron is also about $10^{-13} \cm$. This means the minimum volume $v_{min}$ occupied by a neutron is $\frac{4\pi}{3}{r_0}^3$. Ignoring the “mass defect” arising from the release of energy during the gravitational contraction (before crossing the event horizon), the number of neutrons $N$ in a collapsing black hole of mass $M$ is, obviously, $\frac{M}{m_{n}}$ where $m_{n}$ is the mass of the neutron. Assuming that neutrons are impregnable particles, the minimum volume that the black hole can occupy is $V_{min} = Nv_{min} = v_{min} \frac{M}{m_{n}} $, for neutrons cannot be more closely packed than this in a black hole. However, $V_{min} = \frac{4\pi R_{min}^3}{3}$ where $R_{min}$ is the radius of the minimum volume to which the black hole can collapse. Consequently, $R_{min} = r_{0} {\bb\frac{M}{m_{n}}\eb}^{1/3}$. On substituting $10^{-13} \cm$ for $r_{0}$ and $1.67 \times 10^{-24} \g$ for $m_n$ one finds that $R_{min} = 8.40 \times 10^{-6} M^{1/3}$. This means a collapsing black hole cannot collapse to a density greater than $\rho_{max} = \frac{M}{V_{min}} =
\frac{Nm_{n}}{4/3 \pi r_{0}^{3} N} = 3.99 \times 10^{14}
\gcmcui$. The critical mass $M_{c}$ of the object for which the gravitational radius $R_{g} = R_{min}$ is obtained from the equation $$\begin{aligned}
\frac{2GM_{c}}{c^{2}} = r_{0} \bb \frac{M_{c}}{m_{n}} \eb^{1/3}\end{aligned}$$ This gives $$\begin{aligned}
M_{c} = 1.35 \times 10^{34} \g = 8.68 \msol\end{aligned}$$ Obviously, for $M > M_{c}, R_{g} > R_{min} $, and for $M < M_{c},
R_{g} < R_{min} $.
Consequently, objects of mass $M < M_{c}$ cannot cross the event horizon and become black holes, whereas those of mass $M > M_{c}$ can. Objects of mass $M < M_{c}$ will, depending on their mass, reach equilibrium as either white dwarfs or neutron stars. Of course, these conclusions are based on the assumption that neutrons are impregnable particles and have radius $r_{0} = 10^{-13}$ cm each. Also implicit is the assumption that neutrons are [*fundamental*]{} particles; they are not composite particles made up of other smaller constituents. But this assumption is not correct; neutrons as well as protons and other hadrons are [*not fundamental*]{} particles; they are made up of smaller constituents called [*quarks*]{}, as will be explained in section 4. In section 5 it will be shown how, at ultrahigh energy and ultrahigh density, the entire matter in a collapsing black hole is eventually converted into quark-gluon plasma permeated by leptons.
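The numbers quoted in this section follow from elementary arithmetic; the minimal sketch below (using the stated values $r_{0} = 10^{-13}$ cm and $m_{n} = 1.67 \times 10^{-24}$ g, and ordinary cgs constants) reproduces $R_{min}$, $\rho_{max}$, and the critical mass of equation (9).

```python
import math

G   = 6.674e-8       # cm^3 g^-1 s^-2
c   = 2.998e10       # cm/s
r0  = 1.0e-13        # assumed neutron radius, cm
m_n = 1.67e-24       # neutron mass, g

# minimum radius of a ball of N = M/m_n close-packed neutrons: R_min = r0 (M/m_n)^(1/3)
coeff = r0 / m_n ** (1.0 / 3.0)
print("R_min =", coeff, "* M^(1/3) cm")            # ~8.4e-6 * M^(1/3)

# maximum density reachable before the neutrons touch
rho_max = m_n / (4.0 / 3.0 * math.pi * r0 ** 3)
print("rho_max =", rho_max, "g/cm^3")              # ~4e14 g/cm^3

# critical mass from 2 G M_c / c^2 = r0 (M_c / m_n)^(1/3), Eq. (9)
M_c = (r0 * c ** 2 / (2.0 * G)) ** 1.5 / math.sqrt(m_n)
print("M_c =", M_c, "g")                           # consistent with ~1.35e34 g in Eq. (10)
```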
Gravitationally collapsing black hole as a particle accelerator
===============================================================
We consider a gravitationally collapsing black hole. On neglecting mutual interactions the energy E of any one of the particles comprising the black hole is given by $E^2 = p^2 + m^2 >
p^2$, in units in which the speed of light in vacuum $c= 1$, where $p$ is the magnitude of the 3-momentum of the particle and $m$ its rest mass. But $p = \frac{h}{\lambda}$, where $\lambda$ is the de Broglie wavelength of the particle and $h$ Planck’s constant of action. Since all lengths in the collapsing black hole scale down in proportion to the scale factor $R(t)$ in equation $(1)$, it is obvious that $\lambda \propto R(t) $. Therefore it follows that $p \propto R^{-1}(t)$, and hence $p = a\,R^{-1}(t)$, where [*a*]{} is the constant of proportionality. From this it follows that $E > a/R$. Consequently, $E$ as well as $p$ increases continually as R decreases. It is also obvious that $E$ and $p$, the magnitude of the 3-momentum, $\rightarrow \infty$ as $R \rightarrow 0$. Thus, in effect, we have an ultra-high energy particle accelerator, [*so far inconceivable in any terrestrial laboratory*]{}, in the form of a collapsing black hole, which can, in the absence of any physical process inhibiting the collapse, accelerate particles to an arbitrarily high energy and momentum without any limit.
What has been concluded above can also be demonstrated alternatively, without resorting to GTR, as follows. As an object collapses under its self-gravitation, the interparticle distance $s$ between any pair of particles in the object decreases. Obviously, the de Broglie wavelength $\lambda$ of any particle in the object is less than or equal to $s$, a simple consequence of Heisenberg’s uncertainty principle. Therefore, $s \geq h/p $, where $h$ is Planck’s constant and $p$ the magnitude of 3-momentum of the particle. Consequently, $p \geq h/s $ and hence $E \geq h/s $. Since during the collapse of the object $s$ decreases, the energy $E$ as well as the momentum $p$ of each of the particles in the object increases. Moreover, from $E \geq h/s $ and $p \geq h/s $ it follows that $E$ and $p \rightarrow \infty$ as $ s \rightarrow 0$. Thus, any gravitationally collapsing object in general, and a black hole in particular, acts as an ultrahigh energy particle accelerator.
It is also obvious that $\rho$, the density of matter in the black hole, increases as it collapses. In fact, $\rho \propto R^{-3} $, and hence $\rho \rightarrow \infty$ as $R \rightarrow 0$.
Quarks: The building blocks of matter
=====================================
In order to understand eventually what happens to matter in a collapsing black hole one has to take into account the microscopic behavior of matter at high energies and high densities; one has to consider the role played by the electromagnetic, weak, and strong interactions - apart from the gravitational interaction - between the particles comprising the matter. For a brief account of this the reader is referred to Thakur (1995), for greater detail to Huang (1992), or at a more elementary level to Hughes (1991).
As has been mentioned in Section 2, unlike leptons, hadrons are not point-like particles, but are of finite size; they have structures which have been revealed in experiments that probe hadronic structures by means of electromagnetic and weak interactions. The discovery of a very large number of [*apparently elementary (fundamental)*]{} hadrons led to the search for a pattern amongst them with a view to understanding their nature. This resulted in attempts to group together hadrons having the same baryon number, spin, and parity but different strangeness $S$ ( or equivalently hypercharge $Y = B + S$, where $B$ is the baryon number) into I-spin (isospin) multiplets. In a plot of $Y$ against $I_{3}$ (z- component of isospin I), members of I-spin multiplets are represented by points. The existence of several such hadron (baryon and meson) multiplets is a manifestation of underlying internal symmetries.
In 1961 Gell-Mann, and independently Ne’eman, pointed out that each of these multiplets can be looked upon as the realization of an irreducible representation of an internal symmetry group $SU(3)$ (Gell-Mann and Ne’eman 1964). This fact together with the fact that hadrons have finite size and inner structure led Gell-Mann, and independently Zweig, in 1964 to hypothesize that hadrons [*are not elementary particles*]{}, rather they are composed of more elementary constituents called [*quarks ($q$)*]{} by Gell-Mann (Zweig called them [*aces*]{}). Baryons are composed of three quarks ($q\,q\,q$) and antibaryons of three antiquarks ($\overline q\,\overline q\, \overline q$) while mesons are composed of a quark and an antiquark each. In the beginning, to account for the multiplets of baryons and mesons, quarks of only three flavours, namely, u (up), d (down), and s (strange) were postulated, and they together formed the basic triplet $\left( \begin{array}{c}u\\d\\ s \end{array} \right)$ of the internal symmetry group $SU(3)$. All these three quarks u, d, and s have spin 1/2 and baryon number 1/3. The u quark has charge $2/3\,e$ whereas the d and s quarks have charge $-1/3\,e$ where $e$ is the charge of the proton. The strangeness quantum number of the u and d quarks is zero whereas that of the s quark is -1. The antiquarks ($\overline u\,,\overline d\,,\overline s$) have charges $-2/3\,e, 1/3\,e,1/3\,e$ and strangeness quantum numbers 0, 0, 1 respectively. They all have spin 1/2 and baryon number -1/3. Both u and d quarks have the same mass, namely, one third that of the nucleon, i.e., $\simeq 310 MeV/c^2 $ whereas the mass of the $s$ quark is $\simeq 500 MeV/c^2$. The proton is composed of two up and one down quarks (p: uud) and the neutron of one up and two down quarks (n: udd).
Motivated by certain theoretical considerations, Glashow, Iliopoulos and Maiani (1970) proposed that, in addition to the $u$, $d$, $s$ quarks, there should be another quark flavour, which they named [*charm*]{} $(c)$. Gaillard and Lee (1974) estimated its mass to be $\simeq 1.5 GeV/c^{2}$. In 1974 two teams, one led by S. C. C. Ting at Brookhaven (Aubert 1974) and another led by B. Richter at SLAC (Augustin 1974), independently discovered the $J/\Psi$, a particle remarkable in that its mass ($3.1 GeV/c^{2}$) is more than three times that of the proton. Since then, four more particles of the same family, namely, $\psi (3684)$, $\psi (3950)$, $\psi
(4150)$, $\psi (4400)$ have been found. It is now established that these particles are bound states of [*charmonium*]{} ($\overline c c$), $J/\psi$ being the ground state. On adopting a non-relativistic independent quark model with a linear potential between $c$ and $\overline c$, and taking the mass of $c$ to be approximately half the mass of $J/\psi$, $1.5 GeV/c^{2}$, one can account for the $J/\psi$ family of particles. The $c$ has spin $1/2$, charge $2/3$ $e$, baryon number $1/3$, strangeness $0$, and a new quantum number charm $(c)$ equal to 1. The $u$, $d$, $s$ quarks have $c=0$. It may be pointed out here that charmed mesons and baryons, the bound states like ($c\overline d$), and ($cdu$), have also been found. Thus the existence of the $c$ quark has been established experimentally beyond any shade of doubt.
The discovery of the $c$ quark stimulated the search for more new quarks. An additional motivation for such a search was provided by the fact that there are three generations of lepton [*weak*]{} [*doublets*]{}: ${\nu_{e}\choose e}$, ${\nu_{\mu}\choose \mu},$ and ${\nu_{\tau}\choose \tau}$ where $\nu_{e}$, $\nu_{\mu},$ and $\nu_{\tau}$ are electron ($e$), muon ($\mu$), and tau lepton ($\tau$) neutrinos respectively. Hence, by analogy, one expects that there should be three generations of quark [*weak*]{} [*doublets*]{} also: ${u \choose d}$, ${c \choose s}$, and ${? \choose ?}$. It may be mentioned here that the weak interaction does not distinguish between the upper and the lower members of each of these doublets. In analogy with the isospin $1/2$ of the [*strong doublet*]{} ${p \choose n}$, the [*weak doublets*]{} are regarded as possessing [*weak isospin*]{} $I_{W} = 1/2$, the third component $(I_{W})_{3}$ of this [*weak isospin*]{} being + 1/2 for the upper components of these doublets and - 1/2 for the lower components. These statements apply to the left-handed quarks and leptons, those with negative helicity (with the spin antiparallel to the momentum) only. The right-handed leptons and quarks, those with positive helicity (with the spin parallel to the momentum), are [*weak singlets*]{} having [*weak isospin*]{} zero.
The discovery, at Fermi Laboratory, of a new family of vector mesons, the upsilon family, starting at a mass of $9.4 GeV/c^{2}$ gave evidence for a new quark flavour called [*bottom*]{} or [*beauty*]{} $(b)$ (Herb 1977; Innes 1977). These vector mesons are, in fact, bound states of bottomonium $(\overline b b)$. These states have since been studied in detail at the Cornell electron accelerator in an electron-positron storage ring of energy ideally matched to this mass range. Four such states with masses $9.46,
10.02, 10.35,$ and $10.58$ $GeV/c^{2}$ have been found, the state with mass $9.46 GeV/c^{2}$ being the ground state (Andrews 1980). This implies that the mass of the $b$ quark is $\simeq 4.73 GeV/c^{2}$. The $b$ quark has spin $1/2$ and charge $-1/3\ e$. Furthermore, the $b$ flavoured mesons have been found with exactly the expected properties (Beherend 1983).
After the discovery of the $b$ quark, the confidence in the existence of the sixth quark flavour called [*top*]{} or [*truth*]{} $(t)$ increased and it became almost certain that, like leptons, the quarks also occur in three generations of weak isospin doublets, namely, ${u \choose d}$, ${c \choose s}$, and ${t\choose b}$. In view of this, an intensive search was made for the $t$ quark. But the discovery of the $t$ quark eluded physicists for eighteen years. However, eventually in 1995, two groups, the CDF (Collider Detector at Fermilab) Collaboration (Abe 1995) and the D0 Collaboration (Abachi 1995), succeeded in detecting [*toponium*]{} $\overline t t$ in very high energy $\overline p p$ collisions at Fermi Laboratory’s $1.8$ TeV Tevatron collider. The [*toponium*]{} $\overline t t$ is the bound state of $t$ and $\overline t$. The mass of $t$ has been estimated to be $176.0\pm2.0 GeV/c^{2}$, and thus it is the most massive elementary particle known so far. The $t$ quark has spin $1/2$ and charge $2/3\ e$.
Moreover, in order to account for the apparent breaking of the spin-statistics theorem in certain members of the $J^{p}=\frac{3^{+}}{2}$ decuplet (spin 3/2, parity even), such as $\bigtriangleup^{++}$ $(uuu)$ and $\Omega^{-}$ $(sss)$, Greenberg (1964) postulated that the quark of each flavour comes in three [*colours*]{}, namely, [*red*]{}, [*green*]{}, and [*blue*]{}, and that real particles are always [*colour singlets*]{}. This implies that real particles must contain quarks of all the three colours or colour-anticolour combinations such that they are overall [*white*]{} or [*colourless*]{}. [*White*]{} or [*colourless*]{} means all the three primary colours are equally mixed or there should be a combination of a quark of a given colour and an antiquark of the corresponding anticolour. This means each baryon contains quarks of all the three colours (but not necessarily of the same flavour) whereas a meson contains a quark of a given colour and an antiquark having the corresponding anticolour so that each combination is overall white. Leptons have no colour. Of course, in this context the word ‘colour’ has nothing to do with the actual visual colour, it is just a quantum number specifying a new internal degree of freedom of a quark.
The concept of colour plays a fundamental role in accounting for the interaction between quarks. The remarkable success of quantum electrodynamics (QED) in explaining the interaction between electric charges to an extremely high degree of precision motivated physicists to explore a similar theory for strong interaction. The result is quantum chromodynamics (QCD), a non-Abelian gauge theory (Yang-Mills theory), which closely parallels QED. Drawing analogy from electrodynamics, Nambu (1966) postulated that the three quark colours are the charges (the Yang-Mills charges) responsible for the force between quarks just as electric charges are responsible for the electromagnetic force between charged particles. The analogue of the rule that like charges repel and unlike charges attract each other is the rule that like colours repel, and colour and anticolour attract each other. Apart from this, there is another rule in QCD which states that different colours attract if the quantum state is antisymmetric, and repel if it is symmetric under exchange of quarks. An important consequence of this is that if we take three possible pairs, red-green, green-blue, and blue-red, then a third quark is attracted only if its colour is different and if the quantum state of the resulting combination is antisymmetric under the exchange of a pair of quarks thus resulting in red-green-blue baryons. Another consequence of this rule is that a fourth quark is repelled by one quark of the same colour and attracted by two of different colours in a baryon but only in antisymmetric combinations. This introduces a factor of 1/2 in the attractive component and as such the overall force is zero, i.e., the fourth quark is neither attracted nor repelled by a combination of red-green-blue quarks. In spite of the fact that hadrons are overall colourless, they feel a residual strong force due to their coloured constituents.
It was soon realized that if the three colours are to serve as the Yang-Mills charges, each quark flavour must transform as a triplet of $SU_{c}(3)$ that causes transitions between quarks of the same flavour but of different colours (the SU(3) mentioned earlier causes transitions between quarks of different flavours and hence may more appropriately be denoted by $SU_{f}(3)$). However, the $SU_{c}(3)$ Yang-Mills theory requires the introduction of eight new spin 1 gauge bosons called [*gluons*]{}. Moreover, it is reasonable to stipulate that the gluons couple to [*left-handed*]{} and [*right-handed*]{} quarks in the same manner since the strong interactions do not violate the law of conservation of parity. Just as the force between electric charges arises due to the exchange of a photon, a massless vector (spin 1) boson, the force between coloured quarks arises due to the exchange of a gluon. Gluons are also massless vector (spin 1) bosons. A quark may change its colour by emitting a gluon. For example, a [*red*]{} quark $q_{R}$ may change to a blue quark $q_{B}$ by emitting a gluon which may be thought to have taken away the [ *red (R) colour* ]{} from the quark and given it the [*blue (B)*]{} colour, or, equivalently, the gluon may be thought to have taken away the [*red (R)*]{} and the [*antiblue ($\overline B$)*]{} colours from the quark. Consequently, the [*gluon $G_{RB}$*]{} emitted in the process $q_{R} \rightarrow q_{B}$ may be regarded as the composite having the [*colour R $\overline B$* ]{} so that the emitted gluon $G_{RB} =
q_{R}\overline q_{B}$. In general, when a quark $q_{i}$ of [*colour i*]{} changes to a quark $q_{j}$ of [*colour j*]{} by emitting a gluon $G_{ij}$, then $G_{ij}$ is the composite state of $q_{i}$ and $\overline
q_{j}$, i.e., $G_{ij} = q_{i} \overline q_{j}$. Since there are three [*colours*]{} and three [*anticolours*]{}, there are $3 \times 3 = 9$ possible combinations ([*gluons*]{}) of the form $G_{ij} = q_{i}
\overline q_{j}$. However, one of the nine combinations is a special combination corresponding to the [*white colour*]{}, namely, $G_{W} =
q_{R} \overline q_{R} = q_{G} \overline q_{G} = q_{B} \overline
q_{B}$. But there is no interaction between a [*coloured*]{} object and a [*white (colourless)*]{} object. Consequently, gluon $G_{W}$ may be thought not to exist. This leads to the conclusion that only $9 - 1 = 8$ kinds of gluons exist. This is a heuristic explanation of the fact that $SU_{c}(3)$ Yang-Mills gauge theory requires the existence of eight gauge bosons, i.e., the gluons. Moreover, as the gluons themselves carry colour, gluons may also emit gluons. Another important consequence of gluons possessing colour is that several gluons may come together and form [*gluonium*]{} or [*glue balls*]{}. Glueballs have integral spin and no colour and as such they belong to the meson family.
Though the actual existence of quarks has been indirectly confirmed by experiments that probe hadronic structure by means of electromagnetic and weak interactions, and by the production of various quarkonia ($\overline q q $) in high energy collisions made possible by various particle accelerators, no [*free*]{} quark has been detected in experiments at these accelerators so far. This fact has been attributed to the [*infrared slavery*]{} of quarks, i.e., to the nature of the interaction between quarks responsible for their [*confinement*]{} inside hadrons. Perhaps an enormous amount of energy, much more than what is available in the existing terrestrial accelerators, is required to liberate the quarks from confinement. This means the force of attraction between quarks increases with increase in their separation. This is reminiscent of the force between two bodies connected by an elastic string.
On the contrary, the results of deep inelastic scattering experiments reveal an altogether different feature of the interaction between quarks. If one examines quarks at very short distances ($ < 10^{-13}$ cm ) by observing the scattering of a nonhadronic probe, e.g., an electron or a neutrino, one finds that quarks move almost freely inside baryons and mesons as though they are not bound at all. This phenomenon is called the [ *asymptotic freedom*]{} of quarks. In fact Gross and Wilczek (1973 a,b) and Politzer (1973) have shown that the running coupling constant of interaction between two quarks vanishes in the limit of infinite momentum (or equivalently in the limit of zero separation).
Eventually what happens to matter in a collapsing black hole?
=============================================================
As mentioned in Section 3 the energy $E$ of the particles comprising the matter in a collapsing black hole continually increases and so does the density $\rho$ of the matter whereas the separation $s$ between any pair of particles decreases. During the continual collapse of the black hole a stage will be reached when $E$ and $\rho$ will be so large and $s$ so small that the quarks confined in the hadrons will be liberated from the [*infrared slavery*]{} and will enjoy [*asymptotic freedom*]{}, i.e., the quark [*deconfinement*]{} will occur. In fact, it has been shown that when the energy $E$ of the particle $\sim 10^{2}$ GeV ($s \sim
10^{-16}$ cm) corresponding to a temperature $T \sim 10^{15} K$ all interactions are of the Yang-Mills type with $SU_{c}(3) \times
SU_{I_W}(2) \times U_{Y_W}(1)$ gauge symmetry, where c stands for colour, $I_{W}$ for weak isospin, and $Y_{W}$ for weak hypercharge, and at this stage quark deconfinement occurs as a result of which matter now consists of its fundamental constituents: spin 1/2 leptons, namely, the electrons, the muons, the tau leptons, and their neutrinos, which interact only through the electroweak interaction (i.e., the unified electromagnetic and weak interactions); and the spin 1/2 quarks, u, d, s, c, b, t, which interact electroweakly as well as through the colour force generated by gluons (Ramond 1983). In other words, when $E \geq 10^{2}$ GeV ($s \leq 10^{-16}$ cm) corresponding to $T \geq 10^{15} K$, the entire matter in the collapsing black hole will be in the form of quark-gluon plasma permeated by leptons as suggested by the author earlier (Thakur 1993).
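The numerical correspondence between the quoted energy scale, the interparticle separation, and the temperature can be checked with a two-line estimate using $E \sim \hbar c/s$ and $T \sim E/k_{B}$; the constants and the chosen separation in the sketch below are for an order-of-magnitude check only.

```python
hbar_c = 197.3e6 * 1.602e-19 * 1e-15   # hbar*c in J*m  (197.3 MeV*fm)
k_B    = 1.381e-23                     # Boltzmann constant, J/K

s = 1e-16 * 1e-2                       # separation of 1e-16 cm, expressed in metres
E = hbar_c / s                         # J
print("E ~", E / 1.602e-10, "GeV")     # ~2e2 GeV, i.e. of order 10^2 GeV
print("T ~", E / k_B, "K")             # ~2e15 K, i.e. of order 10^15 K
```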
Incidentally, it may be mentioned that efforts are being made to create quark-gluon plasma in terrestrial laboratories. A report released by CERN, the European Organization for Nuclear Research, at Geneva, on February 10, 2000, said that by smashing together lead ions at CERN’s accelerator at temperatures 100,000 times as hot as the Sun’s centre, i.e., at $T \sim 1.5 \times 10^{12} K$, and energy densities never before reached in laboratory experiments, a team of 350 scientists from institutes in 20 countries succeeded in isolating tiny components called quarks from more complex particles such as protons and neutrons. “A series of experiments using CERN’s lead beam have presented compelling evidence for the existence of a new state of matter 20 times denser than nuclear matter, in which quarks, instead of being bound up into more complex particles such as protons and neutrons, are liberated to roam freely,” the report said. However, the evidence of the creation of quark-gluon plasma at CERN is indirect, involving detection of particles produced when the quark-gluon plasma changes back to hadrons. The production of these particles can be explained alternatively without invoking quark-gluon plasma. Therefore, Ulrich Heinz at CERN is of the opinion that the evidence of the creation of quark-gluon plasma at CERN is not yet conclusive. In view of this, CERN will start a new experiment, ALICE, soon (around 2007-2008) at its Large Hadron Collider (LHC) in order to definitively and conclusively create QGP.
In the meantime the focus of research on quark-gluon plasma has shifted to the Relativistic Heavy Ion Collider (RHIC), the world’s newest and largest particle accelerator for nuclear research, at Brookhaven National Laboratory in Upton, New York. RHIC’s goal is to create and study quark-gluon plasma by head-on collisions of two beams of gold ions at energies 10 times those of CERN’s programme, which ought to produce a quark-gluon plasma with higher temperature and longer lifetime, thereby allowing much clearer and more direct observation. RHIC’s quark-gluon plasma is expected to be well above the transition temperature between the ordinary hadronic matter phase and the quark-gluon plasma phase. This will enable scientists to perform numerous advanced experiments in order to study the properties of the plasma. The programme at RHIC began in the summer of 2000 and after two years Thomas Kirk, Brookhaven’s Associate Laboratory Director for High Energy Nuclear Physics, remarked, “It is too early to say that we have discovered the quark-gluon plasma, but not too early to mark the tantalizing hints of its existence.” Other definitive evidence of quark-gluon plasma will come from experimental comparisons of the behavior in hot, dense nuclear matter with that in cold nuclear matter. In order to accomplish this, the next round of experimental measurements at RHIC will involve collisions between heavy ions and light ions, namely, between gold nuclei and deuterons.
Later, on June 18, 2003, a special scientific colloquium was held at Brookhaven National Laboratory (BNL) to discuss the latest findings at RHIC. At the colloquium, it was announced that in the detector system known as STAR (Solenoidal Tracker At RHIC) head-on collisions between two beams of gold nuclei at energies of 130 GeV per nucleon pair resulted in the phenomenon called “jet quenching”. STAR, as well as three other experiments at RHIC, viz., PHENIX, BRAHMS, and PHOBOS, detected suppression of “leading particles”, highly energetic individual particles that emerge from nuclear fireballs, in gold-gold collisions. Jet quenching and leading particle suppression are signs of QGP formation. The findings of the STAR experiment were presented at the BNL colloquium by Berkeley Laboratory’s NSD (Nuclear Science Division) physicist Peter Jacobs.
Collapse of a black hole to a space-time singularity is inhibited by Pauli’s exclusion principle
================================================================================================
The entire matter in a collapsing black hole is eventually converted into quark-gluon plasma permeated by leptons, and quarks and leptons are fermions; hence the collapse of a black hole to a space-time singularity in a finite time in a comoving co-ordinate system, as stipulated by the singularity theorems of Penrose, Hawking and Geroch, is inhibited by Pauli’s exclusion principle. For, if a black hole collapses to a point in 3-space, all the quarks of a given flavour and colour would be squeezed into just two quantum states available at that point, one for spin up and the other for spin down quark of that flavour and colour. This would violate Pauli’s exclusion principle, according to which not more than one fermion of a given species can occupy any quantum state. So would be the case with quarks of each distinct combination of colour and flavour as well as with leptons of each species, namely, $e, \mu, \tau, \nu_{e}, \nu_{\mu}$ and $\nu_{\tau}$. Consequently, a black hole cannot collapse to a space-time singularity in contravention to Pauli’s exclusion principle. Then the question arises: if a black hole does not collapse to a space-time singularity, what is its ultimate fate? In section 7, three possibilities are suggested.
Ultimately how does a black hole end up?
========================================
The pressure $P$ inside a black hole is given by $$\begin{aligned}
P = P_{r} + \sum_{i,j}P_{ij} + \sum_{k}P_{k} + \sum_{i,j} \overline P_{ij} + \sum_{k} \overline P_{k} \end{aligned}$$ where $P_{r}$ is the radiation pressure, $P_{ij}$ the pressure of the relativistically degenerate quarks of the $i^{th}$ flavour and $j^{th}$ colour, $P_{k}$ the pressure of the relativistically degenerate leptons of the $k^{th}$ species, $\overline P_{ij}$ the pressure of relativistically degenerate antiquarks of the $i^{th}$ flavour and $j^{th}$ colour, $\overline P_{k}$ that of the relativistically degenerate antileptons of the $k^{th}$ species. In equation (11) the summations over $i$ and $j$ extend over all the six flavours and the three colours of quarks, and that over $k$ extends over all the six species of leptons. However, calculation of these pressures is prohibitively difficult for several reasons. For example, the standard methods of statistical mechanics for calculation of pressure and equation of state are applicable when the system is in thermodynamic equilibrium and when its volume is very large, so large that for practical purposes we may treat it as infinite. Obviously, in a gravitationally collapsing black hole, the photon, quark and lepton gases cannot be in thermodynamic equilibrium nor can their volume be treated as infinite. Moreover, at ultrahigh energies and densities, because of the $SU_{I_W}$(2) gauge symmetry, transitions between the upper and lower components of quark and lepton doublets occur very frequently. In addition to this, because of the $SU_{f}$(3) and $SU_{c}$(3) gauge symmetries transitions between quarks of different flavours and colours also occur. Furthermore, pair production and pair annihilation of quarks and leptons create additional complications. Apart from these, various other nuclear reactions may as well occur. Consequently, it is practically impossible to determine the number density and hence the contribution to the overall pressure $P$ inside the black hole by any species of elementary particle in a collapsing black hole when $E \geq 10^2$ GeV ($s \leq 10^{-16}$ cm), or equivalently, $T \geq 10^{15} K$. However, it may not be unreasonable to assume that, during the gravitational collapse, the pressure $P$ inside a black hole increases monotonically with the increase in the density of matter $\rho$. Actually, it might be given by the polytrope, $P = K\rho^{(n+1)/n}$, where $K$ is a constant and $n$ is the polytropic index. Consequently, $P \rightarrow \infty$ as $\rho
\rightarrow \infty$, i.e., $P \rightarrow \infty$ as the scale factor $R(t) \rightarrow 0$ (or equivalently $s \rightarrow 0$). In view of this, there are three possible ways in which a black hole may end up.
1\. During the gravitational collapse of a black hole, at a certain stage, the pressure $P$ may become large enough to withstand the gravitational force and the object may become gravitationally stable. Since at this stage the object consists entirely of quark-gluon plasma permeated by leptons, it would end up as a stable quark star. Indeed, such a possibility seems to exist. Recently, two teams - one led by David Helfand of Columbia University, New York (Slane, Helfand, and Murray 2002), and another led by Jeremy Drake of the Harvard-Smithsonian Centre for Astrophysics, Cambridge, Mass., USA (Drake 2002) - independently studied two objects, 3C58 in Cassiopeia and RXJ1856.5-3754 in Corona Australis respectively, by combining data from NASA’s Chandra X-ray Observatory and the Hubble Space Telescope; each object seemed, at first, to be a neutron star but, on closer look, showed evidence of being an even smaller and denser object, possibly a quark star.
2\. Since the collapse of a black hole is inhibited by Pauli’s exclusion principle, it can collapse only up to a certain minimum radius, say, $r_{min}$. After this, because of the tremendous amount of kinetic energy, it would bounce back and expand, but only up to the event horizon, i.e., up to the gravitational (Schwarzschild) radius $r_g$ since, according to the GTR, it cannot cross the event horizon. Thereafter it would collapse again up to the radius $r_{min}$ and then bounce back up to the radius $r_g$. This process of collapse up to the radius $r_{min}$ and bounce up to the radius $r_g$ would occur repeatedly. In other words, the black hole would continually pulsate radially between the radii $r_{min}$ and $r_g$ and thus become a pulsating quark star. However, this pulsation would cause periodic variations in the gravitational field outside the event horizon and thus produce gravitational waves which would propagate radially outwards in all directions from just outside the event horizon. In this way the pulsating quark star would act as a source of gravitational waves. The pulsation may take a very long time to damp out since the energy of the quark star (black hole) cannot escape outside the event horizon except via the gravitational radiation produced outside the event horizon. However, gluons in the quark-gluon plasma may also act as a damping agent. In the absence of damping, which is quite unlikely, the black hole would end up as a perpetually pulsating quark star.
3\. The third possibility is that eventually a black hole may explode; a [*mini bang*]{} of a sort may occur, and it may, after the explosion, expand beyond the event horizon though it has been emphasized by Zeldovich and Novikov (1971) that after a collapsing sphere’s radius decreases to $r < r_g$ in a finite proper time, its expansion into the external space from which the contraction originated is impossible, even if the passage of matter through infinite density is assumed.
Notwithstanding Zeldovich and Novikov’s contention based on the very concept of event horizon, a gravitationally collapsing black hole may also explode by the very same mechanism by which the big bang occurred, if indeed it did occur. This can be seen as follows. At the present epoch the volume of the universe is $\sim 1.5 \times 10^{85}
\cm^3$ and the density of the galactic material throughout the universe is $\sim 2 \times 10^{-31} \gcmcui$ (Allen 1973). Hence, a conservative estimate of the mass of the universe is $\sim 1.5 \times
10^{85} \times 2 \times 10^{-31} \g = 3 \times 10^{54} \g $. However, according to the big bang model, before the big bang, the entire matter in the universe was contained in an [*ylem*]{} which occupied a very, very small volume. The gravitational radius of the ylem of mass $3 \times 10^{54} \g$ was $4.45 \times 10^{21} \km$ (it must have been larger if the actual mass of the universe were taken into account, which is greater than $3 \times 10^{54} \g $). Obviously, the radius of the ylem was many orders of magnitude smaller than its gravitational radius, and yet the ylem exploded with a big bang, and in due course of time crossed the event horizon and expanded beyond it up to the present Hubble distance $c/H_0 \sim 1.5 \times 10^{23} \km$ where $c$ is the speed of light in vacuum and $H_0$ the Hubble constant at the present epoch. Consequently, if the ylem could explode in spite of Zeldovich and Novikov’s contention, a gravitationally collapsing black hole can also explode, and in due course of time expand beyond the event horizon. The origin of the big bang, i.e., the mechanism by which the ylem exploded, is not definitively known. However, the author has earlier proposed a viable mechanism (Thakur 1992) based on supersymmetry/supergravity. But supersymmetry/supergravity have not yet been validated experimentally.
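Both length scales invoked in this argument are simple to recompute; the sketch below evaluates the gravitational radius of a $3 \times 10^{54}$ g ylem and the Hubble distance, with $H_{0} \approx 65$ km s$^{-1}$ Mpc$^{-1}$ assumed purely for illustration.

```python
G = 6.674e-8         # cm^3 g^-1 s^-2
c = 2.998e10         # cm/s
Mpc_km = 3.086e19    # km per megaparsec

M_ylem = 3e54        # g, the conservative mass estimate used in the text
r_g = 2 * G * M_ylem / c**2          # gravitational radius, cm
print("r_g  =", r_g / 1e5, "km")     # ~4.4e21 km

H0 = 65.0                            # km/s/Mpc, illustrative value
d_H = (c / 1e5) / H0 * Mpc_km        # Hubble distance c/H0 in km
print("c/H0 =", d_H, "km")           # ~1.4e23 km
```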
Conclusion
==========
From the foregoing three inferences may be drawn. One, eventually the entire matter in a collapsing black hole is converted into quark-gluon plasma permeated by leptons. Two, the collapse of a black hole to a space-time singularity is inhibited by Pauli’s exclusion principle. Three, ultimately a black hole may end up in one of the three possible ways suggested in section 7.
The author thanks Professor S. K. Pandey, Co-ordinator, IUCAA Reference Centre, School of Studies in Physics, Pt. Ravishankar Shukla University, Raipur, for making available the facilities of the Centre. He also thanks Sudhanshu Barway and Mousumi Das for typing the manuscript.
Abachi S., et al., 1995, PRL, 74, 2632
Abe F., et al., 1995, PRL, 74, 2626
Allen C. W., 1973, [**]{}, The Athlone Press, University of London, 293
Andrews D., et al., 1980, PRL, 44, 1108
Aubert J. J., et al., 1974, PRL, 33, 1404
Augustin J. E., et al., 1974, PRL, 33, 1406
Barber D. P., et al., 1979, PRL, 43, 1915
Beherend S., et al., 1983, PRL, 50, 881
Drake J., et al., 2002, ApJ, 572, 996
Gaillard M. K., Lee B. W., 1974, PRD, 10, 897
Gell-Mann M., Ne’eman Y., 1964, [*The Eightfold Way*]{}, W. A. Benjamin, New York
Geroch R. P., 1966, PRL, 17, 445
Geroch R. P., 1967, [*Singularities in Spacetime of General Relativity: Their Definition, Existence and Local Characterization*]{}, Ph.D. Thesis, Princeton University
Geroch R. P., 1968, Ann. Phys., 48, 526
Glashow S. L., Iliopoulos J., Maiani L., 1970, PRD, 2, 1285
Greenberg O. W., 1964, PRL, 13, 598
Gross D. J., Wilczek F., 1973a, PRL, 30, 1343
Gross D. J., Wilczek F., 1973b, PRD, 8, 3633
Hawking S. W., 1966a, Proc. Roy. Soc., 294A, 511
Hawking S. W., 1966b, Proc. Roy. Soc., 295A, 490
Hawking S. W., 1967a, Proc. Roy. Soc., 300A, 187
Hawking S. W., 1967b, Proc. Roy. Soc., 308A, 433
Hawking S. W., Penrose R., 1970, Proc. Roy. Soc., 314A, 529
Herb S. W., et al., 1977, PRL, 39, 252
Hofstadter R., McAllister R. W., 1955, PR, 98, 217
Huang K., 1982, [*Quarks, Leptons and Gauge Fields*]{}, World Scientific, Singapore
Hughes I. S., 1991, [*Elementary Particles*]{}, Cambridge Univ. Press, Cambridge
Innes W. R., et al., 1977, PRL, 39, 1240
Jastrow R., 1951, PR, 81, 165
Misner C. W., Thorne K. S., Wheeler J. A., 1973, [*Gravitation*]{}, Freeman, New York, 857
Nambu Y., 1966, in A. de Shalit (Ed.), [*Preludes in Theoretical Physics*]{}, North-Holland, Amsterdam
Narlikar J. V., 1978, [*Lectures on General Relativity and Cosmology*]{}, The MacMillan Company of India Limited, Bombay, 152
Oppenheimer J. R., Snyder H., 1939, PR, 56, 455
Penrose R., 1965, PRL, 14, 57
Penrose R., 1969, [*Riv. Nuovo Cimento*]{}, 1, Numero Speciale, 252
Politzer H. D., 1973, PRL, 30, 1346
Ramond P., 1983, Ann. Rev. Nucl. Part. Sc., 33, 31
Scotti A., Wong D. W., 1965, PR, 138B, 145
Slane P. O., Helfand D. J., Murray S. S., 2002, ApJL, 571, 45
Thakur R. K., 1983, Ap&SS, 91, 285
Thakur R. K., 1992, Ap&SS, 190, 281
Thakur R. K., 1993, Ap&SS, 199, 159
Thakur R. K., 1995, [*Space Science Reviews*]{}, 73, 273
Weinberg S., 1972a, [*Gravitation and Cosmology*]{}, John Wiley & Sons, New York, 318
Weinberg S., 1972b, [*Gravitation and Cosmology*]{}, John Wiley & Sons, New York, 342-349
Zeldovich Y. B., Novikov I. D., 1971, [*Relativistic Astrophysics*]{}, Vol. I, University of Chicago Press, Chicago, 144-148
Zweig G., 1964, Unpublished CERN Report
|
---
abstract: 'Coevolutionary arms races form between interacting populations that constitute each other’s environment and respond to mutual changes. This inherently far-from-equilibrium process finds striking manifestations in the adaptive immune system, where highly variable antigens and a finite repertoire of immune receptors coevolve on comparable timescales. This unique challenge to the immune system motivates general questions: How do ecological and evolutionary processes interplay to shape diversity? What determines the endurance and fate of coevolution? Here, we take the perspective of responsive environments and develop a phenotypic model of coevolution between receptors and antigens that both exhibit cross-reactivity (one-to-many responses). The theory predicts that the extent of asymmetry in cross-reactivity is a key determinant of repertoire composition: small asymmetry supports persistent large diversity, whereas strong asymmetry yields long-lived transients of quasispecies in both populations. The latter represents a new type of Turing mechanism. More surprisingly, patterning in the trait space feeds back on population dynamics: spatial resonance between the Turing modes breaks the dynamic balance, leading to antigen extinction or unrestrained growth. Model predictions can be tested via combined genomic and phenotypic measurements. Our work identifies cross-reactivity as an important regulator of diversity and coevolutionary outcome, and reveals the remarkable effect of ecological feedback in pattern-forming systems, which drives evolution toward non-steady states different from the Red Queen persistent cycles.'
author:
- Hongda Jiang
- Shenshen Wang
title: 'Trait-space patterning and the role of feedback in antigen-immunity coevolution'
---
Introduction
============
Highly variable antigenic challengers, such as fast evolving viruses and cancer cells, are rapid in replication and abound with genetic or phenotypic innovations [@duffy:08; @greaves:12], thus managing to evade immune recognition. On the reciprocal side, the host’s immune system adjusts on the fly the clonal composition of its finite receptor repertoire to recognize the altered versions of the antigen. As a result, coevolutionary arms races between antigen and immunity may endure through an individual’s lifetime [@liao:13].
In this Red Queen [@valen:73] scenario, antigen and receptor populations constitute each other’s responsive environment and are mutually driven out of equilibrium: specific immune receptors prey on matching antigens and hence alter both the composition and overall abundance of antigens, which in turn modifies selective pressures on distinct receptors thus causing re-organization of the repertoire, and vice versa. Consequently, neither of the populations has enough time to equilibrate and yet they mutually engage in a dynamic balance. In this sense, the Red Queen state represents a nonequilibrium steady state [@lax:60; @qian:06]. Then the question is whether alternative evolutionary outcomes characteristic of non-steady states would occupy a larger volume of the state space of coevolving systems than does the Red Queen state.
Recent progress has been made toward understanding various aspects of coevolutionary dynamics in antigen-immunity systems [@luo:15; @armita:16; @cobey:15; @zanini:15; @neher:16; @luksza:14; @luksza:17; @weitz:05; @bradde:17], ranging from antibody evolution against HIV and influenza viruses to evolution of tumors and bacterial phage under host immunity. Yet we are still short of insights regarding key fundamental questions: How do receptor repertoire and antigen ensemble mutually organize, when ecological and evolutionary dynamics occur on comparable timescales? What governs the persistence and outcome of mutual adaptation? In existing generic models where both ecological and evolutionary processes are considered, a separation of timescales is often assumed so that the fast dynamics is slaved to the slow one (reviewed in [@dieckmann:04]). In cases where timescales are not treated as separated [@fussmann:00; @yoshida:07; @jones:07; @shih:14; @vetsigian:17; @wienand:18; @halatek:18; @kotil:18], the feedback between changes in diversity and population dynamics tends to be ignored. The goal of this paper is to consider inseparable timescales and at the same time account for the feedback effect in order to address the questions raised above. Specifically, we develop a phenotypic model, based on predator-prey interactions between coevolving immune receptors and antigens, that combines evolutionary diversification and population dynamics. By formulating an ecological model in a trait space, we describe coevolutionary changes in the distribution of trait values and trait-dependent predation in the same framework. Importantly, this allows us to study the stability of speciation (pattern formation in the trait space) and its impact on the persistence of coevolution. Our model abstracts the key features of adaptive immunity: antigens and receptors move (due to trait-altering mutations) and behave like activators and inhibitors that react through predation; both antigens and receptors are cross-reactive — one receptor recognizes many distinct antigens and one antigen is recognized by multiple receptors — this flexibility in recognition stems from structural conservation of part of the receptor/antigen binding surface [@birnbaum:14] and provides an enormous functional degeneracy [@wooldridge:12] among distinct immune repertoires; there is no preexisting fitness landscape for either population, so that selection pressures arise purely from predation. The theory predicts, counterintuitively, that simultaneous patterning in coevolving populations can emerge solely from an asymmetric range of activation and inhibition in predator-prey dynamics, without a need for competitive interaction within either population [@scheffer:06; @pigolotti:07; @doebeli:10; @rogers:12] or severely large differences in their rate of evolution [@turing:52] (aka mobility in their common phenotypic space), thus representing a new Turing mechanism. This surprising result can be understood from an intuitive picture: colocalized clusters of antigens and receptors form in the trait space when the “inhibition radii" of adjacent receptor clusters overlap so that inhibition of antigen is strongest in between them; whereas alternate clusters emerge if the “activation radii" of neighboring antigen clusters intersect, because then activation of receptor is most intense midway between them.
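To make this intuition more concrete, the short linear-stability sketch below examines a toy version of such dynamics: antigens replicate and are removed by receptors through a nonlocal mass-action predation term with an inhibition radius, while receptors decay and are activated through a nonlocal term with an activation radius, both kernels being taken as top-hat ("all-or-none") recognition profiles. These functional forms and all parameter values are illustrative assumptions, not the model defined in the next section; the sketch only shows that a band of growing Turing modes appears when the two radii differ and is absent when they are equal.

```python
import numpy as np

def growth_rate(k, Ra, Ri, D1=1e-3, D2=1e-3, lam1=1.0, lam2=1.0,
                alpha=1.0, beta=1.0):
    """Largest eigenvalue of the linearized toy dynamics at wavenumber k.

    Toy model (assumed): dA/dt = D1*A'' + lam1*A - alpha*A*(Ki*B),
                         dB/dt = D2*B'' - lam2*B + beta*B*(Ka*A),
    with top-hat kernels of radius Ri (inhibition) and Ra (activation),
    linearized around the uniform coexistence state.
    """
    sinc = lambda z: np.sinc(z / np.pi)          # sin(z)/z, with value 1 at z = 0
    Ka, Ki = sinc(k * Ra), sinc(k * Ri)          # Fourier transforms of the kernels
    Astar, Bstar = lam2 / beta, lam1 / alpha     # uniform coexistence densities
    J = np.array([[-D1 * k**2,         -alpha * Astar * Ki],
                  [ beta * Bstar * Ka, -D2 * k**2         ]])
    return np.max(np.linalg.eigvals(J).real)

ks = np.linspace(0.01, 20.0, 400)
sym  = [growth_rate(k, Ra=1.0, Ri=1.0) for k in ks]   # equal radii
asym = [growth_rate(k, Ra=1.0, Ri=0.4) for k in ks]   # asymmetric radii

print("max growth rate, symmetric :", max(sym))   # stays below 0: no patterning
print("max growth rate, asymmetric:", max(asym))  # positive: a band of Turing modes
```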
We show for the first time that varying the asymmetry in cross-reactivity alone produces transitions between qualitatively distinct regimes of eco-evolutionary dynamics observed in nature, including persistent coexistence, antigen elimination and unrestrained antigen growth. Another notable outcome of our analysis is that spatial resonance of Turing modes can drive the arms races off balance: given sufficient asymmetry, spontaneous oscillations in Turing patterns precede antigen extinction, whereas uncontrolled antigen growth follows the formation of alternate quasispecies as ineffective receptors exhaust the limited immune resources; these measurable features may serve as precursors of the off-balance fates.
Many theoretical studies have considered adaptation to time-varying environments with prescribed environmental statistics [@meyers:02; @kussell:05; @mustonen:08; @mustonen:09; @ivana:15; @xue:16; @xue:19]. This work makes a step toward a theory of coevolution from the perspective of responsively changing environments (mutual niche construction [@smee:96] in ecological terms), highlighting the role of feedback in driving evolution toward novel organization regimes and non-steady states. As new genomic and phenotypic methods are developed to better characterize antigenic [@bloom:06; @gong:13; @wu:15] and immunogenic [@klein:13; @diskin:13; @boyer:16; @adams:16] landscapes as well as bidirectional cross-reactivity [@west:13], the predictions for repertoire composition and coevolutionary outcome derived from this study can be compared with high-throughput profiling of coevolving immune repertoire and antigen ensemble in humans [@gao:14; @moore:15; @freund:17].
Model
=====
A finite repertoire of immune receptors that collectively cover the antigenic space while leaving self types intact is conceivable, if the distribution of potential threats is predetermined [@deboer:94; @deboer:01; @mayer:15; @wang:17]. Given a fixed distribution of pathogenic challenges, competitive exclusion is shown to drive clustering of cross-reactive receptors [@mayer:15]. In coevolution, however, antigen distribution is no longer preset but responds to reorganization of receptors. In addition, cross-reactivity is bidirectional: not only can a receptor be activated by a range of distinct antigens, but an antigen can be removed by a variety of receptors. Then, can predation lead to simultaneous clustering of antigens and receptors in their common trait space? If so, are such patterns stable? Would the concurring patterns interact to affect population dynamics?
![Schematic of antigen-receptor interaction with asymmetric range of inhibition and activation in the phenotypic space. (A) $\Ri>\Ra$: the receptor (blue Y-shape) is not activated by the antigen (red flower-shape) but nevertheless inhibits it. (B) $\Ra>\Ri$: the antigen activates the receptor but is not subject to its inhibition. Lower row: in addition to predation (black arrows; blunt for inhibition, acute for activation), antigens self replicate (red acute arrow) whereas receptor-carrying cells spontaneously decay (blue blunt arrow) in the absence of stimulation. []{data-label="asymmetry"}](asymmetry.pdf){width="1\columnwidth"}
To answer these questions, we consider a dynamical system of activators and inhibitors representing antigens and receptors, which diffuse in a shared phenotypic space and react through predator-prey interactions. Population densities of antigens $A(\vec{x},t)$ and receptors $B(\vec{x},t)$ evolve according to $$\begin{aligned}
\partial_tA(\vec{x},t)&=&D_1\nabla^2A(\vec{x},t)+\lambda_1A(\vec{x},t)\nonumber\\
&-&\alpha_1A(\vec{x},t)\int S_1(|\vec{x}-\vec{y}|;\Ri)B(\vec{y},t)\mathrm{d}\vec{y},\nonumber\\
\partial_tB(\vec{x},t)&=&D_2\nabla^2B(\vec{x},t)-\lambda_2B(\vec{x},t)+B_{\mathrm{in}}\nonumber\\
&+&\alpha_2B(\vec{x},t)\int S_2(|\vec{x}-\vec{y}|;\Ra)A(\vec{y},t)\mathrm{d}\vec{y}.\end{aligned}$$ Here, $D_1$ and $D_2$ denote isotropic diffusion constants of antigens and receptors, respectively, that mimic the rates of trait-altering mutations. Other forms of jump kernels do not change the qualitative results \[SI\]. Antigens self-replicate at rate $\lambda_1$ whereas receptors spontaneously decay at rate $\lambda_2$. Receptors inhibit antigens with an intrinsic rate $\alpha_1$ and grow at rate $\alpha_2$ upon activation; $\Ri$ denotes the range of receptors that can inhibit a given antigen, while $\Ra$ represents the range of antigens by which a receptor can be activated (Fig. \[asymmetry\]). In real systems, there is likely a distribution of reaction ranges; we assume a single value to simplify the analysis. The term $B_{\mathrm{in}}$ corresponds to a small influx of lymphocytes that are constantly produced by the bone marrow and supply nascent receptors; without stimulation, receptors are uniformly distributed at a resting concentration given by $B_{\mathrm{in}}/\lambda_2$. We choose the lifetime of receptors, $\lambda_2^{-1}$, as the time unit and the linear dimension $L$ of the phenotypic space as the length unit. To account for the discreteness of replicating entities and hence avoid unrealistic revival from vanishingly small population densities, we impose an extinction threshold; antigen or receptor types whose population falls below this threshold are considered extinct and removed from the system.
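To make the dynamics concrete, the following minimal sketch integrates the 1D version of the model with explicit Euler time stepping, periodic boundaries and top-hat interaction kernels evaluated by FFT convolution. The rate and range parameters are taken from the caption of Fig. \[pattern\] and from the influx used in Fig. \[local\]; the grid, time step, noise seed and helper names are ours, so this is an illustrative sketch rather than the authors' code.

```python
# Minimal 1D sketch of the nonlocal predator-prey dynamics above (illustrative only):
# explicit Euler stepping, periodic boundaries, top-hat kernels via FFT convolution.
import numpy as np

L, N = 1.0, 512                       # domain length and number of grid points
dx = L / N
x = np.arange(N) * dx
dt, nsteps = 1e-3, 20_000             # integrate up to t = 20 receptor lifetimes
D1, D2 = 1e-6, 1e-4                   # antigen / receptor "mutation" diffusivities
lam1, lam2, Bin = 10.0, 1.0, 10.0     # antigen birth, receptor death, naive influx
a1, a2 = 1e-3, 1e-4                   # inhibition and activation rates
Ri, Ra = 0.025, 0.005                 # inhibition and activation ranges (Ri > Ra)

def tophat(R):
    """Step-function kernel S(|x-y|; R) sampled on the periodic grid."""
    d = np.minimum(x, L - x)          # periodic distance to the origin
    return (d <= R).astype(float)

fS1, fS2 = np.fft.rfft(tophat(Ri)), np.fft.rfft(tophat(Ra))

def nonlocal_sum(field, fker):
    """Approximates the integral of S(|x-y|) field(y) dy by circular FFT convolution."""
    return np.fft.irfft(np.fft.rfft(field) * fker, n=N) * dx

def laplacian(f):
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

# uniform coexistence state (A_s, B_s) plus weak noise to seed the instability
rng = np.random.default_rng(0)
A = lam2 / (a2 * 2 * Ra) * (1 + 1e-3 * rng.standard_normal(N))
B = lam1 / (a1 * 2 * Ri) * (1 + 1e-3 * rng.standard_normal(N))

for _ in range(nsteps):
    Beff = nonlocal_sum(B, fS1)       # receptors within distance Ri of each antigen
    Aeff = nonlocal_sum(A, fS2)       # antigens within distance Ra of each receptor
    dA = D1 * laplacian(A) + lam1 * A - a1 * A * Beff
    dB = D2 * laplacian(B) - lam2 * B + Bin + a2 * B * Aeff
    A = np.maximum(A + dt * dA, 0.0)
    B = np.maximum(B + dt * dB, 0.0)

print("spatial coefficient of variation:", A.std() / A.mean(), B.std() / B.mean())
```

Starting from the uniform coexistence state plus weak noise, both fields should fragment into periodic density peaks once the asymmetry between $\Ri$ and $\Ra$ is sufficiently strong, which is the regime analyzed below.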
In the spirit of Perelson and Oster [@perelson:79], we think of receptors and antigens as points in a high-dimensional phenotypic shape space, whose coordinates are associated with physical and biochemical properties that affect binding affinity. We assume that the strength of cross-reactive interaction only depends on the relative location, $\vec{r}=\vec{x}-\vec{y}$, of receptor and antigen in this space, as characterized by the non-local interaction kernels $S_1(|\vec{r}|; \Ri)$ and $S_2(|\vec{r}|; \Ra)$. Close proximity in shape space indicates good match between the binding pair leading to strong interaction, whereas large separation translates into weak affinity and poor recognition. Importantly, cross-reactivity is not necessarily symmetric as typically assumed; difference in biophysical conditions among other factors may well render disparate criteria for antigen removal and receptor activation [@tarr:11; @sather:14], i.e., $\Ri\neq\Ra$. For instance, removing an antigen may only require modest on rate (wide reaction range, large $\Ri$) of multiple receptors that together coat its surface, whereas activating an immune cell expressing a unique type of receptors can demand lasting antigen stimulation hence small off rate (close match of shape, small $\Ra$), or vice versa depending on conditions. How this asymmetry impacts coevolution is our focus.
![Phases in a 1D reaction-diffusion system under local predator-prey interactions. (A) Phase diagram on the plane spanned by the ratio between diffusion constants $D_1/D_2$ and that between birth and death rates $\lambda_1/\lambda_2$ of antigens (activators) and receptors (inhibitors). Dynamics start from a local dose of antigens and uniform receptors. The early extinction phase is color coded for the logarithm of the inverse time to antigen extinction. The persistence phase (blank) divides into a propagating wave state (upper) and a uniform coexistence state (lower). Insets show typical kymographs in each subphase, red for antigen and blue for receptor; the upper pair corresponds to the filled circle at $\lambda_1/\lambda_2=200$, $D_1/D_2=10^{-2}$, and the lower one corresponds to the open circle at $\lambda_1/\lambda_2=10$, $D_1/D_2=10^{-2}$. (B) Representative abundance trajectories. Top: $\lambda_1/\lambda_2=20$, $D_1/D_2=10^{-3}$ (red dot in panel A); bottom: $\lambda_1/\lambda_2=10$, $D_1/D_2=10^{-2}$. Corresponding phase plots are shown on the right; vertical dashed lines indicate the extinction threshold. $B_{\mathrm{in}}=10$, $\alpha_1=10^{-3}$, $\alpha_2=10^{-4}$. []{data-label="local"}](local_int.pdf){width="0.85\columnwidth"}
Results
=======
Phases under local predator-prey interactions
---------------------------------------------
This reaction-diffusion system (Eq. \[asymmetry\]) presents a homogeneous fixed point of population densities, $A_s=\lambda_2/(\alpha_2\Omega_2)$ and $B_s=\lambda_1/(\alpha_1\Omega_1)$, where $\Omega_1=\int \mathrm{d}\vec{r}S_1(|\vec{r}|;\Ri)$ and $\Omega_2=\int \mathrm{d}\vec{r}S_2(|\vec{r}|;\Ra)$ are respectively the shape-space volume of the “inhibition sphere" centered at an antigen and that of the “activation sphere" surrounding a receptor. When receptor-antigen interactions are local, depending on the ratio of the rates, $\lambda_1/\lambda_2$, and of the diffusivities, $D_1/D_2$, the coevolving populations exhibit two main phases within the chosen parameter range (Fig. \[local\]A): early antigen extinction (colored region) and persistence (white region). The latter divides into two subphases, steady traveling waves (upper) and uniform coexistence (lower), as seen in typical kymographs of the 1D density fields (insets) obtained starting from localized antigens and uniform receptors.
Extinction is expected when antigens replicate fast (large $\lambda_1/\lambda_2$) but mutate slowly (small $D_1/D_2$): after a brief delay during which antigen reaches a sufficient prevalence to trigger receptor proliferation, receptors rapidly expand in number and mutate to neighboring types; the pioneer receptors stay ahead of mutating antigens and eliminate them before escape mutants arise. Once antigen is cleared, the receptor population regresses to the resting level (Fig. \[local\]B upper panel). With sufficiently high replication rates, faster mutation allows antigen mutants to lead the arms races against receptors resulting in a persistent evolving state — a traveling wave Red Queen state, similar to that shown in a recent model of influenza evolution under cross-immunity [@yan:19]. Interestingly, as $\lambda_1/\lambda_2$ increases, while the rate of extinction (color-coded in the phase region) increases, a smaller $D_1/D_2$ is needed for transition to the traveling wave state. On the other hand, at modest $\lambda_1/\lambda_2$, a uniform coexistence phase is reached following population cycles dampened by mutation (Fig. \[local\]B lower panel). Under local interactions, this homogeneous fixed point is stable to perturbation and does not support spontaneous antigen speciation (i.e., breakup of a continuum into fragments in the shape space). Thus, in what follows, we start from this uniform steady state and introduce the key ingredient — asymmetric nonlocal interaction — to show how it drives spontaneous organization.
![Asymmetric cross-reactive interactions simultaneously organize receptor and antigen distributions. (A and B) The pattern wavelength, $\lambda$, identical for both populations, is symmetric under the interchange of the interaction ranges $\Ri$ and $\Ra$. (A) The scaled wavelength increases with the extent of asymmetry $\gamma\equiv(\Ri-\Ra)/(\Ri+\Ra)$; $\Ri+\Ra=0.015, 0.02, 0.03$ from top to bottom. (B) Pattern diagram in the ($\Ra,\Ri$) plane. The white region corresponds to stable behavior, whereas patterning occurs in the colored areas. Solid lines indicate the instability onset (Eq. \[boundary\]). The color bar shows the values of the wavelength determined from the critical mode. (C) Typical mutual distributions of receptor (blue) and antigen (red) in a 1D trait space with coordinate $x$. The actual (solid line) and effective (dashed line) population densities (scaled by total abundance) show mismatch for receptors (antigens) when $\Ri>\Ra$ ($\Ri<\Ra$), leading to colocalized (alternate) density peaks between two populations, as indicated by the yellow bars. Shaded are the effective density fields $A_{\mathrm{eff}}(x)$ and $B_{\mathrm{eff}}(x)$. These two examples correspond to the open circle ($\Ri=0.025$, $\Ra=0.005$) and the filled circle ($\Ri=0.005$, $\Ra=0.025$) in panel B. $\lambda_1=10$, $\lambda_2=1$, $\alpha_1=10^{-3}$, $\alpha_2=10^{-4}$, $D_1=10^{-6}$, $D_2=10^{-4}$. []{data-label="pattern"}](pattern.pdf){width="1\columnwidth"}
Simultaneous patterning under asymmetric cross-reactivity
---------------------------------------------------------
The analogy between antigen-immunity interaction and predation has been made before [@fenton:10; @kimberly:14; @armita:16; @bradde:17]; however, spontaneous speciation has not been described yet. On the other hand, for general activator-inhibitor systems — in the physical space — Turing patterns can emerge, either from demographic stochasticity which prevents the system from reaching its homogeneous fixed point [@rogers:12; @karig:18], or, more classically, from prohibitively large differences in diffusivity between the autocatalytic and the inhibitory reactants [@turing:52]. Here, using a simple phenomenological model accounting for cross-reactivity (Eq. \[asymmetry\]), we show that coevolutionary speciation is possible, without requiring any of the aforementioned patterning mechanisms.
{width="1.8\columnwidth"}
To identify the onset of patterning instability, we perturb the uniform stationary state $(A_s, B_s)$ with non-uniform variations, $A(\vec{x},t),B(\vec{x},t)\sim\exp\left(\omega_k t+i\vec{k}\cdot\vec{x}\right)$, where $\vec{k}$ is the wavevector of a spatial Fourier mode. The Turing instability occurs when the most unstable mode, i.e., the critical mode with a wavenumber $k_c$, begins to grow while all other modes decay. This condition, $\mathrm{Re}\left[\omega(k_c)\right]\geq0$, corresponds to the following inequality in the $n$-dimensional recognition space: $$D_1D_2k_c^4\leq-\lambda_1\lambda_2\,\frac{\hat{S}_1(k_c)\hat{S}_2(k_c)}{\hat{S}_1(0)\hat{S}_2(0)}, \label{patterning}$$ where $\hat{S}_1(k)$ and $\hat{S}_2(k)$ are the Fourier transforms of the interaction kernels. It immediately follows that Turing instability in our system is purely driven by *asymmetric nonlocal* interactions and independent of diffusion: if the kernels were symmetric, i.e., $S_1(r; \Ri)=S_2(r; \Ra)$, the right-hand side of Eq. \[patterning\] can never be positive and hence patterns do not develop; on the other hand, when $D_1 D_2=0$, the patterning condition is most readily satisfied, implying that diffusion is not necessary. In fact, the commonly assumed Gaussian kernel represents a marginal case which does not robustly warrant instability [@pigolotti:07]. Instead, $\hat{S}(k)<0$ is guaranteed if the strength of interaction decreases steeply with increasing separation across the edge of the interaction range \[SI\]. For simplicity, we assume step-function kernels, $S_1(r)=\Theta(\Ri-r)$ and $S_2(r)=\Theta(\Ra-r)$. Accordingly, under a modest extent of asymmetry, i.e., $|\gamma|\ll1$ with $\gamma\equiv(\Ri-\Ra)/(\Ri+\Ra)$, the pattern-forming condition can be explicitly expressed in terms of $\gamma$ \[SI\]: $$|\gamma|\geq\gamma_c\equiv\frac{C_n}{(\Ri+\Ra)^2}\sqrt{\frac{D_1D_2}{\lambda_1\lambda_2}}, \label{gamma_c}$$ or equivalently, $$\left|\Ri^2-\Ra^2\right|\geq C_n\sqrt{\frac{D_1D_2}{\lambda_1\lambda_2}}. \label{boundary}$$ Here $C_n$ is a constant that only depends on the dimension, $n$, of the shape space. Rapid increase of $C_n$ with $n$ (Fig. S3) indicates that stable uniform coexistence extends to stronger asymmetry as the phenotypic space involves higher dimensions. Therefore, under sufficient asymmetry, a continuum of antigen (receptor) types spontaneously segregates into species-rich and species-poor domains with densities on either side of $A_s$ ($B_s$). The spacing between adjacent antigen or receptor density peaks, i.e., the pattern wavelength $\lambda\simeq 2\pi/k_c$, is modestly larger than the sum of activation and inhibition radii (Fig. \[pattern\]A) due mainly to asymmetry and slightly to diffusion. Note that the minimum level of asymmetry required for patterning decreases with increasing range of cross-reactivity as $\gamma_c\sim(\Ri+\Ra)^{-2}$ (Eq. \[gamma\_c\]); furthermore, the pattern wavelength is symmetric under the interchange of $\Ra$ and $\Ri$ (Figs. \[pattern\]A and \[pattern\]B).
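A rough numerical cross-check of this condition is straightforward in 1D, where the normalized transform of a step kernel is $\hat S(k)/\hat S(0)=\sin(kR)/(kR)$. The sketch below scans the dispersion relation derived in the Appendix for its most unstable mode; parameter values are those of Fig. \[pattern\], and the scan range is our own choice.

```python
# Rough numerical check of the patterning condition for 1D step kernels, for which
# S_hat(k)/S_hat(0) = sin(kR)/(kR); dispersion relation as derived in the Appendix.
import numpy as np

D1, D2 = 1e-6, 1e-4
lam1, lam2 = 10.0, 1.0
Ri, Ra = 0.025, 0.005

k = np.linspace(1.0, 1000.0, 100_000)
s = lambda z: np.sinc(z / np.pi)                  # sin(z)/z with s(0) = 1
S1, S2 = s(k * Ri), s(k * Ra)                     # normalized kernel transforms

trace = -(D1 + D2) * k**2
det = D1 * D2 * k**4 + lam1 * lam2 * S1 * S2
omega = 0.5 * (trace + np.sqrt((trace**2 - 4 * det).astype(complex)).real)

kc = k[np.argmax(omega)]
print(f"max Re[omega] = {omega.max():.3f} at k_c = {kc:.1f},"
      f" wavelength = {2 * np.pi / kc:.4f}")
print("patterning condition D1*D2*kc^4 <= -lam1*lam2*S1(kc)*S2(kc):",
      D1 * D2 * kc**4 <= -lam1 * lam2 * s(kc * Ri) * s(kc * Ra))
```

For these parameters the scan should return a critical wavelength modestly above $\Ri+\Ra$, in line with Fig. \[pattern\]A.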
However, mutual distributions of receptor and antigen break the symmetry (Fig. \[pattern\]C): co-localized patterns form when $R_{\mathrm{act}}<R_{\mathrm{inh}}$ (left panel) while alternate patterns emerge when $R_{\mathrm{act}}>R_{\mathrm{inh}}$ (right panel). This seemingly counterintuitive behavior can be explained by a rather general mechanism. When $2\Ri>\lambda$, the “inhibition sphere" of an antigen may enclose adjacent receptor density peaks. As a result, locations between the peaks, where the actual receptor density $B(x)$ (blue solid line) is in fact the lowest, are instead the worst positions for antigens to be in, because the effective receptor density field acting on antigens at position $x$, $B_{\mathrm{eff}}(x)=\int_{x-\Ri}^{x+\Ri}B(y)\mathrm{d}y$ (blue dashed line), is maximal when $x$ is right amid receptor peaks. Thus the antigen distribution winds up tracking the receptor distribution (yellow bar, left panel). Conversely, when $2\Ra>\lambda$, the “activation sphere" of a receptor may encompass adjacent antigen peaks; the stimulation for receptor replication is strongest in between the peaks, according to the effective antigen densities $A_{\mathrm{eff}}(x)=\int_{x-\Ra}^{x+\Ra}A(y)\mathrm{d}y$ (red dashed line). Consequently, receptors view antigens as most concentrated in positions where they are actually least prevalent (red solid line). Therefore, depending on whether $\Ri$ or $\Ra$ is larger, colocalized or alternate distributions result, which reflect a mismatch between the actual distribution and the effective one seen by the apposing population. In what follows we show that distinct spatial phase relations between mutual distributions will lead to drastically different pattern dynamics and evolutionary outcomes.
{width="2\columnwidth"}
Coevolutionary regimes and ecological feedback
----------------------------------------------
![Asymmetric cross-reactivity yields diverse phases. (A) Without homeostatic constraints on lymphocyte counts ($\theta_2=\infty$), above the critical asymmetry (beyond the light blue region), patterns form. The pattern-forming boundaries are symmetric about the diagonal. The boundary between the late antigen extinction phase (blue) and the persistent patterned phase (yellow) is determined by tracking the prevalence trajectories until $t=100$. (B) Under a finite carrying capacity ($\theta_2=3\times10^5$), the pattern-forming region is no longer symmetric and an antigen escape phase (red) emerges at the small-$\Ri$ large-$\Ra$ corner, where the phase boundary corresponds to the transition between supercritical and subcritical bifurcations. (C) First order pattern amplitudes as a function of carrying capacity $\theta_2$. Lines are analytical solutions of amplitude equations, and symbols are numerical values extracted from Fourier spectrum of stationary patterns right after abundance shift. Insets show examples of population dynamics in escape (subcritical) and persistence with pattern (supercritical) phases; pattern amplitudes diverge near the transition. Red (blue) for antigen (receptor). []{data-label="diagram"}](diagrams_v.pdf){width="0.69\columnwidth"}
**Multi-stage patterning.** Shown in Fig. \[phases\] are the abundance trajectories (top row) and kymographs of concurring patterns (lower rows) demonstrating their concomitant progression and mutual influence. Depending on the sign of asymmetry and the size of carrying capacity, qualitatively distinct regimes appear, including late antigen extinction, persistence, and escape (panels A to C). Intriguingly, concentrations and patterns evolve via three distinct stages: uniform steady state, stationary pattern, and oscillatory pattern. Right after co-patterns spontaneously emerge from the homogeneous steady state, concentration changes in both populations are observed: in the absence of homeostatic constraints (panels A and B), antigen abundance (red) shifts downward whereas receptor prevalence (blue) shifts upward. This is unanticipated because patterning instability in a density field is not expected to alter the overall abundance: growing unstable modes merely redistribute densities in space without changing the average concentration. This appears to break down when patterns develop in two interacting density fields. In fact, the most unstable modes (with wavenumber $k_c$) from both populations couple and modify the zero modes, resulting in the shift in mean population densities. A weakly nonlinear analysis close to the critical point quantitatively captures both the phase relation between patterned distributions and the shift in overall abundances (Fig. \[nonlinear\]). For analytical tractability, we perform the calculation in 1D \[see SI for details\]. Below we only stress the essential results.
**Co-localized and alternate quasispecies.** Close to the patterning transition, $D_1=D_1^\star(1-\epsilon)$, where $D_1^\star$ is the critical diffusion constant of antigen and $\epsilon$ is small and positive, we seek stationary solutions of the form $A(x)=A_s(1+u(x))$ and $B(x)=B_s(1+v(x))$, where the deviation $\bm{w}=(u(x), v(x))^{T}$ from the homogeneous steady state $(A_s, B_s)^{T}$ is expanded in powers of $\epsilon^{1/2}$ to the second order: $$\bm{w}=\bm{w}_1^{(1)}\cos(k_cx)\mathcal{A}\epsilon^{1/2}+\left(\bm{w}_2^{(0)}
+\bm{w}_2^{(2)}\cos(2k_cx)\right)\mathcal{A}^2\epsilon.$$ The saturated amplitude $\mathcal{A}$ of the perturbation is determined by the amplitude equation at the order of $\epsilon^{3/2}$. The spatial phase difference between the leading pattern modes in antigen and receptor populations, $u_1^{(1)}$ and $v_1^{(1)}$, respectively, can be found from $$\xi\equiv\frac{v_1^{(1)}}{u_1^{(1)}}=\frac{\lambda_2}{D_2k_c^2}\frac{\sin(k_c\Ra)}{k_c\Ra}
=-\frac{D_1^\star k_c^2}{\lambda_1}\frac{k_c\Ri}{\sin(k_c\Ri)},$$ which implies that $\xi\propto\sin\left(\frac{2\pi}{\lambda}\frac{(\Ra+\Ri)}{2}(1-\gamma)\right)
=\sin\left(\pi\frac{1-\gamma}{\lambda/(\Ra+\Ri)}\right)$, with $\lambda$ being the pattern wavelength and $\gamma=(\Ri-\Ra)/(\Ri+\Ra)$. It immediately follows that when $\gamma<0$ (i.e., $\Ra>\Ri$), $1-\gamma>\lambda/(\Ra+\Ri)>1$ (see Fig. \[pattern\]A), thus $\xi<0$; when $\gamma>0$ (i.e., $\Ri>\Ra$), $1-\gamma<1<\lambda/(\Ra+\Ri)$, thus $\xi>0$. Therefore, the spatial patterns of antigen and receptor distributions are either in phase ($\xi>0$) or out of phase ($\xi<0$), purely determined by the sign of asymmetry (Fig. \[nonlinear\]A). This provides rigor to the intuitive argument we made earlier in relation to Fig. \[pattern\]C. Furthermore, the changes in the overall abundance of antigens and receptors are proportional to $u_2^{(0)}$ and $v_2^{(0)}$, respectively. At $\mathcal{O}(\epsilon)$, we find $u_2^{(0)}\propto-\xi\sin(k_c\Ra)<0$ and $v_2^{(0)}\propto-\xi\sin(k_c\Ri)>0$, i.e., the direction of abundance shift is independent of the sign of asymmetry, in line with numerical solutions (Figs. \[phases\]A and \[phases\]B, top row; Fig. \[nonlinear\]B). Importantly, $u_2^{(0)},v_2^{(0)}\propto u_1^{(1)}v_1^{(1)}$, indicating that shift in abundance indeed results from coupling between simultaneous Turing modes.
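As a small numerical illustration of this sign rule (a sketch that reuses the 1D step-kernel dispersion relation with the Fig. \[pattern\] rates; the helper `xi` and the scan range are ours), interchanging $\Ri$ and $\Ra$ flips the sign of $\xi$:

```python
# Sign of xi = v1/u1 from the first expression above, evaluated at the most unstable
# mode of the step-kernel dispersion relation; xi > 0 means in-phase (co-localized)
# patterns, xi < 0 means out-of-phase (alternate) patterns.
import numpy as np

def xi(Ri, Ra, D1=1e-6, D2=1e-4, lam1=10.0, lam2=1.0):
    k = np.linspace(1.0, 1000.0, 100_000)
    s = lambda z: np.sinc(z / np.pi)                      # sin(z)/z
    trace = -(D1 + D2) * k**2
    det = D1 * D2 * k**4 + lam1 * lam2 * s(k * Ri) * s(k * Ra)
    omega = 0.5 * (trace + np.sqrt((trace**2 - 4 * det).astype(complex)).real)
    kc = k[np.argmax(omega)]                              # critical wavenumber
    return lam2 / (D2 * kc**2) * s(kc * Ra)               # sign set by sin(kc*Ra)

print(xi(Ri=0.025, Ra=0.005))   # Ri > Ra: positive, co-localized peaks
print(xi(Ri=0.005, Ra=0.025))   # Ra > Ri: negative, alternate peaks
```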
**Dynamic transients.** A further surprise comes at longer times: the stationary co-patterns are only metastable. Soon after the abundance shift takes place, an instability starts to grow, visible as increasingly strong oscillations that eventually drive the antigen population below the extinction threshold (Fig. \[phases\]A top panel). By perturbing around the abundance-shifted stationary patterns, we indeed identify a growing oscillatory instability from the dispersion relation of the linearized dynamics \[SI\]. The interrupted oscillation amplitudes at later times arise from asynchronous extinction of local antigen clusters (Fig. \[phases\]A middle panel). Note that this late extinction phase only occurs for colocalized population densities, i.e., when $\Ri>\Ra$.
Upon interchange of $\Ri$ and $\Ra$ (Fig. \[phases\]B), pattern evolution exhibits new features: when some antigen clusters go extinct as a result of the oscillatory instability, neighboring clusters migrate to these just-vacated sites, where receptors have decayed owing to a lack of stimulation and a delayed response; the migrating clusters thereby evade immune inhibition locally and temporarily. These surviving clusters then go through successive branching events (i.e., widening then splitting), forming a tree-like structure over time. The coevolving receptor population drives the branching and subsequently traces the newly formed branches (see Movie a for pattern dynamics). Such coevolutionary speciation, enabled by mutation, persists for extended periods of time, so that it effectively overcomes the growing oscillations and maintains antigen at modest prevalence indefinitely. Note that the persistent ramifying pattern only emerges from alternate density peaks, i.e., when $\Ra>\Ri$.
Can the antigen population achieve a global escape from immune control, the catastrophic failure that is often envisaged? This does happen as soon as we turn on a sufficiently strong homeostatic constraint on receptor abundance (Fig. \[phases\]C). With homeostasis, the receptor loss term becomes $\lambda_2 B(x,t)\left(1+\int_{0}^{1} B(y,t)\mathrm{d}y/\theta_2\right)$, which now includes a contribution from the global constraint characterized by a carrying capacity $\theta_2$. Strikingly, reducing the immune capacity appears to alter the nature of the instability (Fig. \[diagram\]C): a critical value of $\theta_2$ marks the transition from a supercritical bifurcation (yellow region), where nonlinearity acts to saturate the amplitude of the perturbation, to a subcritical bifurcation (red region), where higher-order processes have to intervene for stabilization. The latter corresponds to the antigen escape phase (Fig. \[phases\]C): unrestrained growth indicates a loss of immune control.
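Numerically, this constraint only changes the receptor update of the simulation sketch given earlier. A self-contained version of that right-hand side is shown below; the function name and default values are ours, with $\theta_2=3\times10^5$ taken from Fig. \[diagram\]B.

```python
# Receptor right-hand side with a global homeostatic constraint: the per-capita
# decay rate lam2 is multiplied by (1 + total receptor load / theta2), so a finite
# carrying capacity theta2 penalizes the repertoire as a whole.
import numpy as np

def receptor_rhs(B, A_eff, dx, D2=1e-4, lam2=1.0, Bin=10.0, a2=1e-4, theta2=3e5):
    lap = (np.roll(B, 1) + np.roll(B, -1) - 2.0 * B) / dx**2   # periodic Laplacian
    total_B = B.sum() * dx                                     # integral of B over the trait space
    return D2 * lap - lam2 * B * (1.0 + total_B / theta2) + Bin + a2 * B * A_eff
```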
**Higher dimension.** Similar progression of patterns and population dynamics in distinct regimes is also seen in 2D starting from the uniform steady state (Movies b to d). An analogous “branching" scenario in the persistence phase is particularly intriguing: antigen droplets deform and migrate to neighboring vacant loci and resist elimination. Oscillations of dense spots in both populations resemble the “twinkling eyes" pattern proposed for synthetic materials. It has been suggested [@yang:03] that oscillatory patterns can arise in a system consisting of two coupled reaction-diffusion layers, one capable of producing Turing patterns while the other supporting Hopf instability. Distinct from these built-in mechanisms, instabilities in our system are self-generated: interacting populations spontaneously fragment in the trait space, and spatial resonance of the resulting Turing modes leads to abundance shift and subsequent growing oscillations.
**Phase diagram.** To stress the role of asymmetric cross-reactivity in governing the diverse behaviors, we present phase diagrams on the $(\Ra, \Ri)$ plane (Fig. \[diagram\]). Without homeostatic constraints ($\theta_2=\infty$, panel A), patterns form above the critical asymmetry marked by solid lines that are symmetric about the diagonal; the enclosed patternless phase (light blue region) corresponds to stable homogeneous coexistence like for local interactions. On the $\Ri>\Ra$ side, the late extinction phase (blue) transitions to the persistent patterned phase (yellow) at a boundary (dashed line) determined by tracking the prevalence trajectories until $t=100$; longer tracking time would expand the extinction phase. Under a finite carrying capacity ($\theta_2=3\times10^5$, panel B), two major changes occur: First, the pattern-forming region is no longer symmetric but expands on the $\Ra>\Ri$ side toward the diagonal. Second, the antigen escape phase (red region) emerges at the small-$\Ri$ large-$\Ra$ corner; the phase boundary corresponds to the transition between supercritical and subcritical bifurcations in the amplitude equation. The escape phase enlarges as the carrying capacity diminishes. Thus, our model predicts expansion of the antigen escape phase with age, owing to diminishing counts of renewable lymphocytes [@miller:96]. The regime of persistence with pattern (yellow), irrespective of the homeostatic constraint, differs between the flanks; while oscillations occur on both sides off the diagonal, antigen branching (Fig. \[phases\]B) only appears when $\Ra>\Ri$, manifesting the potential for evasion.
Discussion
==========
Environment becomes a relative concept in the case of coevolution. We present a general model of mutual organization between continuous distributions of antigens and receptors that interact cross-reactively. In a shared phenotypic space, the receptor repertoire and antigen population constitute each other’s environment and adapt to mutually constructed fitness seascapes. This phenomenological approach allows us to describe the interplay between ecological and evolutionary processes that do not separate in timescales, thus revealing a variety of long-lived transients and dynamic steady states observed in nature, such as antigen extinction, chronic persistence, and unrestrained growth until eventual saturation.
This simple model of antigen-immunity coevolution demonstrates a new type of Turing mechanism that stems from tolerance for imperfect lock-key interactions and disparate conditions for receptor activation and antigen removal. While it might be intuitive that under reciprocal cross-reactivity, antigen and receptor populations would simultaneously fragment (Fig. \[pattern\]), more surprises come after the co-pattern emerges (Fig. \[phases\]): When the two density distributions are in phase ($\Ra<\Ri$), spatial resonance between the lowest Turing modes precedes growing oscillations in the overall abundance, driving antigens to extinction; when apposing populations are out of phase ($\Ra>\Ri$), strong homeostatic constraints on immune cells alter the nature of pattern instability from supercritical to subcritical, leading to uncontrolled growth. The intuitive picture is that, when $\Ra<\Ri$, antigens are inhibited by receptors that they do not activate and hence fail to evade immune attack; when $\Ra>\Ri$, receptors are activated by antigens that they cannot inhibit, and thus, under resource limits, an increasingly weaker defense results. Such multi-stage patterning and its feedback on population dynamics, triggered by asymmetric non-local interactions, is a qualitatively new phenomenon, and is clearly distinct from speciation due to competitive exclusion in a single population. Our predictions are supported by experiments: strong oscillations in antigen abundance prior to the crash to extinction have been seen in viral evolution within humans and attributed to a cross-reactive antibody response [@bailey:17], whereas strategies of distracting immune attention are indeed used by many viruses that create a vast excess of defective particles relative to functional ones [@huang:70].
Stochasticity arising from demographic noise does not modify the qualitative model behavior in any of the distinct regimes \[SI\]. Albeit not required for pattern formation, stochastic fluctuations appear to accelerate the growth of the instability and speed up antigen extinction (Fig. S12); this observation, and further effects of demographic noise, will receive a careful examination in future work.
Our results suggest that the immune system may have evolved to exploit the asymmetry between activation and inhibition by differentiating these processes physically and biochemically. A remarkable example is affinity maturation of B lymphocytes [@victora:12], in which rapid Darwinian evolution acts over days to weeks to select for high-affinity clones: immature B cells are trained in lymphoid tissues where antigens are presented in a membrane form and decline in availability; fierce competition for limited stimuli thus provides a sustained selection pressure that constantly raises the activation threshold, i.e., decreases $\Ra$. In contrast, mature B cells, once released into circulation, encounter soluble antigens at higher abundance, corresponding to $\Ri>\Ra$. As a result, enhanced asymmetry between the conditions for immune stimulation and antigen removal facilitates elimination of pathogens. Conversely, pathogens evolve immunodominance [@schrier:88] and make fitness-restoring mutations [@crawford:07] that increase $\Ra$ and decrease $\Ri$, both of which aid in evasion.
A dynamic balance, and hence persistent coevolution, often pictured as an asymptotic state, can only be sustained when the asymmetry is not too severe. It is thus likely favorable for the extent of asymmetry to stay near the edge between persistence and imbalance, adjusting to the tension between the need for defense against foreign pathogens ($\gamma>\gamma_c$) and that for tolerance toward benign self tissues ($|\gamma|\leq\gamma_c$). Interestingly, the critical asymmetry $\gamma_c$ rises with the dimension of the phenotype space (Fig. S3), suggesting that dynamic balance becomes easier to maintain for more complex traits. Our analysis also reveals other stabilizing factors for coexistence \[SI\], including a stronger influx of naive cells, larger jump sizes in trait values, and longer-tailed interaction kernels.
Because the present model of antigen-immunity coevolution is a sufficiently abstracted one, having properties which seem quite robust and independent of the details of predation, we expect that the results and predictions are relevant for a wide range of coevolving systems including cancer cells and T lymphocytes, embryonic tissues and self-reactive immune cells, as well as bacteria and bacteriophage. This model can be adapted to be more biologically faithful, e.g., by incorporating preexisting antigenic landscapes, taking rates to be age-dependent, and treating cross-reactivity as an evolvable character.
Our predictions can potentially be tested in experiments that track both the pathogen load and the diversity history via high-throughput longitudinal sequencing of receptors and antigens [@liao:13; @gao:14; @moore:15; @freund:17]. In addition, phenotypic measurements using binding and neutralization assays [@west:13] can inform the extent of asymmetry. Combining these two sets of experiments in different individuals would make it possible to correlate the degree of asymmetry with evolutionary outcomes.
We hope that this work proves useful in providing a framework for understanding and testing how cross-reactive interactions — ubiquitous and crucial for biological sensory systems — can lead, in part, to the generation, maintenance and turnover of diversity in coevolving systems. More broadly, our work provides the basis for a theory of evolution in responsively changing environments, highlighting that ecological feedback in pattern-forming systems can yield dynamic transients and drive evolution toward non-steady states that differ from the Red Queen persistent cycles.
*Acknowledgments*.—We thank Sidney Redner and Paul Bressloff for enlightening discussions. SW gratefully acknowledges funding from UCLA.
Appendix
========
Below we provide additional description of the main steps in our analysis. The three subsections are ordered in accordance with the successive instabilities that give rise to the multi-stage progression of trait-space patterns and associated population dynamics.
Linear instability of the uniform steady state
----------------------------------------------
To identify the onset of patterning instability, we linearize the equation of motion (Eq. 1 in the main text) around the homogeneous fixed point $(A_s, B_s)$ and work in the Fourier space. Defining $A(\bm x, t)=A_s+\sum_k\delta A_k\exp(\omega_k t+i\bm k\cdot \bm x)$ and $B(\bm x, t)=B_s+\sum_k\delta B_k\exp(\omega_k t+i\bm k\cdot \bm x)$ yields $$\omega(k)
\begin{pmatrix}
\delta A_k\\
\delta B_k\\
\end{pmatrix}
=
\begin{pmatrix}
-D_1 k^2 & -\alpha_1 A_s \hat{S}_1(k)\\
\alpha_2 B_s \hat{S}_2(k) & -D_2k^2\\
\end{pmatrix}
\begin{pmatrix}
\delta A_k\\
\delta B_k\\
\end{pmatrix},$$ where $\hat S_1(k)$ and $\hat S_2(k)$ are the Fourier transforms of the interaction kernels $S_1(r)$ and $S_2(r)$, respectively. Solving this characteristic equation gives the dispersion relation, i.e., the linear growth rate of the Fourier modes: $$\omega(k)=\frac{1}{2}\left[-(D_1+D_2)k^2+\sqrt{(D_1-D_2)^2k^4
-4\lambda_1\lambda_2\frac{\hat S_1(k)\hat S_2(k)}{\hat{S}_1(0)\hat{S}_2(0)}}\,\right].$$ Turing instability occurs when the least stable mode (with a wavevector $k_c$) begins to grow, namely, $$\mathrm{Re}[\omega(k_c)]\geq0,$$ where the wavenumber of the critical mode can be determined by $\partial_k \omega|_{k=k_c}=0$. This gives the pattern-forming condition (Eq. \[patterning\] in the main text).
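For $k>0$ the trace of the matrix above, $-(D_1+D_2)k^2$, is negative, so the growth rate can only acquire a non-negative real part if the determinant is non-positive. Spelling this out (a brief consistency check in the notation already introduced) gives $$\mathrm{Re}\left[\omega(k_c)\right]\geq0
\quad\Longleftrightarrow\quad
D_1D_2k_c^4+\lambda_1\lambda_2\,\frac{\hat S_1(k_c)\hat S_2(k_c)}{\hat{S}_1(0)\hat{S}_2(0)}\leq0,$$ which is the inequality stated as Eq. \[patterning\] in the main text.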
Weakly nonlinear stability analysis: amplitude of patterns
----------------------------------------------------------
In this section we lay out the procedure of amplitude expansion [@gambino:13; @stephenson:95] that yields an approximate description of the patterns formed.
Close to the transition, we introduce a small positive parameter $\epsilon=(D_1^\star-D_1)/D_1^\star$. Thus, terms in the equation of motion can be collected into a linear part evaluated at the transition, $\mathcal{L}^\star\bm w$, and the rest, $\bm g_{\epsilon}(\bm w)$, for which the evolution operator explicitly depends on $\epsilon$: $$\partial_t\bm w=\mathcal{L}^\star\bm w+\bm g_{\epsilon}(\bm w).$$ Here $\bm w=(u(x), v(x))^T$ denotes the deviation from the homogeneous steady state.
In the vicinity of the transition, one can expand the inhomogeneous deviation in powers of $\epsilon^{1/2}$: $$\bm w=\bm w_1\epsilon^{1/2}+\bm w_2\epsilon +\bm w_3\epsilon^{3/2} +o(\epsilon^2).$$ Note $\bm w_n\propto\mathcal{A}^n$, where $\mathcal{A}$ represents pattern amplitude; the closer to the onset of instability, the smaller the saturated amplitude of the perturbation. Substituting this expansion into the equation of motion yields order by order (explicit expressions can be found in SI): $$\begin{aligned}
&O(\epsilon^{1/2})
&&\quad\quad
\mathcal{L^\star}\pmb{w}_1=0\\
&O(\epsilon)
&& \quad\quad
\mathcal{L^\star}\pmb{w}_2=\pmb{F}(\pmb{w}_1)\\
&O(\epsilon^{3/2})
&&\quad\quad
\mathcal{L^\star}\pmb{w}_3=\pmb{G}(\pmb{w}_1, \pmb{w}_2)
\end{aligned}$$ The first equation effectively recovers the linear theory. For nontrivial solutions to these equations to exist, certain conditions need to be met. The solvability condition associated with the third equation serves to determine the pattern amplitude $\mathcal{A}$. It reads $$\langle\bm\rho^{\dagger}, \bm G\rangle = \int\bm\rho^{\dagger}(x)^T\bm G(x)\,\mathrm{d}x=0,$$ where $\bm\rho^{\dagger}$ spans the nontrivial kernel of the adjoint operator $\mathcal L^{\star\dagger}$, satisfying $\mathcal L^{\star\dagger}\bm\rho^\dagger=0$. This orthogonality condition results in the stationary amplitude equation (or Stuart-Landau equation) $$a_0\mathcal A-a_1\mathcal A^3=0,$$ where $a_0$ is the linear growth rate and $a_1$ the Landau parameter, both of which depend on the system parameters. The formation of steady patterns with a finite amplitude requires $a_0\times a_1>0$, corresponding to a supercritical bifurcation where nonlinearity acts to saturate the growth of linearly unstable modes. In contrast, $a_0\times a_1<0$ leads to a subcritical bifurcation, indicating that even higher-order nonlinearities are required to stabilize the pattern.
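For reference, the nontrivial stationary solution of the amplitude equation above is $$\mathcal{A}^2=\frac{a_0}{a_1},$$ which is real, and hence corresponds to a saturated pattern of finite amplitude, precisely when $a_0$ and $a_1$ have the same sign.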
This method also applies when homeostasis is considered. The transition from a supercritical to a subcritical bifurcation occurs when the carrying capacity of immune receptors (predators) falls below a critical value (Fig. \[diagram\]C, Fig. S9).
In this way we obtain an approximate solution for the patterned state close to the transition (Eq. 5 in the main text), which shows good agreement with the numerical solution of the equation of motion (Fig. \[nonlinear\]).
Pattern stability
-----------------
To determine the stability of the patterned state, we introduce a small perturbation $\delta \bm w$ to the steady pattern $\bm w_s$ obtained in the last section, i.e., $\bm w=\bm w_s+\delta \bm w$, so that the perturbation evolves according to $$\partial_t\delta \bm w=\mathcal L_s\delta\bm w,$$ where $\mathcal L_s$ stands for the linearized evolution operator evaluated at $\bm w_s$. Again we expand the perturbation in Fourier modes $
\delta \bm w=\int_{-\infty}^{\infty}\delta\bm w_q e^{iqx}\textrm{d}q.
$ Since the operation of $\mathcal L_s$ contains modes $k_c$ and $2k_c$ (according to Eq. 5), it couples the $q$-mode with those having wavenumbers $q\pm k_c$, $q\pm2k_c$, etc. Consequently, unlike perturbations to the homogeneous state, which lead to decoupled characteristic equations for individual modes, modes that differ by multiples of $k_c$ are now all coupled and evolve together according to a matrix equation $$\partial_t\bm\Psi=M\bm\Psi.$$ In principle, the state vector $\bm\Psi$ would be infinite-dimensional, containing modes of the perturbation associated with wavenumbers $q, q\pm k_c, q\pm2k_c, \cdots$. Correspondingly, the evolution matrix $M$ would also be infinite-dimensional and could not be diagonalized directly. Thus, truncation of the cascade is needed to make progress. After truncating the matrix to a modest rank, we can proceed by numerically solving for the eigenvalues of the characteristic matrix. Then, as in the usual stability analysis, the leading eigenvalue, $m_1(q)$, describes the long-term growth of the perturbation. Close to the transition, the $m_1(q)$ obtained in this way agrees well with the linear growth rate of perturbations observed in numerical solutions of the original equation of motion (Fig. S7). In particular, this analysis captures the growing oscillations in the patterned state (as shown in Fig. \[phases\] in the main text): soon after patterns form in both populations, an oscillatory instability ensues; this linear instability stems from coupling between the spatial modes of the perturbation and those of the steady pattern with matched wavelengths.
---
abstract: 'Classical extreme value statistics consists of two fundamental approaches: the block maxima (BM) method and the peak-over-threshold (POT) approach. It seems to be the general consensus among researchers in the field that the POT method makes use of extreme observations more efficiently than the BM method. We shed light on this discussion from three different perspectives. First, based on recent theoretical results for the BM approach, we provide a theoretical comparison in i.i.d. scenarios. We argue that the data generating process may favour either one or the other approach. Second, if the underlying data possesses serial dependence, we argue that the choice of a method should be primarily guided by the ultimate statistical interest: for instance, POT is preferable for quantile estimation, while BM is preferable for return level estimation. Finally, we discuss the two approaches for multivariate observations and identify various open ends for future research.'
address:
- 'Heinrich-Heine-Universität Düsseldorf, Mathematisches Institut, Universitätsstr. 1, 40225 Düsseldorf, Germany. '
- 'Econometric Institute, Erasmus University Rotterdam, 3000 DR Rotterdam, The Netherlands. '
author:
-
-
bibliography:
- 'biblio.bib'
title: 'A horse race between the block maxima method and the peak–over–threshold approach'
---
Introduction {#sec:intro}
============
Extreme-Value Statistics can be regarded as the art of extrapolation. Based on a finite sample from some distribution $F$, typical quantities of interest are quantiles whose levels are larger than the largest observation or probabilities of rare events which have not occurred yet in the observed sample. Estimating such objects typically relies on the following fundamental domain-of-attraction condition: there exists a constant $\gamma\in{\mathbb R}$ and sequences $a_r>0$ and $b_r$, $r\in{\mathbb N}$, such that $$\label{eq:doabm}
\lim_{r\to\infty} F^r(a_r x+b_r)=\exp{\left\{-(1+\gamma x)^{-1/\gamma}\right\}} \mbox{\ for all \ } 1+\gamma x>0.$$ In that case, $\gamma$ is called the *extreme value index*. The limit appears unnecessarily specific, but it is in fact the only non-degenerate limit of the expression on the left-hand side. An equivalent representation of the domain of attraction condition is as follows: there exists a positive function $\sigma=\sigma(t)$ such that $$\label{eq:doapot}
\lim_{t\uparrow x^*} \frac{1-F(t+\sigma(t)x)}{1-F(t)}=(1+\gamma x)^{-1/\gamma} \mbox{\ for all \ } 1+\gamma x>0,$$ where $x^*$ denotes the right end-point of the support of $F$, see [@BalDeh74]. The two sequences in are related to the function $\sigma$ as follows: $a_r=\sigma(b_r)$ and $b_r=U(r)$ where $U(r)=F^{\leftarrow}(1-1/r) = (1/(1-F))^{\leftarrow}(r)$, with $\cdot^{\leftarrow}$ denoting the left–continuous inverse of some monotone function.
Consider for instance the consequences of the previous two displays for high quantiles of $F$. By \[eq:doabm\], for all $p$ sufficiently small, $$\begin{aligned}
\label{eq:quantile}
F^{\leftarrow}(1-p) \approx b_r + a_r \frac{\{- r \log(1-p)\}^{-\gamma}-1}{\gamma} \approx b_r + a_r \frac{(rp)^{-\gamma}-1}{\gamma}.\end{aligned}$$ Hence, by the plug-in-principle, a suitable choice of $r$ and suitable estimators of $a_r, b_r$ and $\gamma$ immediately suggest estimators for high quantiles.
Similarly, by \[eq:doapot\], for all $p$ sufficiently small, $$\begin{aligned}
\label{eq:quantilepot}
F^{\leftarrow}(1-p) \approx t + \sigma(t) \frac{\{\frac{p}{1-F(t)}\}^{-\gamma}-1}{\gamma}.\end{aligned}$$ Again, by the plug-in-principle, a suitable choice of $t$ and suitable estimators of $\sigma(t)$, $\gamma$ and $1-F(t)$ immediately leads to estimators for high quantiles. Here, $t$ is typically chosen as a large order statistic $t=X_{n-k:n}$ and $1-F(t)$ is replaced by $k/n$.
In practice, estimators for the parameters in these two approaches typically follow their corresponding basic principles: the block maxima method motivated by \[eq:doabm\] and the peak-over-threshold approach motivated by \[eq:doapot\]. Let $X_1, X_2, \dots, X_n$ be a sample of observations drawn from $F$, and for the moment assume that the observations are independent. Then \[eq:doabm\] gives rise to the *block maxima method (BM)* [@Gum58]: for some block size $r\in\{1,\dots, n\}$, divide the data into $k=\lfloor n/r\rfloor$ blocks of length $r$ (and a possibly remaining block of smaller size which has to be discarded). By independence, the maximum of each block has cdf $F^r$. By \[eq:doabm\], for large block sizes $r$, the sample of block maxima can then be regarded as an approximate i.i.d. sample from the three-parametric generalized extreme-value (GEV) distribution $G_{\gamma, b,a}$ with location parameter $b=b_r$, scale parameter $a=a_r$ and shape parameter $\gamma$, defined by its cdf $$G_{\gamma , b,a}^{\mathrm{GEV}}(x)
:= \exp\Big\{-\Big(1+\gamma \frac{x-b}{a}\Big)^{-1/\gamma}\Big\} {\boldsymbol}1\Big(1+\gamma \frac{x-b}{a}>0\Big).$$ The three parameters can be estimated by maximum-likelihood or moment-matching, among others. Irrespective of the particular estimation principle, any estimator defined in terms of the sample of block maxima will be referred to as an estimator based on the block maxima method.
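As a concrete illustration of this recipe, the following sketch simulates an i.i.d. Pareto sample (so that the true $\gamma$ is known), fits the GEV to the block maxima by maximum likelihood and plugs the estimates into the high-quantile approximation above. It relies on `scipy.stats.genextreme`, whose shape convention is $c=-\gamma$; the sample, the block size $r$ and the quantile level are our own illustrative choices.

```python
# Block maxima (BM) sketch: blockwise maxima + maximum-likelihood GEV fit,
# followed by the plug-in high-quantile estimate based on the GEV approximation.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
n = 10_000
X = rng.pareto(2.0, size=n) + 1.0              # Pareto(2) sample: true gamma = 0.5

r = 100                                         # block size
k = n // r                                      # number of blocks
maxima = X[: k * r].reshape(k, r).max(axis=1)

c, b_hat, a_hat = genextreme.fit(maxima)        # scipy shape c corresponds to -gamma
gamma_bm = -c

p = 1.0 / (2 * n)                               # a quantile level beyond the sample
q_bm = b_hat + a_hat * ((-r * np.log(1 - p)) ** (-gamma_bm) - 1.0) / gamma_bm
print(f"BM: gamma_hat = {gamma_bm:.3f} (true 0.5), quantile(1-p) = {q_bm:.1f}")
```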
Often, an available data-sample consists of block maxima only, for example, annual maxima of a river level. Then a practitioner may only rely on the block maxima method. If the underlying observations are available, then \[eq:doapot\] gives rise to the competing *peak-over-threshold approach (POT)* [@Pic75]: for sufficiently large $t$ in \[eq:doapot\], we obtain that, for any $x>0$, $$\begin{aligned}
\label{eq:POT}
\Pr(X>t + x\mid X>t) = \frac{\Pr(X>t + x)}{\Pr(X>t)} \approx \Big(1+\gamma \tfrac{x}{\sigma}\Big)^{-1/\gamma} =: 1- G_{\gamma , \sigma}^{\mathrm{GP}}(x),\end{aligned}$$ where the right-hand side defines the two-parametric generalized Pareto (GP) distribution with scale parameter $\sigma:=\sigma(t)$ and shape parameter $\gamma$. In practice, $t$ is typically chosen as the $(n-k)$-th order statistic $X_{n-k:n}$ for some intermediate value $k$ (hence, $X_{n-k:n}$ is the $(1-1/r)$-sample quantile with $r=n/k$). Then, one may regard the sample $X_{n-k+1:n}-X_{n-k:n}, \dots, X_{n:n}-X_{n-k:n}$ as observations from the two-parametric generalized Pareto distribution. The parameters can hence be estimated by moment matching, and even by maximum-likelihood since the sample of order statistics can actually be regarded as independent (see, e.g., Lemma 3.4.1 in [@DehFer06]). In general, any estimator defined in terms of all observations exceeding some (random) threshold will be referred to as an estimator based on the POT approach. The vanilla estimator within this class is the Hill estimator [@Hil75] in the case $\gamma>0$.
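For comparison, here is a matching sketch of the POT workflow on the same kind of sample: the Hill estimator (valid for $\gamma>0$), a maximum-likelihood fit of the generalized Pareto distribution to the excesses over $X_{n-k:n}$ via `scipy.stats.genpareto` (whose shape parameter equals $\gamma$), and the resulting plug-in high-quantile estimate with $1-F(t)$ replaced by $k/n$; the choice $k=200$ is purely illustrative.

```python
# Peak-over-threshold (POT) sketch: Hill estimator, ML fit of the generalized
# Pareto distribution to the excesses, and the plug-in high-quantile estimate.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
n = 10_000
X = np.sort(rng.pareto(2.0, size=n) + 1.0)     # Pareto(2) sample: true gamma = 0.5

k = 200                                         # number of upper order statistics
t = X[n - k - 1]                                # threshold: the (n-k)-th order statistic

gamma_hill = np.mean(np.log(X[n - k:] / t))     # Hill estimator (gamma > 0 only)

excesses = X[n - k:] - t                        # exceedances over the threshold
gamma_gp, _, sigma_gp = genpareto.fit(excesses, floc=0.0)

p = 1.0 / (2 * n)
q_pot = t + sigma_gp * ((k / (n * p)) ** gamma_gp - 1.0) / gamma_gp
print(f"POT: Hill = {gamma_hill:.3f}, GP = {gamma_gp:.3f}, quantile(1-p) = {q_pot:.1f}")
```

Run side by side with the BM sketch above, both target the same $\gamma$ and the same extreme quantile, but through different intermediate approximations, which is exactly the trade-off compared in the remainder of the paper.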
The goal of the present paper is an in-depth comparison of the two approaches, in particular in terms of recent solid theoretical advances on asymptotic theory for the BM method, but also with a view on time series data and multivariate observations. The discussion will mostly be of reviewing nature, but some new insights will be presented as well. The next paragraphs summarize our contribution in a chronological order.
**Efficiency comparison in i.i.d. scenarios.** It seems to be the general consensus among researchers in extreme value statistics that the POT method produces more efficient estimators than the BM method. The main heuristic reason is that all large observations are used for the calculation of POT estimators, while BM estimators may miss some large observations falling into the same block. This heuristic was confirmed by simulation studies in [@Cai09], see also the additional references mentioned in [@FerDeh15]. Due to some recent advances on the theoretical properties of BM estimators [@Dom15; @FerDeh15; @BucSeg14; @BucSeg18; @DomFer17], the two approaches may actually be compared on solid theoretical grounds. For a certain class of cdfs, such a discussion has been carried out in [@FerDeh15] and [@DomFer17]; their findings are summarized and extended in Section \[sec:iid\] of this paper. We show that, depending on the data generating process, the convergence rates of the two methods may be different, with no general winner being identifiable. In case the rates are the same, BM estimators typically have a smaller variance, but a larger bias than their POT competitors.
**BM and POT applied to time series.** The above discussion motivating the BM and POT approach was based on an i.i.d. assumption on the underlying sample. This assumption is actually quite restrictive since it excludes many common environmental or financial applications, where the underlying sample is typically a (stationary) time series. In this setting, it seems to be the general consensus that the block maxima method still ‘works’ because the block maxima are (1) still approximately GEV-distributed [@Lea74] and (2) distant from each other and thus bear low serial dependence. Consequently, the sample of block maxima may still be regarded as an approximate i.i.d. sample from the three-parametric GEV distribution. This heuristic is confirmed by recent theoretical results in [@BucSeg18; @BucSeg14].
Nevertheless, as discussed in Section \[sec:ts\] below, an obstacle arises: the location and scale parameters attached to block maxima of a time series will typically be different from those of an i.i.d. series from the same stationary distribution $F$, whence estimators for quantities that depend on the stationary distribution only will possibly be inconsistent. The missing link is provided by the extremal index [@Lea83], a parameter in $[0,1]$ capturing the tendency of the extreme observations of a stationary time series to occur in clusters. The discussion will be worked out for the example of high quantile estimation: based on suitable estimators for the extremal index (see Section \[sec:ts\] below), \[eq:quantile\] can in fact be modified to obtain consistent BM estimators of large quantiles.
On the other hand, estimators based on the POT method for characteristics of the stationary distribution remain consistent. This, however, comes at the cost of an increased variance of the estimators due to potential clustering of extremes, see [@Hsi91; @Dre00; @Roo09], among many others. Should the ultimate interest be in return level or return period estimation, the picture is reversed: the BM method is consistent without the need to estimate the extremal index, while POT estimators typically require estimates of the extremal index. More details are provided in Section \[sec:ts\].
**Extensions to multivariate observations and stochastic processes.** The previous discussion focussed on the univariate case. Section \[sec:mult\] briefly discusses multivariate extensions. On the theoretical side, while there are many results available for the POT approach, there is a clear shortage of results for the BM approach: almost all statistical theory is formulated under the assumption that the block maxima genuinely follow a multivariate extreme value distribution, thereby ignoring a potential bias and rendering a fair theoretical comparison impossible for the moment (to the best of our knowledge, the only available results on the BM method are provided in [@BucSeg14]). Instead, we provide a review of some of the existing theoretical results using these two approaches, and identify the open ends that may eventually lead to results allowing for an in-depth theoretical comparison in the future.
Not surprisingly, a fair comparison is even more difficult when considering extreme value analysis for stochastic processes. Most of the existing statistical methods are based on max-stable process models, i.e., on limit models arising for maxima taken over i.i.d. stochastic processes. The respective statistical theory is again mostly formulated under the assumption that the observations are genuine observations from the max-stable model, whence the statistical methods can (in most cases) be generically attributed to the BM approach. As for multivariate models, potential bias issues are mostly ignored. By contrast to multivariate models, however, very little is known for the POT approach to processes. A comparison is hence not feasible for the moment, and we limit ourselves to a brief review of existing results in Section \[sec:process\].
Finally, we end the paper by a section summarizing possible open research questions, Section \[sec:open\], and by a short conclusion, Section \[sec:con\].
Efficiency Comparison for univariate i.i.d. observations {#sec:iid}
========================================================
The efficiency of BM and POT estimators can be compared in terms of their asymptotic bias and variance. In this section, we particularly focus on the estimation of the extreme value index $\gamma$ because for estimating other tail related characteristics such as high quantiles or tail probabilities, the asymptotic distributions of respective estimators are typically dominated by those derived from estimating the extreme value index.
In both the BM and POT approach, a key tuning parameter is the intermediate sequence $k=k(n)$, which corresponds to either the number of blocks in the BM approach, or the number of upper order statistics in the POT approach. For most data generating processes, consistency of respective estimators can be guaranteed if $k$ is chosen in such a way that $k\to\infty$ and $k/n\to 0$ as $n\to\infty$. Here, the small fraction $k/n$ reflects the fact that the inference is based on observations in the tail only. Typically, the variance of respective estimators is of order $1/k$, while the bias depends on how well the distribution of block maxima or threshold exceedances is approximated by the GEV or GP distribution, respectively. Choosing $k$ in such a way that variance and squared bias are of the same order (see Section \[subsec:rate\] below), one may derive an optimal rate of convergence for a given estimator. Depending on the model, the optimal choice of $k$ may result in a faster rate for the BM method or the POT approach, as will be discussed next.
It is instructive to consider two extreme examples first (where the condition $k/n\to0$ as $n\to\infty$ may in fact be discarded): if $F$ is the standard Fréchet-distribution, then block maxima of size $r=1$ are already GEV-distributed. In other words a sample of $k=n$ block maxima of size $r=1$ can be used for estimation via the BM method. The rate of convergence is thus $1/\sqrt{n}$ and the POT method fails to achieve this rate. On the other hand, if $F$ is the standard Pareto distribution, then all $k=n$ largest order statistics can be used for the estimation via the POT approach. The rate of convergence is $1/\sqrt{n}$ for the POT method, which is not achievable via the BM method.
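To make the Pareto case tangible, here is a minimal simulation sketch (our own illustration, not taken from the cited studies; it uses the Hill estimator as a simple POT-type estimator of $\gamma$, anticipating the discussion of threshold choice below): for standard Pareto data, essentially all order statistics can be used and the root mean squared error indeed decays like $1/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def hill(x, k):
    """Hill estimator of gamma based on the k largest order statistics."""
    xs = np.sort(x)
    return np.mean(np.log(xs[-k:]) - np.log(xs[-k - 1]))

# Standard Pareto: 1 - F(x) = 1/x for x >= 1, so gamma = 1 and the Pareto tail is exact.
for n in [100, 1000, 10000]:
    errors = []
    for _ in range(500):
        x = 1.0 + rng.pareto(1.0, size=n)     # standard Pareto sample
        errors.append(hill(x, n - 1) - 1.0)   # use (almost) all order statistics
    rmse = np.sqrt(np.mean(np.square(errors)))
    print(f"n = {n:6d}   RMSE = {rmse:.4f}   sqrt(n) * RMSE = {np.sqrt(n) * rmse:.2f}")
```

The last column stays roughly constant, illustrating the $1/\sqrt{n}$ rate claimed above.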
Apart from these two (or similar) extreme cases, the optimal choice of $k$ depends on second order conditions quantifying the speed of convergence in the domain of attraction condition. These are often (though not always) formulated in terms of the two quantile functions $$U(x) = \Big(\frac{1}{1-F}\Big)^\leftarrow(x) \quad\text{and}\quad V(x) = \Big(\frac{1}{-\log F}\Big)^\leftarrow(x)$$ for the POT- and the BM method, respectively. Note that the domain of attraction condition is equivalent to the fact that there exists a positive function $a_{{{\scriptscriptstyle\mathrm{POT}}}}$ such that, for all $x>0$, $$\begin{aligned}
\label{eq:doau}
\lim_{t \to \infty} \frac{U(tx)-U(t)}{a_{{{\scriptscriptstyle\mathrm{POT}}}}(t)} = \int_1^x s^{\gamma-1}\, ds ,\end{aligned}$$ see Theorems 1.1.6 and 1.2.1 in [@DehFer06]. The function $a_{{\scriptscriptstyle\mathrm{POT}}}$ is related to the sequence $(a_r)_r$ appearing in via $a_{{{\scriptscriptstyle\mathrm{POT}}}}(r) = a_{\lfloor r \rfloor}$.
In parallel, the domain of attraction condition is also equivalent to the fact that there exists a positive function $a_{{{\scriptscriptstyle\mathrm{BM}}}}$ such that, for all $x>0$, $$\begin{aligned}
\label{eq:doav}
\lim_{t \to \infty} \frac{V(tx)-V(t)}{a_{{{\scriptscriptstyle\mathrm{BM}}}}(t)} = \int_1^x s^{\gamma-1}\, ds.\end{aligned}$$ The bias of certain BM- and POT estimators is determined by the speed of convergence in the latter two limit relations, which can be captured by suitable second order conditions.
For $\gamma\in{\mathbb R},\rho\le 0$ and $x>0$, let $$h_\gamma(x) = \int_1^x s^{\gamma-1}\, ds, \qquad H_{\gamma,\rho}(x) = \int_1^x s^{\gamma-1} \int_1^s u^{\rho-1}\, du \, ds.$$
\[def:secor\] Let $F$ be a cdf satisfying the domain-of-attraction condition for some $\gamma\in{\mathbb R}$. Consider the following two assumptions.
\[cond:sou\] Suppose that there exists $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}\le 0$, a positive function $a_{{\scriptscriptstyle\mathrm{POT}}}$ and a positive or negative function $A_{{\scriptscriptstyle\mathrm{POT}}}$ with $\lim_{t\to\infty } A_{{\scriptscriptstyle\mathrm{POT}}}(t) = 0$, such that, for all $x>0$, $$\lim_{t \to \infty} \frac{1}{A_{{\scriptscriptstyle\mathrm{POT}}}(t)} \bigg( \frac{U(tx)-U(t)}{a_{{{\scriptscriptstyle\mathrm{POT}}}}(t)} - h_\gamma(x) \bigg) = H_{\gamma, \rho_{{\scriptscriptstyle\mathrm{POT}}}}(x).$$
\[cond:sov\] Suppose that there exists $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}\le 0$, a positive function $a_{{\scriptscriptstyle\mathrm{BM}}}$ and a positive or negative function $A_{{\scriptscriptstyle\mathrm{BM}}}$ with $\lim_{t\to\infty } A_{{\scriptscriptstyle\mathrm{BM}}}(t) = 0$, such that, for all $x>0$, $$\lim_{t \to \infty} \frac{1}{A_{{\scriptscriptstyle\mathrm{BM}}}(t)} \bigg( \frac{V(tx)-V(t)}{a_{{{\scriptscriptstyle\mathrm{BM}}}}(t)} - h_\gamma(x) \bigg) = H_{\gamma, \rho_{{\scriptscriptstyle\mathrm{BM}}}}(x).$$
The functions $|A_{{\scriptscriptstyle\mathrm{BM}}}|$ and $|A_{{\scriptscriptstyle\mathrm{POT}}}|$ are then necessarily regularly varying with index $\rho_{{\scriptscriptstyle\mathrm{BM}}}$ and $\rho_{{\scriptscriptstyle\mathrm{POT}}}$, respectively. The limit function $H_{\gamma,\rho}$ might appear unnecessarily specific, but in fact it is not, see [@DehSta96] or Section B.3 in [@DehFer06]. If the speed of convergence in the respective first order limit relation is faster than any power function, we set the corresponding second order parameter to minus infinity. For example, for $F=G_{\gamma, \sigma}^{\mathrm{GP}}$ from the GP family, we have $\{U(tx)-U(t)\}/(\sigma t^\gamma)=h_\gamma(x)$, i.e. $\rho_{{\scriptscriptstyle\mathrm{POT}}}= -\infty$ in this case. Likewise, any $F=G_{\gamma,\sigma,\mu}^{\mathrm{GEV}}$ from the GEV distribution satisfies $\{V(tx)-V(t)\}/(\sigma t^\gamma)=h_\gamma(x)$, which prompts us to define $\rho_{{\scriptscriptstyle\mathrm{BM}}}=-\infty$.
It is important to note that $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$ and $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}$ can be vastly different. A general result can be found in [@DreDehLi03], Corollary A.1: under an additional condition which only concerns the cases $\gamma=1$, $\rho_{{\scriptscriptstyle\mathrm{BM}}}=-1$ or $\rho_{{\scriptscriptstyle\mathrm{POT}}}=-1$, the two coefficients are equal within the range $[-1,0]$. Otherwise, if \[cond:sov\] holds with $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}<-1$, then \[cond:sou\] holds with $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}=-1$; if \[cond:sou\] holds with $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}<-1$, then \[cond:sov\] holds with $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}=-1$. Some values of the parameters for various types of distributions are collected in Table \[tab:secor\].
Distribution $\gamma$ $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}$ $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$
------------------------------------------------------- -------------------- --------------------------------------------- --------------------------------------------
$\mathrm{GP}(\gamma,\sigma)$ $\gamma$ $- \infty$ $-1$
$\mathrm{Exponential}(\lambda)$ 0 $ - \infty$ $-1$
$\mathrm{Uniform}(0,1)$ $-1$ $-\infty$ $-1$
$\mathrm{Arcsin}$ $-2$ $-2$ $-1$
$\mathrm{Burr}(\eta,\tau,\lambda)$ $1/(\lambda\tau) $ $-1/\lambda$ $\max(-1/\lambda, -1)$
$t_\nu, \nu\ne 1$ $1/\nu $ $-2/\nu$ $\max(-2/\nu, -1)$
$\mathrm{Cauchy} (=t_1)$ 1 $-2$ $-2$
$\mathrm{Weibull}(\lambda,\beta), \beta\ne 1$ 0 0 0
$\Gamma(\alpha,\beta)$ 0 0 0
$\mathrm{Normal}(\mu,\sigma^2)$ 0 0 0
$F(x) = \exp(-(1+x^\alpha)^\beta)$ $1/(\alpha\beta)$ $\max(-1/\beta, -1)$ $-1/\beta$
$\mathrm{\text{Fr\'{e}chet}}(\alpha, \sigma)$ $1/\alpha$ $-1$ $- \infty$
$\mathrm{\text{Reverse Weibull}}(\beta, \mu, \sigma)$ $-1/\beta$ $-1$ $- \infty$
$\mathrm{GEV}(\gamma,\mu,\sigma)$ $\gamma$ $-1$ $- \infty$
: Extreme value index and second order parameters for various models. []{data-label="tab:secor"}
We remark that for the $t_1$-distribution, we obtained $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}=\rho_{{{\scriptscriptstyle\mathrm{POT}}}}=-2$. This is a special example for which Corollary 4.1 in [@DreDehLi03] is not applicable: $2tA(t)$ converges to $0=1-\gamma$. Notice that for the six models in the first category $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}<\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$ (if we consider $\lambda<1$ in the Burr distribution and $\nu<2$ in the $t_\nu$ distribution, so that the maxima in the table resolve to $-1$). For the four models in the second category $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}=\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$, while for the last four models $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}>\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$ if we consider $\beta<1$ in the model $F(x) = \exp(-(1+x^\alpha)^\beta)$.
Let us now consider asymptotic theory for the estimation of the extreme value index $\gamma$. Perhaps surprisingly, asymptotic theory for the BM method has hitherto mostly ignored the fact that block maxima are only approximately GEV distributed (see, e.g., [@PreWal80; @HosWalWoo85; @BucSeg17], among others). Only recent theoretical studies in [@FerDeh15] and [@DomFer17] for the probability weighted moment (PWM) and the maximum likelihood (ML) estimator, respectively, take the approximation into account. Correspondingly, the asymptotic bias can be explicitly analyzed, relying on the second order condition \[cond:sov\] above. On the other hand, solid theoretical studies regarding the POT method have a much longer history, see [@DehFer06] for a comprehensive overview. For the sake of theoretical comparability with the BM method, we will subsequently exemplarily deal with the ML estimator and the PWM estimator, for which Theorems 3.4.2 and 3.6.1 in [@DehFer06] provide the respective asymptotic theory under the assumption that \[cond:sou\] is met (the results rely on [@Dre98; @DreDehFer04]).
Summarizing the above-mentioned results, for both methods (BM and POT), the ML-estimators are consistent for $\gamma>-1$ and asymptotically normal for $\gamma>-1/2$, while PWM-estimators are consistent for $\gamma<1$ and asymptotically normal for $\gamma<1/2$. Asymptotic theory is formulated under the conditions that $k=k_n$ satisfies $k\to\infty$ and $k/n\to0$ (POT method) or $r=r_n$ satisfies $r\to\infty$ and $r/n \to 0$, i.e., $k=n/r\to\infty$ and $k/n\to0$ (BM method), as $n\to\infty$. Under the respective second order conditions \[cond:sou\] and \[cond:sov\] formulated above, the asymptotic results can be summarized as $$\begin{aligned}
\hat \gamma \stackrel{d}{\approx} \mathcal N \Big(\gamma + A_{m}(n/k) b , \frac{1}{k} \sigma^2\Big), \quad m \in \{{{\scriptscriptstyle\mathrm{BM}}}, {{\scriptscriptstyle\mathrm{POT}}}\},\end{aligned}$$ where $\hat\gamma$ is one of the four estimators, and where the asymptotic bias $b$ and the asymptotic variance $\sigma^2$ depend on the specific estimator, the second order index $\rho_m$ and $\gamma$. In particular, the rate of convergence of the bias $A_m(n/k)$ crucially depends on the second order index $\rho_m$.
In the next two subsections, we first discuss the best achievable rate of convergence and then the asymptotic mean squared error in case the rates are the same. Finally, in the last subsection, we discuss the choice of $k$, i.e., the number of large order statistics in the POT approach or the number of blocks in the BM approach.
Rate of convergence {#subsec:rate}
-------------------
As is commonly done, we consider the rate of convergence of the root mean squared error. It is instructive to first elaborate on the case $A_m(t) \asymp t^{\rho_m}$ with $\rho_m\in(-\infty,0)$. The best attainable rate of convergence is achieved when squared bias and variance are of the same order, that is, when $$A_m^2\Big(\frac{n}k\Big) \asymp \Big(\frac{n}k\Big)^{2\rho_m} \asymp \frac1k.$$ Solving for $k$ yields $k \asymp n^{-2\rho_m/(1-2\rho_m)}$, which implies $$\text{Rate of Convergence of $\hat \gamma$} = n^{\rho_m/(1-2\rho_m)}$$ for $m\in\{{{\scriptscriptstyle\mathrm{BM}}}, {{\scriptscriptstyle\mathrm{POT}}}\}$, irrespective of whether the ML or the PWM estimator is used. For the POT approach, this result is known to hold for many other estimators of $\gamma$; see [@DehFer06] (though not for every estimator, see Table 3.1 in that reference). In fact, it can be shown that this is the optimal rate under some specific assumptions on the data generating process, see [@HalWel84]. We conjecture that the same result holds true for many other estimators relying on the BM method, though the literature does not provide sufficient theoretical results yet except for the ML and PWM estimators.
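To make the bias-variance balancing above concrete, the following short sketch (our own illustration, assuming the idealized case $A_m(t)\asymp t^{\rho_m}$) tabulates the order of the optimal $k$ and the resulting rate exponent for a few values of the second order parameter.

```python
# Variance ~ 1/k and squared bias ~ (n/k)^(2*rho); balancing the two gives
# k ~ n^(-2*rho/(1-2*rho)) and RMSE ~ n^(rho/(1-2*rho)).
def optimal_exponents(rho):
    k_exponent = -2 * rho / (1 - 2 * rho)     # k grows like n**k_exponent
    rate_exponent = rho / (1 - 2 * rho)       # RMSE decays like n**rate_exponent
    return k_exponent, rate_exponent

for rho in [-0.5, -1.0, -2.0]:
    k_exp, rate_exp = optimal_exponents(rho)
    print(f"rho = {rho:5.1f}:   k ~ n^{k_exp:.3f},   rate ~ n^{rate_exp:.3f}")
```

For instance, $\rho_m=-1$ gives $k\asymp n^{2/3}$ and the familiar $n^{-1/3}$ rate appearing in Table \[tab:mse1\] below.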
Since $\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$ and $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}$ might not be the same, the best attainable rate of convergence may be different for the BM and POT approach. Table \[tab:mse1\] provides a summary of which method results in a better rate. The case where the rates are the same is discussed in more detail in Section \[subsec:bias\] below.
2nd Order Parameters Rate POT Rate BM Better rate
------------------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ -------------
$\rho=\rho_{{\scriptscriptstyle\mathrm{BM}}}=\rho_{{{\scriptscriptstyle\mathrm{POT}}}} \in [ -1, 0)$ $n^{\rho/(1-2\rho)}$ $n^{\rho/(1-2\rho)}$ -
$\rho_{{\scriptscriptstyle\mathrm{BM}}}= -1, \rho_{{\scriptscriptstyle\mathrm{POT}}}< -1$ $n^{\rho_{{\scriptscriptstyle\mathrm{POT}}}/(1-2\rho_{{\scriptscriptstyle\mathrm{POT}}})}$ $n^{-1/3}$ POT
$\rho_{{\scriptscriptstyle\mathrm{POT}}}= -1, \rho_{{\scriptscriptstyle\mathrm{BM}}}< -1$ $n^{-1/3}$ $n^{\rho_{{\scriptscriptstyle\mathrm{BM}}}/(1-2\rho_{{\scriptscriptstyle\mathrm{BM}}})}$ BM
: Best attainable convergence rates for the BM and POT approach in case $A_m(t) \asymp t^{\rho_m}$ with $\rho_m<0$ and for typical relationships between $\rho_{{\scriptscriptstyle\mathrm{BM}}}$ and $\rho_{{\scriptscriptstyle\mathrm{POT}}}$ (see [@DreDehLi03]).[]{data-label="tab:mse1"}
Let us finally mention that the specific assumption on the function $A_m$ made above (i.e., $A_m(t) \asymp t^{\rho_m}$ with $\rho_m\in(-\infty,0)$) is not essential, see the argumentation on pages 79–80 in [@DehFer06]. Moreover, for $\rho_m=-\infty$, the convergence rate is ‘faster than $n^{-1/2+{\varepsilon}}$ for any ${\varepsilon}>0$’, and, depending on the underlying distribution, in fact could even achieve $n^{-1/2}$ (see also Remark 3.2.6 in [@DehFer06]).
Asymptotic mean squared error {#subsec:bias}
-----------------------------
As discussed in the previous subsection, if $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}\neq \rho_{{{\scriptscriptstyle\mathrm{BM}}}}$, the approach corresponding to a lower $\rho$ generically yields estimators for $\gamma$ with a faster attainable rate of convergence than the other approach. In this subsection, we consider the case $\rho_{{{\scriptscriptstyle\mathrm{POT}}}}=\rho_{{{\scriptscriptstyle\mathrm{BM}}}}$. Then both approaches, at their best attainable rate of convergence, will yield estimators of $\gamma$ with the same speed of convergence. Hence, the efficiency comparison should be made at the level of the asymptotic mean squared error (AMSE) or, more precisely, its two subcomponents: asymptotic bias and asymptotic variance. Notice that the asymptotic bias and variance depend on the specific estimator used, whence the comparison can only be performed based on some preselected estimators.
A detailed analysis of the PWM and the ML estimators under the BM and POT approach has been carried out in [@FerDeh15] and [@DomFer17], for the case $\rho_{{\scriptscriptstyle\mathrm{BM}}}= \rho_{{\scriptscriptstyle\mathrm{POT}}}\in [-1,0]$ and $\gamma\in(-0.5, 0.5)$. The results are as follows: when using the same value for $k$, being either the number of large order statistics in the POT approach or the number of blocks in the BM approach, the BM version of either ML or PWM leads to a lower asymptotic variance compared to the corresponding POT version, for all $\gamma\in(-0.5, 0.5)$. On the other hand, the (absolute) asymptotic bias is smaller for the POT versions of the two estimators, for all $(\gamma, \rho) \in (-0.5, 0.5) \times [-1,0]$.
When comparing the optimal AMSE (where optimal refers to the fact that the parameter $k$ is chosen in such a way that the AMSE for the specific estimator is minimized), it turns out that, for the ML estimator, the POT approach yields a smaller optimal AMSE. For the PWM estimator, the BM method is preferable for most combinations of $(\gamma, \rho)$. When comparing all four estimators, the combination ML-POT has the overall smallest optimal AMSE.
Threshold and block length choice
---------------------------------
Both the POT and the BM approach require a practical selection of the intermediate sequence $k=k_n$ in a sample of size $n$. In the POT approach, the choice-of-$k$ problem can be interpreted as choosing the threshold above which the POT approximation in is regarded as sufficiently accurate. Similarly, in the BM approach, $k$ is related to $r=n/k$, the block size for which the GEV approximation to the block maximum is regarded as sufficiently accurate.
The theoretical conditions that $k\to\infty$ and $k/n\to 0$ as $n\to\infty$ are of little use in guiding the practical choice. In practice, one often plots the estimates obtained for a range of values of $k$ against $k$, yielding the so-called “Hill plot” [@DreDehRes00]; the name is used even though the same device can be applied to POT or even BM estimators other than the Hill estimator. The ultimate choice is then made by taking a $k$ from the first stable region of the “Hill plot”. Nevertheless, the estimators are often rather sensitive to the choice of $k$.
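For concreteness, a minimal version of such a plot can be produced as follows (our own sketch; the simulated $t_3$ sample and the range of $k$ are purely illustrative).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.abs(rng.standard_t(df=3, size=5000))    # |t_3| data: gamma = 1/3

xs = np.sort(x)
ks = np.arange(10, 2000)
estimates = [np.mean(np.log(xs[-k:]) - np.log(xs[-k - 1])) for k in ks]  # Hill estimates

plt.plot(ks, estimates)
plt.axhline(1 / 3, linestyle="--", label="true gamma = 1/3")
plt.xlabel("k (number of upper order statistics)")
plt.ylabel("Hill estimate")
plt.legend()
plt.show()
```

A stable region at small to moderate $k$, followed by a drift as the bias kicks in, is the typical picture.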
For the POT approach, there exist a few attempts on resolving the choice of $k$ issue in a formal manner. For example, one solution is to find the optimal $k$ that minimizes the asymptotic MSE; see, e.g., [@DanDehPenDev01], [@DreKau98] and [@GuiHal01]. As an indirect solution to the problem, one may also rely on bias corrections, which typically allows for a much larger choice of $k$, see, e.g., [@GomDehRod08]. After the bias correction, the “Hill plot” usually shows a stable behavior and the estimates are less sensitive to the choice of $k$. For an extensive review on bias corrections, see [@BeiCaeGom12].
Compared to the extensive studies on the threshold choice and on bias corrections for the POT approach, there is, to the best of our knowledge, no existing literature addressing these issues for the BM approach. This may partly be explained by the fact that block sizes are often given by the problem at hand, for instance, block sizes corresponding to one year. Nevertheless, based on the recent solid theoretical advances on the BM method, the foundations are laid to explore these issues in a rigorous manner in the future.
BM and POT for Univariate Stationary Time Series {#sec:ts}
================================================
In many practical applications, the discussion from the previous section is not quite helpful: the underlying data sample is not i.i.d., but in fact a stretch of a possibly non-stationary time series. Often, by either restricting attention to a proper time horizon or by some suitable transformation, the time series can at least be assumed to be stationary.[^1] Throughout this section, we make the following generic assumption: $(X_t)_{t\in{\mathbb Z}}$ is a strictly stationary univariate time series, and the stationary cdf $F$ satisfies the domain-of-attraction condition . It is important to note that the parameters $\gamma,a_r$ and $b_r$ only depend on the stationary cdf $F$, and that, for instance, the expression of high quantiles of $F$ through these parameters continues to hold for time series. Let us begin by revisiting the arguments from Section \[sec:intro\] that eventually led to the BM- and POT method.
The POT approach for time series
--------------------------------
Recall that the POT approach is based on the sample of large order statistics denoted by $\mathcal{X}_{{\scriptscriptstyle\mathrm{POT}}}= \{X_{n-k:n}, \dots, X_{n:n}\}$. The main motivation that led us to consider this sample was the marginal limit relation . Bearing in mind that, under mild extra conditions on the serial dependence (ergodicity, mixing conditions, …), empirical moments are consistent for their theoretical counterparts, it is thus still reasonable to estimate the respective parameters by any form of moment matching, e.g., by PWM. The asymptotic variance of such estimators will however be different from the i.i.d. case in general (a consequence of central limit theorems for time series under mixing conditions).
Consider the ML-method: unlike for i.i.d. data, the sample $\mathcal{X}_{{\scriptscriptstyle\mathrm{POT}}}$ cannot be regarded as independent anymore, whence it is in general impossible to derive the (approximate) generalized Pareto likelihood of $\mathcal{X}_{{\scriptscriptstyle\mathrm{POT}}}$. As a workaround, one may ‘do as if’ the likelihood arising in the i.i.d. case were also the likelihood for the time series case (quasi-maximum likelihood), and use essentially the same ML-estimators as for the i.i.d. case. Then, since the latter estimator in fact also depends on empirical moments only, we still obtain proper asymptotic properties such as consistency and asymptotic normality.
Respective theory can be found in [@Hsi91; @ResSta98] for the Hill estimator and in [@Dre00] for a large class of estimators, including PWM and ML. Most of the estimators have the same bias as in the i.i.d. case, whereas their asymptotic variances depend on the serial dependence structure and are usually higher than those obtained in the i.i.d. case. Since the asymptotic bias shares the same explicit form, bias correction can also be performed in the same way as in the i.i.d. case; see, e.g., [@DehMerZho16].
The BM approach for time series
-------------------------------
Recall that the BM approach is based on the sample of block maxima $\mathcal{X}_{{\scriptscriptstyle\mathrm{BM}}}= \{M_{1,r}, \dots, M_{k,r}\}$, where $M_{j,r}$ denotes the maximum within the $j$th disjoint block of observations of size $r$. The main motivation in Section \[sec:intro\] that led us to consider this sample as approximately GEV-distributed was the relation $$\Pr(M_{1,r}\le a_rx+b_r) = F^r(a_rx+b_r) \approx G_{\gamma,0,1}^{{\scriptscriptstyle\mathrm{GEV}}}(x),$$ for large $r$. The first equality is not true for time series, whence more sophisticated arguments must be found for the BM method to work for time series. In fact, it can be shown that if $F$ satisfies , if $\Pr(M_{1,r}\le a_rx+b_r)$ is convergent for some $x$ and if mild mixing conditions on the serial dependence (known as $D(u_n)$-conditions) are met, then there exists a constant $\theta\in[0,1]$ such that $$\lim_{r\to\infty} \Pr(M_{1:r} \le a_r x + b_r ) = \big( G_{\gamma,0,1}^{{\scriptscriptstyle\mathrm{GEV}}}(x) \big)^\theta$$ for all $x\in{\mathbb R}$ [@Lea83]. The constant $\theta$ is called the *extremal index* and can be interpreted as capturing the tendency of extremal observations of the time series to occur in clusters. If $\theta>0$, then letting $$\begin{aligned}
\label{eq:tilde}
\tilde a_r = a_r \theta^\gamma, \qquad \tilde b_r = b_r - a_r \frac{1-\theta^\gamma}{\gamma}\end{aligned}$$ we immediately obtain that $$\begin{aligned}
\label{eq:doats}
\lim_{r\to\infty} \Pr(M_{1:r} \le \tilde a_r x + \tilde b_r ) = G_{\gamma,0,1}^{{\scriptscriptstyle\mathrm{GEV}}}(x)\end{aligned}$$ for all $x\in{\mathbb R}$. Hence, the sample $\mathcal{X}_{{\scriptscriptstyle\mathrm{BM}}}$ is approximately GEV-distributed with parameter $(\tilde a_r, \tilde b_r, \gamma)$, which can then be estimated by any method of choice. It is important to note that, unless $\theta=1$ or $\gamma=0$, $\tilde a_r $ and $\tilde b_r$ are different from $a_r$ and $b_r$. Consequently, additional steps must be taken for estimating quantiles of $F$ via , see also Section \[subsec:quantile\] below. Via , it is possible to transform between $(a_r,b_r)$ and $(\tilde a_r, \tilde b_r)$ if the extremal index $\theta$ is known or estimated. Regarding the estimation of the extremal index, a large variety of estimators has been proposed, which may itself be grouped into four categories: 1) BM-like estimators based on “blocking” techniques (), 2) POT-like estimators that rely on threshold exceedances ([@FerSeg03; @Suv07]), 3) estimators that use both principles simultaneously ([@Hsi93; @Rob09; @RobSegFer09]) and 4) estimators which, next to choosing a threshold sequence, require the choice of a *run-length* parameter [@SmiWei94; @WeiNov98].
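As a simple illustration of the first category, the following sketch (our own example; the max-autoregressive process and all tuning constants are chosen purely for illustration) computes the naive blocks estimator of $\theta$, i.e. the number of blocks containing at least one exceedance of a high threshold divided by the total number of exceedances.

```python
import numpy as np

rng = np.random.default_rng(2)

# ARMAX process X_t = max(a * X_{t-1}, (1 - a) * Z_t) with standard Frechet innovations;
# its extremal index is known to be theta = 1 - a (here 0.5).
a, n, r = 0.5, 100_000, 100
z = 1.0 / -np.log(rng.uniform(size=n))        # standard Frechet variates
x = np.empty(n)
x[0] = z[0]
for t in range(1, n):
    x[t] = max(a * x[t - 1], (1 - a) * z[t])

u = np.quantile(x, 0.99)                      # high threshold
blocks = x.reshape(-1, r)                     # n is a multiple of r here
theta_hat = np.sum(blocks.max(axis=1) > u) / np.sum(x > u)
print(f"blocks estimate of theta: {theta_hat:.2f} (true value {1 - a})")
```

More refined versions of this idea, as well as the other three categories of estimators, are discussed in the cited references.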
Since the distance between the time points at which the maxima within two successive blocks are attained is likely to be quite large, the sample $\mathcal X_{{\scriptscriptstyle\mathrm{BM}}}$ can be regarded as approximately independent. As a matter of fact, the literature on statistical theory for the BM method is mostly based on the assumption that $\mathcal X_{{\scriptscriptstyle\mathrm{BM}}}$ is a genuine i.i.d. sample from the GEV-family (see, e.g., [@PreWal80; @HosWalWoo85; @BucSeg17], among others). Two approximation errors are thereby completely ignored: the cdf is only approximately GEV, and the sample is only approximately independent. Solid theoretical results taking these errors into account are rare: [@BucSeg18] treat the ML-estimator in the heavy-tailed case ($\gamma>0$). The main conclusions are: the sample can safely be regarded as independent, but a bias term may appear which, similarly to Section \[sec:iid\], depends on the speed of convergence in . [@BucSeg18b] improve upon that estimator by using sliding blocks instead of disjoint blocks. The asymptotic variance of the estimator decreases, while the bias stays the same. Moreover, the resulting ‘Hill-Plots’ are much smoother, guiding a simpler choice for the block length parameter.
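To illustrate the difference between the two blocking schemes just mentioned, here is a small sketch (our own illustration, not code from the cited works) extracting disjoint and sliding block maxima from a univariate series.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def disjoint_block_maxima(x, r):
    """Maxima over consecutive, non-overlapping blocks of length r."""
    k = len(x) // r
    return x[: k * r].reshape(k, r).max(axis=1)

def sliding_block_maxima(x, r):
    """Maxima over all windows x[i:i+r], i = 0, ..., n - r."""
    return sliding_window_view(x, r).max(axis=1)

rng = np.random.default_rng(3)
x = rng.standard_t(df=4, size=1000)
print(disjoint_block_maxima(x, 50).shape)   # (20,): 20 disjoint block maxima
print(sliding_block_maxima(x, 50).shape)    # (951,): 951 overlapping block maxima
```

The sliding sample is much larger but strongly overlapping; as mentioned above, it reduces the asymptotic variance of the resulting estimators while leaving the bias unchanged.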
Comparison between the two methods {#subsec:compts}
----------------------------------
Let us summarize the main conceptual differences between the BM and the POT method for time series. First of all, BM and POT estimate ‘the same’ extreme value index $\gamma$, but possibly different scaling sequences $\tilde a_r, \tilde b_r$ and $a_r, b_r$. Second, the sample $\mathcal X_{{\scriptscriptstyle\mathrm{BM}}}$ can be regarded as asymptotically independent (asymptotic variances of estimators are the same as if the sample were i.i.d.), while $\mathcal X_{{\scriptscriptstyle\mathrm{POT}}}$ is serially dependent, possibly increasing asymptotic variances of estimators compared to the i.i.d. case.
Due to the lack of a general theoretical result on the BM method, a theoretical comparison on which method is more efficient along the lines of Section \[sec:iid\] seems out of reach for the moment. In particular, a relationship between the respective second order conditions controlling the bias is yet to be found. However, some insight into the merits and pitfalls of two approaches can be gained by considering the problem of estimating high quantiles and return levels.
### Estimating high quantiles {#subsec:quantile}
Recall that high quantiles of the stationary distribution can be expressed in terms of $a_r, b_r$ and $\gamma$, see . As a consequence, based on the plug-in principle, the POT method immediately yields estimators for high quantiles. On the other hand, the BM method cannot be used straightforwardly, as it commonly only provides estimators of $\tilde a_r, \tilde b_r$ and $\gamma$. Via , the latter estimators may be transformed into estimators of $a_r, b_r$ and $\gamma$ using an additional estimator of the extremal index $\theta$. It is important to note that estimators of the extremal index typically depend on the choice of one or two additional parameters, and that they are often quite variable. The POT approach therefore seems more suitable when estimating high quantiles or, more generally, parameters that only depend on the stationary distribution (such as probabilities of rare events). Recall though that estimators based on the POT approach usually suffer from a higher asymptotic variance due to the serial dependence.
### Estimating return levels {#subsec:returnlevel}
Let $F_r(x)=\Pr(M_{1:r} \le x)$. For $T \ge 1$, the $T$-return level of the sequence of block maxima is defined as the $1-1/T$ quantile of $F_r$, that is, $$\mathrm{RL}(T,r) = F_r^{\leftarrow}(1-1/T) = \inf \{ x \in {\mathbb R}: F_r(x) \ge 1 - 1/T \}.$$ Since block maxima are asymptotically independent, it will take on average $T$ blocks of size $r$ until the first such block whose maximum exceeds $\mathrm{RL}(T, r)$. Now, since $F_r$ is approximately equal to the GEV-cdf with parameters $\gamma, \tilde b_r, \tilde a_r$ for large $r$ by , we obtain that $$\mathrm{RL}(T,r) \approx \tilde b_r + \tilde a_r \frac{\{- r \log(1-1/T)\}^{-\gamma}-1}{\gamma} \approx \tilde b_r + \tilde a_r \frac{(r/T)^{-\gamma}-1}{\gamma}.$$ In comparison to the estimation of high-quantiles, see , we have now expressed the object of interest in terms of the sequences $\tilde a_r$ and $\tilde b_r$ and the extreme-value index $\gamma$. Following the discussion in the previous section, it is now the BM method which yields simpler estimators that do not require additional estimation of the extremal index. By contrast, the POT approach only results in estimators of $(a_r,b_r)$ and $\gamma$, and therefore requires a transformation to $(\tilde a_r,\tilde b_r)$ via based on an estimate of the extremal index $\theta$.
BM and POT for Multivariate Observations {#sec:mult}
========================================
Due to the lack of asymptotic results on the multivariate BM method which take the approximation error into account, a deep comparison between the BM and POT approach is not feasible yet. Within this section we try to identify the open ends that may eventually lead to such results in the future.
Let $F$ be a $d$-dimensional cdf. The basic assumption of multivariate extreme-value theory, generalizing , is as follows: suppose that there exists a non-degenerate cdf $G$ and sequences $(a_{r,j})_{r\in{\mathbb N}}, (b_{r,j})_{r\in{\mathbb N}}, j=1, \dots d,$ with $a_{r,j}>0$ such that $$\begin{aligned}
\label{eq:doamult}
\lim_{r \to\infty} \Pr\Big(\frac{\max_{i=1}^r X_{i,1}-b_{r,1}}{a_{r,1}} \le x_1 , \dots,\frac{\max_{i=1}^r X_{i,d}-b_{r,d}}{a_{r,d}} \le x_d\Big)
= G(x_1, \dots, x_d)\end{aligned}$$ for any $x_1, \dots, x_d\in{\mathbb R}$, where $\bm X_i=(X_{i,1}, \dots, X_{i,d})', i\in{\mathbb N},$ is an i.i.d. sequence from $F$, and where the marginal distributions $G_j$ of $G$, $j=1, \dots, d$, are GEV-distributions with location parameter 0, scale parameter 1 and shape parameter $\gamma_j\in{\mathbb R}$ (location 0 and scale 1 can always be reached by adapting the sequences $a_{r,j}$ and $b_{r,j}$ if necessary). The dependence between the coordinates of $G$ can be described in various equivalent ways (see, e.g., [@Res87; @BeiGoeSegTeu04; @DehFer06]): by the stable tail dependence function $L$ [@Hua92], by the exponent measure $\mu$ [@BalRes77], by the Pickands dependence function $A$ [@Pic81], by the tail copula $\Lambda$ [@SchSta06], by the spectral measure $\Phi$ [@DehRes77], by the madogram $\nu$ [@NavGuiCooDie09], or by other less popular objects. All these objects are in one-to-one correspondence, and for each of them a large variety of estimators has been proposed, both in a nonparametric way and under the assumption that the objects are parametrized by a Euclidean parameter.
In this paper, we will mainly focus on nonparametric estimation. As in the univariate case, the estimators may again be grouped into BM and POT based estimators, see Sections \[subsec:potmult\] and \[subsec:bmmult\] below. Often, estimation of the marginal parameters and of the dependence structure is treated successively. It is important to note that standard errors for estimators of the dependence structure may then be influenced by standard errors for the marginal estimation, a point which is often ignored in the literature on statistics for multivariate extremes. In fact, a phenomenon well-known in statistics for copulas [@GenSeg10] may show up: possibly completely ignoring additional information about the marginal cdfs, estimators for the dependence structure may have a lower asymptotic variance if marginal cdfs are estimated nonparametrically; see [@Buc14] for a discussion of the empirical stable tail dependence function from Section \[subsec:potmult\] below, and [@GenSeg09] for estimation of Pickands dependence function based on i.i.d. data from a bivariate extreme value distribution, Section \[subsec:bmmult\] below.
The POT method in the multivariate case {#subsec:potmult}
---------------------------------------
Suppose $\bm X_1, \dots, \bm X_n$, with $\bm X_i=(X_{i,1}, \dots, X_{i,d})'$, is an i.i.d. sample from $F$. Recall that the univariate POT method was based on the observations $\mathcal X_{{\scriptscriptstyle\mathrm{POT}}}=\{X_{n-k:n}, \dots, X_{n:n}\}$, which may be rewritten as $\mathcal X_{{\scriptscriptstyle\mathrm{POT}}}=\{X_i: \text{rank}(X_i \text{ among } X_1, \dots, X_n) \ge n-k\} $. Thus, a possible generalization to multivariate observations consists of defining $$\mathcal X_{{\scriptscriptstyle\mathrm{POT}}}= \{ \bm X_i \mid
\text{rank}(X_{i,j} \text{ among } X_{1,j}, \dots, X_{n,j}) \ge n-k \text{ for some } j=1, \dots, d\},$$ that is, $\mathcal X_{{\scriptscriptstyle\mathrm{POT}}}$ comprises all observations for which at least one coordinate is large. Any estimator defined in terms of these observations may be called an estimator based on the multivariate POT method.
As an example, consider the estimation of the so-called stable tail dependence function $L$, which is defined as $$\begin{aligned}
\label{eq:L}
L(\bm x) = \lim_{t\downarrow0} t^{-1} \Pr( F_1(X_1) > 1-tx_1 \text{ or } \dots \text{ or } F_d(X_d) > 1-tx_d),\end{aligned}$$ where $\bm x=(x_1, \dots, x_d)'\in[0,1]^d$; a limit that necessarily exists under , but may also exist for marginals $F_j$ not in any domain-of-attraction. The function $L$ can be estimated by its empirical counterpart, defined as $$\hat L(x_1, \dots, x_d) = \frac1k \sum_{i=1}^n \bm 1( \hat F_{n,1} (X_{i,1}) > 1-\tfrac{k}n x_1 \text{ or } \dots \text{ or } \hat F_{n,d} (X_{i,d}) > 1-\tfrac{k}n x_d),$$ where $\hat F_{n,j}$ denotes the empirical cdf based on the observations $X_{1,j}, \dots , X_{n,j}$; see, e.g., [@Hua92]. Since $\bm x\in [0,1]^d$, the estimator in fact only depends on the sample $\mathcal X_{{\scriptscriptstyle\mathrm{POT}}}$.
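As a concrete illustration (our own sketch; the bivariate heavy-tailed sample and the choice $k=200$ are purely for demonstration), the empirical stable tail dependence function can be computed from the marginal ranks as follows.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_stdf(data, k, x):
    """Empirical stable tail dependence function hat L(x) for x in [0, 1]^d.

    An observation contributes if at least one coordinate j has rank > n - k * x_j.
    """
    n, d = data.shape
    ranks = np.column_stack([rankdata(data[:, j]) for j in range(d)])
    exceed = ranks > n - k * np.asarray(x)      # componentwise rank comparison
    return np.sum(exceed.any(axis=1)) / k

rng = np.random.default_rng(4)
z = rng.standard_normal((5000, 2))
data = z / np.abs(rng.standard_normal((5000, 1)))   # common heavy-tailed factor, hence tail dependence
print(empirical_stdf(data, k=200, x=[1.0, 1.0]))    # value in [1, 2]; 1 = comonotone, 2 = independent tails
```

For $\bm x=(1,1)$ the quantity $\hat L(\bm x)$ equals $2$ minus an empirical measure of upper tail dependence, so values close to 1 indicate strong extremal dependence.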
Suppose the following natural second order condition quantifying the speed of convergence in is met: there exists a positive or negative function $A$ and a real-valued function $g\not \equiv 0$ such that $$\begin{aligned}
\label{eq:secorm}
\lim_{t\to\infty} \frac{t \Pr( F_1(X_1) > 1-\tfrac{x_1}t \text{ or } \dots \text{ or } F_d(X_d) > 1-\tfrac{x_d}t) - L(x_1, \dots, x_d)}{A(t)} = g(\bm x)\end{aligned}$$ uniformly in $\bm x\in[0,1]^d$. Then, under additional smoothness conditions on $L$, it can be shown that $\hat L$ is consistent and asymptotically Gaussian in terms of functional weak convergence, the variance being of order $1/k$ and the bias being of order $A(n/k)$, provided that $k=k_n\to\infty$ and $k/n\to0$ as $n\to\infty$; see, e.g., [@Hua92; @EinKraSeg12], among others. Following the discussion in Section \[sec:iid\], if we additionally assume that $A(t) \asymp t^\rho$ for some $\rho\in(-\infty,0)$, the best attainable convergence rate, achieved when squared bias and variance are balanced, is $$\text{Rate of Convergence of $\hat L(\bm x)$} = n^{\rho/(1-2\rho)}.$$ This convergence rate is in fact optimal under additional conditions on the data-generating process, see [@DreHua98]. Also note that $\hat L$ suffers from an asymptotic bias as in the univariate case, and that corresponding bias corrections for the bivariate case have been proposed in [@FouDehMer2015].
As in the univariate case, the literature on further theoretical foundations for the multivariate POT method is vast, see, e.g., [@EinDehPit01; @EinSeg09] for nonparametric estimation of the spectral measure, [@DreDeh15] for estimation of failure probabilities, or [@DehNevPen08; @EinKraSeg12] for parametric estimators, among many others.
The BM method in the multivariate case {#subsec:bmmult}
--------------------------------------
Again suppose $\bm X_1, \dots, \bm X_n$ is an i.i.d. sample from $F$. Let $r$ denote a block size, and $k=\lfloor n/r \rfloor$ the number of blocks. For $\ell=1, \dots, k$, let $\bm M_{\ell,r} =(M_{\ell,1,r}, \dots, M_{\ell,d,r})'$ denote the vector of componentwise block-maxima in the $\ell$th block of observations of size $r$ (it is worthwhile to note that $\bm M_{\ell,r}$ may be different from any $\bm X_{i}$). Any estimator defined in terms of the sample $\mathcal X_{{{\scriptscriptstyle\mathrm{BM}}}} = (\bm M_{1,r}, \dots, \bm M_{k,r})$ is called an estimator based on the BM approach.
Just as for the univariate BM method, asymptotic theory is usually formulated under the assumption that $\bm M_1, \dots, \bm M_k$ is a genuine i.i.d. sample from the limiting distribution $G$; a potential bias is completely ignored. Moreover, estimation of the marginal parameters is often disentangled from estimation of the dependence structure, with theory for the latter either developed under the assumption that marginals are completely known (which usually leads to wrong asymptotic variances), or under the assumption that marginals are estimated nonparametrically. See, for instance, [@Pic81; @CapFouGen97; @ZhaWelPen08; @GenSeg09; @GudSeg12] for nonparametric estimators and [@GenGhoRiv95; @DomEngOes16b] for parametric ones, among many others.
To the best of our knowledge, the only reference that takes the approximation error induced by the assumption of observing data from a genuine extreme-value model into account is [@BucSeg14], where the estimation of the Pickands dependence function $A$ based on the BM-method is considered. Not only is the bias treated carefully there, but the underlying observations $\bm X_1, \dots, \bm X_n$ may also possess serial dependence in the form of a stationary time series. Just like in the univariate case described above, the best attainable convergence rate of the estimator again depends on a second order condition.
Comparison between the two methods {#comparison-between-the-two-methods}
----------------------------------
Due to the lack of honest theoretical results on the BM method, not much can be said yet about which method is better in terms of, say, the rate of convergence. The missing tool is a multivariate version of Corollary A.1 in [@DreDehLi03], allowing one to move from a BM second order condition (such as the one imposed in [@BucSeg14]) to a POT second order condition as in , and vice versa. It then seems likely that similar phenomena as in the univariate case in Section \[sec:iid\] may show up.
Multivariate time series
------------------------
Moving from i.i.d. multivariate observations to multivariate strictly stationary time series induces similar phenomena as in the univariate case, whence we keep the discussion quite short. Under suitable conditions on the serial dependence, estimators based on the POT approach are still consistent and asymptotically normal, though with a possibly different asymptotic variance (this can for instance be deduced from [@DreRoo10]). Regarding the BM method, the same heuristics as in the univariate case apply: block maxima may safely be assumed as independent and as following a multivariate extreme value distribution [@BucSeg14]. The estimators based on the BM method are then also consistent and asymptotically normal with a potential bias. Similar to the discussion on the location and scale parameters in the univariate case, the objects that are estimated by POT and BM may be different but are linked by the multivariate extremal index ([@Nan94], see also Section 10.5 in [@BeiGoeSegTeu04]). Hence, following the discussion in Section \[subsec:compts\], it seems preferable to estimate quantities that only depend on the tail of the stationary distribution by the POT approach, while tail quantities similar to the univariate return levels (that also depend on the serial dependence) are preferably estimated by the BM approach. As in the univariate case, a detailed theoretical comparison does not seem to be feasible.
BM and POT for stochastic processes {#sec:process}
===================================
The BM approach for stochastic processes is based on modeling by max-stable processes, i.e., on limit models arising for block maxima taken over i.i.d. stochastic processes. Recent research has focussed on the structure and characteristics of max-stable processes, see, e.g., [@Deh84], [@GinHahVat90] and [@KabSchDeh09]; on simulating from max-stable processes, see, e.g., [@DomEyiRib13], [@DieMik15], [@DomEngOes16] and [@OesSchZhou18]; and on statistical inference based on max-stable processes, see, e.g., [@ColTaw96], [@BuiDehZho08], [@PadRibSis10] and [@HusDav14].
As mentioned in the introduction, there is a clear supply issue regarding the POT approach to stochastic process models. Early studies such as [@EinLin06] consider the estimation of marginal parameters only, or nonparametric estimation of the dependence structure [@deHaanLin2003], albeit with only weak consistency established. Recent developments on *Generalized Pareto Processes* allow for parametric estimation of the dependence structure, see, e.g., [@FerDeh14], [@ThiOpi15] and [@HusWad17]. Given this imbalance, we skip a deeper review of the BM and POT approaches for extremes of stochastic processes.
Open problems {#sec:open}
=============
Throughout this paper, we have already identified a number of open research problems, mostly related to an honest verification of the BM approach. Within the following list, we recapitulate those issues and add several further possible research questions:
- Asymptotic theory for further estimators based on the block maxima method, if possible allowing for a comparison between the imposed second order condition and those from the POT approach.
- In case the BM method yields faster attainable rates of convergence than the POT approach (Section \[subsec:rate\]): are the obtained rates optimal?
- Derive a test for which approach is preferable for a given data set ($H_0: \rho_{{\scriptscriptstyle\mathrm{BM}}}\le \rho_{{\scriptscriptstyle\mathrm{POT}}}$, or similar).
- Block length choice and bias reduction for the BM method.
- More results on the sliding block maxima method (non-heavy tailed case, multivariate case).
- A comparison of BM and POT second order conditions in the multivariate case.
- A comparison of return level/quantile estimation based on BM and POT, possibly incorporating an estimator for the extremal index.
- Extensions to stochastic processes (max-stable processes and generalized Pareto processes): theoretical results on statistical methodology are still rare, and a comparison between BM and POT is not feasible yet.
Conclusion {#sec:con}
==========
There is no winner.
[^1]: For example, for financial applications, the stationarity assumption can often be approximately guaranteed by restricting attention to a time horizon during which few macroeconomic conditions changed. Similarly, for environmental applications, this can be achieved by restricting attention to observations falling into, say, the summer months.
|
---
abstract: 'By identifying Hamiltonian flows with geodesic flows of suitably chosen Riemannian manifolds, it is possible to explain the origin of chaos in classical Newtonian dynamics and to quantify its strength. There are several possibilities to geometrize Newtonian dynamics under the action of conservative potentials and the hitherto investigated ones provide consistent results. However, it has been recently argued that endowing configuration space with the Jacobi metric is inappropriate to consistently describe the stability/instability properties of Newtonian dynamics because of the non-affine parametrization of the arc length with physical time. To the contrary, in the present paper it is shown that there is no such inconsistency and that the observed instabilities in the case of integrable systems using the Jacobi metric are artefacts.'
author:
- Loris Di Cairano
- Matteo Gori
- Marco Pettini
title: 'Coherent Riemannian-geometric description of Hamiltonian order and chaos with Jacobi metric'
---
Introduction
============
Elementary tools of Riemannian differential geometry can be successfully used to explain the origin of chaos in Hamiltonian flows or, equivalently, in Newtonian dynamics. Natural motions of Hamiltonian systems can be viewed as geodesics of the configuration-space manifold $M$ equipped with the Riemannian metric $g_J$, known as the Jacobi metric (or kinetic energy metric). The stability/instability properties of such geodesics can be investigated by means of the Jacobi–Levi-Civita (JLC) equation for geodesic spread. It has been shown that chaos in physical geodesic flows does not stem from hyperbolicity of $M$: phase space trajectories/geodesics are destabilized both by regions of negative curvature and by parametric instability caused by positive curvature varying along the geodesics [@marco; @physrep; @book]. Another remarkable fact is that the JLC equation written for a geometrization of Hamiltonian systems in an enlarged configuration space-time endowed with a metric due to Eisenhart [@eisenhart] yields the standard tangent dynamics equation commonly used in numerical computations of the largest Lyapunov exponent (LLE). These two different geometric frameworks have been proven to give the same information about order and chaos, and about the strength of chaos as well. This has been checked in the case of two-degrees-of-freedom Hamiltonian systems, for the Hénon-Heiles model and for two coupled quartic oscillators, respectively [@cerruti1996geometric; @rick], and in the case of a large number of degrees of freedom (from 150 up to 1000) [@cerruti1997lyapunov]. It has been found that the JLC equation stemming from the Jacobi metric gives exactly the same quantitative results as the tangent dynamics equation.
This notwithstanding, in Ref. [@cuervo2015non] it has been argued that the non-affine parametrization of the arc-length with time in configuration space endowed with the Jacobi metric leads to nonphysical instabilities. More precisely, the JLC equation written for the Jacobi metric seems to give chaos also for a system of harmonic oscillators. Moreover, such alleged non-physical instabilities are found to be stronger in systems with few degrees of freedom. Such an argument seems to radically exclude the use of the Jacobi metric in configuration space to consistently investigate Hamiltonian chaos; in fact, the non-affine parametrization of the arc length with respect to physical time is an unavoidable consequence of the way the Jacobi metric is derived from Maupertuis’ least action principle. Some mathematical works [@montgomery2014s; @giambo2014morse; @giambo2015normal], partly motivated by the results reported in Ref. [@cuervo2015non], have investigated the behaviour of the geodesics in configuration space endowed with the Jacobi metric near the so-called Hill’s boundaries, i.e. the regions in configuration space where $E=V(\mathbf{q})$ and where the Jacobi metric is singular ($g_{J}=0$). In particular, all these works emphasized the phenomenon of geodesic reflection near Hill’s boundaries and the relevance of these reflections to characterize periodic orbits, with, among others, an original proposal dating back to Ref. [@seifert1948periodische]. Finally, other authors [@yamaguchi2001geometric] have tackled the geometrization of Hamiltonian dynamical systems by lifting the Jacobi metric from configuration space $M$ to its cotangent bundle $T^{*}M$, i.e. to phase space. In this framework the JLC equations are rewritten as a system of first order linear differential equations on the tangent bundle $TT^{*}M$ of phase space: this allows one to identify the adequate degrees of freedom to compute the *Geometric Largest Lyapunov Exponent* (GLLE). Within this framework, the GLLE (JLC equations) and the LLE (tangent dynamics) are found to be in very good agreement in characterizing the chaotic regime (strong/weak chaos) of the Hénon-Heiles model. All these studies, together with the manifest contradiction between the outcomes of Ref. [@cuervo2015non] and those of Refs. [@cerruti1996geometric; @rick], have motivated the present work.
The present paper is organized as follows. In Section \[sec:confscal\_jacobi\] we briefly discuss some aspects of the construction of the Jacobi metric, with emphasis on the consequences of non-affine parametrization of the arc length with time for the JLC equation, and on the presence of boundaries where the metric is singular. In Section \[sec\_parallel\_transported\_frame\], the JLC equation is rewritten by introducing a parallel transported frame, for which explicit expressions are then given in Subsections \[subsec\_simulation\_parallel\_transported\_frame2\] and \[subsec\_simulation\_parallel\_transported\_frame3\] for $N=2$ and $N=3$, respectively. Then in Subsections \[simulation2\] and \[simulation3\] the results of the corresponding numerical simulations are reported for two and three (resonant) harmonic oscillators, respectively, using the most critical values of the parameters which, according to Ref. [@cuervo2015non], should yield non-physical instabilities. It is shown that this is not the case.
In Section \[sec\_concentration\_measure\], we discuss how the measure concentration phenomenon, which takes place at a large number of degrees of freedom, completely removes any problem, making it unnecessary even to resort to a parallel transported frame. Some conclusions are drawn in Section \[conclusion\].
Effects of Non-Affine Parametrization of the arc length with Jacobi Metric {#sec:confscal_jacobi}
==========================================================================
Among the different possibilities of rephrasing Newtonian dynamics in geometric terms, as reported in [@physrep; @book], the Jacobi metric in configuration space leads to the mathematically richest structure \[$(M,g_J)$ is geodesically complete in the sense of the Hopf-Rinow theorem\]. Let us consider a thorough investigation of the geometrization of Newtonian dynamics by means of $(M,g_J)$ for systems described by a Lagrangian of the form $$L(\dot{q},q)=\dfrac{1}{2} \overline{g}_{ij}(q) \dot{q}^{i} \dot{q}^j -V(q)\,\,\, ,$$ where $$\dot{q}^{i}(t)=\dfrac{\mathrm{d}q^i}{\mathrm{d}t}(t)$$ $\overline{g}_{ij}$ are the components of the kinetic energy metric on configuration space $M$ with the associated Levi-Civita connection $\overline{\nabla}$ specified by the Christoffel coefficients $\overline{\Gamma}^{i}_{jk}$, and $V(q)$ is the potential energy. It is well known that, given a chart $({U},\phi)$ on the configuration space $M$ and a curve $\gamma:I\subset\mathbb{R}\longrightarrow M$, the natural motions $\phi(\gamma(t))=\pieno{\gamma}(t)=\pieno{q}(t)$ are the class of curves that make stationary the action functional $$\label{eq:action_functional}
S[\pieno{q}(t)]=\int_{t_0}^{t_1} L(\dot{\pieno{q}}(t),\pieno{q}(t)) \mathrm{d}t$$ on the class of curves with $\pieno{q}(t_0)=\pieno{a}$ and $\pieno{q}(t_1)=\pieno{b}$ fixed, i.e. $$\label{eq:leastact_princ}
\delta S[\gamma]=0 \qquad \text{with} \qquad \gamma(t_0)=a \,\,\,\, \text{and}\,\,\,\,\, \gamma(t_1)=b \ .$$ Newton’s equations are derived from the Euler-Lagrange equations, i.e. $$\overline{\nabla}_{\dot{\gamma}}\dot{\gamma}=-\overline{\mathrm{grad}} V$$ where $\overline{\mathrm{grad}}^{i} f(q)=\overline{g}^{ik}\partial_{k} f$ is the gradient. Natural motions $\gamma(t)$ belong to a class of curves of configuration space satisfying the “physical” variational principle , therefore these curves can be identified with geodesics of configuration space, which also satisfy a variational principle, but in this case of a “geometrical” kind. In fact, geodesics $\tilde{\gamma}(s)$ of a Riemannian manifold $(M,\tilde{g})$ endowed with a metric $\tilde{g}$ are the curves that make stationary the length functional between two fixed points, i.e. $$\delta\mathit{l}[\tilde{\gamma}]=0 \qquad \text{with} \quad \mathit{l}[\tilde{\gamma}(s)]=\int_{s_{0}}^{s_{1}}\sqrt{\tilde{g}_{ij}\dfrac{\mathrm{d}q^{i}}{\mathrm{d}s}\dfrac{\mathrm{d}q^{j}}{\mathrm{d}s}}\mathrm{d}s \qquad \tilde{\gamma}(s_0)=a \quad \tilde{\gamma}(s_1)=b\,\,.$$ One possible way to identify natural motions with geodesics is provided by the introduction of the Jacobi metric on a subspace of configuration space. Let us consider the total energy function $$H(\dot{q},q)=\left(\dot{q}^i\dfrac{\partial L}{\partial \dot{q}^i}-L\right)=\dfrac{1}{2} \overline{g}_{ij}(q) \dot{q}^{i} \dot{q}^j+V(q)$$ which is obviously a conserved quantity along the natural motions, i.e. $d{H}(\dot{q}(t),q(t))/dt=0$. Since the kinetic part of the Lagrangian is a homogeneous function of degree two in $\dot{q}^i$, it follows that $H(\dot{q},q)=2W-L$ where $$W=\dfrac{1}{2}\overline{g}_{ij}\dot{q}^i\dot{q}^{j}=H(\dot{q},q)-V(q)$$ is the kinetic energy. If we consider a class of *isoenergetic* trajectories $q(t;E)$ in configuration space, that is, trajectories having the same total energy value $H(\dot{q}(t;E),q(t;E))=E$, then, since the kinetic energy is non-negative, the trajectories of the system in configuration space are confined to the region $\mathcal{M}_{V < E}=\left\{q\in M|V(q) < E\right\}$. Moreover, for isoenergetic trajectories the kinetic energy can be expressed as a function of the coordinates, i.e. $$W(q(t;E))=E-V(q)$$ thus the action functional of Eq. can be rewritten in the form $$\begin{split}
\label{eq:reduced_action}
\mathit{S}_{E}[q(t;E)]=\int_{t_0}^{t_1} L(\dot{q}(t;E),q(t;E))\,\,\, \mathrm{d}t&=\int_{t_0}^{t_1}\left[2W(q(t;E))-E\right]\,\,\,\mathrm{d}t\\
&=-(t_1-t_0)E+\int_{t_0}^{t_1}2W(q(t;E))\,\,\mathrm{d}t\,\,\,.
\end{split}$$ As we are interested in the variational principle and $t_0,t_1,E$ are fixed quantities, the first term in the last equality can be neglected. The integral in Eq. can be interpreted as a length integral in configuration space, in fact $$\label{eq:reduced_action_ds}
\begin{split}
\mathit{S}_{E}[q(t;E)]&=\int_{t_0}^{t_1}2W(q(t;E))\,\,\mathrm{d}t\\
&=\int_{t_0}^{t_1}\sqrt{2W(q(t;E))}\sqrt{2W(q(t;E))}\,\,\mathrm{d}t=\int_{t_0}^{t_1}\sqrt{2\left[E-V(q)\right]}\sqrt{\bar{g}_{ij}(q)\dot{q}^i\dot{q}^j}\,\,\mathrm{d}t=\\
&=\int_{t_0}^{t_1}\sqrt{2\left[E-V(q)\right]\bar{g}_{ij}\dot{q}^i\dot{q}^j}\,\,\mathrm{d}t=\int_{t_0}^{t_1}\sqrt{g_{ij}\dot{q}^i\dot{q}^j}\,\,\mathrm{d}t=\int_{t_0}^{t_1}\sqrt{\left(\dfrac{\mathrm{d}s}{\mathrm{d}t}\right)^2}\,\,\mathrm{d}t\\
&=\int_{s_0}^{s_1}\mathrm{d}s
\end{split}$$ where the new metric $$\label{eq:Jacobi_scaling_ch2}
g_{ij}:=2W(q) \overline{g}_{ij},\qquad W(q):=E-V(q)$$ also called the *Jacobi metric*, has been introduced, with the associated arc-length element $$\mathrm{d}s^2=g_{ij}\mathrm{d}q^{i}\mathrm{d}q^j=2\left[E-V(q)\right]\overline{g}_{ij}\mathrm{d}q^{i}\mathrm{d}q^j=4[W(q)]^2\mathrm{d}t^2\,\,.$$ A central point of the following discussion is related to the *non-affine parametrization* of the arc-length $s$ with respect to the physical time $t$, which is clearly a necessary consequence of the construction of the Jacobi metric. Moreover, we observe that the Jacobi metric is related to the kinetic energy metric $\overline{g}_{ij}$ through a *conformal rescaling* via a factor proportional to the kinetic energy $[E-V(q)]$, preserving the signature of the metric only in the interior of $\mathcal{M}_{V\leq E}$, i.e. in $\mathcal{M}_{V < E}=\mathcal{M}_{E}=\left\{q\in M|V(q)<E\right\}$, the so-called *Hill’s region*. By endowing the region $\mathcal{M}_{E}$ with the Jacobi metric $g$, *the natural motions with fixed energy $E$ are the same as geodesics $\gamma(s)$ of the manifold $(M_E,g)$*. This approach has remarkable consequences, as discussed in the following. In fact, the geometric description of Newtonian dynamics, identifying the solutions of Newton’s equations with the geodesics of suitable Riemannian manifolds, provides a powerful conceptual and mathematical framework to study the stability/instability of dynamics in terms of the stability properties of a geodesic flow, described by the geodesic spread equation relating stability/instability with geometry.
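As a minimal numerical check of these relations (our own sketch, assuming a single harmonic oscillator with unit mass and frequency, so that $\overline{g}_{ij}=\delta_{ij}$ and $V(q)=q^2/2$), one can verify along an analytic trajectory that, once the arc length is accumulated through $\mathrm{d}s=2W\,\mathrm{d}t$, the curve has unit speed with respect to the Jacobi metric.

```python
import numpy as np

# Harmonic oscillator q(t) = A cos(t) with unit mass and frequency: E = A^2 / 2.
A, E = 1.0, 0.5
t = np.linspace(0.3, np.pi - 0.3, 2000)   # stay away from turning points, where W = 0
q = A * np.cos(t)
W = E - 0.5 * q**2                        # kinetic energy along the trajectory

# Arc length of the Jacobi metric: ds/dt = 2W, integrated with the trapezoidal rule.
s = np.concatenate(([0.0], np.cumsum((W[1:] + W[:-1]) * np.diff(t))))

dq_ds = np.gradient(q, s)                 # dq/ds by finite differences
jacobi_speed = 2 * W * dq_ds**2           # g_J(dq/ds, dq/ds), should be identically 1
print(jacobi_speed[50:-50].min(), jacobi_speed[50:-50].max())   # both close to 1
```

The check necessarily degrades near the Hill's boundary $E=V(q)$, where the Jacobi metric degenerates and the parametrization by $s$ breaks down, consistently with the discussion above.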
The standard observable to define the presence of dynamical chaos and to measure its strength is the *largest Lyapunov exponent*, and, as we will see in the following, in the geometrical framework a Geometrical Lyapunov exponent can be defined. Now, a key point in the derivation of the Jacobi metric from Maupertuis’ principle is to set the relation between the physical time $t$ and the arc-length $s$ as $$\label{eq:s_to_t_jacobi_ch2}
ds^{2}=4W^2(q) dt^{2}\,.$$ It follows that a generic geodesic parametrized by the arc-length $s$, i.e. $\dot{\pieno{q}}(s)=\{\dot{q}^{i}(s)\}_{i\in[1,n]}$ has a unit velocity $$g_{J}(\dot{q}(s),\dot{q}(s))=2W\frac{dq^{i}}{ds}\delta_{ij}\frac{dq^{j}}{ds}=1$$ while if parametrized with respect to the physical time $$\label{eq:2W_of_dqdt}
\dfrac{dq^{i}}{dt}\overline{g}_{ij}\frac{dq^{j}}{dt}=2W\,.$$ So, in general, the physical time is a *non-affine parametrization* of the geodesics of the Jacobi metric. We derive in what follows the effect of the reparametrization on the geodesic equation. Let us introduce the vector fields $$Y^{i}:=\dfrac{\mathrm{d}q^i}{\mathrm{d}s}$$ and $$X^{i}:=\dfrac{\mathrm{d}\tilde{q}^i}{\mathrm{d} t}=\dfrac{\mathrm{d}s}{\mathrm{d}t}\dfrac{\mathrm{d}q^i}{\mathrm{d}s}=2W Y^{i}$$ defined along the geodesic $q^{i}(s)=q^{i}(s(t))=\tilde{q}^{i}(t)$. Using the definition of geodesic for the vector field $Y$ $$\nabla_{Y}Y=0$$ it is possible to derive the equation for the vector field $X$, noting that $Y=\alpha X$ (with $\alpha:=(2W)^{-1}$): $$\nabla_{\alpha X}\left(\alpha X\right)=\alpha \left(\nabla_{X}\alpha\right)X+\alpha^2 \nabla_{X}X=0$$ which implies $$\label{eq:geod_dt_coorfree}
\nabla_{X}X=-\nabla_{X}(\log\alpha) X=X \nabla_{X}\left(\log 2W\right)\,.$$ In a natural coordinate system $\{q^{i}\}_{i=1,...,N}$ the equation reads $$\label{eq:geod_dt_natcoor}
\frac{\mathrm{d}^{2}q^{i}}{\mathrm{d}t^{2}}+\Gamma^{i}_{jk}\frac{\mathrm{d}q^{j}}{dt}\frac{\mathrm{d}q^{k}}{dt}=\delta^{i}_{k}\dfrac{\mathrm{d}q^{k}}{\mathrm{d}t}\dfrac{\mathrm{d}q^{j}}{\mathrm{d}t}\partial_{j}\log(2W)\,.$$ The Christoffel symbols in Jacobi metric take the form $$\label{eq:ChristoffelJacobi}
\begin{split}
\Gamma^{i}_{jk}=&\overline{\Gamma}^{i}_{jk}+\dfrac{1}{2}\left[\delta^{i}_{j}\partial_{k}\log(2W)+\delta^{i}_{k}\partial_{j}\log(2W)-\overline{g}_{jk}\overline{\mathrm{grad}}^{i}\log(2W)\right]
\end{split}$$ where $\overline{\Gamma}^{i}_{jk}$ are the Christoffel symbols of the kinetic energy metric $\overline{g}_{ij}$ and $\overline{\mathrm{grad}}^{i}f:=\overline{g}^{ik}\partial_{k}f$ is the gradient with respect to the kinetic energy metric. Substituting in and using we obtain $$\overline{\nabla}_{X}X=\frac{\mathrm{d}^{2}q^{i}}{\mathrm{d}t^{2}}+\overline{\Gamma}^{i}_{jk}\frac{\mathrm{d}q^{j}}{dt}\frac{\mathrm{d}q^{k}}{dt}=\dfrac{1}{2}\overline{\mathrm{grad}}^{i} (2 W)=-\overline{\mathrm{grad}}^{i} V$$ i.e. Newton’s equations of the dynamical system. As already mentioned above, dynamical chaos can now be investigated by means of the equation for the geodesic spread describing the stability of a geodesic flow of a Riemannian manifold. The geodesic spread is measured by a vector field $J$ which locally gives the distance between nearby geodesics. This vector field evolves along a reference geodesic according to the Jacobi-Levi-Civita equation which, in components, reads $$\label{eq:JLC_general_ch2}
\dfrac{\nabla^{2}J^{i}}{ds^{2}}+R^{i}_{jkl}\dfrac{dq^{j}}{ds}J^{k}\frac{dq^{l}}{ds}=0$$ where $$R^{i}_{jkl}=\partial_{k}\Gamma^{i}_{jl}-\partial_{l}\Gamma^{i}_{jk}+\Gamma^{m}_{lj}\Gamma^{i}_{km}-\Gamma^{m}_{kj}\Gamma^{i}_{ml}$$ is the Riemann curvature tensor. In Jacobi metric we have $$\begin{split}
\dfrac{\nabla^{2} J^{i}}{ds^{2}}&=\dfrac{1}{2W}\dfrac{\nabla}{dt}\left(\frac{1}{2W}\dfrac{\nabla J^{i}}{dt}\right)\\
&=\dfrac{1}{4W^{2}}\dfrac{\nabla^{2}J^{i}}{dt^{2}}+\dfrac{1}{4W}\dfrac{\nabla J^{i}}{dt}\dfrac{d}{dt}\left(\dfrac{1}{W}\right)
\end{split}$$ which, substituted into , yields $$\label{eq:JLC_jacobi_repar}
\begin{split}
\frac{\nabla^{2}J^{i}}{ds^{2}}+R^{i}_{jkl}\frac{dq^{j}}{ds}J^{k}\frac{dq^{l}}{ds}=0\mapsto\frac{\nabla^{2}J^{i}}{dt^{2}}+R^{i}_{jkl}\frac{dq^{j}}{dt}J^{k}\frac{dq^{l}}{dt}=\frac{\nabla J^{i}}{dt}\frac{d}{dt}\ln W\,.
\end{split}$$ Every non-affine parametrization in the Jacobi metric can be reduced to a relation between the physical time and the arc-length parameter of the form $t\mapsto ds=f(q)dt$, where $f(q)$ is a function of the point, and it generates a term proportional to the derivative of $f(q)$ in each equation. Thus, the JLC equation written for the Jacobi metric is not invariant under time reparametrization. We can rephrase the main point raised by the work in Ref.[@cuervo2015non] as attributing to the right-hand side of Eq. the origin of chaotic-like instabilities even for integrable systems, that is, the origin of non-physical artifacts. In Ref. [@cuervo2015non] it has been surmised that the larger the fluctuations of the kinetic energy $W$, the more dramatic the occurrence of non-physical instabilities stemming from Eq. . Remarkably, the geometrization of Newtonian dynamics in an enlarged configuration space-time equipped with the Eisenhart metric tensor [@marco] yields the JLC equation in the form of the Tangent Dynamics Equation which is commonly used to compute the Largest Lyapunov Exponent (LLE) [@marco; @book]; therefore, the authors of Ref. [@cuervo2015non] claim that this is the only consistent geometrization of Newtonian dynamics to investigate chaos, while the Jacobi metric would be unsuitable for the same task.
Geometrical Lyapunov exponent
-----------------------------
We conclude the present Section by giving a definition of a Geometrical Lyapunov exponent. Of course, the starting point is the JLC which, in intrinsic notation, is $$\begin{split}
\nabla_{\pieno{\xi}}^{2}J+R(J,\pieno{\xi})\pieno{\xi}=0 \ .
\end{split}$$ By defining $Y=\nabla_{\nero{\xi}}J$ and $R_{\pieno{\xi}}J:=R(\pieno{\xi},J)\pieno{\xi}$ the above equation becomes $$\begin{split}
\nabla_{\nero{\xi}}J&=Y\\
\nabla_{\pieno{\xi}}Y&=R_{\pieno{\xi}}J \ .
\end{split}$$ Then, by putting $$\mathcal{A}:=
\left(
\begin{matrix}
0 && 1\!\!1 \\
R_{\pieno{\xi}} && 0
\end{matrix}
\right),\qquad\qquad \mathcal{J}:=
\left(
\begin{matrix}
J \\
Y
\end{matrix}
\right)$$ the JLC equation reads $$\label{compact_JLC}
\begin{split}
\nabla_{\nero{\xi}}\mathcal{J}=\mathcal{A}\mathcal{J} \ .
\end{split}$$ A Geometrical Lyapunov exponent can then be defined, in analogy with the definition of the standard Lyapunov exponent, once the solution of Eq. is expressed as a function of the physical time, as $$\begin{aligned}
\label{geometrical_lyapunov_exponent}
\lambda_{G}:=\lim_{t\longrightarrow\infty}\frac{1}{t} \ln\frac{\|\mathcal{J}(t)\|_{\pieno{g}}}{\|\mathcal{J}(0)\|_{\pieno{g}}} \ ,\end{aligned}$$ where the norm of $\mathcal{J}$ is $$\|\mathcal{J}(t)\|^{2}_{\pieno{g}}=2\,W(\pieno{q})\delta_{ij}\{J^{i}(t)J^{j}(t)+(\nabla_{\dot{\pieno{q}}}J)^{i}(t)(\nabla_{\dot{\pieno{q}}}J)^{j}(t)\} \ .$$ Let us note that throughout the literature, [@marco], [@book], [@cerruti1996geometric] and [@cerruti1997lyapunov], the geometrical Lyapunov exponent has been expressed as a function of the physical time $t$, whereas in [@cuervo2015non] the Geometrical Lyapunov exponent is defined as a function of the arc-length. As a final comment, let us remark that the existence of many different frameworks to rephrase Newtonian dynamics in geometric terms [@book] can lead, a priori, to different *quantitative* evaluations of the strength of chaos; on the contrary, the use of different geometric frameworks must lead to the same *qualitative* description of the stability/instability properties of the dynamics. For example, the transition from weak to strong chaos must be at least qualitatively and coherently reproduced in any geometric framework, as is actually shown for high-dimensional Hamiltonian flows in Ref. [@cerruti1997lyapunov].
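In practice, once the JLC equation has been integrated along a trajectory, the limit in can be estimated from the running value of $\frac{1}{t}\ln\left(\|\mathcal{J}(t)\|_{\pieno{g}}/\|\mathcal{J}(0)\|_{\pieno{g}}\right)$. The following minimal Python sketch (our own illustration, not taken from the cited references; the array layout and the toy data are assumptions) shows how such an estimator can be written:

```python
import numpy as np

def jacobi_norm(J, DJ, W):
    """||calJ||_g = sqrt( 2 W delta_ij (J^i J^j + (nabla J)^i (nabla J)^j) ).

    J, DJ : arrays of shape (n_steps, n_dof) with the Jacobi field and its
            covariant derivative along the flow; W : array (n_steps,) with
            the kinetic energy along the trajectory (all assumed given)."""
    return np.sqrt(2.0 * W * (np.sum(J**2, axis=1) + np.sum(DJ**2, axis=1)))

def running_geometric_lyapunov(t, J, DJ, W):
    """Running estimate lambda_G(t) = (1/t) log(||calJ(t)|| / ||calJ(0)||)."""
    norm = jacobi_norm(J, DJ, W)
    return np.log(norm[1:] / norm[0]) / t[1:]

# toy usage with fabricated data (placeholders, not a real trajectory)
t = np.linspace(0.0, 100.0, 2001)
J = np.cos(t)[:, None] * np.ones((1, 2))      # bounded, "regular-like" signal
DJ = -np.sin(t)[:, None] * np.ones((1, 2))
W = 1.0 + 0.3 * np.cos(2.0 * t)               # fluctuating kinetic energy
lam = running_geometric_lyapunov(t, J, DJ, W)
print(lam[-1])   # decays roughly like 1/t for a bounded Jacobi field
```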
Parallel Transported Frame for a system of $N$ particles in Jacobi Manifold {#sec_parallel_transported_frame}
===========================================================================
Let us now work out the JLC equation for a parallel transported orthonormal frame along a reference geodesic. The advantage of this representation with respect to the use of standard local coordinates is that, since parallel transported frames “incorporate” the reflection of the geodesics when these approach the Hill’s boundaries in configuration space, it eliminates a source of artefacts in the numerical solution of the JLC equation. In fact, the sharp reflection of geodesics close to the Hill’s boundary of a mechanical manifold would require a prohibitively high numerical precision to avoid the introduction of an error amplification mimicking chaos even for integrable systems. On the contrary, with respect to a parallel transported frame, which “incorporates” the geodesic reflection, nearby geodesics are no longer affected by the fake error amplification due to the reflection. As a matter of fact, we will show that the solutions of the JLC equation written for the Jacobi metric have the correct physical meaning also in the “pathological” cases where unphysical instabilities were found in Ref.[@cuervo2015non].
The *parallel transported frame* is built by requiring that the frame vectors $X$ have vanishing covariant derivative $\nabla_{\nero{\xi}}X$ along the geodesic flow $\nero{\xi}$. Let us introduce a reference frame $e_{i}:=\partial/\partial q^{i}$ on the tangent bundle $TM$ and the corresponding *dual frame* $\theta^{i}:=dq^{i}$ such that $\theta^{i}(e_{j})=\delta^{i}_{j}$. The Jacobi metric tensor is the multi-linear map $\pieno{g}_{J}:TM\times TM\longrightarrow{\Bbb R}$ which, written with respect to the *natural basis*, is given by $$\pieno{g}_{J}:=g_{ij}\theta^{i}\otimes \theta^{j}$$ the components of which are $$g_{ij}:=\pieno{g}(e_{i},e_{j})=2(E-V(q))\delta_{ij}$$ with the differential arc-length $$ds^{2}=g_{ij}\theta^{i}\theta^{j} \ .$$ To build a *parallel transported frame* we consider a geodesic $\gamma:I\subset{\Bbb R}\longrightarrow M$, with tangent vector field $\nero{\xi}:I\longrightarrow TM$, and the Jacobi vector field $J:I\longrightarrow TM$, such that, with respect to the natural basis $\{e_{i}\}_{i\in[1,n]}$, they are written as $$\begin{split}
J&=J^{i}e_{i}\\
\nero{\xi}&=\xi^{i}e_{i}
\end{split}$$ where the Jacobi field, by definition, verifies $$\nabla_{\nero{\xi}}^{2}J+R(J,\nero{\xi})\nero{\xi}=0
\label{JLC_chapter2}$$ We shall show that there exist reference frames parallel transported along any geodesic. Such frames exist if an orthogonal tensor field $\nero{\Omega}:I\longrightarrow SO_{N}(\gamma(I))$ is defined at each point along the geodesic flow. This tensor field allows one to define the required frame, where the basis on $TM$ is given by $\{E_{i}\}_{i=1,\cdots,N}$ and the dual frame on $T^{*}M$ by $\{\Theta^{i}\}_{i=1,\cdots,N}$; by definition of dual frame we have $\Theta^{i}(E_{j})=\delta^{i}_{j}$. Therefore, the *parallel transported frame* is defined by $$\label{def_Omega}
\begin{split}
E_{i}=\nero{\Omega}\cdot e_{i},\quad \nabla_{\nero{\xi}}E_{i}&=0,\quad \pieno{g}(E_{i},E_{j})=\delta_{ij}\\
\Theta^{i}=\theta^{i}\cdot\nero{\Omega}^{-1},\quad \nabla_{\nero{\xi}}\Theta^{i}&=0,\quad \pieno{g}^{*}(\Theta^{i},\Theta^{j})=\delta^{ij}
\end{split}$$ In the Jacobi-Levi Civita equation, written in , we can define the Ricci tensor along the geodesic flow $Ric(\nero{\xi}):\gamma(I)\longrightarrow \mathcal{T}^{1}_{1}(\gamma(I))$ by $$Ric(\nero{\xi}):=R(\cdot,\nero{\xi})\nero{\xi} \ .$$ A canonical isomorphism exists between the rank-two tensor $Ric(\nero{\xi})$ and a symmetric matrix $N\times N$, that we shall denote with $\nero{R}$. For every $\omega\in T^{*}M$ and $X\in TM$, such a matrix is given by $$\begin{split}
Ric(\nero{\xi})(X,\omega)&:=Ric(\nero{\xi})^{i}\,_{j}X^{j}\omega_{i}=\pieno{g}(Y_{\omega},\nero{R}(X))
\end{split}$$ where $Y_{\omega}:=\pieno{g}^{-1}\omega$. We note the following symmetry properties of $Ric(\nero{\xi})$ $$\begin{split}
Ric(\nero{\xi})&=R_{jkl}\,^{i}\xi^{k}\xi^{l} e_{i}\otimes\theta^{j}=R_{jkli}\xi^{k}\xi^{l} \theta^{i}\otimes\theta^{j}\\
&=R_{ilkj}\xi^{k}\xi^{l} \theta^{i}\otimes\theta^{j}=R_{ilk}\,^{j}\xi^{k}\xi^{l} e_{j}\otimes\theta^{i}
\end{split}$$ and by relabelling the indices $j\rightleftharpoons i$ and $l\rightleftharpoons k$ the symmetry of $Ric(\nero{\xi})$ is evident. Hence the associated matrix can be diagonalized at each point along the flow. Let us note that the antisymmetry of the Riemann tensor entails $$Ric(\nero{\xi})\nero{\xi}=R(\nero{\xi},\nero{\xi})\nero{\xi}=0 \ .$$ Then, the tangent vector to the geodesic $\nero{\xi}$ is an eigenvector of the matrix $\nero{R}$ associated to $Ric(\nero{\xi})$ with vanishing eigenvalue. A posteriori, one observes that for harmonic oscillators geometrized through the Jacobi metric the eigenbasis of the matrix $\nero{R}$ coincides with the parallel transported frame. In general this is not true, and we have two different orthogonal matrices, the one which diagonalises $\nero{R}$ and the one which transforms the natural basis into the parallel transported frame. Fortunately, in the present case, given the natural frame $\{e_{i}\}_{i\in[1,N]}$ on the tangent bundle $TM$, there exists an orthogonal transformation $\nero{\Omega}:I\longrightarrow SO_{N}(\gamma(I))$ such that $$E_{i}=\nero{\Omega}\cdot e_{i}$$ namely, such that it transforms the natural basis into the parallel transported frame $\{E_{i}\}_{i\in[1,N]}$ and, moreover, such that it diagonalises the matrix $\nero{R}$: $$\nero{R}_{d}=\nero{\Omega}^{-1}\circ \nero{R}\circ\nero{\Omega}$$ This allows us to write the Jacobi field with respect to such a basis, and thus $$\begin{split}
J=J^{i}e_{i}&=J^{i}(\nero{\Omega}^{-1})^{k}_{i}\nero{\Omega}^{l}_{k}e_{l}\\
&=[(\nero{\Omega}^{-1})^{k}_{i}J^{i}][\nero{\Omega}^{l}_{k}e_{l}]=\tilde{J}^{k}E_{k} \ .
\end{split}$$ Let us consider the JLC equation and proceed to substitute $J\mapsto\tilde{J}$ then $$\begin{split}
&\nabla_{\nero{\xi}}^{2}\tilde{J}+Ric(\nero{\xi})\tilde{J}=0\\
&\frac{d^{2}\tilde{J}^{i}}{ds^{2}}E_{i}+\tilde{J}^{k}Ric(\nero{\xi})^{l}\,_{k}E_{l}=0\\
&\left(\frac{d^{2}\tilde{J}^{k}}{ds^{2}}+\tilde{J}^{k}\Lambda_{k}\right)E_{k}=0\\
\end{split}$$ where now the repeated indices do not stand for summation. In this way, we obtain $N-1$ second-order differential equations for the components of $\tilde{J}$ orthogonal to the reference geodesic; the remaining equation, the one for the component along the tangent vector of the reference geodesic, corresponds to the geodesic equation. Denoting by $\tilde{J}^{1}$ this component, we have $$\begin{split}
&\frac{d^{2}\tilde{J}^{1}}{ds^{2}}=0\\
&\frac{d^{2}\tilde{J}^{k}}{ds^{2}}+\tilde{J}^{k}\Lambda_{k}=0
\label{JLC trasportata par. ogni metrica}
\end{split}$$ It is interesting to remark that the $\{\Lambda_{k}\}$ are just *sectional curvatures*, relative to the planes identified by the tangent vector and the parallel transported vectors $\{E_{k}\}_{k\in[1,N]}$, i.e. $$\Lambda_{k}:=\langle E_{k},R(E_{k},E_{1})E_{1}\rangle \ .$$ Now, passing from the arc-length parameter $s$ to the physical time $t$, the equations become $$\frac{d^{2}\tilde{J}^{k}}{dt^{2}}(t)-\frac{d}{dt}\log(W(q(t)))\frac{d\tilde{J}^{k}}{dt}(t)+4W(q(t))^{2}\tilde{J}^{k}(t)\Lambda_{k}(t)=0$$ whence $$\begin{split}
\label{before_canonical_way}
&\ddot{J}^{k}(t)-a(t)\dot{J}^{k}(t)+b_{k}(t)J^{k}(t)=0 \qquad k\in[2,N]\\
&\ddot{J}^{1}(t)-a(t)\dot{J}^{1}(t)=0\\
a(t)&:=\frac{d}{dt}\log(W(q(t)))\\
b_{k}(t)&:=4W(q(t))^{2}\Lambda_{k} \ .
\end{split}$$ The second equation in \[before\_canonical\_way\] can be immediately solved by setting $h(t):=\dot{J}^{1}(t)$ and then tackling the differential equation $$\dot{h}(t)-a(t)h(t)=0\implies h(t)\propto W(q(t))$$ giving, up to a multiplicative constant, $$J^{1}(t)=\int^{t}_{t_{0}} W(q(\eta))\,d\eta \ .$$ The other equations can be written in *canonical form*, namely, by redefining the function $\tilde{J}$ as follows $$\tilde{J}=A(t) f(t)$$ and substituting it into we obtain $$\label{Eqeffe}
\ddot{f}(t)+\dot{f}(t)\left(-a(t)+2\frac{\dot{A}(t)}{A(t)}\right)+f(t)\left(-a(t)\frac{\dot{A}(t)}{A(t)}+\frac{\ddot{A}(t)}{A(t)}+b(t)\right)=0 \ .$$ By choosing $A(t)$ such that the coefficient of $\dot f(t)$ vanishes, we get the following conditions for $A(t)$ $$\begin{split}
\frac{\dot{A}}{A}&=\frac{a(t)}{2}\\
\frac{\ddot{A}}{A}&=\frac{a(t)^{2}}{4}+\frac{\dot{a}(t)}{2}
\end{split}$$ thus giving $$\begin{split}
A(t)&=\exp\left(\frac{1}{2}\int a(t) \;dt\right)=\exp\left(\frac{1}{2}\int\frac{d}{dt}\log(W(q(t))) \;dt\right)\\
&=\exp\left(\frac{1}{2}\log(W(q(t)))\right)=\sqrt{W(q(t))} \ .
\end{split}$$ The equation becomes $$\begin{split}
\ddot{f}(t)+f(t)\left(-\frac{a(t)^{2}}{4}+\frac{\dot{a}(t)}{2}+b(t)\right)=0\\
\ddot{f}(t)+f(t)\left(-\frac{3}{4}\left(\frac{\dot{W}}{W}\right)^{2}+\frac{1}{2}\frac{\ddot{W}}{W}+b(t)\right)=0
\end{split}$$ To apply this procedure to the equations , we set $$\label{forma canonica}
\tilde{J}^{k}:=A(t)f^{k}(t)$$ and for every $k\in[2,N]$ we obtain the final form for the components of the Jacobi-Levi Civita equation $$\ddot{f}^{k}+\omega_{(k)}(t)f^{k}(t)=0$$ where the functions in the second term of the l.h.s. can be interpreted as time dependent frequencies $$\omega_{(k)}(t) =4W(q(t))^{2}\Lambda_{k}(t)-\frac{3}{4}\left(\frac{\dot{W}}{W}\right)^{2}+\frac{1}{2}\frac{\ddot{W}}{W}$$ where $$\begin{split}
\dot{W} = -\dot{V}&= - \langle\nero{\xi},\nabla^{{\Bbb R}^{n}}V\rangle_{{\Bbb R}^{n}}\\
\ddot{W} = -\ddot{V}&= - HessV(\nero{\xi},\nero{\xi}) + \|\nabla^{{\Bbb R}^{n}} V\|_{{\Bbb R}^{n}}^{2}
\end{split}$$
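All the ingredients entering the time-dependent frequencies $\omega_{(k)}(t)$ are therefore computable along a trajectory from $E$, $V$, $\nabla V$ and the Hessian of $V$. A minimal sketch of this bookkeeping (our own illustration; the quadratic potential, the sample point and the value of $\kappa$ are assumptions) is:

```python
import numpy as np

def jlc_frequency(q, qdot, E, V, grad_V, hess_V, Lambda_k):
    """omega_(k)(t) = 4 W^2 Lambda_k - (3/4)(Wdot/W)^2 + (1/2) Wddot/W,
    with W = E - V(q), Wdot = -<qdot, grad V>, and, along a Newtonian
    trajectory, Wddot = -HessV(qdot, qdot) + |grad V|^2.
    Lambda_k is the sectional curvature supplied by the caller."""
    W = E - V(q)
    g = grad_V(q)
    Wdot = -np.dot(qdot, g)
    Wddot = -qdot @ hess_V(q) @ qdot + np.dot(g, g)
    return 4.0 * W**2 * Lambda_k - 0.75 * (Wdot / W)**2 + 0.5 * Wddot / W

# example with an isotropic harmonic potential (kappa = 1 assumed)
kappa = 1.0
V = lambda q: 0.5 * kappa * np.dot(q, q)
grad_V = lambda q: kappa * q
hess_V = lambda q: kappa * np.eye(len(q))

q = np.array([0.3, -0.2, 0.1])
qdot = np.array([0.1, 0.4, -0.3])
E = 0.5 * np.dot(qdot, qdot) + V(q)
W = E - V(q)
Lambda_2 = E * kappa / (2.0 * W**3)   # eigenvalue quoted below for oscillators
print(jlc_frequency(q, qdot, E, V, grad_V, hess_V, Lambda_2))
```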
Parallel Transported Frame for a system of $2$ harmonic oscillators {#subsec_simulation_parallel_transported_frame2}
-------------------------------------------------------------------
In Refs. [@cerruti1996geometric; @rick] it has been shown that for the Hénon-Heiles model and for two coupled quartic oscillators, respectively, the geometrization through the Jacobi metric perfectly discriminates between ordered and chaotic motions by investigating the stability/instability of geodesics through the JLC equation expressed in a parallel transported frame. In this section, we are going to show that for two harmonic oscillators (of course an integrable system) the norm of the geodesic separation vector remains bounded, in spite of the fluctuations of kinetic energy which, according to the claim of Ref.[@cuervo2015non], should have entailed apparent instability of the regular motions of this system. The Hamiltonian of this system is $$H(p_{1},p_{2},q_{1},q_{2}) =\frac{1}{2}(p_{1}^{2}+p_{2}^{2})+\frac{\kappa}{2}(q_{1}^{2}+q_{2}^{2})$$ and the associated Jacobi metric, having set $W(q):=E-V(q_{1},q_{2})$, is $$\pieno{g}_{J} =2W(q)(dq^{1}\otimes dq^{1}+dq^{2}\otimes dq^{2}) \ .$$ The Ricci tensor along the geodesic flow is $$[Ric(\nero{\xi})]^{i}\,_{j}=\frac{E\kappa}{W^{2}}\left(\begin{matrix}
\xi^{2}\xi^{2} && -\xi^{1}\xi^{2}\\
-\xi^{1}\xi^{2} && \xi^{1}\xi^{1} \ .
\end{matrix}\right)$$ The eigenvectors of this matrix are $$\begin{split}
E_{1}& =(\xi^{1},\xi^{2})^{t}\\ E_{2}&=(-\xi^{2},\xi^{1})^{t}
\end{split}$$ with the corresponding eigenvalues: $$\begin{split}
\Lambda_{1}&=0\\
\Lambda_{2}&=\frac{E \kappa}{2W^{3}} \ .
\end{split}$$ With these eigenvectors the matrix for the basis transformation is simply obtained in the form $$\nero{\Omega} =\left(\begin{matrix}
\xi^{1} && -\xi^{2}\\
\xi^{2} && \xi^{1}
\end{matrix}\right)$$ so that the JLC equation for the components of the parallel transported Jacobi vector field $\tilde{J}$ is written as $$\begin{split}
&\frac{d^{2}\tilde{J}^{1}}{ds^{2}}=0\\
&\frac{d^{2}\tilde{J}^{2}}{ds^{2}}+\frac{E\kappa}{2W^{3}}\tilde{J}^{2}=0
\end{split}$$ which, written for the physical time $t$ and with the notations of the preceding Section, read $$\label{JLC Jacobi 2 dimensioni nel tempo}
\begin{split}
&\frac{d^{2}J^{1}}{dt^{2}}-\frac{\dot{W}}{W}\frac{dJ^{1}}{dt}=0\\
&\frac{d^{2}f^{2}}{dt^{2}}+\tilde{\omega}_{2}f^{2}=0
\end{split}$$ where $$\begin{split}
\tilde{\omega}_{2}(t) =\frac{2E \kappa}{W}-\frac{3}{4}\left(\frac{\dot{W}}{W}\right)^{2}+\frac{1}{2}\frac{\ddot{W}}{W}\\
\label{frequency_jacobimetric_2dimension}
\end{split}$$
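Before turning to the numerical results, a quick numerical consistency check of the expressions above may be useful. The following sketch (our own; the sample values are arbitrary, and the normalisation of $\nero{\xi}$ with respect to the Jacobi metric is our reading of the formulas, stated here as an assumption) verifies that the matrix $[Ric(\nero{\xi})]^{i}\,_{j}$ has eigenvalues $0$ and $E\kappa/(2W^{3})$:

```python
import numpy as np

kappa, E = 1.0, 1.3                       # assumed sample values
q = np.array([0.4, -0.1])                 # a point inside the Hill region
W = E - 0.5 * kappa * np.dot(q, q)        # kinetic energy W = E - V(q)

v = np.array([0.7, 0.2])                  # direction of motion (arbitrary)
xi = v / np.sqrt(2.0 * W * np.dot(v, v))  # unit norm w.r.t. g_J = 2W * delta

# [Ric(xi)]^i_j = (E*kappa/W^2) * [[xi2*xi2, -xi1*xi2], [-xi1*xi2, xi1*xi1]]
Ric = (E * kappa / W**2) * np.array([[xi[1] * xi[1], -xi[0] * xi[1]],
                                     [-xi[0] * xi[1], xi[0] * xi[0]]])

eigvals = np.sort(np.linalg.eigvalsh(Ric))
print(eigvals)                            # approx [0, E*kappa/(2*W**3)]
print(E * kappa / (2.0 * W**3))
```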
Numerical results {#simulation2}
-----------------
In Ref.[@cuervo2015non], it has been claimed that there exists a set of initial conditions for which the JLC equations written for the Jacobi metric lead to unstable solutions even in the case of two harmonic oscillators, because of the non-affine parametrization of the arc-length with respect to time and the consequent fluctuations of the kinetic energy. By starting from the general solutions for a system of two harmonic oscillators given by $$\begin{split}
\label{q1q2}
q_{1}(t)&=A_{1}\cos(\omega t+\phi_{1})\\
q_{2}(t)&=A_{2}\cos(\omega t+\phi_{2})
\end{split}$$ where $A_{1},A_{2}$ and $\phi_{1},\phi_{2}$ are determined by the initial conditions, the authors of Ref.[@cuervo2015non] used polar coordinates to rewrite the previous equations in the compact form $$\begin{split}
r^{2}(t)&=R^{2}+\Delta^{2}\cos(2\omega\,t)
\end{split}$$ where $$R^{2}=\frac{A_{1}^{2}+A_{2}^{2}}{2},\quad \Delta^{2}=\frac{A_{1}^{2}-A_{2}^{2}}{2} \ ,$$ then they have reported that every initial condition fulfilling the condition $$\label{cuervo-reyes_condition}
I = \left(\frac{\Delta}{R}\right)^{4}> \frac{4}{7}$$ yields unstable solutions. We have adopted the same initial conditions for two harmonic oscillators, but we have represented the solutions of the JLC equations with respect to a parallel transported frame, already used in [@cerruti1996geometric], and we have found that the equations with the frequencies display stable solutions. We have solved the equations by using two different conditions, both fulfilling Eq. . These are $$\begin{aligned}
I_{0}=\left(\frac{\Delta}{R}\right)^{4}&=&\frac{56}{81} \qquad \label{condition_0}\\
I_{1}=\left(\frac{\Delta}{R}\right)^{4}&=&\frac{21}{25} \qquad
\label{condition_1}\end{aligned}$$
Let us consider the condition . This is obtained by plugging through Eqs. the initial condition $$A_{1}\to \frac{1}{2},\; A_{2}\to \frac{1}{10} \left(9+2 \sqrt{14}\right),\; \phi_{1}\to 0,\; \phi_{2}\to \frac{\pi}{2}$$ into the frequency . The time variation of $\tilde{\omega}_{2}(t)$ is reported in Figure \[figura1\].
![Time dependence of the function $\tilde{\omega}_{2}(t)$ defined in , for the condition .[]{data-label="figura1"}](omega2_2dim_cond81.pdf){height="8cm" width="9cm"}
Correspondingly, the time dependence of the Geometric Lyapunov Exponent, $\lambda(t)$, reported in Figure \[2oscillatori\_jlc\_jacobi\_exp\_lyap\_geo\_I1condizione\], clearly displays the typical time dependence found for regular trajectories, that is, $\lambda(t)\sim 1/t$.
![Time dependence of the Geometric Lyapunov Exponent $\lambda(t)$ associated to the $f^{2}(t)$ solution of Eqs., for the condition . Log-Log scale is used to evidence the $1/t$ decay represented by the red line.[]{data-label="2oscillatori_jlc_jacobi_exp_lyap_geo_I1condizione"}](Lyap_Exp_2dim_cond81.pdf){height="8cm" width="9cm"}
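The two figures above can be reproduced with a few lines of code. The sketch below (our own illustration; the spring constant $\kappa=1$, hence $\omega=1$, the integration step and the total time are assumptions not stated in the text) evaluates $W$, $\dot{W}$, $\ddot{W}$ analytically along the exact trajectory , computes $\tilde{\omega}_{2}(t)$ of Eq. , integrates $\ddot{f}^{2}+\tilde{\omega}_{2}(t)f^{2}=0$ with a fourth-order Runge-Kutta scheme, and prints the running exponent, which is expected to decay roughly as $1/t$:

```python
import numpy as np

# trajectory of the two oscillators for the condition I_0 = 56/81
kappa = 1.0                                  # assumed; then omega = sqrt(kappa) = 1
omega = np.sqrt(kappa)
A1, A2 = 0.5, (9.0 + 2.0 * np.sqrt(14.0)) / 10.0
phi1, phi2 = 0.0, np.pi / 2.0
E = 0.5 * kappa * (A1**2 + A2**2)            # total energy of this solution

def kinetic(t):
    """W, Wdot, Wddot along q_i(t) = A_i cos(omega t + phi_i)."""
    q = np.array([A1 * np.cos(omega * t + phi1), A2 * np.cos(omega * t + phi2)])
    p = np.array([-A1 * omega * np.sin(omega * t + phi1),
                  -A2 * omega * np.sin(omega * t + phi2)])
    W = 0.5 * np.dot(p, p)                   # equals E - V(q) on the trajectory
    Wdot = -kappa * np.dot(q, p)             # -<qdot, grad V>
    Wddot = -kappa * np.dot(p, p) + kappa**2 * np.dot(q, q)
    return W, Wdot, Wddot

def omega2(t):
    W, Wd, Wdd = kinetic(t)
    return 2.0 * E * kappa / W - 0.75 * (Wd / W)**2 + 0.5 * Wdd / W

def rhs(t, y):                               # y = (f, fdot) for  f'' + omega2 f = 0
    return np.array([y[1], -omega2(t) * y[0]])

dt, T = 1.0e-3, 200.0
y = np.array([1.0, 0.0])
norm0 = np.linalg.norm(y)
t, lams = 0.0, []
for n in range(int(T / dt)):                 # standard RK4 step
    k1 = rhs(t, y)
    k2 = rhs(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    t += dt
    if n % 1000 == 0 and n > 0:
        lams.append(np.log(np.linalg.norm(y) / norm0) / t)

print(lams[-1])   # running exponent; expected to shrink roughly as 1/t
```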
Coming now to the second condition given in , again this is implemented by plugging through Eqs. the initial condition $$A_{1}\to \frac{1}{2},\; A_{2}\to \frac{1}{4} \left(-5+\sqrt{21}\right),\; \phi_{1}\to 0,\; \phi_{2}\to \frac{\pi}{3}$$ into the frequency .
The time variation of $\tilde{\omega}_{2}(t)$ is now reported in Figure \[figura3\]. The corresponding time variation of the Geometric Lyapunov Exponent, $\lambda(t)$, is now reported in Figure \[2oscillatori\_jlc\_jacobi\_omega2\_tilde\_I3condizione\]. Again it is found that $\lambda(t)$ decays as $1/t$, as expected for regular motions.
![Time dependence of the function $\tilde{\omega}_{2}(t)$ defined in , for the condition .[]{data-label="figura3"}](omega2_2dim_cond83.pdf){height="8cm" width="9cm"}
![Time dependence of the Geometric Lyapunov Exponent $\lambda(t)$ associated to the $f^{2}(t)$ solution of Eqs., for the condition . Log-Log scale is used to evidence the $1/t$ decay represented by the red line.[]{data-label="2oscillatori_jlc_jacobi_omega2_tilde_I3condizione"}](Lyap_Exp_2dim_cond83.pdf){height="8cm" width="9cm"}
Parallel Transported Frame for a system of $3$ harmonic oscillators {#subsec_simulation_parallel_transported_frame3}
-------------------------------------------------------------------
A relevant step forward is now obtained by considering three degrees of freedom because now the geodesic separation vector has two nontrivial components (since the component parallel to the velocity vector does not accelerate). Therefore, we consider $3$ harmonic oscillators described by the Hamiltonian $$H(p_{1},p_{2},p_{3},q_{1},q_{2},q_{3})=\frac{1}{2}(p_{1}^{2}+p_{2}^{2}+p_{3}^{2})+\frac{\kappa}{2}(q_{1}^{2}+q_{2}^{2}+q_{3}^{2})$$ and the corresponding Jacobi metric, again with $W(q):=E-V(q_{1},q_{2},q_{3})$, is $$\pieno{g}_{J}=2W(q)(dq^{1}\otimes dq^{1}+dq^{2}\otimes dq^{2}+dq^{3}\otimes dq^{3})$$ The eigenvectors of the Ricci tensor $Ric(\nero{\xi},\nero{\xi})$ are $$\begin{split}
\tilde{E}_{1}&=\nero{\xi}=\left(
\begin{matrix}
\xi^{1}\\
\xi^{2}\\
\xi^{3}
\end{matrix}
\right)\\
\tilde{E}_{2}&=\left(
\begin{matrix}
\xi^{1}(q_{2}\xi^{2}+q_{3}\xi^{3})-q_{1}(\xi^{2}\xi^{2}+\xi^{3}\xi^{3})\\
\xi^{2}(q_{1}\xi^{1}+q_{3}\xi^{3})-q_{2}(\xi^{1}\xi^{1}+\xi^{3}\xi^{3})\\
\xi^{3}(q_{1}\xi^{1}+q_{2}\xi^{2})-q_{3}(\xi^{1}\xi^{1}+\xi^{2}\xi^{2})
\end{matrix}
\right)\\
\tilde{E}_{3}&=\left(
\begin{matrix}
q_{3}\xi^{2}-q_{2}\xi^{3}\\
q_{1}\xi^{3}-q_{3}\xi^{1}\\
q_{2}\xi^{1}-q_{1}\xi^{2}
\end{matrix}
\right)
\end{split}$$ with the associated eigenvalues $$\begin{split}
\label{sectional_curvature_3dim}
\Lambda_{1}& =0\\
\Lambda_{2}& =\frac{E\kappa}{2 W^{3}}\\
\Lambda_{3}& =-\frac{\kappa}{4 W^{2}}\Big(\xi^{1}\xi^{1}(-4E+3\kappa q_{2}^{2}+3 \kappa q_{3}^{2})\\
&+\xi^{2}\xi^{2}(-4E+3\kappa q_{1}^{2}+3 \kappa q_{3}^{2})-6 \kappa q_{2}q_{3}\xi^{2}\xi^{3}\\
&+\xi^{3}\xi^{3}(-4E+3\kappa q_{1}^{2}+3 \kappa q_{2}^{2})-6 \kappa q_{1}\xi^{1}(q_{2}\xi^{2}+q_{3}\xi^{3})\Big) \ ,
\end{split}$$ respectively.
With these eigenvectors the matrix for the basis transformation now is $$\nero{\Omega} = \left(
\begin{matrix}
\xi^{1} && \xi^{1}(q_{2}\xi^{2}+q_{3}\xi^{3})-q_{1}(\xi^{2}\xi^{2}+\xi^{3}\xi^{3}) &&q_{3}\xi^{2}-q_{2}\xi^{3}\\
\xi^{2} &&\xi^{2}(q_{1}\xi^{1}+q_{3}\xi^{3})-q_{2}(\xi^{1}\xi^{1}+\xi^{3}\xi^{3}) &&q_{1}\xi^{3}-q_{3}\xi^{1}\\
\xi^{3} && \xi^{3}(q_{1}\xi^{1}+q_{2}\xi^{2})-q_{3}(\xi^{1}\xi^{1}+\xi^{2}\xi^{2}) &&q_{2}\xi^{1}-q_{1}\xi^{2}
\end{matrix}
\right) \ .$$ In order to make the parallel transported reference frame orthonormal, we have to normalise the above eigenvectors with respect to the Jacobi metric, thus for $i = 1, 2, 3$ we require $$n_{i}^{2}\pieno{g}_{J}(\tilde{E}_{i},\tilde{E}_{i})=1$$ namely $$n_{i}=\frac{1}{\sqrt{2W \delta_{\alpha\beta} \tilde{E}_{i}^{\alpha}\,\tilde{E}_{i}^{\beta}}} \ .$$ These factors are $$\begin{split}
n^{2}_{1}&=\frac{1}{2W(q)}\\
n^{2}_{2}&=\frac{1}{
(q_{2}^{2}+q_{3}^{2})(\xi^{1})^{2
}-2q_{1}q_{2}\xi^{1}\xi^{2}+(q_{1}^{2}+q_{3}^{2})(\xi^{2})^{2}-2q_{1}q_{3}\xi^{1}\xi^{3}
-2q_{2}q_{3}\xi^{2}\xi^{3}+(q_{1}^{2}+q_{2}^{2})(\xi^{3})^{2}}\\
n_{3}^{2}&=\frac{(\xi^{1})^{2}+(\xi^{2})^{2}+(\xi^{3})^{2}}{((\xi^{1})^{2}+(\xi^{2})^{2})q_{3}^{2}-2q_{1}q_{3}\xi^{1}\xi^{3}-2q_{2}\xi^{2}(q_{1}\xi^{1}+q_{3}\xi^{3})+q_{2}^{2}((\xi^{1})^{2}+(\xi^{3})^{2})+q_{1}^{2}((\xi^{2})^{2}+(\xi^{3})^{2})}
\end{split}$$ The reference frame is composed by the normalized vectors $$E_{i}=n_{i}\tilde{E}_{i}$$ and by representing the Jacobi vector field as $J=J^{i}E_{i}$, we write the Jacobi-Levi Civita equations for the three components as $$\label{JLC Jacobi 3 dimensioni}
\begin{split}
&\frac{d^{2}J^{1}}{ds^{2}}=0\\
&\frac{d^{2}J^{2}}{ds^{2}}+\Lambda_{2}J^{2}=0\\
&\frac{d^{2}J^{3}}{ds^{2}}+\Lambda_{3}J^{3}=0
\end{split}$$ Finally, passing to the physical time and by using Eq. we get $$\ddot{f}^{k}+\omega_{(k)}(t)f^{k}(t)=0$$ with, again, $$\omega_{(k)}(t)=4W(q(t))^{2}\Lambda_{k}(t)-\frac{3}{4}\left(\frac{\dot{W}}{W}\right)^{2}+\frac{1}{2}\frac{\ddot{W}}{W} \ .$$ The results reported in the following are worked out by numerically integrating the following equations $$\label{JLC Jacobi 3 dimensioni nel tempo}
\begin{split}
&\frac{d^{2}J^{1}}{dt^{2}}-\frac{\dot{W}}{W}\frac{dJ^{1}}{dt}=0\\
&\frac{d^{2}f^{2}}{dt^{2}}+\omega_{(2)}f^{2}=0\\
&\frac{d^{2}f^{3}}{dt^{2}}+\omega_{(3)}f^{3}=0
\end{split}$$ with the following expressions for $\omega_{(2)}(t)$ and $\omega_{(3)}(t)$ : $$\begin{split}
\omega_{(2)}(t)&:=\frac{2E \kappa}{W}-\frac{3}{4}\left(\frac{\dot{W}}{W}\right)^{2}+\frac{1}{2}\frac{\ddot{W}}{W}\\
\omega_{(3)}(t)&:=4W^{2}\Lambda_{3}-\frac{3}{4}\left(\frac{\dot{W}}{W}\right)^{2}+\frac{1}{2}\frac{\ddot{W}}{W}
\label{frequency_jacobimetric}
\end{split}$$
Numerical results {#simulation3}
-----------------
In Ref. [@cuervo2015non], the alleged definitive argument to rule out the use of Jacobi metric to consistently describe the stability/instability of Hamiltonian dynamics was given by considering $N$ decoupled harmonic oscillators. The claim was that non-vanishing fluctuations of kinetic energy (due to the non affine parametrization of the arc length with time) entail parametric resonance in the JLC equation mimicking chaos for an integrable system. The authors considered the solutions of this system in the form $$q_{k}(t)= \cos\left(\omega t+\theta_k \right), \quad k=[1,N] \ ,
\label{qukappa}$$ where $\theta_k = k\frac{2\pi f}{N}$, with the phases $\theta_k$ distributed on a fraction $f$ of the interval $2\pi$. It has been reported that the smaller $f$ the larger fluctuation of kinetic energy and the larger the Lyapunov exponent. The fluctuation of kinetic energy $\sqrt{\sigma}$ is given in [@cuervo2015non] by $$\label{flutuation_kineticenergy}
\sqrt{\sigma}=\left(\frac{\sin(2 \pi f)}{\sqrt{2}N \sin(2 \pi f/N)}\right) \ .$$ Notice that in the $N\to\infty$ limit, the kinetic energy fluctuation magnitude $\sqrt{\sigma}$ has a non-vanishing value, so that the authors claim that in any dimension this basic integrable system would display non-physical instabilities. In the next Section we will argue against this claim on the basis of an argument related to the concentration of measure at high dimension.
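The fluctuation levels quoted below for the various $(N,f)$ pairs follow directly from Eq. ; a short check (our own snippet) is:

```python
import numpy as np

def sigma_sqrt(N, f):
    """Kinetic-energy fluctuation sqrt(sigma) as given in the formula above."""
    return np.sin(2.0 * np.pi * f) / (np.sqrt(2.0) * N * np.sin(2.0 * np.pi * f / N))

for N, f in [(3, 0.05), (3, 0.10), (3, 0.45)]:
    print(N, f, sigma_sqrt(N, f))   # compare with the values quoted in the text
```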
As shown in the preceding Section, and before in Refs.[@cerruti1996geometric; @rick], a consistent description of order and chaos is obtained using the Jacobi metric and writing the JLC equation for a parallel transported frame, which is quite simple to find for $N=2$, while the first nontrivial extension is given by the $N=3$ case. The JLC equation for the three-dimensional case is given by Eqs. . There are three principal sectional curvatures \[Eqs. \], identified by the planes generated by the velocity vector along a geodesic, $E_{1}=\nero{\xi}$, and the parallel transported basis vectors $E_{k}$ with $k = 2, 3$. These sectional curvatures coincide with the eigenvalues of the operator $Ric(\nero{\xi},\nero{\xi})=R(\cdot,\nero{\xi})\nero{\xi}$; one of these is obviously zero because $Ric(\nero{\xi},\nero{\xi})\nero{\xi}=0$, while the other two are given by $\Lambda_{k}=\pieno{g}(E_{k},Ric(\nero{\xi},\nero{\xi})E_{k})$.
In Figure 6 of Ref.[@cuervo2015non], the largest value of $\lambda$ corresponds to a kinetic energy fluctuation level $\sqrt{\sigma}\simeq 0.7$, which can be obtained with different values of $N$ and $f$. The Geometrical Lyapunov exponent and the norm of the Jacobi vector field have been worked out by numerically integrating Eqs. for the following cases: $N=3,\ f =0.05$ corresponding to $\sqrt{\sigma(0.05)}=0.6968$; for $N=3,\ f =0.1$ corresponding to $\sqrt{\sigma(0.1)}=0.6663$; and for $N=3,\ f =0.45$ corresponding to $\sqrt{\sigma(0.45)}=0.0900$. The outcomes are reported in Figures \[Lyap\_Exp\_nu0\_05\] and \[NormJ\_nu0\_05\], in Figures \[Lyap\_Exp\_nu0\_1\] and \[NormJ\_nu0\_1\], and in Figures \[Lyap\_Exp\_nu0\_45\] and \[NormJ\_nu0\_45\], respectively.
![Time dependence of the Geometrical Lyapunov exponent $\lambda(t)$. $f=0.05$, $\sqrt{\sigma(0.05)}=0.6968$.[]{data-label="Lyap_Exp_nu0_05"}](ExpLyap_f_005.pdf){width="\textwidth"}
![Norm of the Jacobi vector field. $f=0.05$, $\sqrt{\sigma(0.05)}=0.6968$.[]{data-label="NormJ_nu0_05"}](Jacobi_field_f_005.pdf){width="\textwidth"}
![Time dependence of the Geometrical Lyapunov exponent $\lambda(t)$. $f=0.1$, $\sqrt{\sigma(0.1)}=0.6663$.[]{data-label="Lyap_Exp_nu0_1"}](ExpLyap_f_01.pdf){width="\textwidth"}
![Norm of the Jacobi vector field. $f=0.1$, $\sqrt{\sigma(0.1)}=0.6663$.[]{data-label="NormJ_nu0_1"}](Jacobi_field_f_01.pdf){width="\textwidth"}
![Time dependence of the Geometrical Lyapunov exponent $\lambda(t)$. $f=0.45$, $\sqrt{\sigma(0.45)}=0.090$.[]{data-label="Lyap_Exp_nu0_45"}](ExpLyap_f_045.pdf){width="\textwidth"}
![Norm of the Jacobi vector field. $f=0.45$, $\sqrt{\sigma(0.45)}=0.090$.[]{data-label="NormJ_nu0_45"}](Jacobi_field_f_045.pdf){width="\textwidth"}
Equations have been integrated with a fourth-order Runge-Kutta algorithm along the $q_k(t)$ given by , setting the phases $\theta_k$ uniformly distributed on a fraction $f$ of the interval $2\pi$. It is evident that the norm of the Jacobi geodesic separation vector is always bounded, consistently with the $1/t$ decay of the running value of $\lambda(t)$. The JLC equation written for the Jacobi metric and with a parallel transported frame provides the correct result: no instability of the trajectories of an integrable system is found, contrary to the claim of Ref.[@cuervo2015non].
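A minimal sketch of how such a run can be set up is reported below (our own illustration; $\kappa=1$, hence $\omega=1$, and the time grid are assumptions, and for brevity only the frequency of the $\Lambda_{2}$ component is evaluated, the $\Lambda_{3}$ component being handled analogously through Eqs. ). The Runge-Kutta loop of the two-oscillator sketch above can then be reused verbatim on $\ddot{f}^{k}+\omega_{(k)}(t)f^{k}=0$:

```python
import numpy as np

# N = 3 trajectories q_k(t) = cos(omega t + theta_k), theta_k = k*2*pi*f/N
N, f, kappa, omega = 3, 0.45, 1.0, 1.0           # kappa = 1 (hence omega = 1) assumed
theta = np.array([k * 2.0 * np.pi * f / N for k in range(1, N + 1)])
E = 0.5 * kappa * N                               # all amplitudes equal to 1

t = np.linspace(0.0, 200.0, 200001)
q = np.cos(omega * t[:, None] + theta)            # shape (time, N)
p = -omega * np.sin(omega * t[:, None] + theta)

W = 0.5 * np.sum(p**2, axis=1)                    # kinetic energy E - V(q(t))
Wdot = -kappa * np.sum(q * p, axis=1)
Wddot = -kappa * np.sum(p**2, axis=1) + kappa**2 * np.sum(q**2, axis=1)

# empirical kinetic-energy fluctuation versus the closed formula quoted above
print(np.std(W) / np.mean(W))                     # approximately equal ...
print(np.sin(2 * np.pi * f) / (np.sqrt(2.0) * N * np.sin(2 * np.pi * f / N)))

# time-dependent frequency of the Lambda_2 component
omega_2 = 2.0 * E * kappa / W - 0.75 * (Wdot / W)**2 + 0.5 * Wddot / W
print(omega_2.min(), omega_2.max())               # stays bounded for this value of f
```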
Concentration of measure of the volume occupied by accessible configurations with Jacobi metric {#sec_concentration_measure}
===============================================================================================
We know from the work in Ref.[@cerruti1997lyapunov] that the Jacobi-Levi Civita equation, written in the natural reference frame for a large number of degrees of freedom, appropriately works by producing Geometrical Lyapunov exponents in both qualitative and quantitative agreement with the standard Lyapunov exponents.
In view of the above discussed problems due to the bouncing of phase trajectories/geodesics on the Hill’s boundary, let us see why at large $N$ (large meaning just a few tens) the non physical divergencies are not found in spite of the use of the natural reference frame, in place of the parallel transported one, and in spite of the presence of kinetic energy fluctuations.
Consider a system composed of a large number of harmonic oscillators; denote by $\pieno{Q}=\{q_{1},\cdots,q_{N}\}$ the coordinates and by $\nero{P}=\{p_{1},\cdots,p_{N}\}$ the conjugate momenta. With $\kappa =1$ for simplicity, the Hamiltonian is $$H(\nero{P},\nero{Q})=\frac{\|\nero{P}\|^{2}_{{\Bbb R}^{N}}}{2}+\frac{\|\nero{Q}\|^{2}_{{\Bbb R}^{N}}}{2}=\sum_{i=1}^{N}\frac{1}{2}\left(|p_{i}|^{2}+|q_{i}|^{2}\right)$$ and the Jacobi metric in the Hill’s region $M_{E}$ is $$\pieno{g}_{J}=2 \left[E-\frac{\|\nero{Q}\|^{2}_{{\Bbb R}^{N}}}{2}\right]\delta_{ij}\;dq^{i}\otimes dq^{j} \ .$$ The associated Riemannian volume form is $$\nu_{J}=\sqrt{det\,\pieno{g}_{J}}\;dq^{1}\wedge\cdots\wedge dq^{N}$$ where the determinant of the metric is $$det\,\pieno{g}_{J}=2^{N}\left(E-\frac{\|\nero{Q}\|^{2}_{{\Bbb R}^{N}}}{2}\right)^{N}\,.$$ Therefore, the total volume of $M_{E}$ is given by the following integral $$\label{eq:VolumeME_Jacobi}
\mathcal{V}_{N}(E)=2^{N/2}\int_{M_{E}}\left(E-\frac{\|\nero{Q}\|^{2}_{{\Bbb R}^{N}}}{2}\right)^{N/2}\;dq^{1}\cdots dq^{N}$$ It is worth noticing a remarkable property of the Jacobi metric, its associated volume is proportional (up to an $N$-dependent factor) to the microcanonical ensemble measure $$\label{eq:Gibbs_microcan_pf}
\Omega_{N,\mu}(E)=\int_{\mathcal{M}_{E}} |dp_{1}\wedge\ldots \wedge dp_{N}\wedge dq^{1}\wedge\ldots \wedge dq^{N}|$$ where $\mathcal{M}_{E}=\left\{(\nero{P},\nero{Q})\in T^{*}M_{E}\,|H(\nero{P},\nero{Q})\leq E\right\}$. Due to the quadratic form of the kinetic energy, $2W=R^2$, the integral can be rewritten as $$\begin{split}
\Omega_{N,\mu}(E)&=\int_{M_{E}}\,\mathcal{A}(\mathbb{S}^{N-1})\,\,\int_{R\leq \sqrt{2(E-V(\nero{Q}))}}\,\,R^{(N-1)} dR\,\,dq^{1}\ldots dq^{N}=\\
&=\dfrac{\mathcal{A}(\mathbb{S}^{N-1})}{N} \int_{M_{E}}\,\,\left[2\left(E-V(\nero{Q})\right)\right]^{N/2}dq^{1}\ldots dq^{N}=\dfrac{\mathcal{A}(\mathbb{S}^{N-1})}{N}\,\,\mathcal{V}_{N}(E)\,\,\, ,
\end{split}$$ where $\mathcal{A}(\mathbb{S}^{N-1})$ is the area of the unitary sphere. Let us now consider the volume of $M_E$ given by integral $\mathcal{V}_{N}(E)$ and show that it *concentrates* around an $N-1$-dimensional manifold $\Sigma_{\bar{V}}=\left\{\nero{Q}\in M_{E}\,| V(\nero{Q})=\bar{V}\right\}$, in other words, the overwhelming contribution to the volume integral is given by microscopic configurations far from the boundary of $M_E$. Although it could be questionable to provide a statistical argument for an integrable system for which ergodic hypothesis does not hold, the statistical averaging is intended as an averaging over all the possible configurations compatible with the constraint $H(\nero{P},\nero{Q})=E$. It is convenient to rewrite the integral with the volume element expressed in spherical coordinates $$|dq^{1}\wedge\cdots\wedge dq^{N}|=Q^{N-1}\sin^{N-2}(\phi_{1})\sin^{N-3}(\phi_{2})\cdots\sin(\phi_{N-2})d\phi_{1}\cdots d\phi_{N-2}\,dQ$$ where $Q=\|\nero{Q}\|_{{\Bbb R}^{N}}=\sqrt{2 V(Q)}$, because integrating over the angular variables one obtains $$\begin{split}
\mathcal{V}_{N}(E)&=2^{N/2} \mathcal{A}(\mathbb{S}^{N-1})\int_{0}^{\sqrt{2E}}\,\, \exp\left[\dfrac{N}{2}\ln\left(E-\dfrac{Q^2}{2}\right)+\left(N-1\right)\ln Q\right] dQ\\
&=2^{N-1} \mathcal{A}(\mathbb{S}^{N-1})\int_{0}^{E}\,\,dV \exp\left[N \left(\dfrac{1}{2}\ln\left(E-V\right)+\left(\dfrac{1}{2}-\dfrac{1}{N}\right)\ln V\right) \right]\\
&=2^{N-1} \mathcal{A}(\mathbb{S}^{N-1}) E^{N}\int_{0}^{1}\,\,dx \exp\left[-N F(x) \right]\,\,\,
\end{split}$$ where $x=V/E$ is the relative value of the potential energy with respect to the total energy and $$F(x)=-\dfrac{1}{2}\ln\left(1-x\right)-\left(\dfrac{1}{2}-\dfrac{1}{N}\right)\ln x\,\,.$$ As we are interested in the limit of large $N$, we can apply the Laplace approximation to evaluate the previous integral, i.e. we consider the Taylor expansion around the minimum with respect to $x$ of $F$ in the interval $(0,1)$, $$\int_{0}^{1} \exp\left[-N F(x) \right]\,\, dx\approx \exp\left[-N F(\bar{x})\right]\int_{0}^{1} \exp\left[-\dfrac{(x-\bar{x})^2}{2\sigma^2} \right] \,dx\,\,\,.$$ where $\sigma^2=\left(F''(\bar{x}) N\right)^{-1}>0$. For a generic value of $N$, the solution of $F'(x)=0$ is $$\bar{x}=\dfrac{1}{2}\dfrac{N-2}{N-1}\,\,\, ,$$ which is actually a minimum since $$\label{eq:d2fdx2_concmeas}
F''(\bar{x})=\dfrac{4(N-1)^3}{N^2(N-2)}>0 \qquad \text{for} \,\, N>2 \,\,.$$ This means that the largest part of the volume $\mathcal{V}_N(E)$ is concentrated around the hypersurface at constant potential energy $\Sigma_{\bar{x}E}$, a result very close to what is expected from the virial theorem [@note].
From and $\sigma^2=\left(F''(\bar{x}) N\right)^{-1}$, it follows that the largest part of the volume ($\sim 99.7\% $) is concentrated around $\Sigma_{\bar{x}E}$ in the interval $[\bar{x}-3\sigma,\bar{x}+3\sigma]$ with $$\sigma=\dfrac{1}{2\sqrt{N}}\left[\dfrac{\left(1-\dfrac{2}{N}\right)}{\left(1-\dfrac{1}{N}\right)^{3}}\right]^{1/2}\approx \dfrac{1}{2\sqrt{N}}\,\,.$$ This statistical argument shows that, in the case of a large number of degrees of freedom, the volume of the manifold is concentrated around a submanifold, the constant potential energy hypersurface $V=E/2$, far from the boundary of the Hill region where the Jacobi metric is singular.
For example, with $N=100$, relative fluctuations of the kinetic energy within $10\%$ occur with a probability of about $68\%$, while the probability of configurations hitting the boundary ($E-V=0$) is $\sim 5.52\times 10^{-88}$.
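The orders of magnitude quoted in this Section follow from elementary evaluations of $\bar{x}$, $\sigma$ and Gaussian weights. The following sketch (our own; it uses the Gaussian approximation of the Laplace method, so the boundary estimate it returns is only indicative and does not attempt to reproduce the precise figure quoted above, which depends on the convention adopted for “hitting the boundary”) illustrates the computation:

```python
import numpy as np
from math import erf, erfc

N = 100
xbar = 0.5 * (N - 2) / (N - 1)              # location of the minimum of F(x)
Fpp = 4.0 * (N - 1)**3 / (N**2 * (N - 2))   # F''(xbar)
sigma = 1.0 / np.sqrt(N * Fpp)              # width of the Gaussian approximation
print(xbar, sigma)                          # ~0.5 and ~1/(2*sqrt(N))

# probability that x = V/E lies within one sigma of xbar
# (for N = 100 this is a ~10% relative fluctuation of the kinetic energy)
print(erf(1.0 / np.sqrt(2.0)))              # ~0.68

# Gaussian estimate of the weight more than z sigmas away from xbar,
# where z is the distance of the Hill boundary x = 1 in units of sigma
z = (1.0 - xbar) / sigma
print(z, 0.5 * erfc(z / np.sqrt(2.0)))      # many sigmas away: negligible weight
```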
Discussion {#conclusion}
==========
Even though the point raised in Ref.[@cuervo2015non] is interesting, the conclusion put forward by the authors is incorrect. The fluctuations of the kinetic energy along a trajectory/geodesic of the Jacobi metric associated with an integrable system, like a collection of harmonic oscillators, are by no means responsible for the activation of a parametric instability mimicking chaotic behaviour. When the number of degrees of freedom of a Hamiltonian system is small, the associated geodesics can often approach the boundary $\partial M_E = \{ q\in M \vert V(q)=E\}$ of the mechanical manifold $M_E$, and, in so doing, the geodesics bounce on $\partial M_E$. The sharp reflection of the geodesics on the so-called Hill’s boundaries [@montgomery2014s; @giambo2014morse; @giambo2015normal; @seifert1948periodische] is at the origin of numerical instabilities which, in principle, could perhaps be avoided only by a prohibitively high precision of the integration algorithm for the Jacobi–Levi-Civita equation describing the geodesic spread. However, throughout this paper we have shown that this problem can be fixed by choosing a parallel transported coordinate system. The stability/instability of geodesics is an intrinsic property, thus in principle independent of the choice of the coordinate system; however, not all coordinate systems are equivalent from the point of view of their numerical implementation and of the reliability of the corresponding outcomes. In fact, the sharp reflection of the geodesics by the boundaries $\partial M_E$ is accounted for by a sudden reflection of the coordinate axes of the parallel transported frames, thus separating the true geometric origin of stability/instability of geodesics from the source of numerical artefacts related to their peculiar shape.
We have then shown that, when the number of degrees of freedom increases, the probability of approaching the boundary of the corresponding mechanical manifold $M_E$ gets lower and lower and, even if at finite $N$ the kinetic energy fluctuates, this does not affect the strength of chaos measured through the outcomes of the JLC equation written for both the Jacobi and Eisenhart metrics, which are in perfect agreement, as shown in Ref.[@cerruti1997lyapunov]. Thus already for a few tens of degrees of freedom the JLC equation for $(M_E,g_J)$ written in the natural chart [@cerruti1997lyapunov; @book] $$\begin{aligned}
\frac{d^2J^k}{dt^2} &+& \frac{1}{E-V}\left(\partial_kV\delta_{ij}\frac{dq^i}{dt}
- \partial_jV\frac{dq^k}{dt}\right)\frac{dJ^j}{dt} + [\partial^2_{kj}V]\ J^j
\nonumber \\
& + & \frac{1}{E-V}\left[(\partial_kV)(\partial_jV)
-\left(\partial^2_{ij}
V+\frac{(\partial_iV)(\partial_jV)}{E-V}\right)\frac{dq^i}{dt}\frac{dq^k}{dt}
\right] J^j = 0~~.\nonumber
\label{jlc_final}\end{aligned}$$ can be safely used, with at most the exclusion of a zero-measure set of initial conditions. For very weakly coupled harmonic oscillators and $N=128$ these equations give $\lambda$ as small as $2\times 10^{-6}$.
In conclusion, the study of order and chaos of Hamiltonian flows - identified as geodesic flows of the Jacobi metric in configuration space - is legitimate and coherent, although not unique.
M. Pettini, [*Geometrical hints for a nonperturbative approach to Hamiltonian dynamics*]{}, Phys. Rev. E [**47**]{}, 828 (1993).
L. Casetti, M. Pettini, E.G.D. Cohen, [*Geometric approach to Hamiltonian dynamics and statistical mechanics*]{}, Phys. Rep. [**337**]{}, 237-342 (2000).
M. Pettini, [*Geometry and Topology in Hamiltonian Dynamics and Statistical Mechanics*]{}, IAM Series n.33, (Springer, New York, 2007).
L. P. Eisenhart, [*Dynamical Trajectories and Geodesics*]{}, Ann. of Math. (Princeton) [**30**]{}, 591 (1929).
M. Cerruti-Sola and M. Pettini, [*Geometric description of chaos in two-degrees-of-freedom Hamiltonian systems*]{}, Phys. Rev. E [**53**]{}, 179 (1996).
M. Pettini and R. Valdettaro, [*On the Riemannian description of chaotic instability in Hamiltonian dynamics*]{}, CHAOS [**5**]{}, 646 (1995).
M. Cerruti-Sola, R. Franzosi, and M. Pettini, [*Lyapunov exponents from geodesic spread in configuration space*]{}, Phys. Rev. E [**56**]{}, 4872 (1997).
E. Cuervo-Reyes and R. Movassagh, [*Non-affine geometrization can lead to non-physical instabilities*]{}, J. Phys. A: Math. and Theor. [**48**]{}, 075101 (2015).
R. Montgomery, [*Who’s afraid of the Hill boundary?*]{}, Symmetry, Integrability and Geometry: Methods and Applications [**10**]{}, 101 (2014).
R. Giambo, F. Giannoni, and P. Piccione, [*Morse theory for geodesics in singular conformal metrics*]{}, Communications in Analysis and Geometry [**22**]{}, 779 (2014).
R. Giambo, F. Giannoni, and P. Piccione, [*On the normal exponential map in singular conformal metrics*]{}, Nonlinear Analysis: Theory, Methods & Applications [**127**]{}, 35 (2015).
H. Seifert, [*Periodische bewegungen mechanischer systeme*]{}, Mathematische Zeit. [**51**]{}, 197 (1948).
Y. Yamaguchi and T. Iwai, [*Geometric approach to Lyapunov analysis in Hamiltonian dynamics*]{}, Phys. Rev. E [**64**]{}, 066206 (2001).
The exact result expected from the virial theorem, namely $\bar{x}=1/2$, is obtained considering the Boltzmann prescription for the microcanonical partition function $$\Omega_{N,Bol}(E)=\int \delta(E-H(\nero{P},\nero{Q})) \,\, |dp_{1}\wedge\ldots\wedge dp_N\wedge dq^{1}\wedge\ldots\wedge dq^{N}|\propto \int \left[2(E-V(\nero{Q}))\right]^{\frac{N}{2}-1}\,\,| dq^{1}\wedge\ldots\wedge dq^{N}|\,\,.
\nonumber$$ This fact stands on the edge of a long standing debate about the “correct” prescription for the microcanonical partition function. Such a debate is out of the scope of the present work: we just report that Boltzmann prescription gives a result more consistent with the virial theorem in the considered case.
---
abstract: 'We study the asymptotics of a Markovian system of $N \geq 3$ particles in $[0,1]^d$ in which, at each step in discrete time, the particle farthest from the current centre of mass is removed and replaced by an independent $U [0,1]^d$ random particle. We show that the limiting configuration contains $N-1$ coincident particles at a random location $\xi_N \in [0,1]^d$. A key tool in the analysis is a Lyapunov function based on the squared radius of gyration (sum of squared distances) of the points. For $d=1$ we give additional results on the distribution of the limit $\xi_N$, showing, among other things, that it gives positive probability to any nonempty interval subset of $[0,1]$, and giving a reasonably explicit description in the smallest nontrivial case, $N=3$.'
author:
- 'Michael Grinfeld[^1]'
- 'Stanislav Volkov[^2]'
- 'Andrew R. Wade[^3]'
title: 'Convergence in a multidimensional randomized [K]{}eynesian beauty contest'
---
[*Keywords:*]{} Keynesian beauty contest; radius of gyration; rank-driven process; sum of squared distances.
[*AMS 2010 Subject Classifications:*]{} 60J05 (Primary) 60D05, 60F15, 60K35, 82C22, 91A15 (Secondary)
Introduction, model, and results
================================
In a [*Keynesian beauty contest*]{}, $N$ players each guess a number, the winner being the player whose guess is closest to the mean of all the $N$ guesses; the name marks Keynes’s discussion of “those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole” [@keynes Ch. 12, §V]. Moulin [@moulin p. 72] formalized a version of the game played on a real interval, the “$p$-beauty contest”, in which the target is $p$ ($p>0$) times the mean value. See e.g. [@degr] and references therein for some recent work on game-theoretic aspects of such “contests” in economics.
In this paper we study a stochastic process based on an iterated version of the game, in which players [*randomly*]{} choose a value in $[0,1]$, and at each step the worst performer (that is, the player whose guess is farthest from the mean) is replaced by a new player; each player’s guess is fixed as soon as they enter the game, so a single new random value enters the system at each step. Analysis of this model was posed as an open problem in [@gkw p. 390]. The natural setting for our techniques is in fact a generalization in which the values live in $[0,1]^d$ and the target is the barycentre (centre of mass) of the values. We now formally describe the model and state our main results.
Let $d \in {{\mathbb N}}:= \{1,2,\ldots\}$. We use the notation ${{\cal X}}_n = ( x_1, x_2, \ldots, x_n )$ for a vector of $n$ points $x_i \in {{\mathbb R}}^d$. We write $\mu_n ({{\cal X}}_n)
:= n^{-1} \sum_{i=1}^n x_i$ for the barycentre of ${{\cal X}}_n$, and $\| \, \cdot \, \|$ for the Euclidean norm on ${{\mathbb R}}^d$. Let ${\mathop{\mathrm{ord}}}( {{\cal X}}_n ) = ( x_{(1)} , x_{(2)} , \ldots, x_{(n)})$ denote the [*barycentric order statistics*]{} of $x_1, \ldots, x_n$, so that $ \| x_{(1)} - \mu_n ({{\cal X}}_n ) \| \leq \| x_{(2)} - \mu_n ({{\cal X}}_n) \| \leq \cdots \leq \| x_{(n)} - \mu_n ({{\cal X}}_n) \|$; any ties are broken randomly. We call ${{\cal X}}^*_n :=
x_{(n)}$ the [*extreme*]{} point of ${{\cal X}}_n$, a point of $x_1, \ldots, x_n$ farthest from the barycentre. We define the [*core*]{} of ${{\cal X}}_n$ as ${{\cal X}}_n'
:= (x_{(1)}, \ldots, x_{(n-1)})$, the vector of $x_1, \ldots, x_n$ with the extreme point removed.
The Markovian model that we study is defined as follows. Fix $N \geq 3$. Start with $X_1(0), \ldots, X_N(0)$, distinct points in $[0,1]^d$, and write ${{\cal X}}_N (0) := ( X_{(1)} (0), \ldots, X_{(N)} (0) )$ for the corresponding ordered vector. One possibility is to start with a [*uniform random*]{} initial configuration, by taking $X_1(0), \ldots, X_N (0)$ to be independent $U[0,1]^d$ random variables; here and elsewhere $U [0,1]^d$ denotes the uniform distribution on $[0,1]^d$. In this uniform random initialization, all $N$ points are indeed distinct with probability 1. Given ${{\cal X}}_N(t)$, replace ${{\cal X}}^*_N(t) = X_{(N)} (t)$ by an independent $U[0,1]^d$ random variable $U_{t+1}$, so that ${{\cal X}}_N (t+1) = {\mathop{\mathrm{ord}}}( X_{(1)} (t), \ldots, X_{(N-1)} (t), U_{t+1} )$.
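A direct simulation of this Markov chain is straightforward. The following minimal sketch (our own illustration; the values of $N$, $d$, the number of steps and the random seed are arbitrary) replaces at each step the point farthest from the current barycentre by an independent $U[0,1]^d$ point and tracks the diameter of the core, which should be seen to shrink, possibly slowly, as the core points cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(points):
    """One transition: remove the point farthest from the barycentre and
    replace it by an independent U[0,1]^d point."""
    mu = points.mean(axis=0)
    dist = np.linalg.norm(points - mu, axis=1)
    extreme = np.argmax(dist)                  # ties broken arbitrarily here
    points[extreme] = rng.random(points.shape[1])
    return points

def core_diameter(points):
    """Diameter of the configuration with its extreme point removed."""
    mu = points.mean(axis=0)
    core = np.delete(points, np.argmax(np.linalg.norm(points - mu, axis=1)), axis=0)
    diffs = core[:, None, :] - core[None, :, :]
    return np.linalg.norm(diffs, axis=-1).max()

N, d, T = 5, 2, 200000
X = rng.random((N, d))                         # uniform random initial configuration
for t in range(1, T + 1):
    X = step(X)
    if t % 50000 == 0:
        print(t, core_diameter(X))             # tends to decrease as the core clusters
```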
The interesting case is when $N \geq 3$: the case $N=1$ is trivial, and the case $N=2$ is also uninteresting since at each step either point is replaced with probability $1/2$ by a $U[0,1]$ variable, so that, regardless of the initial configuration, after a finite number of steps we will have two independent $U[0,1]$ points. Our main result, Theorem \[thm1\], shows that for $N \geq 3$ all but the most extreme point of the configuration converge to a common limit.
\[thm1\] Let $d\in {{\mathbb N}}$ and $N \geq 3$. Let ${{\cal X}}_N(0)$ consist of $N$ distinct points in $[0,1]^d$. There exists a random $\xi_N := \xi_N ({{\cal X}}_N(0)) \in [0,1]^d$ such that $$\label{thm1a}
{{\cal X}}'_N(t) {{\stackrel{a.s.}{~\longrightarrow~}}}( \xi_N, \xi_N, \ldots, \xi_N) , \textrm{~~and~~} {{\cal X}}_N^*(t) - U_t {{\stackrel{a.s.}{~\longrightarrow~}}}0 ,$$ as $t \to \infty$. In particular, for $U \sim U [0,1]^d $, as $t \to \infty$, $${{\cal X}}_N (t) {{\stackrel{d}{~\longrightarrow~}}}( \xi_N, \xi_N, \ldots, \xi_N, U ) .$$
\[rem0\] Under the conditions of Theorem \[thm1\], despite the fact that ${{\cal X}}_N^* (t) - U_t \to 0$ a.s., we will see below that ${{\cal X}}_N^* (t) \neq U_t$ infinitely often a.s.
Theorem \[thm1\] is proved in Section \[sec:proof\]. Then, Section \[sec:1d\] is devoted to the one-dimensional case, where we obtain various additional results on the limit $\xi_N$. Finally, the Appendices, Sections \[appendix\] and \[appendix2\], collect some results on uniform spacings and continuity of distributional fixed-points that we use in parts of the analysis in Section \[sec:1d\].
Proof of convergence {#sec:proof}
====================
Intuitively, the evolution of the process is as follows. If, on replacement of the extreme point, the new point is the next extreme point (measured with respect to the new centre of mass), then the core is unchanged. However, if the new point is not extreme, it typically penetrates the core significantly, while a more extreme point is thrown out of the core, reducing the size of the core in some sense (we give a precise statement below). Tracking the evolution of the core, by following its centre of mass, one sees increasingly long periods of inactivity, since as the size of the core decreases changes occur less often, and moreover the magnitude of the changes decreases in step with the size of the core. The dynamics are nontrivial, but bear some resemblance to random walks with decreasing steps (see e.g. [@kr; @erdos] and references therein) as well as processes with reinforcement such as the Pólya urn (see e.g. [@pemantle] for a survey).
Our analysis will rest on a ‘Lyapunov function’ for the process, that is, a function of the configuration that possesses pertinent asymptotic properties. One may initially hope, for example, that the diameter of the point set ${{\cal X}}_N(t)$ would decrease over time, but this cannot be the case because the newly added point can be anywhere in $[0,1]^d$. What then about the diameter of ${{\cal X}}_N'(t)$, for which the extreme point is ignored? We will show later in this section that this quantity is in fact well behaved, but we have to argue somewhat indirectly: the diameter of ${{\cal X}}_N'(t)$ can increase (at least for $N$ big enough; see Remark \[rmk1\] below). However, there [*is*]{} a monotone decreasing function associated with the process, based on the sum of squared distances of a configuration, which we will use as our Lyapunov function.
For $n \in {{\mathbb N}}$ and ${{\cal X}}_n = ( x_1, x_2, \ldots, x_n ) \in {{\mathbb R}}^{dn}$, write $$\label{Gdef}
G_n ( {{\cal X}}_n ) := G_n (x_1, \ldots, x_n ) := n^{-1} \sum_{i=1}^n \sum_{j=1}^{i-1} \| x_i - x_j \|^2 = \sum_{i=1}^n \| x_i - \mu_n ( {{\cal X}}_n ) \|^2 ;$$ a detailed proof of the (elementary) final equality in (\[Gdef\]) may be found on pp. 95–96 of [@hughes], for example. We remark that $\frac{1}{n}G_n$ is the squared [*radius of gyration*]{} of $x_1, \ldots, x_n$: see e.g. [@hughes], p. 95. Note also that calculus verifies the useful variational formula $$\label{variational}
G_n (x_1, \ldots, x_n ) = \inf_{y \in {{\mathbb R}}^d} \sum_{i=1}^n \| x_i - y \|^2 .$$
For $n \geq 2$, define $$F_n ( {{\cal X}}_n ) := F_n ( x_1, \ldots, x_n ) := G_{n-1} ( {{\cal X}}_n' ) = G_{n-1} (x_{(1)}, \ldots, x_{(n-1)}) .$$
\[lem1\] Let $n \geq 2$ and ${{\cal X}}_n = ( x_1, x_2, \ldots, x_n ) \in {{\mathbb R}}^{dn}$. Then for any $x \in {{\mathbb R}}^d$, $$F_n ( x_{(1)}, \ldots, x_{(n-1)}, x ) \leq F_n ( {{\cal X}}_n ) .$$
For ease of notation, we write simply $(x_1, \ldots, x_n)$ for $(x_{(1)}, \ldots, x_{(n)})$, i.e., we relabel so that $x_j$ is the $j$th closest point to $\mu_n ( {{\cal X}}_n )$. Then ${{\cal X}}_n^* = x_n$, ${{\cal X}}'_n = (x_1,\ldots, x_{n-1})$, and $$\label{fold}
F_{\rm old} := F_n ({{\cal X}}_n ) = G_{n-1} ( x_1, \ldots, x_{n-1} ) = \sum_{i=1}^{n-1} \| x_i - \mu'_{\rm old} \|^2 ,$$ where $\mu'_{\rm old} := \mu_{n-1} ({{\cal X}}_n' )$. We compare $F_{\rm old}$ to $F_n$ evaluated on the set of points obtained by removing $x_n$ and replacing it with some $x \in {{\mathbb R}}^d$.
Write $y := \{ x_1, \ldots, x_{n-1} , x\}^*$ for the new extreme point. Then $$\label{fnew}
F_{\rm new} := F_n ( x_{1}, \ldots, x_{n-1}, x ) = \sum_{i=1}^{n-1} \| x_i - \mu'_{\rm new} \|^2
+ \| x- \mu'_{\rm new} \|^2 - \| y- \mu'_{\rm new} \|^2 ,$$ where $$\label{munew}
\mu'_{\rm new} := \frac{1}{n-1} \left( \sum_{i=1}^{n-1}x_i + x - y \right) = \mu'_{\rm old} + \frac{x-y}{n-1} .$$ Denote $\mu_{\rm new} := \mu_n (x_1, \ldots, x_{n-1}, x)$, so $$\label{eq3}
\mu'_{\rm new}=\frac{n \mu_{\rm new}}{n-1}-\frac y{n-1}.$$ From (\[fold\]), (\[fnew\]), and (\[munew\]), we obtain $$\begin{aligned}
\label{eq22}
F_{\rm new} - F_{\rm old}
& = \sum_{i=1}^{n-1} \left( \|x_i - \mu'_{\rm new} \|^2 - \|x_i - \mu'_{\rm old} \|^2 \right) + \|x - \mu'_{\rm new} \|^2 - \|y - \mu'_{\rm new} \|^2 .\end{aligned}$$ For the sum on the right-hand side of (\[eq22\]), we have that $$\begin{aligned}
\sum_{i=1}^{n-1} \left( \|x_i - \mu'_{\rm new} \|^2 - \|x_i - \mu'_{\rm old} \|^2 \right) & = \sum_{i=1}^{n-1}
\left( 2 x_i \cdot ( \mu'_{\rm old} - \mu'_{\rm new} ) + \| \mu'_{\rm new} \|^2 - \| \mu'_{\rm old} \|^2 \right) \\
& = (n-1) \left( 2 \mu'_{\rm old} \cdot ( \mu'_{\rm old} - \mu'_{\rm new} ) + \| \mu'_{\rm new} \|^2 - \| \mu'_{\rm old} \|^2 \right) \\
& = (n-1) \left( \| \mu'_{\rm old} \|^2 - 2 ( \mu'_{\rm old} \cdot \mu'_{\rm new} ) + \| \mu'_{\rm new} \|^2 \right).\end{aligned}$$ Simplifying this last expression and substituting back into (\[eq22\]) gives $F_{\rm new} - F_{\rm old}
= (n-1) \| \mu'_{\rm old} - \mu'_{\rm new} \|^2 + \|x - \mu'_{\rm new}\|^2 -
\|y - \mu'_{\rm new} \|^2$. Thus, using (\[munew\]) and then (\[eq3\]), $$\begin{aligned}
F_{\rm new} - F_{\rm old}
&
= \frac{\|x-y\|^2}{n-1}
+\|x\|^2-\|y\|^2 - 2 \mu'_{\rm new} \cdot (x-y)
\\
&= \frac{\|x\|^2+\|y\|^2-2x\cdot y}{n-1}
+\|x\|^2-\|y\|^2 - 2\left(\frac{n\mu_{\rm new}}{n-1}-\frac y{n-1}\right) \cdot (x-y). \end{aligned}$$ Hence we conclude that $$\begin{aligned}
\label{Fchange}
F_{\rm new} - F_{\rm old}
& = \frac{n}{n-1} \left( \| x \|^2 - \|y\|^2 - 2 \mu_{\rm new} \cdot (x-y) \right)\nonumber \\
&=\frac n{n-1} \left(\|x-\mu_{\rm new}\|^2-\|y-\mu_{\rm new}\|^2\right) \leq 0,\end{aligned}$$ since $y$ is, by definition, the farthest point from $\mu_{\rm new}$.
Consider $F(t) := F_N( {{\cal X}}_N (t) )$. Lemma \[lem1\] has the following immediate consequence.
\[cor1\] Let $N \geq 2$. Then $F (t+1) \leq F (t)$.
Corollary \[cor1\] shows that our Lyapunov function $F(t)$ is nonincreasing; later we show that $F(t) \to 0$ a.s. (see Lemma \[lem4\] below). First, we need to relate $F(t)$ to the [*diameter*]{} of the point set ${{\cal X}}'_N(t)$. For $n \geq 2$ and $x_1, \ldots, x_n \in {{\mathbb R}}^d$, write $$D_n ( x_1, \ldots, x_n ) := \max_{1 \leq i,j \leq n} \| x_i - x_j \|.$$
\[lem2\] Let $n \geq 2$ and $x_1, \ldots, x_n \in {{\mathbb R}}^d$. Then $$\frac{1}{2} D_n ( x_1, \ldots, x_n )^2 \leq G_n (x_1, \ldots, x_n ) \leq \frac{1}{2} (n-1) D_n ( x_1, \ldots, x_n )^2 .$$
The lower bound in Lemma \[lem2\] is sharp, and is attained by collinear configurations with two diametrically opposed points $x_i, x_j$ and all the other $n-2$ points at the midpoint $\mu_2 (x_i,x_j) = \mu_n (x_1, \ldots, x_n)$. The upper bound in Lemma \[lem2\] is not, in general, sharp; determining the sharp upper bound is a nontrivial problem. The bound $G_n (x_1, \ldots, x_n ) \leq \frac{n}{2} \left(\frac{d}{d+1} \right) D_n ( x_1, \ldots, x_n )^2$ [@wit] is also not always sharp. Witsenhausen [@wit] conjectured that the maximum is attained if and only if the points are distributed as evenly as possible among the vertices of a regular $d$-dimensional simplex of edge-length $D_n(x_1, \ldots, x_n)$; this conjecture was proved relatively recently [@pill; @bm].
Fix $x_1, \ldots, x_n \in {{\mathbb R}}^d$. For ease of notation, write $\mu = \mu_n (x_1, \ldots, x_n)$. First we prove the lower bound. For $n \geq 2$, using the second form of $G_n$ in (\[Gdef\]), $$\begin{aligned}
G_n (x_1, \ldots, x_n ) & = \sum_{i=1}^n \| x_i - \mu \|^2 \geq \| x_i - \mu \|^2 + \| x_j - \mu \|^2 ,\end{aligned}$$ where $(x_i,x_j)$ is a diameter, i.e., $D_n(x_1,\ldots,x_n ) = \| x_i - x_j \|$. By the $n=2$ case of (\[variational\]), $$\| x_i - \mu \|^2 + \| x_j - \mu \|^2 \geq 2 \| x_i - \mu_2 (x_i, x_j ) \|^2 = \frac{1}{2} \| x_i - x_j \|^2 .$$ This gives the lower bound. For the upper bound, from the first form of $G_n$ in (\[Gdef\]), $$G_n (x_1, \ldots, x_n ) \leq \frac{1}{n} \sum_{i=1}^n (i-1) D_n (x_1, \ldots, x_n)^2 ,$$ by the definition of $D_n$, which yields the result.
Let $D(t) := D_{N-1} ( {{\cal X}}_N ' (t))$.
\[rmk1\] By Lemma \[lem2\] (or (\[Gdef\])), $G_2 ( {{\cal X}}_3'(t)) = \frac{1}{2} D_2 ( {{\cal X}}_3' (t))^2$, so when $N=3$, Lemma \[lem1\] implies that $D (t+1) \leq D (t)$ a.s. as well. If $d=1$, it can be shown that $D (t)$ is nonincreasing also when $N=4$. In general, however, $D (t)$ can increase.
Let ${{\mathcal F}}_t := \sigma ( {{\cal X}}_N(0), {{\cal X}}_N(1), \ldots, {{\cal X}}_N(t) )$, the $\sigma$-algebra generated by the process up to time $t$. Let $B ( x; r)$ denote the closed Euclidean $d$-ball with centre $x \in {{\mathbb R}}^d$ and radius $r >0$. Define the events $$A_{t+1} := \{ U_{t+1} \in B ( \mu_{N-1} ( {{\cal X}}_N' (t) ) ; 3 D (t) ) \}, ~~
A'_{t+1} := \{ U_{t+1} \in B ( \mu_{N-1} ( {{\cal X}}_N' (t) ) ; D(t) /4 ) \}.$$
\[lem3\] There is an absolute constant $\gamma >0$ for which, for all $N \geq 3$ and all $t$, $$\label{events}
A'_{t+1} \subseteq \{ F(t+1) - F(t) \leq - \gamma N^{-1} F(t) \} \subseteq \{ F(t+1) - F(t) < 0 \} \subseteq A_{t+1} .$$ Moreover, there exist constants $c >0$ and $C<\infty$, depending only on $d$, for which, for all $N \geq 3$ and all $t$, a.s., $$\begin{aligned}
\label{eq5}
{{\mathbb P}}\left[ F (t+1) - F (t) \leq - \gamma N^{-1} F (t) \mid {{\mathcal F}}_t \right] & \geq
{{\mathbb P}}[ A'_{t+1} \mid {{\mathcal F}}_t ] \geq c N^{-d/2} ( F(t) )^{d/2} ; \\
\label{eq6}
{{\mathbb P}}\left[ F(t+1) - F(t) < 0 \mid {{\mathcal F}}_t \right] & \leq
{{\mathbb P}}[ A_{t+1} \mid {{\mathcal F}}_t ] \leq C ( F(t) )^{d/2} .\end{aligned}$$
For simplicity we write $X_1, \ldots, X_{N-1}$ instead of $X_{(1)} (t), \ldots, X_{(N-1)} (t)$ and $D$ instead of $D(t) = D_{N-1} (X_1, \ldots, X_{N-1})$. By definition of $D$, there exists some $i \in \{1,\ldots, N-1\}$ such that $\| \mu'_{\rm old} - X_i \| \geq D/2$, where $ \mu'_{\rm old} = \mu_{N-1} ( X_{1}, \ldots, X_{N-1} )$. Given ${{\mathcal F}}_t$, the event $A'_{t+1}$, that the new point $U := U_{t+1}$ falls in $B ( \mu'_{\rm old} ; D/4)$, has probability bounded below by $\theta_d D^d$, where $\theta_d>0$ depends only on $d$. Let $\mu_{\rm new} := \mu_N ( X_1, \ldots, X_{N-1}, U )$. Suppose that $A'_{t+1}$ occurs. Then, $$\label{eq0} \| \mu_{\rm new} - \mu'_{\rm old} \| = \frac{1}{N} \| U - \mu'_{\rm old} \| \leq \frac{D}{4N} \leq \frac{D}{12} ,$$ since $N \geq 3$. Hence, by (\[eq0\]) and the triangle inequality, $$\label{eq1}
\| U - \mu_{\rm new} \| \leq \| U - \mu'_{\rm old} \| + \| \mu_{\rm new} - \mu'_{\rm old} \|
\leq \frac{D}{4} + \frac{D}{12} = \frac{4D}{12} .$$ On the other hand, by another application of the triangle inequality and (\[eq0\]), $$\| \mu_{\rm new} - X_i \| \geq \| \mu'_{\rm old} - X_i \| - \| \mu_{\rm new} - \mu'_{\rm old} \|
\geq \frac{D}{2} - \frac{D}{12} = \frac{5D}{12} .$$ Then, by definition, the extreme point $Y := \{ X_1, \ldots, X_{N-1}, U \}^*$ satisfies $$\label{eq98}
\| Y - \mu_{\rm new} \| \geq \| \mu_{\rm new} - X_i \| \geq \frac{5D}{12} .$$ Hence from the $x=U$ case of (\[Fchange\]) with the bounds (\[eq1\]) and (\[eq98\]), we conclude that $$\label{eq2}
F (t+1) - F(t) \leq \frac{N}{N-1} \left( \left( \frac{4D}{12} \right)^2 - \left( \frac{5D}{12} \right)^2 \right) {{\mathbf 1}}(A'_{t+1} ) \leq -
\frac{9}{144} D^2 {{\mathbf 1}}(A'_{t+1} ) ,$$ for all $N \geq 3$; the first inclusion in (\[events\]) follows (with $\gamma = 9/72$) from (\[eq2\]) together with the fact that, by the second inequality in Lemma \[lem2\], $D^2 \geq 2 N^{-1} F(t)$. This in turn implies (\[eq5\]), using the fact that ${{\mathbb P}}[ A'_{t+1} \mid {{\mathcal F}}_t ] \geq \theta_d D^d$.
Next we consider the event $A_{t+1}$. Using the same notation as above, we have that $$\| \mu_{\rm new} - U \| \geq \| \mu'_{\rm old} - U \| - \| \mu_{\rm new} - \mu'_{\rm old}\|
= \left( 1 - \frac{1}{N} \right) \| \mu'_{\rm old} - U \| ,$$ by the equality in (\[eq0\]). Also, for any $k \in \{1, \ldots, N-1\}$, $$\| \mu_{\rm new} - X_k \| \leq \| \mu'_{\rm old} - X_k \| + \| \mu'_{\rm old} - \mu_{\rm new} \|
\leq D + \frac{1}{N} \| \mu'_{\rm old} - U \| ,$$ by (\[eq0\]) again. Combining these estimates we obtain, for any $k \in \{1,\ldots,N-1\}$, $$\| \mu_{\rm new} - U \| - \| \mu_{\rm new} - X_k \| \geq \left( 1 - \frac{2}{N} \right) \| \mu'_{\rm old} - U \| - D
\geq \frac{1}{3} \| \mu'_{\rm old} - U \| - D,$$ for $N \geq 3$. So in particular, $\| \mu_{\rm new} - U \| > \| \mu_{\rm new} - X_k \|$ for all $k \in \{1,\ldots, N-1\}$ provided $\| \mu'_{\rm old} - U \| > 3D$, i.e., $U \notin B ( \mu'_{\rm old} ; 3 D)$. In this case, $U$ is the extreme point among $U, X_1, \ldots, X_{N-1}$, i.e., $$\label{newextreme}
A_{t+1}^{\rm c} \subseteq \{ {{\cal X}}_N^* (t+1) = U_{t+1} \} .$$ In particular, on $A_{t+1}^{\rm c}$, $F(t+1) = F(t)$, and $F(t+1) < F(t)$ only if $A_{t+1}$ occurs, giving the final inclusion in (\[events\]). Since ${{\mathbb P}}[ A_{t+1} \mid {{\mathcal F}}_t ]$ is bounded above by $C_d D^d$ for a constant $C_d<\infty$ depending only on $d$, (\[eq6\]) follows from the first inequality in Lemma \[lem2\].
\[lem4\] Suppose that $N \geq 3$. Then, as $t\to\infty$, $F(t) \to 0$ a.s. and in $L^2$.
Let ${\varepsilon}>0$ and let $\sigma := \min \{ t \in {{\mathbb Z}_+}: F (t) \leq {\varepsilon}\}$, where ${{\mathbb Z}_+}:= \{0,1,2,\ldots\}$. Then by (\[eq5\]), there exists $\delta >0$ (depending on ${\varepsilon}$ and $N$) such that, a.s., ${{\mathbb P}}\left[ F (t+1) - F ( t ) \leq - \delta \mid {{\mathcal F}}_t \right] \geq \delta {{\mathbf 1}}{\{ t < \sigma \}}$. Hence, since $F(t+1) - F(t) \leq 0$ a.s. by Corollary \[cor1\], $$\label{eq4} {{\mathbb E}}\left[ F (t+1) - F ( t ) \mid {{\mathcal F}}_t \right] \leq - \delta^2 {{\mathbf 1}}{\{ t < \sigma \}}.$$ By Corollary \[cor1\], $F(t)$ is nonnegative and nonincreasing, and hence $F(t)$ converges a.s. as $t \to \infty$ to some nonnegative limit $F(\infty)$; the convergence also holds in $L^2$ since $F(t)$ is uniformly bounded. In particular, ${{\mathbb E}}[ F(t) ] \to {{\mathbb E}}[F (\infty) ]$. So taking expectations in (\[eq4\]) and letting $t \to \infty$ we obtain $$\limsup_{t \to \infty} \delta^2 {{\mathbb P}}[ \sigma > t ] \leq 0 ,$$ which implies that ${{\mathbb P}}[ \sigma > t] \to 0$ as $t \to \infty$. Thus $\sigma < \infty$ a.s., which together with the monotonicity of $F(t)$ (Corollary \[cor1\]) implies that $F(t) \leq {\varepsilon}$ for all $t$ sufficiently large. Since ${\varepsilon}>0$ was arbitrary, the result follows.
Recall the definition of $A_t$ and $A'_t$ from before Lemma \[lem3\]. Define $({{\mathcal F}}_t)$ stopping times $\tau_0 := 0$ and, for $n \in {{\mathbb N}}$, $\tau_n := \min \{ t > \tau_{n-1} :
A_{t} ~\textrm{occurs} \}$. Then $F (t) < F(t-1)$ can only occur if $t = \tau_n$ for some $n$. Since ${{\mathbb P}}[ A_{t+1} \mid {{\mathcal F}}_t ]$ is bounded below by a constant times $D(t)^d$, it is not hard to see that, provided $D(0) > 0$, $A_t$ occurs infinitely often, a.s., so that $\tau_n < \infty$ for all $n$.
\[lem5\] Let $N \geq 3$. There exists $\alpha >0$ such that, a.s., $D(\tau_n) \leq {{\mathrm{e}}}^{-\alpha n}$ for all $n$ sufficiently large.
We have from (\[eq2\]) and the second inequality in Lemma \[lem2\] that $$F(\tau_n ) - F(\tau_n -1) \leq - \delta F( \tau_n -1) {{\mathbf 1}}( A'_{\tau_n} ) ,$$ for some $\delta >0$. Note also that, by definition of the stopping times $\tau_n$, $F(\tau_{n} - 1) = F(\tau_{n-1})$. Hence, $${{\mathbb P}}[ F(\tau_n ) - F(\tau_{n-1} ) \leq - \delta F( \tau_{n-1}) \mid {{\mathcal F}}_{\tau_{n-1}} ]
\geq {{\mathbb P}}[ A'_{\tau_n} \mid {{\mathcal F}}_{\tau_{n-1}} ] \geq \delta ,$$ taking $\delta >0$ small enough, since, using the fact that ${{\mathbf 1}}(A_{\tau_n} ) =1$ a.s., $${{\mathbb P}}[ A'_{\tau_n} \mid {{\mathcal F}}_{\tau_{n-1}} ] = {{\mathbb E}}\left[ {{\mathbb P}}[ A'_{\tau_n} \mid {{\mathcal F}}_{\tau_n} ] {{\mathbf 1}}( A_{\tau_n} ) \mid {{\mathcal F}}_{\tau_{n-1}} \right]
= {{\mathbb E}}\left[ {{\mathbb P}}[ A'_{\tau_n} \mid A_{\tau_n} ] \mid {{\mathcal F}}_{\tau_{n-1}} \right] ,$$ where by definition of $A_t$ and $A'_t$, ${{\mathbb P}}[ A'_{\tau_n} \mid A_{\tau_n} ]$ is uniformly positive. Since $F(t+1) - F(t) \leq 0$ a.s. (by Corollary \[cor1\]) it follows that $${{\mathbb E}}\left[ F (\tau_{n}) - F(\tau_{n-1}) \mid {{\mathcal F}}_{\tau_{n-1}} \right] \leq -\delta^2 F(\tau_{n-1}) .$$ Taking expectations, we obtain ${{\mathbb E}}[ F (\tau_{n} )] \leq (1 -\delta^2 ) {{\mathbb E}}[ F ( \tau_{n-1} ) ]$, which implies that ${{\mathbb E}}[ F (\tau_n ) ] = O ( {{\mathrm{e}}}^{-cn})$, for some $c>0$ depending on $\delta$. Then by Markov’s inequality, ${{\mathbb P}}[ F( \tau_n ) \geq {{\mathrm{e}}}^{-cn/2} ] = O ({{\mathrm{e}}}^{-cn/2})$, which implies that $F(\tau_n) = O ( {{\mathrm{e}}}^{-cn/2} )$, a.s., by the Borel–Cantelli lemma. Then the first inequality in Lemma \[lem2\] gives the result.
The proof of Lemma \[lem5\] shows that ${{\mathbb P}}[ A'_{\tau_n} \mid {{\mathcal F}}_{\tau_{n-1}} ]$ is uniformly positive, so Lévy’s extension of the Borel–Cantelli lemma, with the fact that $\tau_n <\infty$ a.s. for all $n$, shows that $A'_t$ occurs for infinitely many $t$, a.s. With the proof of Lemma \[lem3\], this shows that ${{\cal X}}_N^* (t) \neq U_t$ infinitely often, as claimed in Remark \[rem0\].
Now we are almost ready to complete the proof of Theorem \[thm1\]. We state the main step in the remaining argument as the first part of the next lemma; the second part of the lemma will be needed in Section \[support\] below. For ${\varepsilon}>0$, define the stopping time $\nu_{\varepsilon}:=
\min \{ t \in {{\mathbb N}}: F ( t ) < {\varepsilon}^2 \}$; for any ${\varepsilon}>0$, $\nu_{\varepsilon}<\infty$ a.s., by Lemma \[lem4\].
\[lem7\] Let $N \geq 3$. Then there exists $\xi_N \in [0,1]^d$ such that $\mu_{N-1} ( {{\cal X}}'_N (t) ) \to \xi_N$ a.s. and in $L^2$ as $t \to \infty$. Moreover, there exists an absolute constant $C$ such that for any ${\varepsilon}>0$, and any $t_0 \in {{\mathbb N}}$, on $\{ \nu_{\varepsilon}\leq t_0 \}$, a.s., $${{\mathbb E}}\Big[ \max_{t \geq t_0} \left\| \mu_{N-1} ( {{\cal X}}'_N (t) ) - \mu_{N-1} ( {{\cal X}}'_N (t_0)) \right\|
\mid {{\mathcal F}}_{t_0} \Big] \leq C {\varepsilon}.$$
Let $\mu' (t) := \mu_{N-1} ( {{\cal X}}'_N(t) )$. Observe that for $N \geq 3$, ${{\cal X}}'_N (t)$ and ${{\cal X}}'_N(t-1)$ have at least one point in common; choose one such point, and call it $Z(t)$. Then $\mu' (t) \in {\mathop{\mathrm{hull}}}{{\cal X}}_N'(t) \subseteq {\mathop{\mathrm{hull}}}{{\cal X}}_N (t)$, where ${\mathop{\mathrm{hull}}}{{\cal X}}$ denotes the convex hull of the point set ${{\cal X}}$. So $\| Z(t) - \mu'(t) \| \leq D(t)$. Similarly $\| Z(t) - \mu'(t-1) \| \leq D(t-1)$. By definition of $\tau_n$, $\mu'(t) = \mu'(t-1)$ and $D(t) = D(t-1)$ unless $t = \tau_n$ for some $n$, in which case $\mu'(\tau_n -1 ) = \mu'(\tau_{n-1} )$ and $D(\tau_n - 1) = D(\tau_{n-1})$. Hence, $$\begin{aligned}
\sum_{t \geq 1} \| \mu' (t) - \mu' (t-1) \| & =
\sum_{n \geq 1} \| \mu'(\tau_n) - \mu'(\tau_{n-1} ) \| \nonumber\\
& \leq \sum_{n \geq 1 } \left( \| \mu' (\tau_n ) - Z ( \tau_n ) \| + \| \mu' (\tau_{n-1} ) - Z ( \tau_n ) \| \right) ,
\label{eq41}\end{aligned}$$ by the triangle inequality. Then the preceding remarks imply that $$\sum_{t \geq 1} \| \mu' (t) - \mu' (t-1) \|
\leq \sum_{n \geq 1} ( D(\tau_n) + D(\tau_{n-1}) ) < \infty, ~{\rm a.s.},$$ by Lemma \[lem5\]. Hence there is some (random) $\xi_N \in [0,1]^d$ for which $\mu'(t) \to \xi_N$ a.s. as $t \to \infty$, and $L^2$ convergence follows by the bounded convergence theorem.
For the final statement in the lemma we use a variation of the preceding argument. Let $M := \max \{ n \in {{\mathbb Z}_+}: \tau_n \leq t_0 \}$. Then $F(t_0) = F(\tau_M)$ and $\mu' (\tau_M) = \mu' (t_0)$, so that on $\{ \nu_{\varepsilon}\leq t_0 \}$, we have $\{ \nu_{\varepsilon}\leq \tau_M \}$ as well. Hence (by Corollary \[cor1\]) $F ( \tau_M ) < {\varepsilon}^2$. A similar argument to that in the proof of Lemma \[lem5\] shows that, for $m \geq 0$, $${{\mathbb E}}[ F(\tau_{M+m} ) \mid {{\mathcal F}}_{t_0} ] \leq {{\mathrm{e}}}^{-cm} {{\mathbb E}}[ F(\tau_{M}) \mid {{\mathcal F}}_{t_0} ] \leq {\varepsilon}^2 {{\mathrm{e}}}^{-cm} ,$$ on $\{ \nu_{\varepsilon}\leq t_0 \}$, where $c>0$ depends on $N$ but not on $m$ or ${\varepsilon}$. Thus by Lemma \[lem2\], on $\{ \nu_{\varepsilon}\leq t_0 \}$, ${{\mathbb E}}[ D ( \tau_{M+m} ) ^2 \mid {{\mathcal F}}_{t_0} ] \leq 2 {\varepsilon}^2 {{\mathrm{e}}}^{-cm}$. Also, similarly to (\[eq41\]), $$\begin{aligned}
\max_{t \geq \tau_M} \| \mu' (t) - \mu' (\tau_M) \|^2 & \leq \Big( \sum_{t > \tau_M} \| \mu'(t) - \mu'(t-1) \| \Big)^2 \\
& \leq \Big( \sum_{m \geq 1} ( D (\tau_{M+m}) + D(\tau_{M+m-1} ) ) \Big)^2 .\end{aligned}$$ Taking conditional expectations and using Minkowski’s inequality in $L^2$ together with the bound on ${{\mathbb E}}[ D ( \tau_{M+m} )^2 \mid {{\mathcal F}}_{t_0} ]$ just obtained, we get, on $\{ \nu_{\varepsilon}\leq t_0 \}$, $${{\mathbb E}}\Big[ \max_{t \geq t_0 } \| \mu' (t) - \mu' (t_0) \|^2 \mid {{\mathcal F}}_{t_0} \Big] =
{{\mathbb E}}\Big[ \max_{t \geq \tau_M} \| \mu' (t) - \mu' (\tau_M) \|^2 \mid {{\mathcal F}}_{t_0} \Big]
\leq 8 {\varepsilon}^2 {{\mathrm{e}}}^{c} \Big( \sum_{m \geq 1} {{\mathrm{e}}}^{-cm/2} \Big)^2 ,$$ which is a constant times ${\varepsilon}^2$. The result follows from Jensen’s inequality.
Again let $\mu' (t) := \mu_{N-1} ( {{\cal X}}'_N(t) )$. We have from Lemma \[lem7\] that $\mu'(t) \to \xi_N$ a.s. Now, for any $j \in \{1,\ldots, N-1\}$, by the triangle inequality, $$\| X_{(j)} (t) - \xi_N \| \leq \| X_{(j)} (t) - \mu' (t) \| + \| \mu'(t) - \xi_N \| \leq D (t) + \| \mu'(t) - \xi_N \| ,$$ which tends to $0$ a.s. as $t \to \infty$, since $D(t) \to 0$ a.s. by Lemma \[lem5\]. This establishes the first statement in (\[thm1a\]). Moreover, by (\[newextreme\]), ${{\cal X}}_N^*(t+1) \neq U_{t+1}$ only if $A_{t+1}$ occurs. On $A_{t+1}$, ${{\cal X}}_N^*(t+1)$ is one of the points of ${{\cal X}}_N'(t)$, and so in particular $\| {{\cal X}}_N^*(t+1) - \mu' (t) \| \leq D(t)$. In addition, on $A_{t+1}$, we have $\| U_{t+1} - \mu'(t) \| \leq 3 D(t)$. So by the triangle inequality, $$\left\| {{\cal X}}_N^*(t+1) - U_{t+1} \right\| \leq 4 D(t) {{\mathbf 1}}( A_{t+1} ) ,$$ which tends to $0$ a.s., again by Lemma \[lem5\]. This gives the final part of (\[thm1a\]).
The limit distribution in one dimension {#sec:1d}
=======================================
Overview and simulations
------------------------
Throughout this section we restrict attention to $d=1$. Of interest is the distribution of the limit $\xi_N$ in (\[thm1a\]), and its behaviour as $N \to \infty$. Simulations suggest that $\xi_N$ is highly dependent on the initial configuration: Figure \[fig1\] shows histogram estimates for $\xi_N$ from repeated simulations with a deterministic initial condition. In more detail, $10^8$ runs of each simulation were performed, each starting from the same initial condition; each run was terminated when $D(t) < 0.0001$ for the first time, and the value of $\mu_{N-1} ({{\cal X}}_N'(t))$ was output as an approximation to $\xi_N$ (cf Theorem \[thm1\]). Note that, by (\[events\]), in the simulations one may take the new points not $U[0,1]$ but uniform on a typically much smaller interval, which greatly increases the rate of updates to the core configuration.
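For illustration, the following minimal sketch (in Python, with illustrative parameter choices of our own; it is not the code behind the simulations reported above) implements this procedure for $d=1$, including the restricted sampling window just described.

```python
import numpy as np

rng = np.random.default_rng(1)

def xi_estimate(x0, tol=1e-4, rng=rng):
    """Run the d=1 process from configuration x0 (N points in [0,1]) until the
    core diameter D(t) drops below tol; return mu_{N-1}(X'_N(t)) as an
    approximation to xi_N.  New points are drawn uniformly on the window
    [mu'-3D, mu'+3D] intersected with [0,1]: by (events), arrivals outside
    this window cannot change the core, so this only skips idle steps."""
    x = np.asarray(x0, dtype=float)
    while True:
        core = np.delete(x, np.argmax(np.abs(x - x.mean())))  # remove extreme point
        d = core.max() - core.min()                            # D(t)
        if d < tol:
            return core.mean()
        mu = core.mean()                                       # mu_{N-1}(X'_N(t))
        lo, hi = max(0.0, mu - 3 * d), min(1.0, mu + 3 * d)
        x = np.append(core, rng.uniform(lo, hi))

# Example: repeated runs from N i.i.d. U[0,1] starting points
# (far fewer runs than the 10^8 used for the histograms above).
N, runs = 5, 5000
samples = np.array([xi_estimate(rng.uniform(size=N)) for _ in range(runs)])
print(samples.mean(), samples.var())
```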
Figure \[fig2\] shows sample results obtained with an initial condition of $N$ i.i.d. $U[0,1]$ random points. Now the histograms appear much simpler, although, of course, they can be viewed as mixtures of complicated multimodal histograms similar to those in Figure \[fig1\]. In the uniform case, it is natural to ask whether $\xi_N$ converges in distribution to some limit as $N \to \infty$.
The form of the histograms in Figure \[fig2\] might suggest a Beta distribution (this is one sense in which the randomized beauty contest is “reminiscent of a Pólya urn” [@gkw p. 390]). An ad-hoc Kolmogorov–Smirnov analysis (see Table \[tab1\]) suggests that the distributions are indeed ‘close’ to Beta distributions, but different enough for the match to be unconvincing. Simulations for large $N$ are computationally intensive. We remark that it is not unusual for Beta or ‘approximate Beta’ distributions to appear as limits of schemes that proceed via iterated procedures on intervals: see for instance [@jk] and references therein.
$N$ $\beta $ $\kappa(\beta)$
----- ---------- -----------------
3 1.256 0.0010
10 1.392 0.0016
50 1.509 0.0018
100 1.539 0.0019
: $\kappa (\beta)$ is the Kolmogorov–Smirnov distance between a Beta($\beta,\beta$) distribution and the empirical distribution from the samples of size $10^8$ plotted in Figure \[fig2\], minimized over $\beta$ in each case.
\[tab1\]
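The fit reported in Table \[tab1\] can be reproduced along the following lines. This is an illustrative sketch only, assuming an array `samples` of simulated values of $\xi_N$ (such as those produced by the sketch above); it performs a simple grid search over $\beta$ for the Beta$(\beta,\beta)$ distribution minimizing the Kolmogorov–Smirnov distance.

```python
import numpy as np
from scipy import stats

def best_symmetric_beta(samples, betas=np.linspace(1.0, 2.0, 101)):
    """Grid search for the Beta(beta, beta) law minimizing the
    Kolmogorov-Smirnov distance to the empirical distribution of samples."""
    dists = [stats.kstest(samples, 'beta', args=(b, b)).statistic for b in betas]
    i = int(np.argmin(dists))
    return betas[i], dists[i]

# beta_hat, kappa = best_symmetric_beta(samples)   # cf. Table [tab1]
```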
In the rest of this section we study $\xi_N$ and its distribution. Our results on the limit distribution, in particular, leave several interesting open problems, including a precise description of the phenomena displayed by the simulations reported above. In Section \[sec:rdp\] we give an alternative (one might say ‘phenomenological’) characterization of the limit $\xi_N$, and contrast this with an appropriate rank-driven process in the sense of [@gkw]. In Section \[support\] we show that the distribution of $\xi_N$ is fully supported on $(0,1)$ and assigns positive probability to every non-null interval, via a construction that moves a configuration, with positive probability, into an arbitrarily small neighbourhood of any prescribed point. Finally, Section \[sec:three\] is devoted to the case $N=3$, for which some explicit computations for the distribution of $\xi_N$ (in particular, its moments) are carried out.
A characterization of the limit {#sec:rdp}
-------------------------------
Let $$\pi_N (t) := \frac{1}{t} {\mathop{\#}}\left\{ s \in \{1,2,\ldots, t\} : {{\cal X}}_N^* (s) < \mu_{N} ( {{\cal X}}_N (s) ) \right\} ,$$ the proportion of times up to time $t$ for which the extreme point was the [*leftmost*]{} point (as opposed to the rightmost). The next result shows that $\pi_N (t)$ converges to the (random) limit $\xi_N$ given by Theorem \[thm1\]; we give the proof after some additional remarks.
\[left\] Let $d=1$ and $N \geq 3$. Then $\lim_{t\to\infty} \pi_N (t) = \xi_N$ a.s.
It is instructive to contrast this behaviour with a suitable [*rank-driven process*]{} (cf [@gkw]). Namely, fix a parameter $\pi \in (0,1)$. Take $N$ points in $[0,1]$, and at each step in discrete time replace either the leftmost point (with probability $\pi$) or else the rightmost point (probability $1-\pi$), independently at each step; inserted points are independent $U[0,1]$ variables. For this process, results of [@gkw] show that the marginal distribution of a typical point converges (as $t \to \infty$ and then $N \to \infty$) to a unit point mass at $\pi$ (cf Remark 3.2 in [@gkw]).
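For comparison, a minimal simulation of the rank-driven process (with illustrative parameters of our own choosing) is sketched below; for large $t$ the bulk of the configuration concentrates near $\pi$, in line with the point-mass limit just described.

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_driven(N=100, pi=0.3, steps=200_000, rng=rng):
    """Rank-driven process: at each step replace the leftmost point with
    probability pi, otherwise the rightmost, by a fresh U[0,1] point."""
    x = np.sort(rng.uniform(size=N))
    for _ in range(steps):
        i = 0 if rng.uniform() < pi else N - 1   # leftmost or rightmost
        x[i] = rng.uniform()
        x.sort()                                  # keep the configuration ordered
    return x

x = rank_driven()
print(np.median(x))   # a typical point ends up close to pi = 0.3
```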
This contrast leads us to one sense in which the randomized beauty contest is, to a limited extent, “reminiscent of a Pólya urn” [@gkw p. 390]. Recall that a Pólya urn consists of an increasing number of balls, each of which is either red or blue; at each step in discrete time, a ball is drawn uniformly at random from the urn and put back into the urn together with an extra ball of the same colour. The stochastic process of interest is the proportion of red balls, say; it converges to a random limit $\pi'$, which has a Beta distribution. The beauty contest stands in much the same relation to the rank-driven process described above as the Pólya urn does to the simpler model in which, at each step, independently, either a red ball is added to the urn (with probability $\pi'$) or else a blue ball is added (probability $1-\pi'$).
Given $\tau_0, \tau_1, \tau_2, \ldots$, $\xi_N = \lim_{n \to \infty} \mu_{N-1} ( {{\cal X}}'_N (\tau_n) )$ is independent of $U_t$, $t \notin \{ \tau_0, \tau_1, \ldots \}$, since, by (\[newextreme\]), any such $U_t$ is replaced at time $t+1$. Let ${\varepsilon}>0$. By Theorem \[thm1\], there exists a random $T < \infty$ a.s. for which $\max_{1 \leq i \leq N-1} | \xi_N - X_{(i)} (t) | \leq {\varepsilon}$ for all $t \geq T$.
Since $\mu_N ({{\cal X}}_N (t+1)) = \frac{N-1}{N} \mu_{N-1} ( {{\cal X}}'_N (t) ) + \frac{1}{N} U_{t+1}$, we have that for $t \geq T$, using the triangle inequality, for any $i \in \{1,\ldots, N-1\}$, $$\begin{aligned}
| \mu_N ( {{\cal X}}_N (t+1) ) - X_{(i)} (t) | & \leq {\varepsilon}+ | \mu_N ( {{\cal X}}_N (t+1) ) - \xi_N | \\
& \leq {\varepsilon}+ \frac{N-1}{N} | \mu_{N-1} ( {{\cal X}}'_N (t) ) - \xi_N | + \frac{1}{N} | U_{t+1} - \xi_N | .\end{aligned}$$ Hence, for $t \geq T$, $$\label{88a}
\max_{1 \leq i \leq N-1}
\left| X_{(i)} (t) - \mu_{N} ( {{\cal X}}_N (t+1) ) \right|
\leq \frac{1}{N} | \xi_N - U_{t+1} | + 2 {\varepsilon}.$$ On the other hand, for $t \geq T$, $\mu_{N} ( {{\cal X}}_N (t+1) ) \geq \frac{N-1}{N} ( \xi_N - {\varepsilon}) + \frac{1}{N} U_{t+1}$, so that for $i \in \{ 1, \ldots, N-1\}$, $$\label{88b} \mu_{N} ( {{\cal X}}_N (t+1) ) - U_{t+1} \geq \frac{N-1}{N} ( \xi_N - U_{t+1} -{\varepsilon}) .$$ Suppose that $U_{t+1} < \xi_N - K {\varepsilon}$ for some $K \in (1,\infty)$. Then, from (\[88a\]) and (\[88b\]), $$\begin{aligned}
& ~~{} \left| \mu_N ( {{\cal X}}_N (t+1) ) - U_{t+1} \right| - \max_{1 \leq i \leq N-1} \left| X_{(i)} (t) - \mu_N ( {{\cal X}}_N (t+1) ) \right|
\\
& \geq \frac{N-2}{N} ( \xi_N - U_{t+1} ) - \frac{3N-1}{N} {\varepsilon}\\
& > \frac{{\varepsilon}}{N} \left( (N-2) K - 3N +1 \right ) .\end{aligned}$$ This last expression is nonnegative provided $K \geq \frac{3N-1}{N-2}$, which is the case for all $N \geq 3$ with the choice $K=8$, say. Hence, since the inequality in the last display is strict, with this choice of $K$, $\{ U_{t+1} < \xi_N - 8 {\varepsilon}\}$ implies that $U_{t+1}$ is farther from $\mu_{N} ( {{\cal X}}_N (t+1) )$ than is any of the points left over from ${{\cal X}}'_N(t)$. Write $L_t := \{ U_{t} < \xi_N - 8 {\varepsilon}\}$. Then we have shown that, for $t \geq T$, the event $L_t$ implies that $U_{t} = {{\cal X}}^*_N (t)$, and, moreover, $U_{t} < \mu_{N} ( {{\cal X}}_N (t) )$. Hence, for $t \geq T$, $$\pi_N (t) \geq \frac{1}{t} \sum_{s=T}^t {{\mathbf 1}}( L_s ) \geq \frac{1}{t} \sum_{\stackrel{s=T}{s \notin \{ \tau_0, \tau_1, \ldots\}}}^t {{\mathbf 1}}( L_s ).$$ Given $\tau_0, \tau_1, \ldots$, $U_s$, $s \notin \{ \tau_0, \tau_1, \ldots\}$ are independent of $T$ and $\xi_N$. For such an $s$, $U_s$ is uniform on $I_s := [0,1] \setminus B ( \mu_{N-1} ( {{\cal X}}'_N (s) ) ; 3 D(s) )$, and, for $s \geq T$, $D(s) \leq 2 {\varepsilon}$ so that $I_s \supseteq [0, \max \{ \xi_N - 8 {\varepsilon}, 0\} ] \cup [ \min\{ \xi_N + 8 {\varepsilon}, 1\} , 1]$. Hence, given $s \notin \{ \tau_0, \tau_1, \ldots\}$ and $s \geq T$, $${{\mathbb P}}[ L_s ] = {{\mathbb P}}[ U_{s} < \xi_N - 8 {\varepsilon}\mid U_s \in I_s ]
\geq
\xi_N - 8 {\varepsilon}.$$ Hence, considering separately the cases $\xi_N > 9 {\varepsilon}$ and $\xi_N \leq 9 {\varepsilon}$, the strong law of large numbers implies that $$\frac{1}{t} \sum_{\stackrel{s=T}{s \notin \{ \tau_0, \tau_1, \ldots\}}}^t {{\mathbf 1}}( L_s ) \geq \xi_N - 9 {\varepsilon},$$ for all $t$ sufficiently large; here we have used the fact that $t -T \to \infty$ a.s. as $t \to \infty$ and $\# \{ n \in {{\mathbb Z}_+}: \tau_n \leq t \} = o(t)$ a.s., which follows from (\[eq6\]) and Lemma \[lem4\]. Since ${\varepsilon}>0$ was arbitrary, it follows that $\liminf_{t \to \infty} \pi_N (t) \geq \xi_N$ a.s. The symmetrical argument considering events of the form $R_t := \{U_t > \xi_N + 8 {\varepsilon}\}$ shows that $\liminf_{t \to \infty} ( 1 - \pi_N (t) ) \geq 1 - \xi_N$ a.s., so $\limsup_{t \to \infty} \pi_N (t) \leq \xi_N$ a.s. Combining the two bounds gives the result.
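Proposition \[left\] is also easy to check empirically. The following sketch (illustrative only; parameters are ours) tracks the running proportion $\pi_N(t)$ alongside the core mean $\mu_{N-1}({{\cal X}}'_N(t))$ in a single long run of the $d=1$ process.

```python
import numpy as np

rng = np.random.default_rng(3)

def pi_versus_core_mean(N=5, steps=100_000, rng=rng):
    """Return (pi_N(t), mu_{N-1}(X'_N(t))) after `steps` steps of the d=1
    process started from N i.i.d. U[0,1] points."""
    x = rng.uniform(size=N)
    left = 0
    for _ in range(steps):
        mu = x.mean()
        i = int(np.argmax(np.abs(x - mu)))
        if x[i] < mu:                 # extreme point lies to the left of the mean
            left += 1
        core = np.delete(x, i)
        x = np.append(core, rng.uniform())
    return left / steps, core.mean()

print(pi_versus_core_mean())   # the two values should nearly coincide
```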
The limit has full support {#support}
--------------------------
In this section, we prove that $\xi_N$ is fully supported on $(0,1)$ in the sense that ${{\rm ess} \inf}\xi_N = 0$, ${{\rm ess} \sup}\xi_N =1$, and $\xi_N$ assigns positive probability to any non-null interval. Let $$\label{spacing}
m_N (0) := \min \{ \| X_i(0) - X_j(0) \| : i,j \in \{0,1,\ldots,N+1\}, \, i \neq j \},$$ where we use the conventions $X_0(0) := 0$ and $X_{N+1} (0) := 1$. For $\rho >0$ let $S_\rho$ denote the ${{\mathcal F}}_0$-event $S_\rho := \{ m_N (0) \geq \rho \}$ that no point of ${{\cal X}}_N(0)$ is closer than distance $\rho$ to any other point of ${{\cal X}}_N(0)$ or to either of the ends of the unit interval.
\[limit\] Let $d =1$ and $N \geq 3$. Let $\rho \in (0,1)$. For any non-null interval subset $I$ of $[0,1]$, there exists $\delta >0$ (depending on $N$, $I$, and $\rho$) for which $$\label{lowerbound}
{{\mathbb P}}[ \xi_N \in I \mid {{\cal X}}_N (0) ] \geq \delta {{\mathbf 1}}( S_\rho) , {\ \textrm{a.s.}}$$ In particular, in the case where ${{\cal X}}_N(0)$ consists of $N$ independent $U[0,1]$ points, ${{\mathbb P}}[ \xi_N \in I ] >0$ for any non-null interval $I \subseteq [0,1]$.
We suspect, but have not been able to prove, that $\xi_N$ has a density $f_N$ with respect to Lebesgue measure, i.e., $\xi_N$ is absolutely continuous in the sense that for every ${\varepsilon}>0$ there exists $\delta>0$ such that ${{\mathbb P}}[ \xi_N \in A ] < {\varepsilon}$ for every $A$ with Lebesgue measure less than $\delta$. Were this so, then Proposition \[limit\] would show that we may take $f_N(x) >0$ for all $x \in (0,1)$. Note that ${{\mathbb P}}[ \xi_N \in A \mid {{\cal X}}_N (0) ]$ may be $0$ if ${{\cal X}}_N (0)$ contains non-distinct points: e.g. if $N \geq 3$ and ${{\cal X}}_N(0) = (x,x,\ldots,x,y)$, then ${{\cal X}}'_N(t) = (x,x,\ldots,x)$ for all $t$.
For $a \in [0,1]$, ${\varepsilon}>0$, and $t \in {{\mathbb N}}$, define the event $$E_{a,{\varepsilon}} ( t ) := \bigcap_{i=1}^N \left\{ | X_i (t) - a | < {\varepsilon}\right\} .$$ The main new ingredient needed to obtain Proposition \[limit\] is the following result.
\[move\] Let $N \geq 3$. For any $\rho \in (0,1)$ and ${\varepsilon}>0$ there exist $t_0 \in {{\mathbb N}}$ and $\delta_0 >0$ (depending on $N$, $\rho$, and ${\varepsilon}$) for which, for all $a \in [0,1]$, $${{\mathbb P}}[ E_{a,{\varepsilon}} (t_0) \mid {{\cal X}}_N(0) ] > \delta_0 {{\mathbf 1}}(S_\rho ) , {\ \textrm{a.s.}}$$
Fix $a \in [0,1]$. Let $\rho \in (0,1)$ and ${\varepsilon}>0$. It suffices to suppose that ${\varepsilon}\in (0,\rho)$, since $E_{a,{\varepsilon}} (t) \subseteq E_{a,{\varepsilon}'} (t)$ for ${\varepsilon}' \geq {\varepsilon}$. Suppose that $S_\rho$ occurs, so that $m_N (0) \geq \rho$ with $m_N(0)$ defined at (\[spacing\]). For ease of notation we list the points of ${{\cal X}}_N(0)$ in increasing order as $0<X_1< X_2 < \dots < X_N<1$. Let $M = \lfloor N/2\rfloor$.
Let $\nu={\varepsilon}/N^2$. The following argument shows how, with a positive (though possibly very small) probability, one can arrive at a finite (deterministic) time $t_0$ at a configuration in which all of $X_1(t_0),\ldots,X_N(t_0)$ lie inside $(a-{\varepsilon},a+{\varepsilon})$.
Let us call the points which are present at time $0$ *old points*; the points which will gradually replace this set will be called *new points*. We will first describe an event by which all the old points are removed and replaced by new points arranged approximately equidistantly in the interval $[X_M, X_{M+1}]$, and then we will describe an event by which such a configuration can migrate to the target interval.
[Step]{} 1. Starting from time $0$, iterate the following procedure until a new point becomes an extreme point. The construction is such that at each step, the extreme point is one of the old points, either at the extreme left or right of the configuration. At each step, the extreme old point is removed and replaced by a new $U[0,1]$ point to form the configuration at the next time unit. We describe an event of positive probability by requiring the successive new arrivals to fall in particular intervals, as follows. The first old point removed from the *right* is replaced by a new point in $( X_M+\nu,X_M+\nu+\delta )$, where $\delta \in (0,\nu)$ will be specified later. Subsequently, the $i$th point ($i \geq 2$) removed from the right is replaced by a new point in $( X_M+i\nu,X_M+i\nu+\delta )$. We call this subset of new points the *accumulation on the left*. On the other hand, the $i$th extreme point removed from the [*left*]{} ($i \in {{\mathbb N}}$) is replaced by a new point in $( X_{M+1}-i \nu,X_{M+1}- i \nu+\delta )$. This second subset of new points will be called the *accumulation on the right*.
During the first $M$ steps of this procedure, the new points are necessarily internal points of the configuration and so are never removed. Therefore, there will be a time $t_1\in [M,N]$ at which, for the first time, one of the new points becomes either the leftmost or rightmost point of ${{\cal X}}_N (t_1)$; suppose that it is the rightmost, since the argument in the other case is analogous. If at time $t_1$ the accumulation on the right is non-empty, we continue to perform the procedure described in [Step]{} 1, but now allowing ourselves to remove new points from the accumulation on the right. So we continue putting extra points on the accumulation on the left whenever the rightmost point is removed, and similarly putting extra points to the accumulation on the right whenever the leftmost point is removed, as described for [Step]{} 1. Eventually we will have either (a) a configuration where all the new points of the left or the right accumulation are completely removed, and there are still some of the old points left, or (b) a configuration where all old points are removed. The next step we describe separately for these two possibilities.
[Step]{} 2(a). Without loss of generality, suppose that the accumulation on the right is empty, so the configuration consists of $k$ points of the left accumulation and $N-k$ old points remaining to the left of $X_M$ (including $X_M$ itself). Note that [Step]{} 1 produces at least $M$ new points, so $M \leq k \leq N-1$, since by assumption we have at least one old point remaining. Let us now denote the points of the configuration $x_1 < x_2 < \cdots < x_N$ so that $x_{N-k} = X_M$, and by the construction in [Step]{} 1, $x_{N-k+i} \in ( X_M+i\nu,X_M+i\nu+\delta )$ for $i=1,2,\dots,k$. Provided that $k \leq N-2$, so that there are at least 2 old points, we will show that $x_1$ is necessarily the extreme point of the configuration. Indeed, writing $\mu=\mu_N(x_1,\ldots,x_N)$, using the fact that $x_{N-k+i} \geq x_{N-k} + i \nu$ for $1 \leq i \leq k$ and $x_{N} \leq x_{N-k} + k \nu + \delta$, we have $$\begin{aligned}
\mu -\frac{x_1+x_N}2 &
\geq \frac{x_1+\cdots+x_{N-k}+kx_{N-k}+ \frac{1}{2} \nu k(k+1) }{N}
- \frac{x_1+ x_{N-k}+\nu k+ \delta}{2} \\
& =
\frac{1}{2N} \left( 2 x_1 + \cdots + 2 x_{N-k} + (2k-N) x_{N-k} - N x_1 +
\nu k(k+1 -N) - \delta N \right) .\end{aligned}$$ The old points all have separation at least $\rho$, so for $1 \leq i \leq N-k$, $x_i \geq x_1 + (i-1) \rho$, and hence $$2 x_1+\cdots+2 x_{N-k}+(2 k - N) x_{N-k} \geq N x_1 + \rho (N-k-1) (N-k) + \rho (2k- N) (N-k-1) .$$ It follows, after simplification, that $$\begin{aligned}
\mu -\frac{x_1+x_N}2
&\ge \frac{1}{2N} \left( k (N-k-1) (\rho - \nu) - \delta N \right) \\
& \geq \frac{1}{2N} \left( (N-2) (\rho - \nu ) - \delta N \right) ,\end{aligned}$$ provided $1 \leq k \leq N-2$. By choice of $\nu$, we have $\nu \leq \rho /9$ and it follows that the last displayed expression is positive provided $\delta$ is small enough compared to $\rho$ ($\delta < \rho/4$, say). Hence $|x_1-\mu| > |x_N-\mu|$. Thus next we remove $x_1$. We replace it similarly to the procedure in [Step]{} 1, but now building up the accumulation on the [*left*]{}. We can thus iterate this step, removing old points from the left and building up the accumulation on the left, while keeping the accumulation on the right empty, until we get just one old point remaining (i.e. until $k=N-1$); this last old point will be $X_M$. At this stage, after a finite number of steps, we end up with a configuration where the set of points $x_1<x_2<\dots<x_N$ satisfies $x_i\in [ X_M +(i-1)\nu ,X_M + (i-1) \nu + \delta ]$, $i=1,2,\dots,N$.
[Step]{} 2(b). Suppose that the configuration is such that all old points have been removed but both left and right accumulations are non-empty. Repeating the procedure of [Step]{} 1, replacing rightmost points by building the left accumulation and leftmost points by building the right accumulation, we will also, in a finite number of steps, obtain a set of points $x_i$ such that $x_i\in [ b+(i-1) \nu,b+(i-1) \nu+\delta ]$, $i=1,2,\dots,N$, for some $b\in [0,1]$.
[Step]{} 3. Now we will show how one can get to the situation where all points lie inside the interval $(a-{\varepsilon},a+{\varepsilon})$ starting from any configuration in which $$\begin{aligned}
\label{eqx*}
x_i \in [ b+i\nu,b+i\nu+\delta ],\ i=1,2,\dots,N-1 ,\end{aligned}$$ where $b \in [0,1]$ and $x_1 < \cdots < x_{N-1}$ are the core points of the configuration (i.e., with the extreme point removed). We have shown in [Step]{} 1 and [Step]{} 2 how we can achieve such a configuration in a finite time with a positive probability. Suppose that $a > b$; the argument for the other case is entirely analogous. We describe an event of positive probability by which the entire configuration can be moved to the right.
Having just removed the extreme point, we stipulate that the new point $y_1$ belong to $( b+N\nu-6\delta,b+N\nu-5\delta)$, so $y_1 > x_{N-1}$ is the new rightmost point provided $\delta < \nu/7$. Then to ensure that $x_1$, and not $y_1$, is the most extreme point we need $$\frac{x_1+y_1}2 -\left[b+\nu\frac{N+1}2\right]< \frac{x_1+\dots+x_{N-1}+y_1}N -\left[b+\nu\frac{N+1}2\right] .$$ The left-hand side of the last inequality is less than $-2\delta$ while the right-hand side is more than $ -\frac{6\delta}N$, so the inequality is indeed satisfied provided $N\ge 3$.
Hence at the next step $x_1$ is removed. Our new collection of core points is $x_2 < \cdots < x_{N-1} < y_1$. We stipulate that the next new point $y_2$ arrive in $( b+(N+1)\nu-18\delta,b+(N+1)\nu -17\delta)$. So again, for $\delta$ small enough ($\delta < \nu /13$ suffices), $y_2 > y_1$ and the newly added point ($y_2$) becomes the rightmost point in the configuration. Again, to ensure that the leftmost point ($x_2$) is now the extreme one, we require $$\frac{x_2+y_2}2 -\left[b+\nu\frac{N+3}2\right]< \frac{x_2+\cdots+x_{N-1}+y_1+y_2}N -\left[b+\nu\frac{N+3}2\right].$$ The left-hand side of the last inequality is less than $-8\delta$, while the right-hand side is more than $-24\delta/N$, and so the displayed inequality is true provided $N\ge 3$.
We will repeat this process until we remove the rightmost core point present at the start of [Step]{} 3, namely $x_{N-1}$, located in $[ b+(N-1)\nu,b+(N-1)\nu+\delta ]$. We will demonstrate how we can do this, in succession removing points from the left of the configuration and at each step replacing them by points on the right with careful choice of locations for the new points. We consecutively put new points $y_k$ at locations in intervals $$\Delta_k:=( b+(N-1+k)\nu-2\cdot 3^k \delta,b+(N-1+k)\nu-2\cdot 3^k \delta+\delta ) ,$$ for $k=1,2,\dots, N-1$. We have just shown that for $k=1,2$ this procedure will maintain the leftmost point ($x_k$) as the extreme one. Let us show that this is true for all $1 \leq k \leq N-1$, by an inductive argument. Indeed, suppose that the original points $x_1,x_2,\dots,x_{k-1}$ have been removed, the successive new points $y_j$ are located in $\Delta_j$, $j=1,2,\dots,k-1$, and that the replacement for the most recently removed point $x_{k-1}$ is the new point $y_k$. Place the new point $y_{k}$ in $\Delta_{k}$. Provided $\delta < \frac{\nu}{4 \cdot 3^{k-1} +1}$, $y_k > y_{k-1}$ and $y_k$ is the rightmost point of the new configuration, while the leftmost point is $x_{k}\in [ b+\nu k,b+\nu k+\delta ]$. Since $N\ge 3$ we have $$\begin{aligned}
\frac{x_{k}+y_{k}}{2}&\le b+\left[\frac{N-1}{2}+k\right]\nu- [3^{k}-1]\delta
\le b+\left[\frac{N-1}{2}+k\right]\nu- \frac{2(3+3^2+\dots+3^{k})\delta}{N}\\
&\le \frac{x_{k}+\dots+x_{N-1}+y_1+\dots+y_{k}}{N} ,\end{aligned}$$ thus ensuring that the leftmost point $x_k$, and not $y_{k}$, is the farthest from the centre of mass.
Thus, provided $\delta < 3^{-N} \nu$, say, we proceed to remove all the points $x_k$ and end up with a new collection of points $x_1',\dots,x_{N-1}'$ satisfying the property $$x_i'\in [ b'+i\nu,b'+i\nu+\delta'],\ i=1,2,\dots,N-1 ,$$ where $b'=b+(N-1)\nu- \delta'$ and $\delta':=3^N\delta$ ($>2\cdot 3^{N-1}\delta)$. Thus the situation is similar to the one in (\[eqx\*\]) but with $b$ replaced by $b' > b + \nu$, so the whole “grid” is shifted to the right. Hence, provided $\delta$ is small enough, and $\delta'$ and its subsequent analogues remain such that $\delta' < 3^{-N} \nu$, we can repeat the above procedure and move points to the right again, etc., a finite number of times (depending on $|b-a|/\nu$) until the moment when all the new points are indeed in $(a-{\varepsilon},a+{\varepsilon})$, and the probability of making all those steps is strictly positive. In particular, we can check that taking $ \delta < 3^{-2N/\nu} \nu$ will suffice.
All in all, we have performed a finite number of steps, which can be bounded above in terms of $N$, $\rho$, and ${\varepsilon}$ but independently of $a$, and each of which required a $U[0,1]$ variable to be placed in a small interval (of width less than $3^{-N} \nu$) and so has positive probability, which can be bounded below in terms of $N$, $\rho$, and ${\varepsilon}$. So overall the desired transformation of the configuration has positive probability depending on $N$, $\rho$, and ${\varepsilon}$, but not on $a$.
Write $\mu' (t) := \mu_{N-1} ( {{\cal X}}'_N(t) )$. Let $I \subseteq [0,1]$ be a non-null interval. We can (and do) choose $a \in (0,1)$ and ${\varepsilon}' >0$ such that $I' := [a-{\varepsilon}' , a+{\varepsilon}' ] \subseteq I$. Also take $I'' := [a -{\varepsilon}, a+ {\varepsilon}] \subset I'$ for ${\varepsilon}= (4 B C)^{-1} N^{-1/2} {\varepsilon}'$, where $C$ is the constant in Lemma \[lem7\] and $B \geq 1$ is an absolute constant chosen so that ${\varepsilon}< {\varepsilon}'/4$ for all $N \geq 3$. Fix $\rho \in (0,1)$. It follows from Lemma \[move\] that, for some $\delta_0 >0$ and $t_0 \in {{\mathbb N}}$, depending on ${\varepsilon}$, $$\nonumber
{{\mathbb P}}[ \{ \mu' (t_0) \in I'' \} \cap \{ D(t_0) \leq 2 {\varepsilon}\} \mid {{\cal X}}_N(0) ] \geq \delta_0 {{\mathbf 1}}( S_\rho) .$$ By Lemma \[lem2\], we have that $D(t_0) \leq 2 {\varepsilon}$ implies that $F(t_0) \leq 2 N {\varepsilon}^2 < ( {\varepsilon}' / (2BC) )^2$, so that $t_0 \geq \nu_{{\varepsilon}'/(2BC)}$, where $\nu_\cdot$ is as defined just before Lemma \[lem7\]. Applying Lemma \[lem7\] with this choice of $t_0$ and with the ${\varepsilon}$ there equal to ${\varepsilon}'/(2 B C)$, we obtain, by Markov’s inequality, $${{\mathbb P}}\left[ \max_{t \geq t_0 } | \mu' (t) - \mu' ( t_0 ) | \leq 3{\varepsilon}'/4 \mid {{\mathcal F}}_{t_0} \right] \geq 1/3 , {\ \textrm{a.s.}}$$ It follows that, given ${{\cal X}}_N(0)$, the event $$\{ \mu' (t_0) \in I'' \} \cap \{ D(t_0) \leq 2 {\varepsilon}\} \cap \{
| \xi_N - \mu' ( t_0 ) | \leq 3 {\varepsilon}' /4 \}$$ has probability at least $(\delta_0 /3) {{\mathbf 1}}( S_\rho)$, and on this event we have $| \xi_N - a| \leq {\varepsilon}+ (3{\varepsilon}'/4) < {\varepsilon}'$, so $\xi_N \in I$. Hence (\[lowerbound\]) follows.
For the final statement in the proposition, suppose that ${{\cal X}}_N(0)$ consists of independent $U[0,1]$ points. In this case $m_N(0)$ defined at (\[spacing\]) is the minimal spacing in the induced partition of $[0,1]$ into $N+1$ segments, which has the same distribution as $\frac{1}{N+1}$ times a single spacing, and in particular has density $f(x) = N (N+1) (1- (N+1) x)^{N-1}$ for $x \in [0, \frac{1}{N+1}]$ (cf Section \[appendix\]). Hence for any $\rho \in [0,\frac{1}{N+1}]$, we have ${{\mathbb P}}[ S_\rho ] = (1-(N+1) \rho)^N$, which is positive for $\rho = \frac{1}{2N}$, say. Thus taking expectations in (\[lowerbound\]) yields the final statement in the proposition.
Explicit calculations for $N=3$ {#sec:three}
-------------------------------
For this section we take $N=3$, the smallest nontrivial example. In this case we can perform some explicit calculations to obtain information about the distribution of $\xi_3$. In fact, we work with a slightly modified version of the model, avoiding certain ‘boundary effects’, to ease computation. Specifically, we do not use $U[0,1]$ replacements but, given ${{\cal X}}_3(t)$, we take $U_{t+1}$ to be uniform on the interval $[ \min {{\cal X}}'_3 (t) - D(t) , \max {{\cal X}}'_3 (t) + D(t) ]$. If this interval were contained in $[0,1]$ for all $t$, the modification would have no effect on the value of $\xi_3$ realized (it would only speed up the convergence), but the fact that $U_{t+1}$ may now fall outside $[0,1]$ [*does*]{} change the model.
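A minimal sketch of this modified $N=3$ dynamics (illustrative code of our own, not that used for Figure \[fig4\]) is given below; empirical moments from such runs can be compared with the exact values in Proposition \[prop3\].

```python
import numpy as np

rng = np.random.default_rng(4)

def xi3_modified(x0, tol=1e-6, rng=rng):
    """Modified N=3 model: the replacement point is uniform on
    [min(core)-D, max(core)+D] rather than on [0,1]."""
    x = np.asarray(x0, dtype=float)
    while True:
        core = np.delete(x, np.argmax(np.abs(x - x.mean())))
        lo, hi = core.min(), core.max()
        d = hi - lo
        if d < tol:
            return core.mean()
        x = np.append(core, rng.uniform(lo - d, hi + d))

samples = np.array([xi3_modified(rng.uniform(size=3)) for _ in range(20_000)])
# For uniform initial points, Proposition [prop3] gives
# E[xi_3] = 1/2, E[xi_3^2] = 1/3, E[xi_3^3] = 1/4.
print(samples.mean(), (samples**2).mean(), (samples**3).mean())
```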
For this modified model, the argument for Theorem \[thm1\] goes through with minor changes; in any case, we essentially reprove the conclusion of Theorem \[thm1\] in this setting in the course of proving the following result, which gives an explicit description of the limit distribution. Here and subsequently ‘$\stackrel{d}{=}$’ denotes equality in distribution.
\[prop3\] Let $d = 1$ and $N = 3$ and work with the modified version of the process just described. Let ${{\cal X}}_3(0)$ consist of $3$ distinct points in $[0,1]$. Write $\mu := \mu_2 ( {{\cal X}}_3'(0) )$ and $D := D_2({{\cal X}}_3'(0))$. There exists a random $\xi_3 := \xi_3 ({{\cal X}}_3(0)) \in {{\mathbb R}}$ such that ${{\cal X}}'_3(t) {{\stackrel{a.s.}{~\longrightarrow~}}}( \xi_3, \xi_3)$ as $t \to \infty$. The distribution of $\xi_3$ can be characterized via $\xi_3 {{\stackrel{d}{~=~}}}\mu + D L$, where $L$ is independent of $(\mu, D)$, $L {{\stackrel{d}{~=~}}}-L$, and the distribution of $L$ is determined by the distributional solution to the fixed-point equation $$\label{Lfixed} L {{\stackrel{d}{~=~}}}\begin{cases}
- \frac{1+U}{2} + U L & \textrm{with~probability~} \frac{1}{3} \\
- \frac{2-U}{4} + \frac{U}{2} L & \textrm{with~probability~} \frac{1}{6} \\
\phantom{-} \frac{2-U}{4} + \frac{U}{2} L & \textrm{with~probability~} \frac{1}{6} \\
\phantom{-} \frac{1+U}{2} + U L & \textrm{with~probability~} \frac{1}{3} ,\end{cases}$$ where ${{\mathbb E}}[ |L|^k ] < \infty$ for all $k$, and $U \sim U[0,1]$. Writing $\theta_k := {{\mathbb E}}[ L^k]$, we have $\theta_k =0$ for odd $k$, and $\theta_2 = \frac{7}{12}$, $\theta_4 = \frac{375}{368}$, and $\theta_6 = \frac{76693}{22080}$. In particular, $$\label{xi2}
{{\mathbb E}}[ \xi_3 ] = {{\mathbb E}}[ \mu] , ~ {{\mathbb E}}[ \xi_3^2 ] = {{\mathbb E}}[ \mu^{ 2} ] + \frac{7}{12} {{\mathbb E}}[ D^{ 2} ],
~\textrm{and}~ {{\mathbb E}}[ \xi_3^3 ] = {{\mathbb E}}[ \mu^{ 3} ] + \frac{7}{4} {{\mathbb E}}[ \mu D^{ 2} ].$$ In the case where ${{\cal X}}_3(0)$ contains $3$ independent $U[0,1]$ points, ${{\mathbb E}}[ \xi_3^k ] = \frac{1}{2}, \frac{1}{3}, \frac{1}{4}$ for $k = 1,2,3$ respectively. If ${{\cal X}}_3(0) = ( \frac{1}{4}, \frac{1}{2}, \frac{3}{4} )$, then ${{\mathbb E}}[ \xi_3^k ]= \frac{1}{2},
\frac{29}{96}, \frac{13}{64}, \frac{873}{5888}$ for $k =1,2,3,4$.
We give the proof of Proposition \[prop3\] at the end of this section. First we state one consequence of the fixed-point representation (\[Lfixed\]).
\[Lcontinuous\] $L$ given by (\[Lfixed\]) has an absolutely continuous distribution.
It follows from (\[Lfixed\]) that $$\begin{aligned}
{{\mathbb P}}\left[ L=\tfrac 12\right ] &=
\tfrac 13 \, {{\mathbb P}}\left[U\left(L-\tfrac 12 \right)=1\right]
+\tfrac 16 \,{{\mathbb P}}\left[U\left(L+\tfrac 12 \right)=2\right]\\
& ~~{ }+\tfrac 16 \,{{\mathbb P}}\left[U\left(L-\tfrac 12 \right)=0\right]
+\tfrac 13 \,{{\mathbb P}}\left[U\left(L+\tfrac 12 \right)=0\right] .\end{aligned}$$ The first two terms on the right-hand side of the last display are zero, by an application of the first part of Lemma \[lemconts1\] with $X=U$, $Y=L \pm 1/2$, and $a=1,2$. Also, since $U >0$ a.s., ${{\mathbb P}}[ U (L \mp \frac{1}{2}) = 0 ] = {{\mathbb P}}[ L = \pm \frac{1}{2}]$, and, by symmetry, ${{\mathbb P}}[ L=1/2 ]={{\mathbb P}}[ L=-1/2]$. Thus we obtain $$\begin{aligned}
{{\mathbb P}}\left[ L=\tfrac 12\right ]
= \tfrac 16 \,{{\mathbb P}}\left[ L=\tfrac 12 \right]
+\tfrac 13 \,{{\mathbb P}}\left[L=-\tfrac 12 \right]
=\tfrac 12 \,{{\mathbb P}}\left[L=\tfrac 12 \right] .\end{aligned}$$ Hence ${{\mathbb P}}[L=1/2]={{\mathbb P}}[L=-1/2]=0$.
Each term on the right-hand side of (\[Lfixed\]) is of the form $\pm \frac{1}{2} + V ( L \pm \frac{1}{2} )$ where $V$ is an absolutely continuous random variable, independent of $L$ (namely $U$ or $U/2$). The final statement in Lemma \[lemconts1\] with the fact that ${{\mathbb P}}[ L = \pm 1/2]=0$ shows that each such term is absolutely continuous. Finally, Lemma \[lemconts2\] completes the proof.
In principle, the characterization (\[Lfixed\]) can be used to recursively determine all the moments ${{\mathbb E}}[ L^k] =\theta_k$, and the moments of $\xi_3$ may then be obtained by expanding ${{\mathbb E}}[ \xi_3 ^k ] = {{\mathbb E}}[ (\mu + D L)^k ]$. However, the calculations soon become cumbersome, particularly as $\mu$ and $D$ are, typically, not independent: we give some distributional properties of $(\mu, D)$ in the case of a uniform random initial condition in Section \[appendix\].
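That said, the recursion is entirely mechanical once written in exact rational arithmetic. The following sketch (illustrative only) implements the recursion (\[thetak\]) derived in the proof of Proposition \[prop3\] below, with the coefficients $a(k,j)$ and $b(k,j)$ defined there; note that the $j=0$ term of (\[thetak\]) contains $\theta_k$ itself, so at each even $k$ the relation is solved for $\theta_k$. It reproduces $\theta_2 = \frac{7}{12}$, $\theta_4 = \frac{375}{368}$ and $\theta_6 = \frac{76693}{22080}$.

```python
from fractions import Fraction as F
from math import comb

def a(k, j):
    return F(1, 2**j) * sum(F(comb(j, l), k - l + 1) for l in range(j + 1))

def b(k, j):
    return F(1, 2**k) * sum(comb(j, l) * F(-1, 2)**(j - l) * F(1, k - l + 1)
                            for l in range(j + 1))

theta = {0: F(1)}                     # theta_k = E[L^k]; theta_k = 0 for odd k
for k in range(1, 7):
    if k % 2:
        theta[k] = F(0)
        continue
    rhs = sum(comb(k, j) * theta[k - j] * (2 * a(k, j) + b(k, j))
              for j in range(2, k + 1, 2)) / 3
    theta[k] = rhs / (1 - (2 * a(k, 0) + b(k, 0)) / 3)

print(theta[2], theta[4], theta[6])   # 7/12 375/368 76693/22080
```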
Before giving the proof of Proposition \[prop3\], we comment on some simulations. Figure \[fig4\] shows histogram estimates for the distribution of $\xi_3$ for two initial distributions (one deterministic and the other uniform random), and Table \[tab2\] reports corresponding moment estimates, which may be compared to the theoretical values given in Proposition \[prop3\]. In the uniform case, we only computed the first $3$ moments analytically, namely, $\frac{1}{2}$, $\frac{1}{3}$, $\frac{1}{4}$ as quoted in Proposition \[prop3\]; it is a curiosity that these coincide with the first 3 moments of the $U[0,1]$ distribution.
$k$ 1 2 3 4 5 6
---------- -------- -------- -------- -------- ----------- --------
0.0001 0.5833 0.0000 1.0192 $-0.0005$ 3.4765
U\[0,1\] 0.5000 0.3333 0.2500 0.2029 0.1739 0.1561
: Empirical $k$th moment values (to $4$ decimal places) computed from the simulations in Figure \[fig4\]. []{data-label="tab2"}
Let $\mu'(t) := \frac{1}{2} ( X_{(1)} (t) + X_{(2)} (t) )$ and $D(t) := | X_{(1)} (t) - X_{(2)} (t) |$ denote the mean and diameter of the core configuration, repeating our notation from above.
Consider separately the events that $U_{t+1}$ falls in each of the intervals $[ \min {{\cal X}}'_3 (t) - D(t) , \min {{\cal X}}'_3 (t))$, $[\min {{\cal X}}'_3 (t) , \min {{\cal X}}'_3 (t) + \frac{1}{2} D(t) )$, $[ \min {{\cal X}}'_3 (t) + \frac{1}{2} D(t) , \max {{\cal X}}'_3 (t) )$, $[ \max {{\cal X}}'_3(t) , \max {{\cal X}}'_3(t) + D(t)]$, which have probabilities $\frac{1}{3}, \frac16, \frac 16, \frac 13$ respectively. Given $(\mu'(t), D(t))$, we see, for $V_{t+1}$ a $U[0,1]$ variable, independent of $(\mu'(t), D(t))$, $$\begin{aligned}
\label{recur}
( \mu'(t+1) , D(t+1) ) & = \begin{cases}
(\mu'(t) - \frac{1+V_{t+1}}{2} D(t) , V_{t+1} D(t) ) & \textrm{ with probability } \frac{1}{3} \\
(\mu'(t) - \frac{2-V_{t+1}}{4} D(t) , \frac{1}{2} V_{t+1} D(t) ) & \textrm{ with probability } \frac{1}{6} \\
(\mu'(t) + \frac{2-V_{t+1}}{4} D(t) , \frac{1}{2} V_{t+1} D(t) ) & \textrm{ with probability } \frac{1}{6} \\
(\mu'(t) + \frac{1+V_{t+1}}{2} D(t) , V_{t+1} D(t) ) & \textrm{ with probability } \frac{1}{3}
\end{cases} .\end{aligned}$$ Writing $m_{k} (t) = {{\mathbb E}}[ D(t)^k \mid {{\cal X}}_3 (0) ]$ we obtain from the second coordinates in (\[recur\]) $$m_k (t+1) = \frac{2}{3} {{\mathbb E}}[ V_{t+1}^k] m_k (t) + \frac{1}{3} 2^{-k} {{\mathbb E}}[ V_{t+1}^k] m_k (t) ,$$ which implies that $$\label{mk}
m_k (t) = \left( \frac{1}{3(k+1)} ( 2 + 2^{-k} ) \right)^t D(0)^k .$$ For example, $m_1 (t) = (5/12)^t D(0)$ and $m_2(t) = (1/4)^t D(0)^2$.
Next we show that $\mu'(t)$ converges. From (\[recur\]), we have that $| \mu' (t+1) - \mu' (t) | \leq D(t)$, a.s., so to show that $\mu'(t)$ converges, it suffices to show that $\sum_{t=0}^\infty D(t) < \infty$ a.s. But this can be seen from essentially the same argument as Lemma \[lem5\], or directly from the fact that the sum has nonnegative terms and ${{\mathbb E}}\sum_{t=0}^\infty D(t) = \sum_{t=0}^\infty {{\mathbb E}}[ m_1(t) ]$, which is finite. Hence $\mu'(t)$ converges a.s. to some limit, $\xi_3$ say. Extending this argument a little, we have from (\[recur\]) that $| \mu'(t+1) | \leq | \mu'(t) | + D(t)$, a.s., and $D(t+1) \leq V_{t+1} D(t)$, a.s. Hence, with $V_1, V_2, \ldots$ the i.i.d. $U[0,1]$ random variables from (\[recur\]), we have $D(t) \leq V_1 \cdots V_t D(0)$ and $$| \mu'(t) | - | \mu'(0) | = \sum_{s=0}^{t-1} ( | \mu' (s+1)| -| \mu'(s) | ) \leq \left( 1 + \sum_{s=1}^\infty \prod_{r=1}^s V_r \right) D(0) =: ( 1 + Z ) D(0).$$ Here $Z$ has the so-called [*Dickman distribution*]{} (see e.g. [@pw1 §3]), which has finite moments of all orders. Hence ${{\mathbb E}}[ | \mu'(t)|^p \mid {{\cal X}}_3 (0) ]$ is bounded independently of $t$, so, for any $p \geq 1$, $(\mu'(t))^p$ is uniformly integrable, and hence $\lim_{t \to \infty} {{\mathbb E}}[ ( \mu' (t) )^k \mid {{\cal X}}_3 (0) ] = {{\mathbb E}}[ \xi_3^k \mid {{\cal X}}_3 (0) ]$ for any $k \in {{\mathbb N}}$.
We now want to compute the moments of $\xi_3$; by the previous argument, we can first work with the moments of $\mu'(t)$. Note that, from (\[recur\]), $$\begin{aligned}
&~~ {{\mathbb E}}[ (\mu'(t+1) - \mu'(t))^{ k} \mid {{\cal X}}_3 (t) ] \\
& = \frac{1+(-1)^k}{3} D(t)^{ k} {{\mathbb E}}\left[ \left( \frac{1+V_{t+1}}{2} \right)^{ k} \right]
+ \frac{1+(-1)^k}{6} D(t)^{ k} {{\mathbb E}}\left[ \left( \frac{1+V_{t+1}}{4} \right)^{ k} \right] \\
& = \frac{1+(-1)^k}{6} ( 2^{1- k} + 2^{-2k} ) \frac{2^{ k+1} -1}{ k+1} D(t)^{ k} ,\end{aligned}$$ using the fact that ${{\mathbb E}}[ (1+V_{t+1})^{ k} ] = \frac{2^{ k+1} -1}{ k+1}$. In particular, ${{\mathbb E}}[ ( \mu'(t+1) - \mu'(t) )^k \mid {{\cal X}}_3 (t) ] =0$ for odd $k$, so ${{\mathbb E}}[ \mu'(t) \mid {{\cal X}}_3 (0) ] = \mu'(0)$, and hence ${{\mathbb E}}[ \xi_3 ] = \lim_{t \to \infty} {{\mathbb E}}[ \mu'(t) ] = {{\mathbb E}}[ \mu'(0) ]$, giving the first statement in (\[xi2\]). In addition, $$\begin{aligned}
& ~~ {{\mathbb E}}[ (\mu'(t+1))^2 - (\mu'(t))^{ 2} \mid {{\cal X}}_3 (t) ] \\
& = 2 \mu'(t) {{\mathbb E}}[ \mu'(t+1) - \mu'(t) \mid {{\cal X}}_3 (t) ] + {{\mathbb E}}[ (\mu'(t+1) - \mu'(t))^{ 2} \mid {{\cal X}}_3 (t) ] \\
& = \frac{7}{16} D(t)^{ 2} .\end{aligned}$$ Hence $$\begin{aligned}
{{\mathbb E}}[ ( \mu'(t) )^{ 2} - (\mu'(0))^{ 2} \mid {{\cal X}}_3 (0) ]
& = \sum_{s=0}^{t-1} {{\mathbb E}}[ ( \mu'(s+1))^2 - (\mu'(s))^2 \mid {{\cal X}}_3 (0) ] \\
& = \frac{7}{16} \sum_{s=0}^{t-1} m_{ 2} (s) \\
& \to \frac{7}{16} \sum_{s=0}^\infty 4^{-s} D(0)^2 ,\end{aligned}$$ as $t \to \infty$, and the limit evaluates to $\frac{7}{12} D(0)^2$, so that ${{\mathbb E}}[ \xi_3^2 ] = \lim_{t \to \infty} {{\mathbb E}}[ (\mu'(t))^2 ] = {{\mathbb E}}[ (\mu'(0))^2 ] + \frac{7}{12} {{\mathbb E}}[ D(0) ^2 ]$, giving the second statement in (\[xi2\]).
Write $L ( \mu'(0), D(0) ) = \xi_3 ({{\cal X}}_3 (0))$ emphasizing the dependence on the initial configuration through $\mu'(0)$ and $D(0)$. Then by translation and scaling properties $$\label{scaling} L ( \mu'(0) , D(0) ) {{\stackrel{d}{~=~}}}\mu'(0) + D(0) L ( 0, 1) .$$ So we work with $L:= L(0,1)$ (which has the initial core points at $\pm \frac{1}{2}$).
We will derive a fixed-point equation for $L$. The argument is closely related to that for (\[recur\]). Conditioning on the first replacement and using the transformation relation (\[scaling\]), we obtain (\[Lfixed\]). From (\[Lfixed\]) we see that $|L|$ is stochastically dominated by $1 + U | L |$; iterating this, similarly to the argument involving the Dickman distribution above, we obtain that $|L|$ is stochastically dominated by $1+Z$, where $Z$ has the Dickman distribution, which is determined by its moments. Hence (\[Lfixed\]) determines a unique distribution for $L$ with ${{\mathbb E}}[ |L|^k ] < \infty$ for all $k$.
Writing (\[Lfixed\]) in functional form $L {{\stackrel{d}{~=~}}}\Psi ( L)$, we see that by symmetry of the form of $\Psi$, also $\Psi ( L) {{\stackrel{d}{~=~}}}- \Psi ( - L)$. Hence $-L {{\stackrel{d}{~=~}}}- \Psi (L) {{\stackrel{d}{~=~}}}\Psi (-L)$, so $-L$ satisfies the same distributional fixed-point equation as does $L$. Hence $L {{\stackrel{d}{~=~}}}-L$.
Writing $\theta_k := {{\mathbb E}}[ L^k]$, which we know is finite, we get $$\begin{aligned}
\theta_k & = \frac{1}{3} \sum_{j=0}^k {\binom k j} ( 1 + (-1)^j ) \theta_{k-j} {{\mathbb E}}\left[ \left( \frac{1+U}{2} \right)^j U^{k-j} \right] \\
& ~~ {} + \frac{1}{6} \sum_{j=0}^k \binom k j ( 1 + (-1)^j ) \theta_{k-j} {{\mathbb E}}\left[ \left( \frac{2-U}{4} \right)^j \left( \frac{U}{2}\right)^{k-j} \right] .\end{aligned}$$ Here $$\begin{aligned}
{{\mathbb E}}\left[ \left( \frac{1+U}{2} \right)^j U^{k-j} \right] & = 2^{-j} \sum_{\ell = 0}^j {\binom j \ell} \frac{1}{k-\ell +1 } =: a(k,j) ; \\
{{\mathbb E}}\left[ \left( \frac{2-U}{4} \right)^j \left( \frac{U}{2}\right)^{k-j} \right] & = 2^{-k} \sum_{\ell = 0}^j \binom j \ell \frac{(-1/2)^{j-\ell}}{k-\ell +1 } =: b(k,j).\end{aligned}$$ So we get $$\label{thetak}
\theta_k = \frac{1}{3} \sum_{j \textrm{ even}, \, j \leq k } \binom k j \theta_{k-j} ( 2 a (k,j) + b (k,j) ) .$$ In particular, as can be seen either directly by symmetry or by an inductive argument using (\[thetak\]), $\theta_k = 0$ for odd $k$. For even $k$, one can use (\[thetak\]) recursively to find $\theta_k$, obtaining for example the values quoted in the proposition.
Note that, by (\[scaling\]), ${{\mathbb E}}[ \xi_3^3 ] = {{\mathbb E}}[ ( \mu'(0) + L D(0) )^3 ]$, which, on expansion, gives the final statement in (\[xi2\]). The first 3 moments in the case of the uniform initial condition follow from (\[xi2\]) and Lemma \[initlem\]. For the initial condition with points $\frac{1}{4}, \frac{1}{2}, \frac{3}{4}$, we have $D(0) = \frac{1}{4}$ and $\mu'(0) = \chi \frac38 + (1-\chi) \frac58 = \frac58- \frac \chi 4$, where $\chi$ is the tie-breaker random variable taking values $0$ or $1$ each with probability $\frac{1}{2}$. It follows that ${{\mathbb E}}[ \mu'(0)^k] = \frac{1}{2} 8^{-k} (3^k + 5^k)$. Then, using (\[scaling\]), $${{\mathbb E}}[ \xi_3^k ] = {{\mathbb E}}[ ( \mu'(0) + (L/4))^k ] = \frac{1}{2} 8^{-k} \sum_{j=0}^k {\binom k j} 2^j \theta_j (3^{k-j}+5^{k-j}) .$$ We can now compute the four moments given in the proposition.
Appendix 1: Uniform spacings {#appendix}
============================
In this appendix we collect some results about uniform spacings which allow us to obtain distributional results about our uniform initial configurations. The basic results that we build on here can be found in Section 4.2 of [@pwong]; see the references therein for a fuller treatment of the theory of spacings.
Let $U_1, U_2, \ldots, U_n$ be independent $U[0,1]$ points. Denote the corresponding increasing order statistics $U_{[1]} \leq \cdots \leq U_{[n]}$, and define the induced [*spacings*]{} by $S_{n,i} := U_{[i]} - U_{[i-1]}$, $i=1,\ldots,n+1$, with the conventions $U_{[0]} := 0$ and $U_{[n+1]} := 1$. We collect some basic facts about the $S_{n,i}$. The spacings are exchangeable, and any $n$-vector, such as $(S_{n,1}, \ldots, S_{n,n})$, has the uniform density on the simplex $\Delta_n := \{ (x_1, \ldots, x_n) \in [0,1]^n : \sum_{i=1}^n x_i \leq 1 \}$.
We need some joint properties of up to 3 spacings. Any 3 spacings have density $f(x_1,x_2,x_3) = n (n-1)(n-2) (1-x_1-x_2-x_3)^{n-3}$ on $\Delta_3$. We will make use of the facts $$\begin{aligned}
\label{min1}
\min \{ S_{n,1} ,S_{n,2} \} & {{\stackrel{d}{~=~}}}\tfrac{1}{2} S_{n,1} , ~~~ (n \geq 1) , \\
( S_{n,1} , \min \{ S_{n,2} ,S_{n,3} \} ) & {{\stackrel{d}{~=~}}}( S_{n,1} , \tfrac{1}{2} S_{n,2} ) , ~~~ (n \geq 2) ;
\label{min2}\end{aligned}$$ see for example Lemma 4.1 of [@pwong]. Finally, for any $n \geq 1$ and $\alpha \geq 0, \beta \geq 0$, $$\label{multispace}
{{\mathbb E}}[ S_{n,1}^\alpha S_{n,2}^\beta ] = \frac{\Gamma (n+1) \Gamma (\alpha +1 ) \Gamma (\beta + 1) }{\Gamma (n + 1 +\alpha + \beta )} .$$ In particular ${{\mathbb E}}[ S_{n,1}^k ] = \frac{n!k!}{(n+k)!}$ for $k \in {{\mathbb N}}$.
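These facts are also easy to sanity-check by simulation; the following short Python sketch (not part of the proofs) compares sample averages with (\[min1\]) and (\[multispace\]).

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, trials = 5, 200_000
pts = np.sort(rng.random((trials, n)), axis=1)
# the n+1 spacings induced by n independent U[0,1] points
S = np.diff(np.concatenate([np.zeros((trials, 1)), pts, np.ones((trials, 1))], axis=1), axis=1)

# (min1): min{S_{n,1}, S_{n,2}} has the law of S_{n,1}/2 (only the means are compared here)
print(np.mean(np.minimum(S[:, 0], S[:, 1])), np.mean(S[:, 0]) / 2)

# (multispace) with alpha = 2, beta = 1: E[S_{n,1}^2 S_{n,2}] = n! 2! 1! / (n+3)!
print(np.mean(S[:, 0]**2 * S[:, 1]),
      factorial(n) * factorial(2) / factorial(n + 3))

# E[S_{n,1}^k] = n! k! / (n+k)!
k = 3
print(np.mean(S[:, 0]**k), factorial(n) * factorial(k) / factorial(n + k))
```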
Our main application in the present paper of the results on spacings collected above is to obtain the following result, which we use in Section \[sec:three\].
\[initlem\] Let $d=1$ and $N=3$. Suppose that ${{\cal X}}_3(0)$ consists of $3$ independent $U[0,1]$ points. Then $$( \mu_2 ({{\cal X}}'_3 (0) ) , D_2 ({{\cal X}}'_3(0)) ) {{\stackrel{d}{~=~}}}( (S_1 + \tfrac{1}{4} S_2 ) \zeta + (1 - S_1 - \tfrac{1}{4} S_2 ) (1 -\zeta) , \tfrac{1}{2} S_2 ) ,$$ where $\zeta$ is a Bernoulli random variable with ${{\mathbb P}}[ \zeta = 0] = {{\mathbb P}}[ \zeta = 1 ] = 1/2$. For any $k \in {{\mathbb Z}_+}$, $$\begin{aligned}
\label{momd}
{{\mathbb E}}[ ( D_2 ({{\cal X}}'_3(0)) )^k ] & = 2^{-k} \frac{6}{(k+1)(k+2)(k+3)} , \\
\label{mommu}
{{\mathbb E}}[ ( \mu_2 ({{\cal X}}'_3 (0) ) )^k ] & = \frac{4 (3k - 5 + (3^{k+3} - 1) 4^{-(k+1)} )}{(k+1)(k+2)(k+3)} .
\end{aligned}$$ So, for example, the first 3 moments of $D_2 ({{\cal X}}'_3(0))$ are $\frac{1}{8}$, $\frac{1}{40}$, and $\frac{1}{160}$, while the first 3 moments of $\mu_2 ({{\cal X}}'_3 (0) )$ are $\frac{1}{2}$, $\frac{51}{160}$, and $\frac{73}{320}$. Finally, ${{\mathbb E}}[ \mu_2 ({{\cal X}}'_3 (0) ) ( D_2 ({{\cal X}}'_3(0)) )^2 ] = \frac{1}{80}$.
The $3$ points of ${{\cal X}}_3(0)$ induce a partition of the interval $[0,1]$ into uniform spacings $S_{1}, S_{2}, S_{3}, S_{4}$, enumerated left to right (for this proof we suppress the first index in the notation above). For ease of notation, write $D:= D_2 ({{\cal X}}'_3(0))$ and $\mu := \mu_2 ({{\cal X}}'_3 (0) )$ for the duration of this proof. Then $D = \min \{ S_2, S_3 \} {{\stackrel{d}{~=~}}}S_1 / 2$, by (\[min1\]). Moreover, $\min \{S_2, S_3\}$ is equally likely to be either $S_2$ or $S_3$. In the former case, $\mu = S_1 + \frac{1}{2} \min \{S_2, S_3\}$, while in the latter case $\mu = 1- S_4 - \frac{1}{2} \min \{S_2, S_3\}$. Using (\[min2\]), we obtain the following characterization of the joint distribution of $\mu$ and $D$. $$\label{mud}
(\mu, D) {{\stackrel{d}{~=~}}}\begin{cases}
(S_1 + \frac{1}{4} S_2 , \frac{1}{2} S_2 ) & \textrm{ with probability } \frac{1}{2} \\
(1 - S_1 - \frac{1}{4} S_2 , \frac{1}{2} S_2 ) & \textrm{ with probability } \frac{1}{2} .\end{cases}$$ In particular, ${{\mathbb E}}[ D^k ] = 2^{-k} {{\mathbb E}}[ S_{1}^k ]$, which gives (\[momd\]) by the $n=3$, $\alpha =k$, $\beta= 0$ case of (\[multispace\]).
For the moments of $\mu$, we have from (\[mud\]) that $\mu$ has the distribution of $W: = S_1 + \frac{1}{4} S_2$ with probability $1/2$ or $1- W$ with probability $1/2$. So we have $$\begin{aligned}
{{\mathbb E}}[ \mu^ k ] & = \frac{1}{2} {{\mathbb E}}[ W^k ] + \frac{1}{2} {{\mathbb E}}[ (1-W)^k ]
= \frac{1}{2} w_k + \frac{1}{2} \sum_{j=0}^{ k } {\binom k j} (-1)^j w_j ,\end{aligned}$$ where $w_k := {{\mathbb E}}[ W^k]$. Since $w_k = {{\mathbb E}}[ (S_1 + \frac{1}{4} S_2)^k ]$, we compute $$w_k = \sum_{j=0}^k {\binom k j} 4^{-j} {{\mathbb E}}[ S_1^{k-j} S_2 ^{j } ] = 6 \frac{k!}{(k+3)!} \sum_{j=0}^k 4^{-j},$$ by the $n=3$, $\alpha = k-j$, $\beta = j$, case of (\[multispace\]). Thus we obtain $$w_k = \frac{8 (1-4^{-(k+1)})}{(k+1)(k+2)(k+3) } .$$ It follows that $${{\mathbb E}}[ \mu^k ] = \frac{1}{2} w_k + 4 \sum_{j=0}^k \frac{k!}{(j+3)! (k-j)!} (-1)^j - \sum_{j=0}^k \frac{k!}{(j+3)! (k-j)!} ( - 1/4)^j .$$ We deduce (\[mommu\]), after simplification, from the claim that, for any $z \in {{\mathbb R}}$, $$\begin{aligned}
\label{3sum}
S(z) & := \sum_{j=0}^k \frac{k!}{(j+3)!(k-j)!} (-z)^j \nonumber\\
& = \frac{k!}{2 z^3 (k+3)!} \left[ z^2 (k+2) (k+3) + 2 -2z (k+3) - 2 (1-z)^{k+3} \right] .\end{aligned}$$ Thus it remains to verify (\[3sum\]). To this end, note that $$\begin{aligned}
S(z) & = \frac{k!}{(k+3)!} \sum_{j=0}^k {\binom {k+3} {j+3}} (-z)^ j \\
& = \frac{k!}{(k+3)!} \left[ -z^{-3} \sum_{j=0}^{k+3} {\binom {k+3} {j} } (-z)^j + z^{-1} {\binom {k+3} 2} - z^{-2} {\binom {k+3} 1} + z^{-3} {\binom {k+3} 0 } \right] \\
& = \frac{k!}{z^3 (k+3)!} \left[ -(1-z)^{k+3} + \frac{1}{2} z^2 (k+2) (k+3) - z (k+3) + 1 \right] ,\end{aligned}$$ which gives the claim (\[3sum\]).
For the final statement in the lemma, we have from (\[mud\]) that $${{\mathbb E}}[ \mu D^2] = \frac{1}{2} {{\mathbb E}}[ (S_1 + \tfrac{1}{4} S_2 ) (\tfrac{1}{4} S_2^2 ) ] + \frac{1}{2} {{\mathbb E}}[ (1 - S_1 - \tfrac{1}{4} S_2 ) (\tfrac{1}{4} S_2^2 ) ]
= \frac{1}{8} {{\mathbb E}}[ S_2^2 ] = \frac{1}{80} ,$$ by (\[multispace\]).
We can also obtain explicit expressions for the densities of $D$ and $\mu$. Since $D {{\stackrel{d}{~=~}}}S_1/2$, the density of $D$ is $f_D(r) = 6 (1-2r)^2$ for $r \in [0,1/2]$. In addition, $\mu$ has density $f_\mu$ given by $$\label{mudensity}
f_\mu (r ) = \begin{cases}
4 r [ 3 (1-r) - 4r ] & \textrm{ if } r \in [0,1/4] \\
2 - 4 r (1-r) & \textrm{ if } r \in [1/4,3/4] \\
4 (1-r) [ 3 r - 4(1-r) ] & \textrm{ if } r \in [3/4,1]
\end{cases}.$$ Indeed, with the representation of $\mu$ as either $W$ or $1-W$ with probability $1/2$ of each, we have ${{\mathbb P}}[ \mu \leq r ] = \frac{1}{2} {{\mathbb P}}[ W \leq r ] + \frac{1}{2} ( 1 - {{\mathbb P}}[ W < 1-r ] )$. Assuming that $W$ has a density $f_W$ (which indeed it has, as we will show below), we get $$\label{den1}
f_\mu (r) = \frac{1}{2} f_W (r) + \frac{1}{2} f_W (1 -r) .$$ Using the fact that $W {{\stackrel{d}{~=~}}}S_1 + \frac{1}{4}S_2$, we can use the joint distribution of $(S_1, S_2)$ to calculate $$\begin{aligned}
{{\mathbb P}}[ W \leq r ] & = \int_0^1 {{\mathrm d}}x_1 \int_0^{1-x_1} {{\mathrm d}}x_2 6 (1-x_1 -x_2 ) {{\mathbf 1}}\{ x_1 + \frac{1}{4} x_2 \leq r \} \\
& = \int_0^r {{\mathrm d}}x_1 \int_0^{(4(r-x_1)) \wedge (1-x_1)} {{\mathrm d}}x_2 6 (1-x_1 -x_2 ) .\end{aligned}$$ After some routine calculation, we then obtain $${{\mathbb P}}[ W \leq r] = \begin{cases}
1 - \frac{4}{3} (1-r)^3 + \frac{1}{3} (1-4r)^3 & \textrm{ if } r \in [0,1/4] \\
1 - \frac{4}{3} (1-r)^3 & \textrm{ if } r \in [1/4,1] .\end{cases}$$ Hence $W$ has density $f_W$ given by $$f_W (r) = \begin{cases}
4 (1-r)^2 - 4 (1-4r)^2 & \textrm{ if } r \in [0,1/4] \\
4 (1-r)^2 & \textrm{ if } r \in [1/4,1] .\end{cases}$$ Then (\[mudensity\]) follows from (\[den1\]).
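The moments in Lemma \[initlem\] can be cross-checked numerically via the spacing representation used in the proof; the following Monte Carlo sketch is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 500_000
pts = np.sort(rng.random((trials, 3)), axis=1)
S = np.diff(np.concatenate([np.zeros((trials, 1)), pts, np.ones((trials, 1))], axis=1), axis=1)
S1, S2, S3, S4 = S.T

D = np.minimum(S2, S3)
# mu = S1 + D/2 if the minimum is S2, and 1 - S4 - D/2 if it is S3 (ties have probability zero)
mu = np.where(S2 <= S3, S1 + D / 2, 1 - S4 - D / 2)

print([np.mean(D**k) for k in (1, 2, 3)])   # approx 1/8, 1/40, 1/160
print([np.mean(mu**k) for k in (1, 2, 3)])  # approx 1/2, 51/160, 73/320
print(np.mean(mu * D**2))                   # approx 1/80
```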
Appendix 2: Continuity of random variables {#appendix2}
==========================================
In this appendix we give some results that will allow us to deduce the absolute continuity of certain distributions specified as solutions to fixed-point equations: specifically, we use these results in the proof of Proposition \[Lcontinuous\]. The results in this section may well be known, but we were unable to find a reference for them in a form directly suitable for our application, and so we include the (short) proofs.
\[lemconts1\] Let $X$ and $Y$ be independent random variables such that $X$ has an absolutely continuous distribution. Then for any $a\ne 0$ we have ${{\mathbb P}}[ XY=a ] =0$. Moreover, if ${{\mathbb P}}[ Y=0 ] =0$, then $XY$ is an absolutely continuous random variable.
For the moment assume that ${{\mathbb P}}[X<0]$, ${{\mathbb P}}[X>0]$, ${{\mathbb P}}[Y<0]$, and ${{\mathbb P}}[Y>0]$ are all positive. Take some $0 < c< d$. Then $$\begin{aligned}
{{\mathbb P}}[ XY\in (c,d) ] & = {{\mathbb P}}[ \log(X)+\log(Y)\in (\log c,\log d) \mid X>0, \, Y>0 ]
{{\mathbb P}}[ X>0] {{\mathbb P}}[ Y>0]
\\
& ~~{} +
{{\mathbb P}}[ \log(-X)+\log(-Y) \in (\log c,\log d) \mid X<0, \, Y<0 ]
{{\mathbb P}}[ X<0] {{\mathbb P}}[ Y<0] .\end{aligned}$$ Note that conditioning $X$ on the event $X>0$ (or $X<0$) preserves the continuity of $X$ and the independence of $X$ and $Y$. Then since the sum of two independent random variables at least one of which absolutely continuous is also absolutely continuous (see [@moran Theorem 5.9, p. 230]) we have $${{\mathbb P}}[ \log(X)+\log(Y)\in (\log c,\log d) \mid X>0, \, Y>0 ]
= \int_{\log c}^{\log_d} f_+(x) {{\mathrm d}}x,$$ and $${{\mathbb P}}[ \log(-X)+\log(-Y)\in (\log c,\log d) \mid X<0, \, Y<0 ]
=\int_{\log c}^{\log d} f_-(x) {{\mathrm d}}x$$ for suitable probability densities $f_+$ and $f_-$. After the substitution $u = {{\mathrm{e}}}^x$, this yields $$\begin{aligned}
{{\mathbb P}}[ XY\in (c,d) ] &= \int_{c}^{d}
\frac{{{\mathbb P}}[ X>0] {{\mathbb P}}[Y>0 ] f_+(\log u)+{{\mathbb P}}[X<0 ]{{\mathbb P}}[Y<0 ]f_-(\log u)}u {{\mathrm d}}u.\end{aligned}$$ This expression is also valid if some of the probabilities for $X$ and $Y$ in the numerator of the integrand are zero. Therefore, we have, for any $0 < c < d$, $$\begin{aligned}
\label{fu}
{{\mathbb P}}[ XY\in (c,d) ] &= \int_{c}^{d} f(u) {{\mathrm d}}u,\end{aligned}$$ for some function $f(u)$ defined for $u>0$. A similar argument applies to the case $c<d < 0$; then (\[fu\]) is valid for any $c < d <0$ as well, extending $f(u)$ for strictly negative $u$. In particular, it follows that ${{\mathbb P}}[ XY=a ] =0$ for $a\ne 0$.
Now if ${{\mathbb P}}[Y=0 ]=0$, then ${{\mathbb P}}[ XY \neq 0 ] =1$. Then we can set $f(0)=0$ so that (\[fu\]) holds for *all* $c,d \in {{\mathbb R}}$.
\[lemconts2\] Suppose that a random variable $L$ satisfies the distributional equation $$L {{\stackrel{d}{~=~}}}\begin{cases}
Z_1 & \textrm{with~probability~} p_1 \\
\vdots & \\
Z_n & \textrm{with~probability~} p_n ,
\end{cases}$$ where $n\in {{\mathbb N}}$, $\sum_{i=1}^n p_i=1$, $p_i>0$, and each $Z_i$ is an absolutely continuous random variable. Then $L$ is absolutely continuous.
Suppose $Z_i$ has a density $f_i$. Then for any $-\infty\le a< b\le +\infty$ we have $$\begin{aligned}
{{\mathbb P}}[ L\in (a,b) ]
=\sum_{i=1}^n p_i {{\mathbb P}}[ Z_i \in (a,b) ]
=\sum_{i=1}^n p_i \int_a^b f_i(x) {{\mathrm d}}x
=\int_a^b \left[\sum_{i=1}^n p_i f_i(x) \right] {{\mathrm d}}x,\end{aligned}$$ which yields the statement of the lemma.
Acknowledgements {#acknowledgements .unnumbered}
================
Parts of this work were done at the University of Strathclyde, where the third author was also employed, during a couple of visits by the second author, who is grateful for the hospitality of that institution.
[99]{}
C. Benassi and F. Malagoli, The sum of squared distances under a diameter constraint, in arbitrary dimension, [*Arch. Math.*]{} [**90**]{} (2008) 471–480.
E. De Giorgi and S. Reimann, The $\alpha$-beauty contest: Choosing numbers, thinking intervals, [*Games Econom. Behav.*]{} [**64**]{} (2008) 470–486.
P. Erdős, On the smoothness properties of a family of Bernoulli convolutions, [*Amer. J. Math.*]{} [**62**]{} (1940) 180–186.
M. Grinfeld, P.A. Knight, and A.R. Wade, Rank-driven Markov processes, [*J. Stat. Phys.*]{} [**146**]{} (2012) 378–407.
B.D. Hughes, [*Random Walks and Random Environments; Volume 1: Random Walks*]{}, Clarendon Press, Oxford, 1995.
N.L. Johnson and S. Kotz, Use of moments in studies of limit distributions arising from iterated random subdivisions of an interval, [*Statist. Probab. Lett.*]{} [**24**]{} (1995) 111–119.
J.M. Keynes, [*The General Theory of Employment, Interest and Money*]{}, Macmillan, London, 1936.
P.L. Krapivsky and S. Redner, Random walk with shrinking steps, [*Amer. J. Phys.*]{} [**72**]{} (2004) 591–598.
P.A.P. Moran, [*An Introduction to Probability Theory*]{}, Clarendon Press, Oxford, 1968 (paperback ed., with corrections, 2002).
H. Moulin, [*Game Theory for the Social Sciences*]{}, 2nd ed., New York University Press, New York, 1986.
R. Pemantle, A survey of random processes with reinforcement, [*Probab. Surv.*]{} [**4**]{} (2007) 1–79.
M.D. Penrose and A.R. Wade, Random minimal directed spanning trees and Dickman-type distributions, [*Adv. in Appl. Probab.*]{} [**36**]{} (2004) 691–714.
M.D. Penrose and A.R. Wade, Limit theory for the random on-line nearest-neighbor graph, [*Random Structures Algorithms*]{} [**32**]{} (2008) 125–156.
F. Pillichshammer, On the sum of squared distances in the Euclidean plane, [*Arch. Math.*]{} [**74**]{} (2000) 472–480.
H.S. Witsenhausen, On the maximum of the sum of squared distances under a diameter constraint, [*Amer. Math. Monthly*]{} [**81**]{} (1974) 1100–1101.
[^1]: Department of Mathematics and Statistics, University of Strathclyde, 26 Richmond Street, Glasgow G1 1XH, UK.
[^2]: Centre for Mathematical Sciences, Lund University, Box 118 SE-22100, Lund, Sweden, and Department of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW, UK.
[^3]: Department of Mathematical Sciences, Durham University, South Road, Durham DH1 3LE, UK.
---
abstract: 'We study the space of periodic solutions of the elliptic $\sinh$-Gordon equation by means of spectral data consisting of a Riemann surface $Y$ and a divisor $D$. We show that the space $M_g^{\mathbf{p}}$ of real periodic finite type solutions with fixed period $\mathbf{p}$ can be considered as a completely integrable system $(M_g^{\mathbf{p}},\Omega,H_2)$ with a symplectic form $\Omega$ and a series of commuting Hamiltonians $(H_n)_{n \in \NN}$. In particular we relate the gradients of these Hamiltonians to the Jacobi fields $(\omega_n)_{n\in \NN_0}$ from the Pinkall-Sterling iteration. Moreover, a connection between the symplectic form $\Omega$ and Serre duality is established.'
address: 'Institut für Mathematik, Universität Mannheim, 68131 Mannheim, Germany.'
author:
- Markus Knopf
bibliography:
- 'diss\_bibliography.bib'
title: 'Periodic solutions of the sinh-Gordon equation and integrable systems'
---
Introduction
============
The elliptic *sinh-Gordon equation* is given by $$\label{eq_sinh}
\Delta u + 2\sinh(2u) = 0,$$ where $\Delta$ is the Laplacian of $\RR^2$ with respect to the Euclidean metric and $u:\RR^2 \to \RR$ is a twice partially differentiable function which we assume to be real.\
The $\sinh$-Gordon equation arises in the context of particular surfaces of constant mean curvature (CMC) since the function $u$ can be extracted from the conformal factor $e^{2u}$ of a conformally parameterized CMC surface. The study of CMC tori in $3$-dimensional space forms was strongly influenced by algebro-geometric methods (as described in [@Bobenko_Its_Matveev]) that led to a complete classification by Pinkall and Sterling [@Pinkall_Sterling] for CMC-tori in $\RR^3$. Moreover, Bobenko [@bob3; @bob2] gave explicit formulas for CMC tori in $\RR^3$, $\SS^3$ and $\HH^3$ in terms of theta-functions and introduced a description of such tori by means of *spectral data*. We also refer the interested reader to [@bob1; @bob4]. Every CMC torus yields a doubly periodic solution $u:\RR^2 \to \RR$ of the $\sinh$-Gordon equation. With the help of differential geometric considerations one can associate to every CMC torus a hyperelliptic Riemann surface $Y$, the so-called *spectral curve*, and a holomorphic line bundle $E$ on $Y$ (the so-called *eigenline bundle*) that is represented by a certain divisor $D$. Hitchin [@Hitchin_Harmonic], and Pinkall and Sterling [@Pinkall_Sterling] independently proved that all doubly periodic solutions of the $\sinh$-Gordon equation correspond to spectral curves of finite genus. We say that solutions of (\[eq\_sinh\]) that correspond to spectral curves of finite genus are of *finite type*.\
In the present setting we will relax the condition on the periodicity and demand that $u$ is only simply periodic with a fixed period. After rotating the domain of definition we can assume that this period is real. This enables us to introduce simply periodic *Cauchy data* with fixed period $\mathbf{p} \in \RR$ consisting of a pair $(u,u_y) \in C^{\infty}(\RR/\mathbf{p}\ZZ) \times C^{\infty}(\RR/\mathbf{p}\ZZ)$. Moreover, we demand that the corresponding solution $u$ of the $\sinh$-Gordon equation is of finite type.\
This paper summarizes the author’s PhD thesis [@Knopf_phd]. Its main goal is to identify the $\sinh$-Gordon equation as a completely integrable system (compare with [@Pedit_Schmitt]) and illustrate its features in the finite type situation.
Conformal CMC immersions into $\SS^3$
=====================================
The Lie groups $SL(2,\CC)$ and $SU(2)$
--------------------------------------
Let us consider the Lie group $SL(2,\CC) := \{A \in M_{2\times 2}(\CC) \left| \right. \det(A) = 1 \}$. The Lie algebra $\mathfrak{sl}_2(\CC) := \{B \in M_{2\times 2}(\CC) \left| \right. \trace(B) = 0\}$ of $SL(2,\CC)$ is spanned by the matrices $\epsilon_+,\epsilon_-, \epsilon$ with $$\epsilon_+ = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},\;\; \epsilon_- = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},\;\; \epsilon = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}.$$
It will also be convenient to identify $\SS^3$ with the Lie group $SU(2) = \{A \in M_{2\times 2}(\CC) \left| \right. \det(A) = 1, \; \bar{A}^t = A^{-1} \}$. The Lie algebra of $SU(2)$ is denoted by $\mathfrak{su}(2)$ and a direct computation shows that $\mathfrak{su}(2) = \{B \in M_{2\times 2}(\CC) \left| \right. \trace(B) = 0, \; \bar{B}^t = -B \} \simeq \RR^3$.
Extended frames
---------------
We start with the following version of a result by Bobenko [@bob2] (cf. [@Kilian_Schmidt_infinitesimal], Theorem 1.1).
\[Alpha\_mit\_Lambda\] Let $u: \CC \to \RR$ and $Q: \CC \to \CC$ be smooth functions and define $$\alpha_{\lambda} = \frac{1}{2}
\begin{pmatrix}
u_z dz - u_{\bar{z}} d\bar{z} & i\lambda^{-1}e^{u}dz + i\overline{Q}e^{-u} d\bar{z} \\
iQe^{-u} dz + i\lambda e^u d\bar{z} & -u_z dz + u_{\bar{z}}d\bar{z}
\end{pmatrix}.$$ Then $2d\alpha_{\lambda} + [\alpha_{\lambda} \wedge \alpha_{\lambda}] = 0$ if and only if $Q$ is holomorphic, i.e. $Q_{\bar{z}} = 0$, and $u$ is a solution of the reduced Gauss equation $$2u_{z\bar{z}} + \tfrac{1}{2}\left(e^{2u} - Q\overline{Q}\,e^{-2u}\right) = 0.$$ For any solution $u$ of the above equation and corresponding extended frame $F_\lambda$, and $\lambda_0,\lambda_1 \in \SS^1$, $\lambda_0 \neq \lambda_1$, i.e. $\lambda_k=e^{it_k}$, the map defined by the *Sym-Bobenko-formula* $$f = F_{\lambda_1}F_{\lambda_0}^{-1}$$ is a conformal immersion $f: \CC \to SU(2) \simeq \SS^3$ with constant mean curvature $$H = i\,\frac{\lambda_0+\lambda_1}{\lambda_0-\lambda_1} = \cot\!\left(\tfrac{t_0 - t_1}{2}\right),$$ conformal factor $v = e^u / \sqrt{H^2+1}$, and Hopf differential $\widetilde{Q}dz^2$ with $\widetilde{Q} = -\frac{i}{4}(\lambda_1^{-1} - \lambda_0^{-1})Q$.
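The computation behind Theorem \[Alpha\_mit\_Lambda\] can be checked with a short symbolic computation. The Python/`sympy` sketch below assumes that the zero-curvature condition unpacks componentwise as $\partial_z B - \partial_{\bar{z}} A + [A,B] = 0$ for $\alpha_{\lambda} = A\,dz + B\,d\bar{z}$ (a choice of convention, not part of the theorem), and treats $Q$ and $\overline{Q}$ as independent constants.

```python
import sympy as sp

z, zb, lam = sp.symbols('z zbar lambda')
Q, Qb = sp.symbols('Q Qbar')                 # Q and its conjugate, treated as independent constants
u = sp.Function('u')(z, zb)

# alpha_lambda = A dz + B dzbar, copied from the theorem above
A = sp.Rational(1, 2) * sp.Matrix([[sp.diff(u, z),       sp.I*sp.exp(u)/lam],
                                   [sp.I*Q*sp.exp(-u),  -sp.diff(u, z)]])
B = sp.Rational(1, 2) * sp.Matrix([[-sp.diff(u, zb),     sp.I*Qb*sp.exp(-u)],
                                   [sp.I*lam*sp.exp(u),  sp.diff(u, zb)]])

# componentwise zero-curvature condition (assumed convention)
F = sp.diff(B, z) - sp.diff(A, zb) + A*B - B*A

gauss = 2*sp.diff(u, z, zb) + sp.Rational(1, 2)*(sp.exp(2*u) - Q*Qb*sp.exp(-2*u))
print(sp.simplify(2*F[0, 0] + gauss))                # 0: the (1,1) entry is the reduced Gauss equation
print(sp.simplify(F[0, 1]), sp.simplify(F[1, 0]))    # 0, 0: automatic for constant Q
```

The first output being zero shows that the $(1,1)$ entry of the flatness condition is exactly the reduced Gauss equation; the off-diagonal entries vanish identically for constant $Q$.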
\[assumption\_Q\_const\] Let us assume that the Hopf differential $Q$ is constant with $|Q| = 1$.
The monodromy
-------------
The central object for the following considerations is the monodromy $M_{\lambda}$ of a frame $F_{\lambda}$.
Let $F_{\lambda}$ be an extended frame and assume that $\alpha_{\lambda} = F_{\lambda}^{-1}dF_{\lambda}$ has period $\mathbf{p} \in \CC$, i.e. $\alpha_{\lambda}(z+\mathbf{p}) = \alpha_{\lambda}(z)$. Then the **monodromy** of the frame $F_{\lambda}$ with respect to the period $\mathbf{p}$ is given by $$M_{\lambda}^{\mathbf{p}} := F_{\lambda}(z+\mathbf{p})F_{\lambda}^{-1}(z).$$
Note that we have $d M_{\lambda}^{\mathbf{p}} = 0$ and thus $M_{\lambda}^{\mathbf{p}}$ does not depend on $z$. Setting $F_{\lambda}(0) = \unity$ we get $$M_{\lambda} := M_{\lambda}^{\mathbf{p}} = F_{\lambda}(\mathbf{p})F_{\lambda}^{-1}(0) = F_{\lambda}(\mathbf{p}).$$
Isometric normalization
-----------------------
We can rotate the coordinate $z$ by a map $z \mapsto w(z) = e^{i\varphi}z$ in such a way that $\Im(\mathbf{p}) = 0$. As a consequence the extended frame $F_{\lambda}$ is multiplied with the matrix $$B_{w} = \begin{pmatrix} \delta^{-1/2} & 0 \\ 0 & \delta^{1/2} \end{pmatrix}$$ with $\delta = e^{i\varphi} \in \SS^1$. This corresponds to the isometric normalization described in [@Hauswirth_Kilian_Schmidt_1], Remark 1.5, and the corresponding gauged $\alpha_{\lambda}$ is of the form $$\label{eq_alpha}
\alpha_{\lambda} = \frac{1}{2}
\begin{pmatrix}
u_z dz - u_{\bar{z}} d\bar{z} & i\lambda^{-1}\delta e^{u}dz + i\bar{\gamma}e^{-u} d\bar{z} \\
i\gamma e^{-u} dz + i\lambda\bar{\delta} e^u d\bar{z} & -u_z dz + u_{\bar{z}}d\bar{z}
\end{pmatrix},$$ where the constant $\gamma \in \SS^1$ is given by $\gamma = \delta^{-1} Q = \bar{\delta}Q$.
The sinh-Gordon equation
------------------------
We can normalize the above parametrization with $\delta = 1$ and $|\gamma| = 1$ in (\[eq\_alpha\]) by choosing the appropriate value for $Q \in \SS^1$. Then we can consider the system $$dF_{\lambda} = F_{\lambda}\alpha_{\lambda} \;\; \text{ with } \;\; F_{\lambda}(0) = \unity.$$ Since $|\gamma| = 1$, we see that the compatibility condition $2d\alpha_{\lambda} + [\alpha_{\lambda} \wedge \alpha_{\lambda}] = 0$ from Theorem \[Alpha\_mit\_Lambda\] holds if and only if $$2u_{z\bar{z}} + \tfrac{1}{2}\left(e^{2u} - e^{-2u}\right) = 2u_{z\bar{z}} + \sinh(2u) = 0.$$ Thus the reduced Gauss equation turns into the **sinh-Gordon equation** in that situation. For the following we make an additional assumption.
\[assumption\_gamma\_1\] Let $\gamma = \delta = 1$. This yields $$\alpha_{\lambda} = \frac{1}{2}
\begin{pmatrix}
u_z dz - u_{\bar{z}} d\bar{z} & i\lambda^{-1}e^u dz + ie^{-u}d\bar{z} \\
ie^{-u}dz + i\lambda e^u d\bar{z} & -u_z dz + u_{\bar{z}}d\bar{z}
\end{pmatrix}.$$
If we evaluate $\alpha_{\lambda}$ along the vector fields $\tfrac{\partial}{\partial x}$ and $\tfrac{\partial}{\partial y}$ we obtain $$U_{\lambda} := \alpha_{\lambda}(\tfrac{\partial}{\partial x}), \;\;\;\; V_{\lambda} := \alpha_{\lambda}(\tfrac{\partial}{\partial y}).$$ These matrices will be important for the upcoming considerations. In particular $U_{\lambda}$ reads $$\label{eq_U_lambda}
U_{\lambda} = \frac{1}{2}
\begin{pmatrix}
-i u_y & i\lambda^{-1}e^u + ie^{-u} \\
i\lambda e^u + ie^{-u} & iu_y
\end{pmatrix}.$$
\[isomorphism\_U\_lambda\] Due to (\[eq\_U\_lambda\]) we can identify the tuple $(u,u_y)$ with the matrix $U_{\lambda}$.
A formal diagonalization of the monodromy $M_{\lambda}$
=======================================================
We want to diagonalize the monodromy $M_{\lambda}$ and therefore need to diagonalize $\alpha_{\lambda}$. A diagonalization for the Schrödinger-operator is done in [@Schmidt_infinite] based on a result from [@Haak_Schmidt_Schrader]. In order to adapt the techniques applied there we search for a $\lambda$-dependent periodic formal power series $\widehat{g}_{\lambda}(x)$ such that $$\widehat{\beta}_{\lambda} = \widehat{g}_{\lambda}^{-1}\alpha_{\lambda}\widehat{g}_{\lambda} + \widehat{g}_{\lambda}^{-1}\tfrac{d}{dx}\widehat{g}_{\lambda}$$ is a diagonal matrix, i.e. $$\widehat{\beta}_{\lambda}(x) = \begin{pmatrix} \sum_m (\sqrt{\lambda})^m b_m(x) & 0 \\ 0 & -\sum_m (\sqrt{\lambda})^m b_m(x) \end{pmatrix}$$ with $m \geq -1$. Since $F_{\lambda}(x) = \widehat{g}_{\lambda}(0)\widehat{G}_{\lambda}(x)\widehat{g}_{\lambda}(x)^{-1}$ (where $\widehat{G}_{\lambda}$ solves $\frac{d}{dx} \widehat{G}_{\lambda}(x) = \widehat{G}_{\lambda}(x) \widehat{\beta}_{\lambda}(x)$ with $\widehat{G}_{\lambda}(0) = \unity$) we get $$M_{\lambda} = F_{\lambda}(\mathbf{p}) = \widehat{g}_{\lambda}(0)\widehat{G}_{\lambda}(\mathbf{p})\widehat{g}_{\lambda}(\mathbf{p})^{-1} = \widehat{g}_{\lambda}(0)\widehat{G}_{\lambda}(\mathbf{p})\widehat{g}_{\lambda}(0)^{-1}$$ and due to $$\widehat{G}_{\lambda}(x) = \begin{pmatrix} \exp\left(\int_0^x \sum_m (\sqrt{\lambda})^m b_m(t)\, dt\right) & 0 \\ 0 & \exp\left(-\int_0^x \sum_m (\sqrt{\lambda})^m b_m(t)\, dt\right) \end{pmatrix}$$ we obtain for the eigenvalue $\mu$ of $M_{\lambda}$ $$\mu = \exp\left(\int_0^{\mathbf{p}} \sum_m (\sqrt{\lambda})^m b_m(t)\, dt\right) \;\;\; \text{ or equivalently } \;\;\; \ln\mu = \sum_m (\sqrt{\lambda})^m \int_0^{\mathbf{p}} b_m(t)\, dt.$$ Let us start with the following lemma that is obtained by a direct calculation.
\[lemma\_first\_gauge\] By performing a gauge transformation with $$g_{\lambda}(z) = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{\frac{u}{2}} & 0 \\ 0 & \sqrt{\lambda}e^{-\frac{u}{2}}\end{pmatrix}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$$ the frame $F_{\lambda}(z)$ is transformed into $F_{\lambda}(z)g_{\lambda}(z)$ and the map $G_{\lambda}(z) := g_{\lambda}(0)^{-1}F_{\lambda}(z)g_{\lambda}(z)$ solves $$dG_{\lambda} = G_{\lambda}\beta_{\lambda} \;\; \text{with } \beta_{\lambda} = g_{\lambda}^{-1}\alpha_{\lambda} g_{\lambda} + g_{\lambda}^{-1}dg_{\lambda} \text{ and } G_{\lambda}(0) = \unity.$$ Evaluating the form $\beta_{\lambda}$ along the vector field $\frac{\partial}{\partial x}$ and setting $y=0$ yields $\beta_{\lambda}(\frac{\partial}{\partial x}) = \tfrac{1}{\sqrt{\lambda}}\beta_{-1} + \beta_0 + \sqrt{\lambda}\beta_1$ with $$\beta_{-1} = \begin{pmatrix} \tfrac{i}{2} & 0 \\ 0 & -\tfrac{i}{2} \end{pmatrix}, \;\; \beta_0 = \begin{pmatrix} 0 & -u_z \\ -u_z & 0 \end{pmatrix}, \;\; \beta_1 = \begin{pmatrix} \frac{i}{2}\cosh(2u) & - \frac{i}{2}\sinh(2u) \\ \frac{i}{2}\sinh(2u) & -\frac{i}{2}\cosh(2u) \end{pmatrix}.$$
From the following theorem we obtain a periodic formal power series $\widetilde{g}_{\lambda}(x)$ of the form $\widetilde{g}_{\lambda}(x) = \unity + \sum_{m\geq 1} a_m(x)(\sqrt{\lambda})^m$ such that $\widehat{g}_{\lambda}(x) := g_{\lambda}(x) \widetilde{g}_{\lambda}(x)$ (with $g_{\lambda}(x)$ defined as in Lemma \[lemma\_first\_gauge\]) diagonalizes $\alpha_{\lambda}$ around $\lambda = 0$.
\[expansion\] Let $(u,u_y) \in C^{\infty}(\RR/\mathbf{p} \ZZ) \times C^{\infty}(\RR/\mathbf{p} \ZZ)$. Then there exist two series $$\begin{gathered}
a_1(x), a_2(x),\ldots \;\;\; \in \text{span}\{\epsilon_+,\epsilon_-\} \text{ of periodic off-diagonal matrices and} \\
b_1(x), b_2(x),\ldots \;\;\; \in \text{span}\{\epsilon\} \text{ of periodic diagonal matrices, respectively}\end{gathered}$$ such that $a_{m+1}(x)$ and $b_m(x)$ are differential polynomials in $u$ and $u_y$ with derivatives of order $m$ at most and the following equality for formal power series holds asymptotically around $\lambda = 0$: $$\begin{gathered}
\beta_{\lambda}(x)\left(\unity + \sum_{m\geq 1} a_m(x)(\sqrt{\lambda})^m\right) + \sum_{m\geq 1} \frac{d}{dx}a_m(x)(\sqrt{\lambda})^m = \hspace{4cm} \nonumber \\
\hspace{5cm} \left(\unity + \sum_{m\geq 1} a_m(x)(\sqrt{\lambda})^m\right) \sum_{m \geq -1} b_m(x) (\sqrt{\lambda})^m.
\tag{$*$}
\label{eq_expansion}\end{gathered}$$ Here $b_{-1}(x)$ and $b_0(x)$ are given by $b_{-1}(x) \equiv \beta_{-1} = \tfrac{i}{2}\left(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right)$ and $b_0(x) \equiv \left(\begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix}\right)$.
Since we only consider finite type solutions we can guarantee that the power series in (\[eq\_expansion\]) are indeed convergent, see [@Knopf_phd Theorem 4.35].
We start the iteration with $b_0(x) \equiv \left(\begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix}\right)$ and will inductively solve the given ansatz in all powers of $\sqrt{\lambda}$:
1. $(\sqrt{\lambda})^{-1}$: $\beta_{-1} = \beta_{-1}$. $\checkmark$
2. $(\sqrt{\lambda})^0$: $\beta_{-1}a_1(x) + \beta_0(x) = b_0(x) + a_1(x)\beta_{-1}$, i.e. $[\beta_{-1},a_1(x)] = -\beta_0(x)$ since $b_0(x) = 0$. As $a_1(x)$ is off-diagonal, $[\beta_{-1},a_1(x)] = 2\beta_{-1}a_1(x)$, and thus $$a_1(x) = -\tfrac{1}{2}\beta_{-1}^{-1}\beta_0(x) = \begin{pmatrix} 0 & -i\del u \\ i\del u & 0 \end{pmatrix}.$$
3. $(\sqrt{\lambda})^{1}$: $\beta_{-1}a_2(x) + \beta_0(x)a_1(x) + \beta_1(x) + \frac{d}{dx}a_1(x) = b_1(x) + a_2(x)\beta_{-1} + a_1(x)b_0(x)$. Rearranging terms and sorting with respect to diagonal ($\mathrm{d}$) and off-diagonal ($\mathrm{off}$) matrices we get two equations: $$\begin{aligned}
b_1(x) & = & \beta_0(x)a_1(x) + \beta_{1,\mathrm{d}}(x) \\
& = & \begin{pmatrix}
-i(\del u)^2 + \frac{i}{2}\cosh(2u) & 0 \\
0 & i(\del u)^2 -\frac{i}{2} \cosh(2u)
\end{pmatrix}, \cr
[\beta_{-1},a_2(x)] & = & -\beta_{1,\mathrm{off}}(x) - \tfrac{d}{dx}a_1(x).\end{aligned}$$ In order to solve the second equation for $a_2(x)$ we make the following observation: set $a(x) = a_+(x) \epsilon_+ + a_-(x)\epsilon_-$. Since $[\epsilon,\epsilon_+] = 2i\epsilon_+$ and $[\epsilon,\epsilon_-] = -2i\epsilon_-$, we can define a linear map $\phi: \text{span} \{\epsilon_+,\epsilon_-\} \to \text{span} \{\epsilon_+,\epsilon_-\}$ by $$\phi(a(x)) := [\beta_{-1},a(x)] = [\tfrac{1}{2}\epsilon, a(x)] = ia_+(x)\epsilon_+ - ia_-(x)\epsilon_- \in \text{span} \{\epsilon_+,\epsilon_-\}.$$ Obviously $\ker(\phi) = \{ 0\}$ and thus $\phi$ is an isomorphism. Therefore we can uniquely solve the equation $[\beta_{-1},a_2(x)] = -\beta_{1,\mathrm{off}}(x) - \tfrac{d}{dx}a_1(x)$ and obtain $a_2(x)$.
We now proceed inductively for $m\geq 2$ and assume that we already found $a_m(x)$ and $b_{m-1}(x)$. Consider the equation $$\begin{gathered}
\beta_{-1}a_{m+1}(x) + \beta_0(x)a_m(x) + \beta_1(x)a_{m-1}(x) + \tfrac{d}{dx} a_m(x) = \hspace{2cm} \\
\hspace{5cm} b_m(x) + a_{m+1}(x)\beta_{-1} + \sum_{i=1}^m a_i(x)b_{m-i}(x)\end{gathered}$$ for the power $(\sqrt{\lambda})^m$. Rearranging terms and after decomposition in the diagonal ($\mathrm{d}$) and off-diagonal ($\mathrm{off}$) part we get $$\begin{aligned}
b_m(x) & = & \beta_0(x)a_m(x) + \beta_{1,\mathrm{off}}(x)a_{m-1}(x), \cr
[\beta_{-1},a_{m+1}(x)] & = & - \beta_{1,\mathrm{d}}(x)a_{m-1}(x)- \tfrac{d}{dx}a_m(x) + \sum_{i=1}^m a_i(x)b_{m-i}(x).\end{aligned}$$ From the discussion above we see that these equations can be uniquely solved and one obtains $a_{m+1}(x)$ and $b_m(x)$. By induction one therefore obtains a unique formal solution of (\[eq\_expansion\]) with the desired properties.
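The first step of the iteration is easy to verify with `sympy`, treating $u$ and $\del u$ as formal symbols; the following sketch checks the order-$(\sqrt{\lambda})^0$ relation and the stated value of $b_1(x)$.

```python
import sympy as sp

u, uz = sp.symbols('u u_z')
I = sp.I

beta_m1 = sp.Matrix([[I/2, 0], [0, -I/2]])
beta_0  = sp.Matrix([[0, -uz], [-uz, 0]])
beta_1  = sp.Matrix([[I/2*sp.cosh(2*u), -I/2*sp.sinh(2*u)],
                     [I/2*sp.sinh(2*u), -I/2*sp.cosh(2*u)]])

a1 = sp.Matrix([[0, -I*uz], [I*uz, 0]])

# order (sqrt(lambda))^0 of (eq_expansion): [beta_{-1}, a_1] = -beta_0 (since b_0 = 0)
print(beta_m1*a1 - a1*beta_m1 + beta_0)               # zero matrix

# order (sqrt(lambda))^1, diagonal part: b_1 = beta_0 a_1 + beta_{1,d}
b1 = beta_0*a1 + sp.diag(beta_1[0, 0], beta_1[1, 1])
print(b1)   # diag(-i u_z^2 + (i/2) cosh(2u), i u_z^2 - (i/2) cosh(2u))
```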
With the help of Theorem \[expansion\] we can reproduce Proposition 3.6 presented in [@Kilian_Schmidt_infinitesimal].
\[expansion\_mu\] The logarithm $\ln\mu$ of the eigenvalue $\mu$ of the monodromy $M_{\lambda}$ has the following asymptotic expansion $$\begin{aligned}
\ln\mu & = & \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\int_0^{\mathbf{p}}\left(-i(\del u)^2 + \tfrac{i}{2}\cosh(2u)\right)\,dt + O(\lambda) \; \text{ at } \lambda = 0.
%\ln\mu & = & \sqrt{\lambda}\tfrac{i\mathbf{p}}{2} + \tfrac{1}{\sqrt{\lambda}}\int_0^{\mathbf{p}}\left(-i(\delbar u)^2 + \tfrac{i}{2}\cosh(2u)\right)\,dt + O(\lambda^{-1}) \;\text{ at } \lambda = \infty.\end{aligned}$$
From Theorem \[expansion\] we know that at $\lambda = 0$ we have $$\begin{aligned}
\ln\mu & = & \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\int_0^{\mathbf{p}}b_1(t)\, dt + \sum_{m\geq 2} (\sqrt{\lambda})^m \int_0^{\mathbf{p}} b_m(t)\, dt \\
& = & \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\int_0^{\mathbf{p}}\left(-i(\del u)^2 + \tfrac{i}{2}\cosh(2u)\right)\,dt + O(\lambda).\end{aligned}$$
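For the trivial solution $u \equiv 0$ (with $u_y \equiv 0$) the matrix $U_{\lambda}$ of (\[eq\_U\_lambda\]) does not depend on $x$, so $M_{\lambda} = \exp(\mathbf{p}\,U_{\lambda})$, and since $-i(\del u)^2 + \tfrac{i}{2}\cosh(2u) = \tfrac{i}{2}$ the two displayed terms of the expansion are already exact. The following numerical sketch (with an arbitrary illustrative period $\mathbf{p} = 2$) confirms this.

```python
import numpy as np
from scipy.linalg import expm

p = 2.0                                   # illustrative period, not a value from the text
for lam in (0.05, 0.01, 0.002):
    # U_lambda from (eq_U_lambda) for u = u_y = 0 is constant, so M_lambda = exp(p*U_lambda)
    U = 0.5 * np.array([[0.0, 1j/lam + 1j], [1j*lam + 1j, 0.0]])
    eig = np.linalg.eigvals(expm(p * U))
    # predicted eigenvalue exp(ln mu) with ln mu = (i p / 2)(lambda^{-1/2} + lambda^{1/2})
    mu_pred = np.exp(1j * p / 2 * (lam**-0.5 + lam**0.5))
    print(lam, np.min(np.abs(eig - mu_pred)))   # tiny (machine precision) for each lambda
```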
Polynomial Killing fields for finite type solutions
===================================================
In the following we will consider the variable $y$ as a flow parameter. If we define the Lax operator $L_{\lambda}:= \tfrac{d}{dx} + U_{\lambda}$, the zero-curvature condition can be rewritten as Lax equation via $$\tfrac{d}{dy}U_{\lambda} - \tfrac{d}{dx}V_{\lambda} - [U_{\lambda},V_{\lambda}] = 0 \;\; \Longleftrightarrow \;\; \tfrac{d}{dy}L_{\lambda} = [L_{\lambda},V_{\lambda}].$$ This corresponds to the $\sinh$-Gordon flow and there exist analogous equations for the “higher” flows. In the finite type situation there exists a linear combination of these higher flows such that the corresponding Lax equation is stationary, i.e. there exists a map $W_{\lambda}$ such that $$\tfrac{d}{dy}L_{\lambda} = [L_{\lambda},W_{\lambda}] = \tfrac{d}{dx}W_{\lambda} + [U_{\lambda},W_{\lambda}] \equiv 0.$$ This leads to the following definition (cf. Definition 2.1 in [@Hauswirth_Kilian_Schmidt_1]).
\[killing1\] A pair $C^{\infty}(\RR/\mathbf{p} \ZZ) \times C^{\infty}(\RR/\mathbf{p} \ZZ) \ni (u,u_y) \simeq U_{\lambda}(\cdot,0)$ corresponding to a periodic solution of the $\sinh$-Gordon equation is of **finite type** if there exists $g \in \NN_0$ such that $$\Phi_{\lambda}(x) = \frac{\lambda^{-1}}{2}\begin{pmatrix} 0 & ie^u \\ 0 & 0 \end{pmatrix} + \sum_{n=0}^g \lambda^n
\begin{pmatrix} \omega_n & e^u\tau_n \\ e^u\sigma_n & -\omega_n \end{pmatrix}$$ is a solution of the Lax equation $$\frac{d}{dx}\Phi_{\lambda} = [\Phi_{\lambda},U_{\lambda}(\cdot,0)]$$ for some periodic functions $\omega_n, \tau_n, \sigma_n: \RR/\mathbf{p} \ZZ \to \CC$.
Let us justify why we can set $y=0$ in the above definition:
- For a solution $u$ of the $\sinh$-Gordon equation on a strip around the $y=0$ axis, that is of finite type in the sense of Definition 2.1 in [@Hauswirth_Kilian_Schmidt_1], we see that $(u(\cdot,0),u_y(\cdot,0)) \in C^{\infty}(\RR/\mathbf{p} \ZZ) \times C^{\infty}(\RR/\mathbf{p} \ZZ)$ is of finite type in the sense of Definition \[killing1\].
- On the other hand, every pair $(u,u_y) \in C^{\infty}(\RR/\mathbf{p} \ZZ) \times C^{\infty}(\RR/\mathbf{p} \ZZ)$ of finite type originates from a “global” finite type solution $u: \CC \to \RR$ in the sense of Definition 2.1 in [@Hauswirth_Kilian_Schmidt_1].
Given a map $\widetilde{\Phi}_{\lambda}$ of the form $$\widetilde{\Phi}_{\lambda}(z) = \frac{\lambda^{-1}}{2}\begin{pmatrix} 0 & ie^u \\ 0 & 0 \end{pmatrix} + \sum_{n=0}^g \lambda^n
\begin{pmatrix} \widetilde{\omega}_n & e^u\widetilde{\tau}_n \\ e^u\widetilde{\sigma}_n & -\widetilde{\omega}_n \end{pmatrix}$$ that is a solution of the Lax equation (according to Definition 2.1 in [@Hauswirth_Kilian_Schmidt_1]) $$d\widetilde{\Phi}_{\lambda} = [\widetilde{\Phi}_{\lambda},\alpha_{\lambda}] \Longleftrightarrow
\begin{cases}
\tfrac{d}{dx}\widetilde{\Phi}_{\lambda}(x,y) = [\widetilde{\Phi}_{\lambda}(x,y),U_{\lambda}(x,y)] \\
\tfrac{d}{dy}\widetilde{\Phi}_{\lambda}(x,y) = [\widetilde{\Phi}_{\lambda}(x,y),V_{\lambda}(x,y)]
\end{cases}$$ we obtain a map $\Phi_{\lambda}$ as in Definition \[killing1\] by setting $$\Phi_{\lambda}(x) := \widetilde{\Phi}_{\lambda}(x,0).$$ Let us omit the tilde in the following proposition.
\[prop\_pinkall\_sterling\] Suppose $\Phi_{\lambda}$ is of the form $$\Phi_{\lambda}(z) = \frac{\lambda^{-1}}{2}\begin{pmatrix} 0 & ie^u \\ 0 & 0 \end{pmatrix} + \sum_{n=0}^g \lambda^n
\begin{pmatrix} \omega_n & e^u\tau_n \\ e^u\sigma_n & -\omega_n \end{pmatrix}$$ for some $u: \CC \to \RR$, and that $\Phi_{\lambda}$ solves the Lax equation $d\Phi_{\lambda} = [\Phi_{\lambda},\alpha_{\lambda}]$. Then
1. The function $u$ is a solution of the $\sinh$-Gordon equation, i.e. $\Delta u + 2\sinh(2u) = 0$.
2. The functions $\omega_n$ are solutions of the homog. Jacobi equation $\Delta\omega_n + 4\cosh(2u)\omega_n = 0$.
3. The following iteration gives a formal solution of $d\Phi_{\lambda} = [\Phi_{\lambda},\alpha_{\lambda}]$. Let $\omega_n, \sigma_n,\tau_{n-1}$ with a solution $\omega_n$ of $\Delta\omega_n + 4\cosh(2u)\omega_n = 0$ be given. Now solve the system $$\tau_{n,\bar{z}} = ie^{-2u}\omega_n, \;\;\;\; \tau_{n,z} = 2iu_z\omega_{n,z} - i\omega_{n,zz}$$ for $\tau_{n,z}$ and $\tau_{n,\bar{z}}$. Then define $\omega_{n+1}$ and $\sigma_{n+1}$ by $$\omega_{n+1} := -i\tau_{n,z} - 2i u_z \tau_n, \;\;\;\; \sigma_{n+1} := e^{2u}\tau_n + 2i\omega_{n+1,\bar{z}}.$$
4. Each $\tau_n$ is defined up to a complex constant $c_n$, so $\omega_{n+1}$ is defined up to $-2ic_n u_z$.
5. $\omega_0 = u_z, \omega_{g-1} = c u_{\bar{z}}$ for some $c \in \CC$, and $\lambda^g\overline{\Phi_{1/\bar{\lambda}}}^t$ also solves $d\Phi_{\lambda} = [\Phi_{\lambda},\alpha_{\lambda}]$.
In [@Pinkall_Sterling] Pinkall-Sterling construct a series of solutions for the induction introduced in Proposition \[prop\_pinkall\_sterling\], (iii). From this **Pinkall-Sterling iteration** we obtain for the first terms of $\omega = \sum_{n\geq -1} \lambda^n \omega_n$ $$\begin{gathered}
\omega_{-1} = 0, \;\; \omega_0 = u_z = \tfrac{1}{2}(u_x - iu_y), \;\; \omega_1 = u_{zzz}-2(u_z)^3,\;\; \\
\omega_2 = u_{zzzzz} - 10u_{zzz}(u_z)^2 - 10(u_{zz})^2u_z + 6(u_z)^5,\; \ldots\end{gathered}$$
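A direct symbolic computation confirms that these expressions are Jacobi fields. The sketch below encodes the $z$-jet of $u$, uses the $\sinh$-Gordon equation in the equivalent form $u_{z\bar{z}} = -\tfrac{1}{2}\sinh(2u)$ to eliminate mixed derivatives, and checks $\omega_{z\bar{z}} + \cosh(2u)\,\omega = 0$, i.e. the Jacobi equation $\Delta\omega + 4\cosh(2u)\omega = 0$ rewritten with $\Delta = 4\partial_z\partial_{\bar{z}}$.

```python
import sympy as sp

N = 8
u = sp.symbols('u0:%d' % (N + 1))      # jet coordinates: u[k] stands for d^k u / dz^k

def Dz(expr):
    # total z-derivative on differential polynomials in the z-jet of u
    return sum(sp.diff(expr, u[k]) * u[k + 1] for k in range(N))

# zbar-derivative of u[k], k >= 1:  d_zbar(u_{z^k}) = Dz^{k-1}( -sinh(2u)/2 )
zbar = {1: -sp.sinh(2 * u[0]) / 2}
for k in range(2, N):
    zbar[k] = Dz(zbar[k - 1])

def Dzb(expr):
    # total zbar-derivative; valid here because the omegas contain no bare u[0]
    assert sp.diff(expr, u[0]) == 0
    return sum(sp.diff(expr, u[k]) * zbar[k] for k in range(1, N))

def jacobi(omega):
    # omega_{z zbar} + cosh(2u)*omega, which should vanish for a Jacobi field
    return sp.simplify(sp.expand(Dz(Dzb(omega)) + sp.cosh(2 * u[0]) * omega))

omega0 = u[1]
omega1 = u[3] - 2 * u[1]**3
omega2 = u[5] - 10 * u[3] * u[1]**2 - 10 * u[2]**2 * u[1] + 6 * u[1]**5
print(jacobi(omega0), jacobi(omega1), jacobi(omega2))   # 0 0 0
```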
Potentials and polynomial Killing fields
----------------------------------------
We follow the exposition given in [@Hauswirth_Kilian_Schmidt_1], Section 2. For $g \in \NN_0$ consider the $3g+1$-dimensional real vector space $$\begin{gathered}
% \Lambda_{-1}^g\mathfrak{sl}(2,\CC)
\Lambda_{-1}^g\mathfrak{sl}_2(\CC) = \bigg\{ \xi_{\lambda} = \sum_{n=-1}^g \lambda^n\widehat{\xi}_n \,\bigg|\, \widehat{\xi}_{-1} \in i\RR \epsilon_+,
% \hspace{4cm} \\
% \hspace{4cm}
\widehat{\xi}_n = -\overline{\widehat{\xi}_{g-1-n}}^t \in \mathfrak{sl}_2(\CC) \text{ for } n=-1,\ldots,g\bigg\}\end{gathered}$$ and define an open subset of $\Lambda_{-1}^g\mathfrak{sl}_2(\CC)$ by $$\mathcal{P}_g := \{\xi_{\lambda} \in \Lambda_{-1}^g\mathfrak{sl}_2(\CC)\left|\right. \widehat{\xi}_{-1} \in i\RR^+ \epsilon_+, \trace(\widehat{\xi}_{-1}\widehat{\xi}_0) \neq 0\}.$$ Every $\xi_{\lambda} \in \mathcal{P}_g$ satisfies the so-called **reality condition** $$\lambda^{g-1}\overline{\xi_{1/\bar{\lambda}}}^t = -\xi_{\lambda}.$$
A **polynomial Killing field** is a map $\zeta_{\lambda}:\RR \to \mathcal{P}_g$ which solves $$\frac{d}{dx}\zeta_{\lambda} = [\zeta_{\lambda},U_{\lambda}(\cdot,0)] \;\;\; \text{ with } \;\;\; \zeta_{\lambda}(0) = \xi_{\lambda} \in \mathcal{P}_g.$$
For each initial value $\xi_{\lambda} \in \mathcal{P}_g$, there exists a unique polynomial Killing field given by $$\zeta_{\lambda}(x) := F_{\lambda}^{-1}(x)\xi_{\lambda} F_{\lambda}(x)$$ with $\frac{d}{dx}F_{\lambda}(x) = F_{\lambda}(x)U_{\lambda}(x,0)$, since there holds $$\begin{aligned}
\tfrac{d}{dx}\zeta_{\lambda} & = & \tfrac{d}{dx}\left(F_{\lambda}^{-1}\xi_{\lambda} F_{\lambda}\right) = -F_{\lambda}^{-1}(\tfrac{d}{dx}F_{\lambda})F_{\lambda}^{-1}\xi_{\lambda} F_{\lambda} + F_{\lambda}^{-1}\xi_{\lambda} (\tfrac{d}{dx}F_{\lambda}) \\
& = & -U_{\lambda}(\cdot,0)F_{\lambda}^{-1}\xi_{\lambda} F_{\lambda} + F_{\lambda}^{-1}\xi_{\lambda} F_{\lambda}U_{\lambda}(\cdot,0) \\
& = & [\zeta_{\lambda},U_{\lambda}(\cdot,0)].\end{aligned}$$
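Numerically the conjugation formula and the resulting isospectrality are easy to illustrate. The sketch below uses the vacuum Cauchy data $u = u_y = 0$, for which $U_{\lambda}$ is constant and $F_{\lambda}(x) = \exp(xU_{\lambda})$, together with a hypothetical initial matrix `xi` (a genuine $\xi_{\lambda} \in \mathcal{P}_g$ would in addition satisfy the reality condition).

```python
import numpy as np
from scipy.linalg import expm

lam = 0.3 + 0.1j                      # a fixed value of the spectral parameter
U = 0.5 * np.array([[0.0, 1j/lam + 1j], [1j*lam + 1j, 0.0]])    # U_lambda for u = u_y = 0

xi = np.array([[0.2 + 0.1j, 1.0j], [0.5 - 0.3j, -0.2 - 0.1j]])  # hypothetical initial value

def zeta(x):
    # zeta_lambda(x) = F_lambda(x)^{-1} xi_lambda F_lambda(x), with F_lambda(x) = exp(x U)
    F = expm(x * U)
    return np.linalg.inv(F) @ xi @ F

# Lax equation d/dx zeta = [zeta, U]: compare a centred difference with the commutator
x, h = 0.7, 1e-6
lhs = (zeta(x + h) - zeta(x - h)) / (2 * h)
rhs = zeta(x) @ U - U @ zeta(x)
print(np.max(np.abs(lhs - rhs)))                          # ~0 up to finite-difference error

# isospectrality: det(zeta(x)), hence the spectral curve, does not depend on x
print([np.round(np.linalg.det(zeta(t)), 10) for t in (0.0, 1.0, 2.5)])
```

The finite-difference check confirms the Lax equation, and the constancy of $\det\zeta_{\lambda}(x)$ illustrates that the spectral data are preserved along the flow.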
In order to obtain a polynomial Killing field $\zeta_{\lambda}$ from a pair $(u,u_y)$ of finite type we set $$\zeta_{\lambda}(x) := \Phi_{\lambda}(x) - \lambda^{g-1}\overline{\Phi_{1/\bar{\lambda}}}^t(x) \;\;\; \text{ and } \;\;\; \zeta_{\lambda}(0) =: \xi_{\lambda} = \Phi_{\lambda}(0) - \lambda^{g-1}\overline{\Phi_{1/\bar{\lambda}}}^t(0).$$
These polynomial Killing fields $\zeta_{\lambda}$ are *periodic* maps $\zeta_{\lambda}:\RR/\mathbf{p}\ZZ \to \mathcal{P}_g$.
Suppose we have a polynomial Killing field $$\zeta_{\lambda}(x) = \begin{pmatrix} 0 & \beta_{-1}(x) \\ 0 & 0 \end{pmatrix} \lambda^{-1} + \begin{pmatrix} \alpha_0(x) & \beta_{0}(x) \\ \gamma_0(x) & -\alpha_0(x) \end{pmatrix} \lambda^0 + \ldots + \begin{pmatrix} \alpha_g(x) & \beta_{g}(x) \\ \gamma_g(x) & -\alpha_g(x) \end{pmatrix} \lambda^g.$$ Then one can assign a matrix-valued form $U(\zeta_{\lambda})$ to $\zeta_{\lambda}$ defined by $$U(\zeta_{\lambda})=
\begin{pmatrix}
\alpha_0(x) - \overline{\alpha}_0(x) & \lambda^{-1} \beta_{-1}(x) - \overline{\gamma}_0(x) \\
-\lambda\overline{\beta}_{-1}(x) + \gamma_0(x) & -\alpha_0(x) + \overline{\alpha}_0(x)
\end{pmatrix}dx.$$
This shows that $(u,u_y) \simeq U_{\lambda}(\cdot,0)$ is uniquely defined by $\zeta_{\lambda}$.
The associated spectral data
============================
In this section we want to establish a 1:1-correspondence between pairs $(u,u_y)$ that originate from solutions of the $\sinh$-Gordon equation and the so-called spectral data $(Y(u,u_y),D(u,u_y))$ consisting of the spectral curve $Y(u,u_y)$ and a divisor $D(u,u_y)$ on $Y(u,u_y)$.
Spectral curve defined by $\xi_{\lambda} \in \mathcal{P}_g$
-----------------------------------------------------------
Let us introduce the definition of a spectral curve $Y$ that results from a periodic polynomial Killing field $\zeta_{\lambda}:\RR/\mathbf{p}\ZZ \to \mathcal{P}_g$.
Let $Y^*$ be defined by $$Y^* = \{(\lambda,\widetilde{\nu}) \in \CC^* \times \CC^* \left|\right. \det(\widetilde{\nu}\unity - \zeta_{\lambda}) = \widetilde{\nu}^2 + \det(\xi_{\lambda}) = 0\}$$ and suppose that the polynomial $a(\lambda) = -\lambda \det(\xi_{\lambda})$ has $2g$ pairwise distinct roots. By declaring $\lambda=0,\infty$ to be two additional branch points and setting $\nu = \widetilde{\nu}\lambda$ one obtains that $$Y := \{(\lambda,\nu) \in \CC\PP^1 \times \CC\PP^1 \left|\right. \nu^2 = \lambda a(\lambda)\}$$ defines a compact hyperelliptic curve $Y$ of genus $g$, the **spectral curve**. The genus $g$ of $Y$ is called the **spectral genus**.
Note that the eigenvalue $\widetilde{\nu}$ of $\xi_{\lambda}$ is given by $\widetilde{\nu} = \frac{\nu}{\lambda}$.
Divisor
-------
The spectral curve $Y$ encodes the eigenvalues of $\xi_{\lambda} \in \mathcal{P}_g$. By considering the eigenvectors of $\xi_{\lambda}$ one arrives at the following lemma.
\[lemma\_eigenbundle\] On the spectral curve $Y$ there exist unique meromorphic maps $v(\lambda,\nu)$ and $w(\lambda,\nu)$ from $Y$ to $\CC^2$ such that
1. For all $(\lambda,\nu) \in Y^*$ the value of $v(\lambda,\nu)$ is an eigenvector of $\xi_{\lambda}$ with eigenvalue $\nu$ and $w(\lambda,\nu)$ is an eigenvector of $\xi_{\lambda}^t$ with eigenvalue $\nu$, i.e. $$\xi_{\lambda}v(\lambda,\nu) = \nu v(\lambda,\nu), \;\;\; \xi_{\lambda}^t w(\lambda,\nu) = \nu w(\lambda,\nu).$$
2. The first component of $v(\lambda,\nu)$ and $w(\lambda,\nu)$ is equal to $1$, i.e. $v(\lambda,\nu) = (1, v_2(\lambda,\nu))^t$ and $w(\lambda,\nu) = (1, w_2(\lambda,\nu))^t$ on $Y$.
The holomorphic map $v:Y\to\CC\PP^1$ from Lemma \[lemma\_eigenbundle\] motivates the following definition.
\[definition\_eigen\_bundle\] Set the **divisor** $D(u,u_y)$ as $$D(u,u_y) = -\big(v(\lambda,\nu)\big)$$ and denote by $E(u,u_y)$ the holomorphic line bundle whose sheaf of holomorphic sections is given by $\mathcal{O}_{D(u,u_y)}$. Then $E(u,u_y)$ is called the **eigenline bundle**.
Characterization of spectral curves
-----------------------------------
Given a hyperelliptic Riemann surface $Y$ with branch points over $\lambda=0$ ($y_0$) and $\lambda = \infty$ ($y_{\infty}$) we can deduce conditions such that $Y$ is the spectral curve of a periodic finite type solution of the $\sinh$-Gordon equation.\
Let us recall the well-known characterization of such spectral curves (cf. [@Kilian_Schmidt_infinitesimal], Section 1.2, in the case of immersed CMC tori in $\SS^3$). Note that $M_{\lambda}$ and $\xi_{\lambda}$ commute, i.e. $[M_{\lambda},\xi_{\lambda}] = 0$. Thus it is possible to diagonalize them simultaneously and $\mu$ and $\nu$ can be considered as two different functions on the same Riemann surface $Y$.
\[theorem\_spectral\_curve\] Let $Y$ be a hyperelliptic Riemann surface with branch points over $\lambda=0$ ($y_0$) and $\lambda = \infty$ ($y_{\infty}$). Then $Y$ is the spectral curve of a periodic real finite type solution of the $\sinh$-Gordon equation if and only if the following three conditions hold:
1. Besides the hyperelliptic involution $\sigma$ the Riemann surface $Y$ has two further anti-holomorphic involutions $\eta$ and $\rho = \eta \circ \sigma$. Moreover, $\eta$ has no fixed points and $\eta(y_0) = y_{\infty}$.
2. There exists a non-zero holomorphic function $\mu$ on $Y\backslash\{y_0,y_{\infty}\}$ that obeys $$\sigma^* \mu = \mu^{-1},\;\; \eta^*\bar{\mu} = \mu,\;\; \rho^*\bar{\mu} = \mu^{-1}.$$
3. The form $d\ln\mu$ is a meromorphic differential of the second kind with double poles at $y_0$ and $y_{\infty}$ only.
Following the terminology of [@Hauswirth_Kilian_Schmidt_1; @Kilian_Schmidt_cylinders; @Kilian_Schmidt_infinitesimal], we will describe spectral curves of periodic real finite type solutions of the $\sinh$-Gordon equation via hyperelliptic curves of the form $$\nu^2 = \lambda\, a(\lambda) = -\lambda^2\det(\xi_{\lambda}) = (\lambda \widetilde{\nu})^2.$$ Here $\widetilde{\nu}$ is the eigenvalue of $\xi_{\lambda}$ and $\lambda: Y \to \CC\PP^1$ is chosen in a way such that $y_0$ and $y_{\infty}$ correspond to $\lambda = 0$ and $\lambda = \infty$ with $$\sigma^*\lambda = \lambda,\;\; \eta^*\bar{\lambda} = \lambda^{-1},\;\; \rho^*\bar{\lambda} = \lambda^{-1}.$$ Note that the function $\lambda: Y \to \CC\PP^1$ is fixed only up to Möbius transformations of the form $\lambda \mapsto e^{2i\varphi}\lambda$. Moreover, $d\ln\mu$ is of the form $$d\ln\mu = \frac{b(\lambda)}{\nu}\frac{d\lambda}{\lambda},$$ where $b$ is a polynomial of degree $g+1$ with $\lambda^{g+1}\overline{b(\bar{\lambda}^{-1})} = -b(\lambda)$.
\[definition\_spectral\_data\] The **spectral curve data** of a periodic real finite type solution of the $\sinh$-Gordon equation is a pair $(a,b) \in \CC^{2g}[\lambda] \times \CC^{g+1}[\lambda]$ such that
1. $\lambda^{2g}\overline{a(\bar{\lambda}^{-1})} = a(\lambda)$ and $\lambda^{-g}a(\lambda) \leq 0$ for all $\lambda \in \SS^1$ and $|a(0)| = 1$.
2. On the hyperelliptic curve $\nu^2 = \lambda a(\lambda)$ there is a single-valued holomorphic function $\mu$ with essential singularities at $\lambda = 0$ and $\lambda = \infty$ with logarithmic differential $$d\ln\mu = \frac{b(\lambda)}{\nu}\frac{d\lambda}{\lambda}$$ with $b(0) = i\frac{\sqrt{a(0)}}{2} \mathbf{p}$ that transforms under the three involutions $$\begin{gathered}
\sigma: (\lambda,\nu) \mapsto (\lambda,-\nu),\;\; \rho: (\lambda,\nu) \mapsto (\bar{\lambda}^{-1},-\bar{\lambda}^{-1-g}\bar{\nu}), \;\;
\eta: (\lambda,\nu) \mapsto (\bar{\lambda}^{-1},\bar{\lambda}^{-1-g}\bar{\nu})\end{gathered}$$ according to $\sigma^*\mu = \mu^{-1},\, \rho^*\mu = \bar{\mu}^{-1}$ and $\eta^*\mu = \bar{\mu}$.
\[equivalent\_spectral\_data\] The conditions (i) and (ii) from Definition \[definition\_spectral\_data\] are equivalent to the following conditions (cf. Definition 5.10 in [@Hauswirth_Kilian_Schmidt_1]):
1. $\lambda^{2g}\overline{a(\bar{\lambda}^{-1})} = a(\lambda)$ and $\lambda^{-g}a(\lambda) \leq 0$ for all $\lambda \in \SS^1$ and $|a(0)| = 1$.
2. $\lambda^{g+1}\overline{b(\bar{\lambda}^{-1})} = -b(\lambda)$ and $b(0) = i\frac{\sqrt{a(0)}}{2} \mathbf{p}$.
3. $\int_{\alpha_i}^{1/\bar{\alpha}_i} \frac{b(\lambda)}{\nu}\frac{d\lambda}{\lambda} = 0$ for all roots $\alpha_i$ of $a$.
4. The unique function $h: \widetilde{Y} \to \CC$, where $\widetilde{Y} = Y \setminus\bigcup\gamma_i$ and $\gamma_i$ are closed cycles over the straight lines connecting $\alpha_i$ and $1/\bar{\alpha}_i$, obeying $\sigma^*h = -h$ and $dh=\frac{b(\lambda)}{\nu}\frac{d\lambda}{\lambda}$, satisfies $h(\alpha_i) \in \pi i \ZZ$ for all roots $\alpha_i$ of $a$.
The moduli space
----------------
Since a Möbius transformation of the form $\lambda \mapsto e^{2i\varphi}\lambda$ changes the spectral curve data $(a,b)$ but does not change the corresponding periodic solution of the $\sinh$-Gordon equation we introduce the following definition.
For all $g\in \NN_0$ let $\mathcal{M}_g(\mathbf{p})$ be the space of equivalence classes of spectral curve data $(a,b)$ from Definition \[definition\_spectral\_data\] with respect to the action of $\lambda \mapsto e^{2i\varphi}\lambda$ on $(a,b)$. $\mathcal{M}_g(\mathbf{p})$ is called the **moduli space** of spectral curve data for Cauchy data $(u,u_y)$ of periodic real finite type solutions of the $\sinh$-Gordon equation.
Each pair of polynomials $(a,b) \in \mathcal{M}_g(\mathbf{p})$ represents a spectral curve $Y_{(a,b)}$ for Cauchy data $(u,u_y)$ of a periodic real finite type solution of the $\sinh$-Gordon equation.
\[definition\_M\_g\_1\] Let $$\begin{aligned}
\mathcal{M}_g^1(\mathbf{p}) & := & \{(a,b) \in \mathcal{M}_g(\mathbf{p}) \left|\right. a \text{ has } 2g \text{ pairwise distinct roots and } \\
& & \hspace{2.9cm} (a,b) \text{ have no common roots}\}\end{aligned}$$ be the moduli space of non-degenerated smooth spectral curve data for Cauchy data $(u,u_y)$ of periodic real finite type solutions of the $\sinh$-Gordon equation.
The term “non-degenerated” in Definition \[definition\_M\_g\_1\] reflects the following fact (cf. [@Hauswirth_Kilian_Schmidt_2], Section 9): If one considers deformations of spectral curve data $(a,b)$, the corresponding integral curves have possible singularities, if $a$ and $b$ have common roots. By excluding the case of common roots of $(a,b)$, one can avoid that situation and identify the space of such deformations with certain polynomials $c \in \CC^{g+1}[\lambda]$ (see Section \[subsection\_spectral\_curve\_deformation\]).
By studying Cauchy data $(u,u_y)$ whose spectral curve $Y(u,u_y)$ corresponds to $(a,b) \in \mathcal{M}_g^1(\mathbf{p})$, we have the following benefits:
1. Since $(a,b) \in \mathcal{M}_g^1(\mathbf{p})$ correspond to Cauchy data $(u,u_y)$ of finite type, we can avoid difficult functional analytic methods for the asymptotic analysis of the spectral curves $Y$ at $\lambda = 0$ and $\lambda = \infty$.
2. Since $(a,b) \in \mathcal{M}_g^1(\mathbf{p})$ have no common roots, we obtain non-singular smooth spectral curves $Y$ and can apply the standard tools from complex analysis for their investigation.
Note that these assumptions can be dropped in order to extend our results to the more general setting. This was done in [@Schmidt_infinite] for the case of the non-linear Schrödinger operator, for example.
Spectral data
-------------
With all this terminology at hand let us introduce the spectral data associated to a periodic real finite type solution of the $\sinh$-Gordon equation.
The **spectral data** of a periodic real finite type solution of the $\sinh$-Gordon equation is a pair $(Y(u,u_y),D(u,u_y))$ such that $Y(u,u_y)$ is a hyperelliptic Riemann surface of genus $g$ that obeys the conditions from Theorem \[theorem\_spectral\_curve\] and $D(u,u_y)$ is a divisor of degree $g+1$ on $Y(u,u_y)$ that obeys $\eta(D) - D = (f)$ for a meromorphic $f$ with $f\eta^*\bar{f} = -1$.
From the investigation of the so-called “Inverse Problem” it is well-known that the correspondence between $(u,u_y)$ and the spectral data is bijective, since $(Y,D)$ uniquely determine $\xi_{\lambda}$ and $\zeta_{\lambda}$ respectively (cf. [@Hitchin_Harmonic]).
Isospectral and non-isospectral deformations
============================================
We will now consider deformations that correspond to variations of the divisor $D$ or the spectral curve $Y$ and call them isospectral and non-isospectral deformations, respectively.
The projector $P$
-----------------
We will use the meromorphic maps $v:Y \to \CC^2$ and $w^t:Y \to \CC^2$ from Lemma \[lemma\_eigenbundle\] to define a matrix-valued meromorphic function on $Y$ by setting $P := \frac{v w^t}{w^t v}$. Given a meromorphic map $f$ on $Y$ we also define $$P(f) := \frac{v f w^t}{w^t v}.$$ It turns out that $P$ is a projector and that it is possible to extend the definition of the projector $P$ to a projector $P_x$ that is defined by $$P_x(f) := \frac{v(x) f w(x)^t}{w(x)^t v(x)} = F_{\lambda}^{-1}(x)P(f) F_{\lambda}(x).$$ Moreover, there holds $\zeta_{\lambda}(x) = P_x(\tfrac{\nu}{\lambda}) + \sigma^*P_x(\tfrac{\nu}{\lambda})$.
General deformations of $M_{\lambda}$ and $U_{\lambda}$
-------------------------------------------------------
In the next lemma we consider the situation of a general variation with isospectral and non-isospectral parts.
\[lemma\_var\_general\] Let $v_1, w_1^t$ be the eigenvectors for $\mu$ and $v_2, w_2^t$ the corresponding eigenvectors for $\tfrac{1}{\mu}$ of $M(\lambda)$. Then $$\begin{aligned}
\label{var_general}
\delta M(\lambda)v_1 + M(\lambda)\delta v_1 & = & (\delta\mu)\, v_1 + \mu\,\delta v_1, \\
\delta M(\lambda)v_2 + M(\lambda)\delta v_2 & = & (\delta\tfrac{1}{\mu})\, v_2 + \tfrac{1}{\mu}\,\delta v_2\end{aligned}$$ if and only if $$\label{var_general_M}
\delta M(\lambda) = \left[\sum_{i=1}^2 \frac{(\delta v_i) w_i^t}{w_i^t v_i},M(\lambda)\right] + (P(\delta\mu) +\sigma^*P(\delta\mu)).
%\tag{$**$}$$
A direct calculation shows
$$\begin{aligned}
\left(\left[\sum_{i=1}^2 \frac{(\delta v_i) w_i^t}{w_i^t v_i},M(\lambda)\right]+ (P(\delta\mu) +\sigma^*P(\delta\mu))\right)v_1
& = & \left(\sum_{i=1}^2 \frac{(\delta v_i) w_i^t}{w_i^t v_i}\right)\mu v_1 - M(\lambda)\delta v_1 +(\delta\mu) v_1 \\
& = & \mu \delta v_1 - M(\lambda)\delta v_1 +(\delta\mu) v_1 \stackrel{\eqref{var_general}}{=} \delta M(\lambda)v_1.\end{aligned}$$
An analogous calculation for $v_2$ gives $$\delta M(\lambda)v_2 = \left(\left[\sum_{i=1}^2 \frac{(\delta v_i) w_i^t}{w_i^t v_i},M(\lambda)\right]+ (P(\delta\mu) +\sigma^*P(\delta\mu))\right)v_2$$ and the claim is proved.
The above considerations also apply for the equation $L_{\lambda}(x) v(x) = (\frac{d}{dx}+U_{\lambda})v(x)= \frac{\ln\mu}{\mathbf{p}} \cdot v(x)$ around $\lambda = 0$ and yield the following lemma.
\[general\_variation\_U\] Let $v_1(x), w_1^t(x)$ be the eigenvectors for $\mu$ and $v_2(x), w_2^t(x)$ the corresponding eigenvectors for $\tfrac{1}{\mu}$ of $M_{\lambda}(x)$ and $M_{\lambda}^t(x)$ respectively. Then $$\begin{aligned}
\label{var_L_general_1}
\delta U_{\lambda} v_1(x) + L_{\lambda}(x)\delta v_1(x) & = & (\tfrac{\delta\ln\mu}{\mathbf{p}}) v_1(x) + \tfrac{\ln\mu}{\mathbf{p}}\delta v_1(x), \\
%\tag{$*$}
%\label{var_L_general_2}
\delta U_{\lambda} v_2(x) + L_{\lambda}(x)\delta v_2(x) & = & -(\tfrac{\delta\ln\mu}{\mathbf{p}}) v_2(x) - \tfrac{\ln\mu}{\mathbf{p}}\delta v_2(x)\end{aligned}$$ around $\lambda = 0$ if and only if $$\label{var_general_U}
\delta U_{\lambda} = \left[\sum_{i=1}^2 \frac{(\delta v_i(x))\, w_i^t(x)}{w_i^t(x) v_i(x)}, L_{\lambda}(x)\right] + \left(P_x(\tfrac{\delta\ln\mu}{\mathbf{p}}) + \sigma^*P_x(\tfrac{\delta\ln\mu}{\mathbf{p}})\right).$$
Following the steps from the proof of Lemma \[lemma\_var\_general\] and keeping in mind that $\delta \mathbf{p} = 0$ in our situation yields the claim.
Equation (\[var\_general\_U\]) reflects the decomposition of the tangent space into isospectral and non-isospectral deformations.
Infinitesimal isospectral deformations of $\xi_{\lambda}$ and $U_{\lambda}$
---------------------------------------------------------------------------
We will now investigate infinitesimal isospectral deformations. Note that there also exist global isospectral deformations that are described in [@Hauswirth_Kilian_Schmidt_1].
### The real part of $H^1(Y,\mathcal{O})$
Let us begin with the investigation of $H^1(Y,\mathcal{O})$, the Lie algebra of the Jacobian $\text{Jac}(Y)$.\
For this, consider disjoint open simply-connected neighborhoods $U_0,U_{\infty}$ of $y_0,y_{\infty}$. Setting $U := Y\backslash\{y_0,y_{\infty}\}$ we get a cover $\mathcal{U} := \{U_0,U,U_{\infty}\}$ of $Y$. The only non-empty intersections of neighborhoods from $\mathcal{U}$ are $U_0\backslash\{y_0\}$ and $U_{\infty}\backslash\{y_{\infty}\}$. Since $U_0$ and $U_{\infty}$ are simply connected we have $H^1(U_0,\mathcal{O}) = 0 = H^1(U_{\infty},\mathcal{O})$. Moreover, $H^1(U,\mathcal{O}) = 0$ since $U$ is a non-compact Riemann surface. This shows that $\mathcal{U}$ is a Leray cover and therefore $H^1(Y,\mathcal{O}) = H^1(\mathcal{U},\mathcal{O})$, see [@Fo Theorem 12.8].\
We will consider triples of the form $(f_0,0,f_{\infty})$ of meromorphic functions with respect to this cover $\mathcal{U}$. Then the Čech cohomology is induced by all pairs of the form $(f_0 - 0, f_{\infty}-0) = (f_0,f_{\infty})$.
\[Lemma\_ML\] The equivalence classes $[h_i]$ of the $g$ tuples $h_i := (f_0^i,f_{\infty}^i)$ given by $h_i = (\nu\lambda^{-i}, -\nu\lambda^{-i})$ for $i=1,\ldots,g$ are a basis of $H^1(Y,\mathcal{O})$. In particular $f_{\infty} = -f_0$.
We know that for $i=1,\ldots,g$ the differentials $\omega_i = \frac{\lambda^{i-1}d\lambda}{\nu}$ span a basis for $H^0(Y,\Omega)$ and that the pairing $\langle\cdot,\cdot\rangle: H^0(Y,\Omega)\times H^1(Y,\mathcal{O}) \to \CC$ given by $(\omega,[h]) \mapsto \text{Res}(h\omega)$ is non-degenerate due to Serre duality [@Fo Theorem 17.9]. Therefore we can calculate the dual basis of $\omega_i$ with respect to this pairing and see $$\begin{aligned}
\langle \omega_i,[h_j]\rangle & = & \text{Res}_{\lambda = 0} f_0^j\omega_i + \text{Res}_{\lambda = \infty} f_{\infty}^j\omega_i
= \text{Res}_{\lambda = 0} \lambda^{i-j-1} d\lambda - \text{Res}_{\lambda = \infty} \lambda^{i-j-1} d\lambda \\
& = & \text{Res}_{\lambda = 0} \lambda^{i-j-1} d\lambda + \text{Res}_{\lambda = 0} \lambda^{-i+j+1} \frac{d\lambda}{\lambda^2}
= \text{Res}_{\lambda = 0} (\lambda^{i-j-1} + \lambda^{-i+j-1}) d\lambda \\
& = & 2\cdot \delta_{ij}.\end{aligned}$$ This shows $\text{span}_{\CC}\{[h_1],\ldots,[h_g]\} = H^1(Y,\mathcal{O})$ and concludes the proof.
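The pairing matrix can also be evaluated mechanically; a minimal `sympy` check (here for $g = 4$) reads:

```python
import sympy as sp

lam = sp.symbols('lambda')
g = 4
for i in range(1, g + 1):
    row = []
    for j in range(1, g + 1):
        # <omega_i, [h_j]> = Res_0 lambda^{i-j-1} dlambda + Res_0 lambda^{j-i-1} dlambda
        row.append(sp.residue(lam**(i - j - 1), lam, 0) + sp.residue(lam**(j - i - 1), lam, 0))
    print(row)        # 2 on the diagonal, 0 elsewhere, i.e. 2*delta_ij
```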
Let $H^1_{\RR}(Y,\mathcal{O}) := \{[f] \in H^1(Y,\mathcal{O}) \left|\right. \overline{\eta^*f} = f\}$ be the real part of $H^1(Y,\mathcal{O})$ with respect to the involution $\eta$.
\[Lemma\_reality\_H1\] An element $[f] = [(f_0,f_{\infty})] \in H^1(Y,\mathcal{O})$ satisfies $\overline{\eta^*f} = f$ if and only if $f_{\infty} = \eta^*\bar{f}_0$. The corresponding $f_{\infty} = -f_0 = -\sum_{i=0}^{g-1} c_i \lambda^{-i-1} \nu$ satisfies $\bar{c}_i = -c_{g-1-i}$ for $i = 0,\ldots,g-1$. In particular $\dim_{\RR}H^1_{\RR}(Y,\mathcal{O}) = g$. Any element $[f] = [(f_0,\eta^*\bar{f}_0)] \in H^1_{\RR}(Y,\mathcal{O})$ can be represented by $f_0(\lambda,\nu)$ with $$f_0(\lambda,\nu) = \sum_{i=0}^{g-1} c_i \lambda^{-i-1} \nu$$ and $\bar{c}_i = -c_{g-1-i}$ for $i = 0,\ldots,g-1$.
The first part of the lemma is obvious. Now a direct calculation gives
$$-\eta^*(\sum_{i=0}^{g-1} \overline{c_i \lambda^{-i-1} \nu}) = -\sum_{i=0}^{g-1} \bar{c}_i \lambda^{i+1} \lambda^{-g-1}\nu = -\sum_{i=0}^{g-1} \bar{c}_i \lambda^{i-g} \nu
\stackrel{j := g-i-1}{=} -\sum_{j=0}^{g-1} \bar{c}_{g-1-j} \lambda^{-j-1} \nu = \sum_{i=0}^{g-1} c_i \lambda^{-i-1} \nu$$
if and only if $\bar{c}_i = -c_{g-1-i}$ for $i = 0,\ldots,g-1$. The subspace of elements $(c_0,\ldots,c_{g-1}) \in \CC^g$ that obey these conditions is a real $g$-dimensional subspace.
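For instance, for $g=1$ the basis of Lemma \[Lemma\_ML\] consists of the single class $[h_1] = [(\nu\lambda^{-1},-\nu\lambda^{-1})]$ and the reality condition reduces to $\bar{c}_0 = -c_0$, i.e. $c_0 \in i\RR$. Hence every element of $H^1_{\RR}(Y,\mathcal{O})$ is represented by $$f_0(\lambda,\nu) = i t\, \lambda^{-1}\nu \;\; \text{ with } \; t \in \RR,$$ in accordance with $\dim_{\RR} H^1_{\RR}(Y,\mathcal{O}) = g = 1$.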
### The Krichever construction and the isospectral group action
The construction procedure for linear flows on $\text{Jac}(Y)$ is due to Krichever [@Krichever_algebraic]. In [@McIntosh_harmonic_tori] McIntosh describes the Krichever construction for finite type solutions of the $\sinh$-Gordon equation. Let us consider the sequence $$0 \to H^1(Y,\ZZ) \to H^1(Y,\mathcal{O}) \to H^1(Y,\mathcal{O}^*) \simeq \text{Pic}(Y) \to H^2(Y,\ZZ) \to 0, \label{eq_sequence}$$ where $\text{Pic}(Y)$ is the Picard variety of $Y$.
\[def\_quaternionic\_divisor\] Let $\text{Pic}_d(Y)$ be the connected component of the Picard variety of $Y$ of divisors of degree $d$ and let $$\text{Pic}^{\RR}_d(Y) := \{D \in \text{Pic}_d(Y) \left|\right. \eta(D) - D = (f) \text{ for a merom. } f \text{ with } f\eta^*\bar{f} = -1\}$$ be the set of **quaternionic divisors** of degree $d$ with respect to the involution $\eta$.
Recall that $\text{Jac}(Y) \simeq \text{Pic}_0(Y)$. $\text{Pic}^{\RR}_{d}(Y)$ is also called the *real part* of $\text{Pic}_{d}(Y)$. If we consider $H^1_{\RR}(Y,\mathcal{O})$ and restrict to the connected component $\text{Pic}^{\RR}_0(Y)$ of $\text{Pic}(Y)$, we get a map $L: \RR^g \simeq H^1_{\RR}(Y,\mathcal{O}) \to \text{Pic}^{\RR}_0(Y), (t_0,\ldots,t_{g-1}) \mapsto L(t_0,\ldots,t_{g-1})$ from the sequence \[eq\_sequence\] and Lemma \[Lemma\_reality\_H1\].\
It is well known that the divisor $D(u,u_y)$ from Definition \[definition\_eigen\_bundle\] satisfies $D(u,u_y) \in \text{Pic}^{\RR}_{g+1}(Y)$ (see [@Hitchin_Harmonic]). One can also show [@Knopf_phd Theorem 5.17] that the action of the tensor product on holomorphic line bundles induces a continuous commutative and transitive group action of $\RR^g$ on $\text{Pic}_{g+1}^{\RR}(Y)$, which is denoted by $$\pi: \RR^g \times \text{Pic}_{g+1}^{\RR}(Y) \to \text{Pic}_{g+1}^{\RR}(Y),\;\;\; ((t_0,\ldots,t_{g-1}),E) \mapsto \pi(t_0,\ldots,t_{g-1})(E)$$ with $$\pi(t_0,\ldots,t_{g-1})(E) = E \otimes L(t_0,\ldots,t_{g-1}).$$ Here $L(t_0,\ldots,t_{g-1})$ is the family in $\text{Pic}_0^{\RR}(Y)$ which is obtained by applying Krichever’s construction procedure.
### Loop groups and the Iwasawa decomposition
For real $r \in (0,1]$, denote the circle with radius $r$ by $\SS_r = \{\lambda \in \CC \left|\right. |\lambda| = r\}$ and the open disk with boundary $\SS_r$ by $I_r = \{\lambda \in \CC \left|\right. |\lambda| < r\}$. Moreover, the open annulus with boundaries $\SS_r$ and $\SS_{1/r}$ is given by $A_r = \{\lambda \in \CC \left|\right. r <|\lambda| < 1/r\}$ for $r \in (0,1)$. For $r=1$ we set $A_1 := \SS^1$. The loop group $\Lambda_r SL(2,\CC)$ of $SL(2,\CC)$ is the infinite dimensional Lie group of analytic maps from $\SS_r$ to $SL(2,\CC)$, i.e. $$\Lambda_r SL(2,\CC) = \mathcal{O}(\SS_r,SL(2,\CC)).$$ We will also need the following two subgroups of $\Lambda_r SL(2,\CC)$: First let $$\Lambda_r SU(2) = \{F \in \mathcal{O}(A_r,SL(2,\CC)) \left|\right. F|_{\SS^1} \in SU(2)\}.$$ Thus we have $$F_{\lambda} \in \Lambda_r SU(2) \; \Longleftrightarrow \; \overline{F_{1/\bar{\lambda}}}^t = F_{\lambda}^{-1}.$$ The second subgroup is given by $$\Lambda_r^+ SL(2,\CC) = \{B \in \mathcal{O}(I_r \cup \SS_r,SL(2,\CC)) \left|\right. B(0) = \left(\begin{smallmatrix} \rho & c \\ 0 & 1/\rho \end{smallmatrix}\right) \text{ for } \rho \in \RR^+ \text{ and } c \in \CC \}.$$ The normalization $B(0) = B_0$ ensures that $$\Lambda_r SU(2) \cap \Lambda_r^+ SL(2,\CC) = \{\unity\}.$$
Now we have the following important result due to Pressley-Segal [@Pressley_Segal] that has been generalized by McIntosh [@McIntosh_Toda].
\[theorem\_iwasawa\] The multiplication $\Lambda_r SU(2) \times \Lambda_r^+ SL(2,\CC) \to \Lambda_r SL(2,\CC)$ is a surjective real analytic diffeomorphism. The unique splitting of an element $\phi_{\lambda} \in \Lambda_r SL(2,\CC)$ into $$\phi_{\lambda} = F_{\lambda} B_{\lambda}$$ with $F_{\lambda} \in \Lambda_r SU(2)$ and $B_{\lambda} \in \Lambda_r^+ SL(2,\CC)$ is called **r-Iwasawa decomposition** of $\phi_{\lambda}$ or **Iwasawa decomposition** if $r = 1$.
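To get a feeling for this splitting, consider the $\lambda$-independent case of a constant loop, where by uniqueness the decomposition reduces to the classical Gram-Schmidt factorization of $SL(2,\CC)$; for example $$\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} \sqrt{2} & \tfrac{1}{\sqrt{2}} \\ 0 & \tfrac{1}{\sqrt{2}} \end{pmatrix},$$ where the first factor lies in $SU(2)$ and the second factor is of the normalized triangular form with $\rho = \sqrt{2}$.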
The r-Iwasawa decomposition also holds on the Lie algebra level, i.e. $\Lambda_r \mathfrak{sl}_2(\CC) = \Lambda_r \mathfrak{su}_2 \oplus \Lambda_r^+ \mathfrak{sl}_2(\CC)$. This decomposition will play a very important role in the following.
In order to describe the infinitesimal isospectral deformations of $\xi_{\lambda}$ and $U_{\lambda}$, we follow the exposition of [@Audin], IV.2.e. Let us transfer those methods to our situation.
\[theorem\_vf\_isospectral\_xi\] Let $f_0(\lambda,\nu) = \sum_{i=0}^{g-1} c_i \lambda^{-i-1} \nu$ be the representative of $[(f_0,\eta^*\bar{f}_0)] \in H^1_{\RR}(Y,\mathcal{O})$ from Lemma \[Lemma\_reality\_H1\] and let $$A_{f_0} := P(f_0) + \sigma^*P(f_0) = \sum_{i=0}^{g-1} c_i \lambda^{-i} (P(\lambda^{-1}\nu) + \sigma^*P(\lambda^{-1}\nu)) = \sum_{i=0}^{g-1} c_i \lambda^{-i} \xi_{\lambda}$$ be the induced element in $\Lambda_r\mathfrak{sl}_2(\CC)$. Then the vector field of the isospectral action $\pi: \RR^g \times \text{Pic}_{g+1}^{\RR}(Y) \to \text{Pic}_{g+1}^{\RR}(Y)$ at $E(u,u_y)$ takes the value $$\dot{\xi}_{\lambda} = [A_{f_0}^+,\xi_{\lambda}] = -[\xi_{\lambda},A_{f_0}^-].$$ Here $A_{f_0} = A_{f_0}^+ + A_{f_0}^-$ is the Lie algebra decomposition of the Iwasawa decomposition.
We write $\nu$ for $\widetilde{\nu}$. Obviously there holds $A_{f_0} v = f_0 v$. If we write $\widetilde{v} = e^{\beta_0(t)}v$ for local sections $\widetilde{v}$ of $\mathcal{O}_D \otimes L(t_0,\ldots,t_{g-1})$ and $v$ of $\mathcal{O}_D$ with $\beta_0(t) = \sum_{i=0}^{g-1}c_i t_i \lambda^{-i-1} \nu$ we get $$\begin{aligned}
\delta \widetilde{v} & = & \dot{\beta}_0(t)\widetilde{v} + e^{\beta_0(t)}\delta v = f_0\widetilde{v} + e^{\beta_0(t)}\delta v \\
& = & A_{f_0} \widetilde{v} + e^{\beta_0(t)}\delta v \\
& = & e^{\beta_0(t)}(A_{f_0} v + \delta v).\end{aligned}$$ Moreover, $(f_0) \geq 0$ on $Y^*$. This shows that $A_{f_0} v +\delta v$ is a vector-valued section of $\mathcal{O}_{D}$ on $Y^*$ and defines a map $A_{f_0}^+$ such that $$A_{f_0} v + \delta v = A_{f_0}^+ v$$ holds. Since $A_{f_0} v = f_0 v$ we also obtain the equations $$\begin{cases}
\xi_{\lambda} v = \nu v, \\
\xi_{\lambda}(A_{f_0}^+ v -\delta v) = \nu(A_{f_0}^+ v -\delta v).
\end{cases}$$ This implies $$\xi_{\lambda}\delta v + [A_{f_0}^+,\xi_{\lambda}]v = \nu \delta v.$$ Differentiating the equation $\xi_{\lambda} v = \nu v$ we additionally obtain $$\dot{\xi}_{\lambda} v + \xi_{\lambda}\delta v = \dot{\nu}v + \nu\delta v = \nu\delta v.$$ Combining the last two equations yields $$\dot{\xi}_{\lambda} v = [A_{f_0}^+,\xi_{\lambda}]v$$ and concludes the proof since this equation holds for a basis of eigenvectors.
\[remark\_Af\_minus\] The decomposition of $A_{f_0} \in \Lambda_r\mathfrak{sl}_2(\CC) = \Lambda_r \mathfrak{su}_2 \oplus \Lambda^+_{r}\mathfrak{sl}_2(\CC)$ yields $A_{f_0} = A_{f_0}^+ + A_{f_0}^-$ and therefore $A_{f_0} v + \delta v = A_{f_0}^+ v$ implies $$\delta v = -A_{f_0}^- v.$$ In particular $A_{f_0}^-$ is given by $A_{f_0}^- = -\sum \frac{\delta v w^t}{w^t v}$.
We want to extend Theorem \[theorem\_vf\_isospectral\_xi\] to obtain the value of the vector field induced by $\pi: \RR^g \times \text{Pic}_{g+1}^{\RR}(Y) \to \text{Pic}_{g+1}^{\RR}(Y)$ for $U_{\lambda}$.
\[theorem\_vf\_isospectral\_U\] Let $f_0(\lambda,\nu) = \sum_{i=0}^{g-1} c_i \lambda^{-i-1} \nu$ be the representative of $[(f_0,\eta^*\bar{f}_0)] \in H^1_{\RR}(Y,\mathcal{O})$ from Lemma \[Lemma\_reality\_H1\] and let $$A_{f_0}(x) := P_x(f_0) + \sigma^*P_x(f_0) = \sum_{i=0}^{g-1} c_i \lambda^{-i} (P_x(\lambda^{-1}\nu) + \sigma^*P_x(\lambda^{-1}\nu)) = \sum_{i=0}^{g-1} c_i \lambda^{-i} \zeta_{\lambda}(x)$$ be the induced map $A_{f_0}: \RR \to \Lambda_r\mathfrak{sl}_2(\CC)$. Then the vector field of the isospectral action $\pi: \RR^g \times \text{Pic}_{g+1}^{\RR}(Y) \to \text{Pic}_{g+1}^{\RR}(Y)$ at $E(u,u_y)$ takes the value $$\delta U_{\lambda}(x) = [A_{f_0}^+(x),L_{\lambda}(x)] = [L_{\lambda}(x),A_{f_0}^-(x)].$$ Here $A_{f_0}(x) = A_{f_0}^+(x) + A_{f_0}^-(x)$ is the Lie algebra decomposition of the Iwasawa decomposition.
Obviously $A_{f_0}(x) v(x) = f_0 v(x)$. In analogy to the proof of Theorem \[theorem\_vf\_isospectral\_xi\] we obtain a map $A_{f_0}^+(x)$ such that $$A_{f_0}(x) v(x) + \delta v(x) = A_{f_0}^+(x) v(x)$$ holds. Since $A_{f_0}(x) v(x) = f_0 v(x)$ we also obtain the equations $$\begin{cases}
L_{\lambda}(x) v(x) = \tfrac{\ln\mu}{\mathbf{p}} v(x), \\
L_{\lambda}(x) (A_{f_0}^+(x) v(x) -\delta v(x)) = \tfrac{\ln\mu}{\mathbf{p}}(A_{f_0}^+(x) v(x) -\delta v(x))
\end{cases}$$ around $\lambda = 0$. This implies $$L_{\lambda}(x)\delta v(x) + [A_{f_0}^+(x),L_{\lambda}(x)]v(x) = \tfrac{\ln\mu}{\mathbf{p}} \delta v(x).$$ Differentiating the equation $L_{\lambda}(x) v(x) = \tfrac{\ln\mu}{\mathbf{p}} v(x)$ we additionally obtain $$\delta L_{\lambda}(x) v(x) + L_{\lambda}(x) \delta v(x) = \tfrac{\delta\ln\mu}{\mathbf{p}} v(x) + \tfrac{\ln\mu}{\mathbf{p}} \delta v(x) = \tfrac{\ln\mu}{\mathbf{p}} \delta v(x).$$ Combining the last two equations yields $$\delta L_{\lambda}(x) v(x) = \delta U_{\lambda}(x) v(x) = [A_{f_0}^+(x),L_{\lambda}(x)]v(x).$$
Infinitesimal deformations of spectral curves {#subsection_spectral_curve_deformation}
---------------------------------------------
Let $\Sigma_g^{\mathbf{p}}$ denote the space of smooth hyperelliptic Riemann surfaces $Y$ of genus $g$ with the properties described in Theorem \[theorem\_spectral\_curve\], such that $d\ln\mu$ has no roots at the branchpoints of $Y$.
We will now investigate deformations of $\Sigma_g^{\mathbf{p}} \simeq \mathcal{M}^1_g(\mathbf{p})$. For this, following the expositions of [@Grinevich_Schmidt; @Hauswirth_Kilian_Schmidt_2; @Kilian_Schmidt_cylinders], we derive vector fields on open subsets of $\mathcal{M}^1_g(\mathbf{p})$ and parametrize the corresponding deformations by a parameter $t \in [0,\varepsilon)$. We consider deformations of $Y(u,u_y)$ that preserve the periods of $d\ln\mu$. We already know that $$\begin{gathered}
\int_{a_i} d\ln\mu = 0 \;\; \text{ and } \;\; \int_{b_i} d\ln\mu \in 2\pi i \ZZ \;\; \text{ for } i = 1,\ldots,g.\end{gathered}$$ Considering the Taylor expansion of $\ln\mu$ with respect to $t$ we get $$\ln\mu(t) = \ln\mu(0) + t\, \del_t\ln\mu(0) + O(t^2)$$ and thus $$d \ln\mu(t) = d \ln\mu(0) + t\, d \del_t \ln\mu(0) + O(t^2).$$ Given a closed cycle $\gamma \in H_1(Y,\ZZ)$ we have $$\int_{\gamma} d \ln\mu(t) = \int_{\gamma} d \ln\mu(0) + t \int_{\gamma} d \del_t \ln\mu(0) + O(t^2)$$ and therefore $$\frac{d}{dt}\left(\int_{\gamma} d \ln\mu(t)\right)\bigg|_{t=0} = \int_{\gamma} d \del_t \ln\mu(0) = 0.$$ This shows that deformations resulting from the prescription of $\del_t \ln\mu|_{t=0}$ at $t=0$ preserve the periods of $d\ln\mu$ infinitesimally along the deformation and thus are *isoperiodic*. If we set $\dot{\lambda} = 0$ we can consider $\ln\mu$ as a function of $\lambda$ and $t$ and get $$\ln\mu(\lambda,t) = \ln\mu(\lambda,0) + t\, \del_t\ln\mu(\lambda,0) + O(t^2)$$ as well as $$\del_{\lambda}\ln\mu(\lambda,t) = \del_{\lambda}\ln\mu(\lambda,0) + t\, \del^2_{\lambda\,t} \ln\mu(\lambda,0) + O(t^2).$$ On the compact spectral curve $Y$ the function $\del_{\lambda}\ln\mu$ is given by $$\del_{\lambda}\ln\mu = \frac{b(\lambda)}{\lambda \, \nu}$$ and the compatibility condition $\del^2_{t\, \lambda} \ln\mu|_{t=0} = \del^2_{\lambda\, t} \ln\mu|_{t=0}$ will lead to a deformation of the spectral data $(a,b)$ or equivalently to a deformation of the spectral curve $Y$ with its differential $d\ln\mu$. Therefore this deformation is *non-isospectral*. In the following we will investigate which conditions $\delta\ln\mu := (\del_t \ln\mu)|_{t=0}$ has to obey in order to obtain such a deformation.\
[**The Whitham deformation.**]{} Following the ansatz given in [@Hauswirth_Kilian_Schmidt_2] we consider the function $\ln\mu$ as a function of $\lambda$ and $t$ and write $\ln\mu$ locally as $$\ln\mu =
\begin{cases}
f_{\alpha_i}(\lambda)\sqrt{\lambda-\alpha_i} + \pi i n_i & \text{at a zero } \alpha_i \text{ of } a, \\
f_0(\lambda)\lambda^{-1/2} + \pi i n_0 & \text{at } \lambda = 0, \\
f_{\infty}(\lambda)\lambda^{1/2} + \pi i n_{\infty} & \text{at } \lambda = \infty.
\end{cases}$$ Here we choose small neighborhoods around the branch points such that each neighborhood contains at most one branch point. Moreover, the functions $f_{\alpha_i},f_0,f_{\infty}$ do not vanish at the corresponding branch points. If we write $\dot{g}$ for $(\del_t g)|_{t=0}$ we get $$(\del_t\ln\mu)|_{t=0} =
\begin{cases}
\dot{f}_{\alpha_i}(\lambda)\sqrt{\lambda-\alpha_i} - \frac{\dot{\alpha_i}f_{\alpha_i}(\lambda)}{2\sqrt{\lambda - \alpha_i}} & \text{at a zero } \alpha_i \text{ of } a, \\
\dot{f}_0(\lambda) \lambda^{-1/2} & \text{at } \lambda = 0, \\
\dot{f}_{\infty}(\lambda)\lambda^{1/2} & \text{at } \lambda = \infty.
\end{cases}$$ Since the branches of $\ln\mu$ differ from each other by an element in $2\pi i \ZZ$ we see that $\delta\ln\mu = (\del_t \ln\mu)|_{t=0}$ is a single-valued meromorphic function on $Y$ with poles at the branch points of $Y$, i.e. the poles of $\delta\ln\mu$ are located at the zeros of $a$ and at $\lambda = 0$ and $\lambda = \infty$. Thus we have $$\delta\ln\mu = \frac{c(\lambda)}{\nu}$$ with a polynomial $c$ of degree at most $g+1$. Since $\eta^*\delta\ln\mu = \delta\ln\bar{\mu}$ and $\eta^*\nu = \bar{\lambda}^{-g-1}\bar{\nu}$ the polynomial $c$ obeys the reality condition $$\lambda^{g+1}\overline{c(\bar{\lambda}^{-1})} = c(\lambda). \label{reality_condition_c}$$ Differentiating $\nu^2 = \lambda a$ with respect to $t$ we get $2\nu\dot{\nu} = \lambda \dot{a}$. The same computation for the derivative with respect to $\lambda$ gives $2\nu\nu' = a + \lambda a'$. Now a direct calculation shows $$\begin{aligned}
\del^2_{t\, \lambda} \ln\mu|_{t=0} & = & \del_t \left(\frac{b}{\lambda\, \nu}\right)\bigg|_{t=0} = \frac{\dot{b}\lambda\nu - b\lambda\dot{\nu}}{\lambda^2\nu^2} = \frac{2\dot{b}a-b\dot{a}}{2\nu^3}, \\
\del^2_{\lambda\, t} \ln\mu|_{t=0} & = & \del_{\lambda}\left(\frac{c}{\nu}\right) = \frac{c'\nu - c\nu'}{\nu^2} = \frac{2c'\nu^2 - 2c\nu\nu'}{2\nu^3} = \frac{2c'\lambda a - ca -c\lambda a'}{2\nu^3}.\end{aligned}$$ The compatibility condition $\del^2_{t\, \lambda} \ln\mu|_{t=0} = \del^2_{\lambda\, t} \ln\mu|_{t=0}$ holds if and only if -2a + b = -2ac’ + ac + a’c. \[compatibility\_deformations\] Equation is the so-called **Whitham equation**. Both sides of this equation are polynomials of degree at most $3g+1$ and therefore describe relations for $3g+2$ coefficients. If we choose a polynomial $c$ that obeys the reality condition we obtain a vector field on $\mathcal{M}^1_g(\mathbf{p})$. Since $(a,b) \in \mathcal{M}^1_g(\mathbf{p})$ have no common roots, the polynomials $a,b,c$ in equation uniquely define a tangent vector $(\dot{a},\dot{b})$ (see [@Hauswirth_Kilian_Schmidt_2], Section 9). In the following we will specify such polynomials $c$ that lead to deformations which do not change the period $\mathbf{p}$ of $(u,u_y)$ (cf. [@Hauswirth_Kilian_Schmidt_2], Section 9).\
[**Preserving the period $\mathbf{p}$ along the deformation.**]{} If we evaluate the compatibility equation at $\lambda = 0$ we get $$-2\dot{b}(0)a(0) + b(0)\dot{a}(0) = a(0)c(0).$$ Moreover, $$\dot{\mathbf{p}} = 2 \frac{d}{dt}\left(\frac{b(0)}{i\sqrt{a(0)}}\right)\bigg|_{t=0} = \frac{-2\dot{b}(0)a(0) + b(0)\dot{a}(0)}{-i (a(0))^{3/2}} = i\frac{c(0)}{\sqrt{a(0)}}.$$ This proves the following lemma.
\[lemma\_fixed\_period\] Vector fields on $\mathcal{M}^1_g(\mathbf{p})$ that are induced by polynomials $c$ obeying \[compatibility\_deformations\] preserve the period $\mathbf{p}$ of $(u,u_y)$ if and only if $c(0) = 0$.
Let us take a closer look at the space of polynomials $c$ that induce a Whitham deformation.
\[lemma\_coefficients\_c\] For the coefficients of the polynomial $c(\lambda) = \sum_{i=0}^{g+1} c_i \lambda^i$ obeying the reality condition \[reality\_condition\_c\] there holds $c_i = \bar{c}_{g+1-i}$ for $i=0,\ldots,g+1$.
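Indeed, inserting $c(\lambda) = \sum_{i=0}^{g+1} c_i \lambda^i$ into the reality condition gives $$\lambda^{g+1}\overline{c(\bar{\lambda}^{-1})} = \sum_{i=0}^{g+1} \bar{c}_i \lambda^{g+1-i} = \sum_{j=0}^{g+1} \bar{c}_{g+1-j} \lambda^{j} \stackrel{!}{=} \sum_{j=0}^{g+1} c_j \lambda^{j},$$ and a comparison of coefficients yields $c_j = \bar{c}_{g+1-j}$ for $j = 0,\ldots,g+1$.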
\[dimension\_moduli\_space\] The space of polynomials $c$ corresponding to deformations of spectral curve data $(a,b) \in \mathcal{M}^1_g(\mathbf{p})$ (with fixed period $\mathbf{p}$) is $g$-dimensional.
The space of polynomials $c$ of degree at most $g+1$ obeying the reality condition is $(g+2)$-dimensional. From Lemma \[lemma\_fixed\_period\] we know that $\dot{\mathbf{p}} = 0$ if and only if $c(0) = 0$. This yields the claim.
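For example, for $g = 1$ a polynomial $c(\lambda) = c_0 + c_1\lambda + c_2\lambda^2$ obeying the reality condition satisfies $c_0 = \bar{c}_2$ and $c_1 \in \RR$, which amounts to $g+2 = 3$ real parameters; the additional normalization $c(0) = c_0 = 0$ also forces $c_2 = 0$, so that only the $g = 1$ real parameter $c_1$ remains.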
$\mathcal{M}_g^1(\mathbf{p})$ is a smooth $g$-dimensional manifold
------------------------------------------------------------------
From Proposition \[dimension\_moduli\_space\] we know that the space of polynomials $c$ corresponding to deformations of $\mathcal{M}^1_g(\mathbf{p})$ with fixed period $\mathbf{p}$ is $g$-dimensional. In the following we want to show that $\mathcal{M}^1_g(\mathbf{p})$ is a real $g$-dimensional manifold. Therefore, we follow the terminology introduced by Carberry and Schmidt in [@Carberry_Schmidt].\
Let us recall the conditions that characterize a representative $(a,b) \in \CC^{2g}[\lambda]\times \CC^{g+1}[\lambda]$ of an element in $\mathcal{M}_g(\mathbf{p})$:
1. $\lambda^{2g}\overline{a(\bar{\lambda}^{-1})} = a(\lambda)$ and $\lambda^{-g}a(\lambda) < 0$ for all $\lambda \in \SS^1$ and $|a(0)| = 1$.
2. $\lambda^{g+1}\overline{b(\bar{\lambda}^{-1})} = -b(\lambda)$.
3. $f_i(a,b) := \int_{\alpha_i}^{1/\bar{\alpha}_i} \frac{b}{\nu} \frac{d\lambda}{\lambda} = 0$ for the roots $\alpha_i$ of $a$ in the open unit disk $\DD \subset \CC$.
4. The unique function $h: \widetilde{Y} \to \CC$ with $\sigma^*h = -h$ and $dh=\frac{b}{\nu} \frac{d\lambda}{\lambda}$ satisfies $h(\alpha_i) \in \pi i \ZZ$ for all roots $\alpha_i$ of $a$.
Let $\mathcal{H}^g$ be the set of polynomials $a \in \CC^{2g}[\lambda]$ that satisfy condition (i) and whose roots are pairwise distinct.
Every $a \in \mathcal{H}^g$ corresponds to a smooth spectral curve. Moreover, every $a \in \mathcal{H}^g$ is uniquely determined by its roots.
For every $a\in \mathcal{H}^g$ let the space $\mathcal{B}_a$ be given by $$\mathcal{B}_a := \{b \in \CC^{g+1}[\lambda] \left|\right. b \text{ satisfies conditions (ii) and (iii)}\}.$$
Since (iii) imposes $g$ linearly independent constraints on the $(g+2)$-dimensional space of polynomials $b \in \CC^{g+1}[\lambda]$ obeying the reality condition (ii) we get
\[prop\_dim\_B\_a\] $\dim_{\RR} \mathcal{B}_a = 2$. In particular every $b_0 \in \CC$ uniquely determines an element $b \in \mathcal{B}_a$ with $b(0) = b_0$.
Using the Implicit Function Theorem one obtains the following proposition.
The set $$M:= \{(a,b) \in \CC^{2g}[\lambda]\times \CC^{g+1}[\lambda] \left|\right. a \in \mathcal{H}^g, (a,b) \text{ have no common roots and } b \text{ satisfies (ii)}\}$$ is an open subset of a $(3g+2)$-dimensional real vector space. Moreover, the set $$N:= \{(a,b) \in M \left|\right. f_i(a,b) = 0 \text{ for } i=1,\ldots,g\}$$ defines a real submanifold of $M$ of dimension $2g+2$ that is parameterized by $(a,b(0))$. If $b(0) = b_0$ is fixed we get a real submanifold of dimension $2g$.
The results in [@Hauswirth_Kilian_Schmidt_1; @Hauswirth_Kilian_Schmidt_2] yield the following theorem (cf. Lemma 5.3 in [@Hauswirth_Kilian_Schmidt_3]).
\[theorem\_Mg\] For a fixed choice $n_1,\ldots,n_g \in \ZZ$ the map $h = (h_1,\ldots,h_g): N \to (i\RR/2\pi i \ZZ)^g \simeq (\SS^1)^g$ with $$h_j: N \to i\RR/2\pi i \ZZ, \;\; (a,b) \mapsto h_j(a,b):= \ln\mu(\alpha_j) - \pi i n_j$$ is smooth and its differential $dh$ has full rank. In particular $\mathcal{M}^1_g(\mathbf{p}) = h^{-1}[0]$ defines a real submanifold of dimension $g$. Here we consider $b(0) = b_0$ as fixed, i.e. $\dim_{\RR}(N) = 2g$.
Let us consider an integral curve $(a(t),b(t))$ for the vector field $X_c$ that corresponds to a Whitham deformation that is induced by a polynomial $c$ obeying the reality condition \[reality\_condition\_c\]. Then there holds $h(a(t),b(t)) \equiv \text{const.}$ along this deformation and therefore $$dh(a(t),b(t))\cdot (\dot{a}(t),\dot{b}(t)) = dh(a(t),b(t))\cdot X_c(a(t),b(t)) = 0.$$ From Proposition \[dimension\_moduli\_space\] we know that the space of polynomials $c$ that correspond to a deformation with fixed period $\mathbf{p}$ is $g$-dimensional. We will now show that the map $$c \mapsto X_c(a(t),b(t)) = (\dot{a}(t),\dot{b}(t)) \in \ker dh(a(t),b(t)) \label{eq_c_deformations}$$ is one-to-one and onto. The first part of the claim is obvious. For the second part consider the functions $$f_{b_j}(a,b) := \int_{b_j} d\ln\mu = \int_{b_j} \frac{b(\lambda)}{\nu}\frac{d\lambda}{\lambda} = \ln\mu(\alpha_j) = \pi i n_j \in \pi i \ZZ$$ along $(a(t),b(t))$, where the $b_j$ are the $b$-cycles of $Y$. Taking the derivative yields $$\frac{d}{dt} f_{b_j}(a,b)\big|_{t=0} = \int_{b_j} \frac{d}{dt}\left(\frac{b(\lambda)}{\nu}\right)\bigg|_{t=0} \frac{d\lambda}{\lambda} = \int_{b_j} \del_t (\del_{\lambda}\ln \mu) |_{t=0} \, d\lambda = 0.$$ Moreover, for the $a$-cycles $a_j$ we have $$f_{a_j}(a,b) = f_j(a,b) = \int_{a_j} d\ln\mu = \int_{a_j} \frac{b(\lambda)}{\nu}\frac{d\lambda}{\lambda} = 0$$ and consequently $$\frac{d}{dt} f_{a_j}(a,b)\big|_{t=0} = \int_{a_j} \frac{d}{dt}\left(\frac{b(\lambda)}{\nu}\right)\bigg|_{t=0} \frac{d\lambda}{\lambda} = \int_{a_j} \del_t (\del_{\lambda}\ln \mu) |_{t=0} \, d\lambda = 0.$$ Since all integrals of $\del_t (\del_{\lambda}\ln \mu) |_{t=0}$ vanish, there exists a meromorphic function $\phi$ with $$d\phi = \del_t (\del_{\lambda}\ln \mu) |_{t=0} \,d\lambda.$$ Due to the Whitham equation this function is given by $\phi = (\del_t\ln \mu)|_{t=0}= \frac{c}{\nu}$. Thus the map in \[eq\_c\_deformations\] is bijective. This shows $\dim (\ker dh) = g$ and consequently $\dim (\text{im}\,dh) = g$ as well. Therefore $dh:\RR^{2g} \to \RR^g$ has full rank and the claim follows.
The phase space $(M_g^{\mathbf{p}},\Omega)$
===========================================
In the following we will define the phase space of our integrable system. We need some preparation and first recall the generalized Weierstrass representation [@DPW]. Set $$\Lambda_{-1}^{\infty}\mathfrak{sl}_2(\CC) = \{ \xi_{\lambda} \in \mathcal{O}(\CC^*,\mathfrak{sl}_2(\CC)) \left|\right. (\lambda \xi_{\lambda})_{\lambda = 0} \in \CC^* \epsilon_+\}.$$ A **potential** is a holomorphic $1$-form $\xi_{\lambda}dz$ on $\CC$ with $\xi_{\lambda} \in \Lambda_{-1}^{\infty}\mathfrak{sl}_2(\CC)$. Given such a potential one can solve the holomorphic ODE $d\phi_{\lambda} = \phi_{\lambda}\xi_{\lambda}$ to obtain a map $\phi_{\lambda}: \CC \to \Lambda_r SL(2,\CC)$. Then Theorem \[theorem\_iwasawa\] yields an extended frame $F_{\lambda}: \CC \to \Lambda_r SU(2)$ via the $r$-Iwasawa decomposition $$\phi_{\lambda} = F_{\lambda} B_{\lambda}.$$ It is proven in [@DPW] that each extended frame can be obtained from a potential $\xi_{\lambda} dz$ by the Iwasawa decomposition. Note, that we have the inclusions $$\mathcal{P}_g \subset \Lambda_{-1}^{\infty}\mathfrak{sl}_2(\CC) \subset \Lambda_r \mathfrak{sl}_2(\CC).$$ An extended frame $F_{\lambda}: \CC \to \Lambda_r SU(2)$ is of **finite type**, if there exists $g \in \NN$ such that the corresponding potential $\xi_{\lambda}dz$ satisfies $\xi_{\lambda} \in \mathcal{P}_g \subset \Lambda_{-1}^{\infty}\mathfrak{sl}_2(\CC)$. We say that a polynomial Killing field has minimal degree if and only if it has neither roots nor poles in $\lambda \in \CC^*$. We will need the following proposition that summarizes two results by Burstall-Pedit [@Burstall_Pedit_1; @Burstall_Pedit_2].
\[proposition\_minimal\_degree\] For an extended frame of finite type there exists a unique polynomial Killing field of minimal degree. There is a smooth 1:1 correspondence between the set of extended frames of finite type and the set of polynomial Killing fields without zeros.
Consider the map $A: \xi_{\lambda} \mapsto A(\xi_{\lambda}) := -\lambda \det \xi_{\lambda}$ (see [@Hauswirth_Kilian_Schmidt_1]) and set $\mathcal{P}_g^1(\mathbf{p}) := A^{-1}[\mathcal{M}_g^1(\mathbf{p})]$. Moreover, denote by $C_{\mathbf{p}}^{\infty} := C^{\infty}(\RR/\mathbf{p}\ZZ)$ the Fréchet space of real-valued, infinitely differentiable functions of period $\mathbf{p} \in \RR^+$. The above discussion yields an injective map $$\phi: \mathcal{P}_g^1(\mathbf{p}) \subset \Lambda_{-1}^{\infty}\mathfrak{sl}_2(\CC) \to \phi[\mathcal{P}_g^1(\mathbf{p})] \subset C_{\mathbf{p}}^{\infty} \times C_{\mathbf{p}}^{\infty},\; \xi_{\lambda} \mapsto (u(\xi_{\lambda}), u_y(\xi_{\lambda})).$$
Let $M_g^{\mathbf{p}}$ denote the space of $(u,u_y) \in C_{\mathbf{p}}^{\infty} \times C_{\mathbf{p}}^{\infty}$ (with fixed period $\mathbf{p}$) such that $(u,u_y)$ is of finite type in the sense of Def. \[killing1\], where $\Phi_{\lambda}$ is of fixed degree $g \in \NN_0$, and $\zeta_{\lambda}(0) \in \mathcal{P}_g^1(\mathbf{p})$ with $\zeta_{\lambda} = \Phi_{\lambda} - \lambda^{g-1}\overline{\Phi_{1/\bar{\lambda}}}^t$, i.e. $M_g^{\mathbf{p}} := \phi[\mathcal{P}_g^1(\mathbf{p})]$.
Now we are able to prove the following lemma.
\[lemma\_phi\_embedding\] The map $\phi: \mathcal{P}_g^1(\mathbf{p}) \to M_g^{\mathbf{p}},\; \xi_{\lambda} \mapsto (u(\xi_{\lambda}), u_y(\xi_{\lambda}))$ is an embedding.
From the previous discussion we know that $\phi: \mathcal{P}_g^1(\mathbf{p}) \to M_g^{\mathbf{p}}$ is bijective. We show that $\phi^{-1}: M_g^{\mathbf{p}} \to \mathcal{P}_g^1(\mathbf{p})$ is continuous. Assume that $g$ is the minimal degree for $\xi_{\lambda} \in \mathcal{P}^1_g(\mathbf{p})$ (see Proposition \[proposition\_minimal\_degree\]). Then the Jacobi fields $$(\omega_0,\del_y \omega_0),\ldots,(\omega_{g-1},\del_y\omega_{g-1}) \in C^{\infty}(\CC/\mathbf{p}\ZZ) \times C^{\infty}(\CC/\mathbf{p}\ZZ)$$ are linearly independent over $\CC$ with all their derivatives up to order $2g+1$. We will now show that they stay linearly independent if we restrict them to $\RR$. For this, suppose that they are linearly dependent on $\RR$ with all their derivatives up to order $2g+1$. Since $u$ solves the elliptic $\sinh$-Gordon equation with analytic coefficients $u$ is analytic on $\CC$ [@Lewy]. Thus the $(\omega_i,\del_y \omega_i)$ are analytic as well since they only depend on $u$ and its $k$-th derivatives with $k \leq 2i +1 \leq 2g+1$ (see [@Pinkall_Sterling], Proposition 3.1). Thus they stay linearly dependent on an open neighborhood and the subset $M \subset \CC$ of points such that these functions are linearly dependent is open and closed. Therefore $M = \CC$, a contradiction!\
By considering all derivatives of $(u,u_y)$ up to order $2g+1$ we get a small open neighborhood $U$ of $(u,u_y) \in C^{\infty}(\RR/\mathbf{p}\ZZ) \times C^{\infty}(\RR/\mathbf{p}\ZZ)$ such that the functions $$(\widetilde{\omega}_0,\del_y \widetilde{\omega}_0),\ldots,(\widetilde{\omega}_{g-1},\del_y\widetilde{\omega}_{g-1}) \in C^{\infty}(\RR/\mathbf{p}\ZZ) \times C^{\infty}(\RR/\mathbf{p}\ZZ)$$ remain linearly independent for $(\widetilde{u},\widetilde{u}_y) \in U$. Given $(u,u_y) \in U$ there exist numbers $a_0,\ldots,a_{g-1}$ such that the $g$ vectors $$((\omega_0(a_j),\del_y \omega_0(a_j)),\ldots,(\omega_{g-1}(a_j),\del_y\omega_{g-1}(a_j)))^t$$ are linearly independent. Recall that $(\omega_g,\del_y \omega_g) = \sum_{i=0}^{g-1} c_i (\omega_i,\del_y \omega_i)$ in the finite type situation, which assures the existence of a polynomial Killing field. Inserting these $a_0,\ldots,a_{g-1}$ into the equation $(\omega_g,\del_y \omega_g) = \sum_{i=0}^{g-1} c_i (\omega_i,\del_y \omega_i)$ we obtain an invertible $g\times g$ matrix and can calculate the $c_i$. This shows that the coefficients $c_i$ continuously depend on $(u,u_y) \in U$. Thus for $(u,u_y) \in M_g^{\mathbf{p}}$ and $\varepsilon > 0$ there exists a $\delta_{\varepsilon} > 0$ such that $$\|\xi_{\lambda}(u,u_y) - \xi_{\lambda}(\widetilde{u},\widetilde{u}_y)\| < \varepsilon$$ holds for all Cauchy data $(\widetilde{u},\widetilde{u}_y) \in M_g^{\mathbf{p}}$ with $\|(u,u_y) - (\widetilde{u},\widetilde{u}_y)\| < \delta_{\varepsilon}$, where the norm is given by the supremum of the first $2g+1$ derivatives.
Let us study the map $$Y: M_g^{\mathbf{p}} \to \Sigma_g^{\mathbf{p}} \simeq \mathcal{M}^1_g(\mathbf{p}), \;\; (u,u_y) \mapsto Y(u,u_y)$$ that appears in the diagram $$\xymatrix{
\mathcal{P}_g^1(\mathbf{p}) \ar[d]_{\phi} \ar[dr]^A
& \\
M_g^{\mathbf{p}} \ar[r]^{Y} & \Sigma_g^{\mathbf{p}}}$$
\[proposition\_fiber\_bundle\] The map $Y: M_g^{\mathbf{p}} \to \Sigma_g^{\mathbf{p}} \simeq \mathcal{M}^1_g(\mathbf{p}), \; (u,u_y) \mapsto Y(u,u_y)$ is a principal bundle with fiber $\text{Iso}(Y(u,u_y)) \simeq \text{Pic}_{g+1}^{\RR}(Y(u,u_y)) \simeq (\SS^1)^g$. In particular $M_g^{\mathbf{p}}$ is a manifold of dimension $2g$.
Due to Theorem \[theorem\_Mg\] the space $\mathcal{M}^1_g(\mathbf{p})$ is a smooth $g$-dimensional manifold. From Proposition 4.12 in [@Hauswirth_Kilian_Schmidt_1] we know that the mapping $$A: \mathcal{P}_g^1(\mathbf{p}) \to \mathcal{M}_g^1(\mathbf{p}),\; \xi_{\lambda} \mapsto -\lambda \det(\xi_{\lambda})$$ is a principal fiber bundle with fiber $(\SS^1)^g$ and thus $\mathcal{P}_g^1(\mathbf{p})$ is a manifold of dimension $2g$. Due to Lemma \[lemma\_phi\_embedding\] the map $\phi: \mathcal{P}_g^1(\mathbf{p}) \to M_g^{\mathbf{p}}$ is an embedding and thus $M_g^{\mathbf{p}}$ is a manifold of dimension $2g$ as well.
Note that the structure of such “finite-gap manifolds” is also investigated in [@Dorfmeister_Wu_Loop_groups] and [@Bobenko_Kuksin_Sine; @Kuksin_Hamiltonian_PDE].
Hamiltonian formalism
---------------------
It will turn out that $M_g^{\mathbf{p}}$ can be considered as a symplectic manifold with a certain symplectic form $\Omega$. To see this, we closely follow the exposition of [@McKean] and consider the phase space of $(q,p) \in C_{\mathbf{p}}^{\infty} \times C_{\mathbf{p}}^{\infty}$ equipped with the symplectic form $$\Omega((\delta q, \delta p),(\widetilde{\delta q},\widetilde{\delta p})) = \int_0^{\mathbf{p}} \left(\delta q(x)\widetilde{\delta p}(x) - \widetilde{\delta q}(x)\delta p(x) \right)\,dx$$ and the Poisson bracket $$\{f,g\} = \int_0^{\mathbf{p}} \langle\nabla f, J \nabla g\rangle \,dx \; \text{ with } \; J=\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$ Here $f$ and $g$ are functionals of the form $h:C_{\mathbf{p}}^{\infty} \times C_{\mathbf{p}}^{\infty} \to \RR,\; (q,p)\mapsto h(q,p)$ and $\nabla h$ denotes the corresponding gradient of $h$ in the function space $C_{\mathbf{p}}^{\infty} \times C_{\mathbf{p}}^{\infty}$. Note, that there holds $\{f,g\} = \Omega(\nabla f, \nabla g)$. If we consider functionals $H,f$ on the function space $M = C_{\mathbf{p}}^{\infty} \times C_{\mathbf{p}}^{\infty}$ we have $$df(X) = \int_0^{\mathbf{p}} \langle \nabla f, X\rangle \, dx$$ and $$X(H)= J \nabla H.$$ Since $X(H)$ is a vector field it defines a flow $\Phi: O \subset M \times \RR \to M$ such that $\Phi((q_0,p_0),t)$ solves $$\frac{d}{dt}\Phi((q_0,p_0),t) = X(H)(\Phi((q_0,p_0),t)) \; \text{ with } \; \Phi((q_0,p_0),0) = (q_0,p_0).$$ In the following we will write $(q(t),p(t))^t := \Phi((q_0,p_0),t)$ for integral curves of $X(H)$ that start at $(q_0,p_0)$. A direct calculation shows $$\frac{d}{dt} f(q(t),p(t))\bigg|_{t=0} = df(X(H)) = \int_0^{\mathbf{p}} \langle \nabla f, J \nabla H \rangle \, dx = \{f,H\}$$ and we see again that $f$ is an integral of motion if and only if $f$ and $H$ are in involution. Set $(q,p) = (u,u_y)$, where $u$ is a solution of the $\sinh$-Gordon equation, i.e. $$\Delta u + 2\sinh(2u) = u_{xx} + u_{yy} + 2\sinh(2u) = 0.$$ Setting $t = y$ we can investigate the so-called $\sinh$-Gordon flow that is expressed by $$\frac{d}{d y}\begin{pmatrix} u \\ u_y \end{pmatrix} =
\begin{pmatrix} u_y \\ -u_{xx}-2\sinh(2u) \end{pmatrix} =
J\nabla H_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} \frac{\del H_2}{\del q} \\ \frac{\del H_2}{\del p} \end{pmatrix}$$ with the Hamiltonian $$H_2(q,p) = \int_0^\mathbf{p} \tfrac{1}{2} p^2-\tfrac{1}{2}(q_x)^2+\cosh(2q) \, dx = \int_0^\mathbf{p} \tfrac{1}{2} (u_y)^2 - \tfrac{1}{2}(u_x)^2 + \cosh(2u) \, dx$$ and corresponding gradient $$\nabla H_2 = \left(q_{xx}+2\sinh(2q),p\right)^t = \left(u_{xx}+2\sinh(2u),u_y\right)^t.$$
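This form of the gradient can be checked directly by an integration by parts over one period: for a variation $(\delta q,\delta p)$ one has $$dH_2(\delta q,\delta p) = \int_0^{\mathbf{p}} \left(p\,\delta p - q_x\,\delta q_x + 2\sinh(2q)\,\delta q\right) dx = \int_0^{\mathbf{p}} \left((q_{xx}+2\sinh(2q))\,\delta q + p\,\delta p\right) dx,$$ which identifies $\nabla H_2 = \left(q_{xx}+2\sinh(2q),p\right)^t$.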
\[remark\_periodic\_flow\] Since we have a loop group splitting (the $r$-Iwasawa decomposition) in the finite type situation, all corresponding flows can be integrated. Thus the flow $(q(y),p(y))^t=(u(x,y),u_y(x,y))^t$ that corresponds to the $\sinh$-Gordon flow is defined for all $y \in \RR$.
Due to Remark \[remark\_periodic\_flow\] $q(y) = u(x,y)$ is a periodic solution of the $\sinh$-Gordon equation with $u(x+\mathbf{p},y) = u(x,y)$ for all $(x,y) \in \RR^2$. The Hamiltonian $H_2$ is an integral of motion; another one is associated with the flow of translation (here we set $t = x$) induced by the functional $$H_1(q,p) = \int_0^{\mathbf{p}} pq_x \,dx = \int_0^{\mathbf{p}} u_y u_x \,dx \;\;\text{ with }\;\; \frac{d}{d x}\begin{pmatrix} u \\ u_y \end{pmatrix} =
\begin{pmatrix} u_x \\ u_{yx}\end{pmatrix} =
J\nabla H_1.$$
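As a consistency check one can verify directly that these two functionals are in involution (the involutivity of the whole family of Hamiltonians is discussed in the following section): with $\nabla H_1 = (-p_x,q_x)^t$, again obtained by an integration by parts, we get $$\{H_1,H_2\} = \int_0^{\mathbf{p}} \left(-p_x p - q_x q_{xx} - 2 q_x \sinh(2q)\right) dx = -\int_0^{\mathbf{p}} \del_x\left(\tfrac{1}{2}p^2 + \tfrac{1}{2}(q_x)^2 + \cosh(2q)\right) dx = 0$$ due to the periodicity of $(q,p)$.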
Polynomial Killing fields and integrals of motion
=================================================
Following [@Kilian_Schmidt_infinitesimal], we will now describe how the functions $\varphi((\lambda,\mu),z) := F_{\lambda}^{-1}(z)v(\lambda,\mu)$ and $\psi((\lambda,\mu),z) := \left(\begin{smallmatrix} 0 & i \\ -i & 0 \end{smallmatrix}\right)\sigma^*\varphi((\lambda,\mu),z)$ can be used to express the functions $\omega_n,\sigma_n,\tau_n$ from the Pinkall-Sterling iteration.
\[proposition\_omega\_lambda\] Define $\omega := \psi_1\varphi_1 - \psi_2\varphi_2$. Then
1. The function $h = \psi^t \varphi$ satisfies $dh=0$.
2. The function $\omega$ is in the kernel of the Jacobi operator and can be supplemented to a parametric Jacobi field with corresponding (up to complex constants) $$\tau = \frac{i\psi_2\varphi_1}{e^u},\;\;\; \sigma = -\frac{i\psi_1\varphi_2}{e^u}.$$
\[proposition\_expansion\_omega\] Let $h = \psi^t\varphi$. Then the entries of $$P(z) := \frac{\psi \varphi^t}{\psi^t \varphi} = \frac{1}{h}
\begin{pmatrix}
\psi_1\varphi_1 & \psi_1 \varphi_2 \\
\psi_2\varphi_1 & \psi_2 \varphi_2
\end{pmatrix}$$ have the asymptotic expansions at $\lambda=0$ $$\begin{gathered}
-\frac{i\omega}{2h} = \frac{1}{\sqrt{\lambda}} \sum_{n=1}^{\infty} \omega_n (-\lambda)^n,\;\; -\frac{i\tau}{h} = \frac{1}{\sqrt{\lambda}}\sum_{n=0}^{\infty} \tau_n (-\lambda)^n, \;\;
-\frac{i\sigma}{h} = \frac{1}{\sqrt{\lambda}}\sum_{n=1}^{\infty} \sigma_n (-\lambda)^n.\end{gathered}$$
Consider the asymptotic expansion $$\ln\mu = \frac{1}{\sqrt{\lambda}}\frac{i\mathbf{p}}{2} + \sqrt{\lambda}\sum_{n \geq 0} c_n \lambda^{n} \;\; \text{ at } \; \lambda = 0$$ and set $H_{2n+1} := (-1)^{n+1}\Re(c_{n})$ and $H_{2n+2} := (-1)^{n+1}\Im(c_{n})$ for $n\geq 0$.
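Spelled out in the lowest orders, this convention gives $H_1 = -\Re(c_0)$, $H_2 = -\Im(c_0)$, $H_3 = \Re(c_1)$ and $H_4 = \Im(c_1)$.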
Since $$\ln\mu = \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\int_0^{\mathbf{p}}\left(-i(\del u)^2 + \tfrac{i}{2}\cosh(2u)\right)\,dt + O(\lambda)$$ at $\lambda = 0$ we see that the functions $H_1, H_2$ are given by $$\begin{aligned}
H_1 & = & \int_0^{\mathbf{p}} \tfrac{1}{2} u_y u_x \,dx, \\
H_2 & = & -\int_0^\mathbf{p} \tfrac{1}{4} (u_y)^2 - \tfrac{1}{4}(u_x)^2 + \tfrac{1}{2}\cosh(2u) \, dx.\end{aligned}$$ These functions are proportional to the Hamiltonians that induce the flow of translation and the $\sinh$-Gordon flow respectively.
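Explicitly, the constants of proportionality are $\tfrac{1}{2}$ and $-\tfrac{1}{2}$: with $(q,p) = (u,u_y)$ the functions above equal $\tfrac{1}{2}\int_0^{\mathbf{p}} pq_x \,dx$ and $-\tfrac{1}{2}\int_0^{\mathbf{p}} \tfrac{1}{2} p^2-\tfrac{1}{2}(q_x)^2+\cosh(2q) \, dx$, i.e. one half of the translation Hamiltonian and minus one half of the $\sinh$-Gordon Hamiltonian from the section on the Hamiltonian formalism.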
We will now illustrate the link between the Pinkall-Sterling iteration from Proposition \[prop\_pinkall\_sterling\] and these functions $H_n$ (which we call **Hamiltonians** from now on) and show that the functions $H_n$ are pairwise in involution. Recall the formula $$\frac{d}{dt}H((u,u_y) + t(\delta u,\delta u_y))|_{t=0} = dH_{(u,u_y)}(\delta u,\delta u_y) = \Omega(\nabla H(u,u_y),(\delta u,\delta u_y))$$ from the last section. First we need the following lemma.
\[lemma\_variation\_ln\_mu\] For the map $\ln\mu$ we have the variational formula $$\frac{d}{dt}\ln\mu((u,u_y) + t(\delta u,\delta u_y))\bigg|_{t=0} = \int_0^{\mathbf{p}}\frac{1}{\varphi^t\psi}\psi^t\delta U_{\lambda} \varphi \,dx$$ with $$\delta U_{\lambda} = \frac{1}{2}
\begin{pmatrix}
-i\delta u_y & i\lambda^{-1} e^u \delta u -i e^{-u}\delta u \\
i\lambda e^u\delta u -i e^{-u}\delta u & i\delta u_y
\end{pmatrix}.$$
We follow the ansatz presented in [@McKean], Section $6$, and obtain for $F_{\lambda}(x)$ solving $\frac{d}{dx}F_{\lambda} = F_{\lambda}U_{\lambda}$ with $F_{\lambda}(0) = \unity$ the variational equation $$\frac{d}{dx}\frac{d}{dt}F_{\lambda}(\delta u,\delta u_y)\bigg|_{t=0} = \left(\frac{d}{dt}F_{\lambda}(\delta u,\delta u_y)\bigg|_{t=0}\right) U_{\lambda} + F_{\lambda}\delta U_{\lambda}$$ with $$\left(\frac{d}{dt}F_{\lambda}(\delta u,\delta u_y)\bigg|_{t=0}\right)(0) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$ and $$\delta U_{\lambda} = \frac{1}{2}
\begin{pmatrix}
-i\delta u_y & i\lambda^{-1} e^u \delta u -i e^{-u}\delta u \\
i\lambda e^u\delta u -i e^{-u}\delta u & i\delta u_y
\end{pmatrix}.$$ The solution of this differential equation is given by $$\left(\frac{d}{dt}F_{\lambda}(\delta u,\delta u_y)\bigg|_{t=0}\right)(x) = \left(\int_0^x F_{\lambda}(y) \delta U_{\lambda}(y) F_{\lambda}^{-1}(y) \,dy\right) F_{\lambda}(x)$$ and evaluating at $x = \mathbf{p}$ yields $
\frac{d}{dt}M_{\lambda}(\delta u,\delta u_y)|_{t=0} = \left(\int_0^{\mathbf{p}} F_{\lambda}(y) \delta U_{\lambda}(y) F_{\lambda}^{-1}(y) \,dy\right) M_{\lambda}.
$ Due to Lemma \[lemma\_var\_general\] there holds $$\delta M_{\lambda} = \frac{d}{dt}M_{\lambda}(\delta u,\delta u_y)\bigg|_{t=0}= \left[\sum_{i=1}^2 \frac{(\delta v_i) w_i^t}{w_i^t v_i},M_{\lambda}\right] + (P(\delta\mu) +\sigma^*P(\delta\mu)).$$ If we multiply the last equation with $\frac{w_1^t}{w_1^t v_1}$ from the left and $v_1$ from the right we get $$\mu \frac{w_1^t\delta v_1}{w_1^t v_1} - \mu \frac{w_1^t \delta v_1}{w_1^t v_1} + \delta \mu \frac{w_1^t v_1}{w_1^t v_1} = \delta \mu = \mu \int_0^{\mathbf{p}}\frac{1}{\varphi^t\psi}\psi^t\delta U_{\lambda} \varphi \,dx$$ and therefore $
\frac{d}{dt}\ln\mu(\delta u,\delta u_y)|_{t=0} = \int_0^{\mathbf{p}}\frac{1}{\varphi^t\psi}\psi^t\delta U_{\lambda} \varphi \,dx.
$ This proves the claim.
We will now apply Lemma \[lemma\_variation\_ln\_mu\] and Corollary \[expansion\_mu\] to establish a link between solutions $\omega_n$ of the homogeneous Jacobi equation from Proposition \[prop\_pinkall\_sterling\] and the Hamiltonians $H_n$.
\[theorem\_hamiltonians\] For the series of Hamiltonians $(H_n)_{n\in \NN}$ and solutions $(\omega_n)_{n\in \NN_0}$ of the homogeneous Jacobi equation from the Pinkall-Sterling iteration there holds $$\nabla H_{2n+1} = (\Re(\omega_n(\cdot,0)),\Re(\del_y \omega_n(\cdot,0))) \; \text{ and } \; \nabla H_{2n+2} = (\Im(\omega_n(\cdot,0)),\Im(\del_y \omega_n(\cdot,0))).$$
Considering the result of Lemma \[lemma\_variation\_ln\_mu\] a direct calculation gives $$\begin{aligned}
\frac{d}{dt}\ln\mu(\delta u,\delta u_y)\bigg|_{t=0} & = & \int_0^{\mathbf{p}}\frac{1}{\varphi^t\psi}\psi^t\delta U_{\lambda} \varphi \,dx \\
& \stackrel{h=\varphi^t \psi}{=} & \int_0^{\mathbf{p}}\frac{1}{2h}\psi^t
\begin{pmatrix}
-i\delta u_y & i\lambda^{-1} e^u \delta u -i e^{-u}\delta u \\
i\lambda e^u\delta u -i e^{-u}\delta u & i\delta u_y
\end{pmatrix}
\varphi \,dx \\
& = & \int_0^{\mathbf{p}} \frac{1}{2h} ((\psi_2\varphi_1(i\lambda e^u - ie^{-u}) + \psi_1\varphi_2(i\lambda^{-1}e^u-ie^{-u})) \delta u \\
& & \hspace{1.3cm} -\, i(\psi_1\varphi_1 - \psi_2\varphi_2)\delta u_y) \,dx \\
& \stackrel{\text{Prop. }\ref{proposition_omega_lambda}}{=} & \int_0^{\mathbf{p}} \frac{1}{2h} \left((\lambda e^{2u}\tau - \tau -\lambda^{-1}e^{2u}\sigma +\sigma)\delta u -i\omega \, \delta u_y \right) \,dx \\
& \stackrel{\text{Prop. }\ref{prop_pinkall_sterling}}{=}& \int_0^{\mathbf{p}} \frac{1}{2h} \left(-(\del\omega -\delbar\omega)\delta u -i\omega \, \delta u_y \right) \,dx \\
& = & \int_0^{\mathbf{p}} \frac{1}{2h} \left(i\omega_y \, \delta u -i\omega \, \delta u_y \right) \,dx =
\Omega(-\tfrac{i}{2h} (\omega, \omega_y), (\delta u, \delta u_y)) \\
& = & \Omega\left(\tfrac{-i}{2h}(\Re(\omega), \Re(\omega_y)), (\delta u, \delta u_y)\right) + i\Omega\left(\tfrac{-i}{2h}(\Im(\omega),\Im(\omega_y)), (\delta u, \delta u_y)\right) \\
& = & \sqrt{\lambda}\sum_{n\geq 0}(-1)^{n+1} \Omega\left((\Re(\omega_n), \Re(\del_y\omega_n)), (\delta u, \delta u_y)\right) \lambda^{n} \\
& & +\, i\sqrt{\lambda}\sum_{n\geq 0} (-1)^{n+1} \Omega\left((\Im(\omega_n),\Im(\del_y\omega_n)), (\delta u, \delta u_y)\right) \lambda^{n}\end{aligned}$$ around $\lambda = 0$ due to Proposition \[proposition\_expansion\_omega\]. On the other hand we know from Corollary \[expansion\_mu\] that we have the following asymptotic expansion of $\ln\mu$ around $\lambda = 0$ $$\begin{aligned}
\ln\mu & = & \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\int_0^{\mathbf{p}}\left(-i(\del u)^2 + \tfrac{i}{2}\cosh(2u)\right)\,dt + O(\lambda) \\
& = & \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\sum_{n\geq 0} c_n \lambda^{n} \\
& = & \tfrac{1}{\sqrt{\lambda}}\tfrac{i\mathbf{p}}{2} + \sqrt{\lambda}\sum_{n\geq 0} (-1)^{n+1} H_{2n+1} \lambda^n + i\sqrt{\lambda}\sum_{n\geq 0} (-1)^{n+1} H_{2n+2} \lambda^{n}.\end{aligned}$$ Thus we get $$\begin{aligned}
\frac{d}{dt}\ln\mu(\delta u,\delta u_y)\bigg|_{t=0} & = & \sqrt{\lambda}\sum_{n\geq 0} (-1)^{n+1} \Omega\left(\nabla H_{2n+1}, (\delta u, \delta u_y)\right) \lambda^{n} \\
& & +\, i\sqrt{\lambda}\sum_{n\geq 0} (-1)^{n+1} \Omega\left(\nabla H_{2n+2}, (\delta u, \delta u_y)\right) \lambda^{n}\end{aligned}$$ and a comparison of the coefficients of the two power series yields the claim.
An inner product on $\Lambda_r\mathfrak{sl}_2(\CC)$
===================================================
We already introduced a differential operator $L_{\lambda}:= \frac{d}{dx} + U_{\lambda}$ such that the $\sinh$-Gordon flow can be expressed in commutator form, i.e. $$\tfrac{d}{dy} L_{\lambda} = \tfrac{d}{dy}U_{\lambda} = [L_{\lambda},V_{\lambda}] = \tfrac{d}{dx}V_{\lambda} + [U_{\lambda},V_{\lambda}].$$
In the following we will translate the symplectic form $\Omega$ with respect to the identification $(u,u_y) \simeq U_{\lambda}$. First recall that the span of $\{\epsilon_+,\epsilon_-, \epsilon\}$ is $\mathfrak{sl}_2(\CC)$ and that the inner product $$\langle \cdot,\cdot \rangle: \mathfrak{sl}_2(\CC) \times \mathfrak{sl}_2(\CC) \to \CC, \; (\alpha,\beta) \mapsto \langle \alpha,\beta \rangle := \trace(\alpha\cdot \beta)$$ is non-degenerate. We will now extend the inner product $\langle\cdot,\cdot\rangle$ to a non-degenerate inner product $\langle\cdot,\cdot \rangle_{\Lambda}$ on $\Lambda_r\mathfrak{sl}_2(\CC) = \Lambda_r \mathfrak{su}_2(\CC) \oplus \Lambda^+_{r}\mathfrak{sl}_2(\CC)$.
\[lemma\_manin\] The map $\langle\cdot,\cdot \rangle_{\Lambda}: \Lambda_r\mathfrak{sl}_2(\CC) \times \Lambda_r\mathfrak{sl}_2(\CC) \to \RR$ given by $$(\alpha,\beta) \mapsto \langle\alpha,\beta \rangle_{\Lambda} := \Im \left(\text{Res}_{\lambda = 0} \tfrac{d\lambda}{\lambda}\trace(\alpha\cdot\beta)\right)$$ is bilinear and non-degenerate. Moreover, there holds $$\langle\cdot,\cdot \rangle_{\Lambda}|_{\Lambda_r \mathfrak{su}_2(\CC) \times \Lambda_r \mathfrak{su}_2(\CC)} \equiv 0 \; \text{ and } \; \langle\cdot,\cdot \rangle_{\Lambda}|_{\Lambda^+_{r}\mathfrak{sl}_2(\CC) \times \Lambda^+_{r}\mathfrak{sl}_2(\CC)} \equiv 0,$$ i.e. $\Lambda_r \mathfrak{su}_2(\CC)$ and $\Lambda^+_{r}\mathfrak{sl}_2(\CC)$ are isotropic subspaces of $\Lambda_r\mathfrak{sl}_2(\CC)$ with respect to $\langle\cdot,\cdot \rangle_{\Lambda}$.
The bilinearity of $\langle\cdot,\cdot \rangle_{\Lambda}$ follows from the bilinearity of $\trace(\cdot)$. Now consider a non-zero element $\xi = \sum_{i \in \mathcal{I}} \lambda^i \xi_i \in \Lambda_r \mathfrak{sl}_2(\CC)$ and pick out an index $j \in \mathcal{I}$ such that $\xi_j \neq 0$. Setting $\widetilde{\xi} = i \lambda^{-j} \overline{\xi}^t_j$ we obtain $$\langle \xi, \widetilde{\xi}\rangle_{\Lambda} = \Im \left(\text{Res}_{\lambda = 0} \tfrac{d\lambda}{\lambda}\trace(\xi\cdot\widetilde{\xi})\right) = \trace(\xi_j \cdot \overline{\xi}^t_j) \in \RR^+$$ since $\xi_j \neq 0$. This shows that $\langle\cdot,\cdot \rangle_{\Lambda}$ is non-degenerate, i.e. the first part of the lemma.\
We will now prove the second part of the lemma, namely that $\Lambda_r \mathfrak{su}_2(\CC)$ and $\Lambda^+_{r}\mathfrak{sl}_2(\CC)$ are isotropic subspaces of $\Lambda_r\mathfrak{sl}_2(\CC)$ with respect to $\langle\cdot,\cdot \rangle_{\Lambda}$.\
First we consider $\alpha^+ = \alpha^+_{\lambda} = \sum_i \lambda^i \alpha_i^+,\; \widetilde{\alpha}^+ = \sum_i \lambda^i \widetilde{\alpha}_i^+ \in \Lambda_r \mathfrak{su}_2(\CC)$ with $$\alpha^+_{\lambda} = -\overline{\alpha^+_{1/\bar{\lambda}}}^t \; \text{ and } \; \widetilde{\alpha}^+_{\lambda} = -\overline{\widetilde{\alpha}^+_{1/\bar{\lambda}}}^t.$$ Then one obtains $$\begin{aligned}
\langle \alpha^+,\widetilde{\alpha}^+\rangle_{\Lambda} & = & \Im \left(\text{Res}_{\lambda = 0} \tfrac{d\lambda}{\lambda}\trace(\alpha^+\cdot\widetilde{\alpha}^+)\right) \\
& = & \Im(\trace(\alpha_{-1}^+\cdot\widetilde{\alpha}_1^+ + \alpha_0^+\cdot \widetilde{\alpha}_0^+ + \alpha_1^+\cdot \widetilde{\alpha}_{-1}^+)).\end{aligned}$$ A direct calculation gives $$\begin{aligned}
\overline{\trace(\alpha_{-1}^+ \widetilde{\alpha}_1^+ + \alpha_0^+ \widetilde{\alpha}_0^+ + \alpha_1^+ \widetilde{\alpha}_{-1}^+)} & = & \trace((-\overline{\alpha}_{-1}^+)^t (-\overline{\widetilde{\alpha}}_1^+)^t + (-\overline{\alpha}_0^+)^t (-\overline{\widetilde{\alpha}}_0^+)^t + (-\overline{\alpha}_1^+)^t (-\overline{\widetilde{\alpha}}_{-1}^+)^t) \\
& \stackrel{!}{=} & \trace(\alpha_{-1}^+ \widetilde{\alpha}_1^+ + \alpha_0^+ \widetilde{\alpha}_0^+ + \alpha_1^+ \widetilde{\alpha}_{-1}^+)\end{aligned}$$ and thus $\trace(\alpha_{-1}^+ \widetilde{\alpha}_1^+ + \alpha_0^+ \widetilde{\alpha}_0^+ + \alpha_1^+ \widetilde{\alpha}_{-1}^+) \in \RR$. This shows $$\langle \alpha^+,\widetilde{\alpha}^+\rangle_{\Lambda} = \Im(\trace(\alpha_{-1}^+\cdot\widetilde{\alpha}_1^+ + \alpha_0^+\cdot \widetilde{\alpha}_0^+ + \alpha_1^+\cdot \widetilde{\alpha}_{-1}^+)) = 0.$$
Now consider $\beta^- = \sum_{i\geq 0} \lambda^i\beta_i^-,\; \widetilde{\beta}^- = \sum_{i\geq 0} \lambda^i\widetilde{\beta}_i^- \in \Lambda^+_{r}\mathfrak{sl}_2(\CC)$ where $\beta_0^-, \widetilde{\beta}_0^-$ are of the form $$\beta_0^- = \begin{pmatrix} h_0 & e_0 \\ 0 & -h_0 \end{pmatrix}, \;\; \widetilde{\beta}_0^- = \begin{pmatrix} \widetilde{h}_0 & \widetilde{e}_0 \\ 0 & -\widetilde{h}_0 \end{pmatrix}$$ with $h_0,\widetilde{h}_0 \in \RR$ and $e_0,\widetilde{e}_0 \in \CC$. Then one gets $$\begin{aligned}
\langle \beta^-,\widetilde{\beta}^-\rangle_{\Lambda} & = & \Im \left(\text{Res}_{\lambda = 0} \tfrac{d\lambda}{\lambda}\trace(\beta^-\cdot\widetilde{\beta}^-)\right) \\
& = & \Im(\trace(\beta_0^-\cdot \widetilde{\beta}_0^-)) \\
& = & \Im(2 h_0\widetilde{h}_0) \\
& = & 0.\end{aligned}$$ This yields the second claim and concludes the proof.
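To see the pairing between the two isotropic summands at work, take for instance $\alpha = \lambda^{-1}\left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right) - \lambda\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right) \in \Lambda_r \mathfrak{su}_2(\CC)$ and $\beta = i\lambda\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right) \in \Lambda^+_{r}\mathfrak{sl}_2(\CC)$. Then $\trace(\alpha\cdot\beta) = i$ and therefore $$\langle\alpha,\beta\rangle_{\Lambda} = \Im\left(\text{Res}_{\lambda = 0} \tfrac{d\lambda}{\lambda}\trace(\alpha\cdot\beta)\right) = \Im(i) = 1,$$ whereas $\langle\alpha,\alpha\rangle_{\Lambda} = \langle\beta,\beta\rangle_{\Lambda} = 0$ by Lemma \[lemma\_manin\].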
Recall the notion of *Manin triples* $(\mathfrak{g},\mathfrak{p},\mathfrak{q})$ that were introduced by Vladimir Drinfeld [@Drinfeld]. They consist of a Lie algebra $\mathfrak{g}$ with a non-degenerate invariant symmetric bilinear form, together with two isotropic subalgebras $\mathfrak{p}$ and $\mathfrak{q}$ such that $\mathfrak{g} = \mathfrak{p} \oplus \mathfrak{q}$ as a vector space. Thus Lemma \[lemma\_manin\] shows that $(\Lambda_r\mathfrak{sl}_2(\CC), \Lambda_r \mathfrak{su}_2(\CC), \Lambda^+_{r}\mathfrak{sl}_2(\CC))$ is a Manin triple.
The symplectic form $\Omega$ and Serre duality
==============================================
This section incorporates our previous results and establishes a connection between the symplectic form $\Omega$ and Serre Duality [@Fo Thm. 17.9]. Moreover, we will show that $(M_g^{\mathbf{p}},\Omega,H_2)$ is a completely integrable Hamiltonian system.
Let $H^0_{\RR}(Y,\Omega) := \{\omega \in H^0(Y,\Omega) \left|\right. \overline{\eta^*\omega} = -\omega\}$ be the real part of $H^0(Y,\Omega)$ with respect to the involution $\eta$.
Since $\eta$ is given by $(\lambda,\mu) \mapsto (1/\bar{\lambda},\bar{\mu})$ we have $$\eta^*\overline{\delta\ln\mu} = \delta\ln\mu \;\; \text{ and } \;\; \eta^*\overline{\tfrac{d\lambda}{\lambda}} = -\tfrac{d\lambda}{\lambda}.$$ Let us define the map $\omega: T_{(u,u_y)}M_g^{\mathbf{p}} \to H^0_{\RR}(Y(u,u_y),\Omega)$ by $$(\delta u, \delta u_y) \mapsto \omega(\delta u, \delta u_y) := \delta \ln\mu(\delta u, \delta u_y) \frac{d\lambda}{\lambda}.$$
\[remark\_dY\] Due to Theorem \[theorem\_Mg\] we can identify the space $T_{Y(u,u_y)}\Sigma_g^{\mathbf{p}}$ of infinitesimal non-isospectral (but iso-periodic) deformations of $Y(u,u_y)$ with the space $H^0_{\RR}(Y(u,u_y),\Omega)$ via the map $c \mapsto \omega(c) := \frac{c}{\nu}\frac{d\lambda}{\lambda} = \delta\ln\mu \frac{d\lambda}{\lambda}$. Therefore $\omega$ can be identified with $dY$, the derivative of $Y: M_g^{\mathbf{p}} \to \Sigma_g^{\mathbf{p}}$. Due to Proposition \[proposition\_fiber\_bundle\], the map $Y: M_g^{\mathbf{p}} \to \Sigma_g^{\mathbf{p}}$ is a submersion. Thus the map $\omega: T_{(u,u_y)}M_g^{\mathbf{p}} \to H^0_{\RR}(Y(u,u_y),\Omega)$ is surjective.
Let $L_{(u,u_y)} \subset T_{(u,u_y)} M_g^{\mathbf{p}}$ be the kernel of the map $\omega: T_{(u,u_y)} M_g^{\mathbf{p}} \to H^0_{\RR}(Y(u,u_y), \Omega)$, i.e. $L_{(u,u_y)} := \ker(\omega)$.
Now we are able to formulate and prove the main result. The proof is based on the ideas and methods presented in the proof of [@Schmidt_infinite], Theorem 7.5.
\[pairing\]
1. There exists an isomorphism of vector spaces $d\Gamma_{(u,u_y)}: H^1_{\RR}(Y(u,u_y), \mathcal{O}) \to L_{(u,u_y)}$.
2. For all $[f] \in H^1_{\RR}(Y(u,u_y), \mathcal{O})$ and all $(\delta u, \delta u_y) \in T_{(u,u_y)}M_g^{\mathbf{p}}$ the equation $$\Omega(d\Gamma_{(u,u_y)}([f]),(\delta u,\delta u_y)) = i\, \text{Res}([f] \, \omega(\delta u,\delta u_y))
\label{master}$$ holds. Here the right hand side is defined as in the Serre Duality Theorem [@Fo].
3. $(M_g^{\mathbf{p}},\Omega,H_2)$ is a Hamiltonian system. In particular, $\Omega$ is non-degenerate on $M_g^{\mathbf{p}}$.
<!-- -->
1. From the Serre Duality Theorem [@Fo Thm. 17.9] we know that the pairing $\text{Res}: H^1(Y(u,u_y), \mathcal{O}) \times H^0(Y(u,u_y), \Omega) \to \CC$ is non-degenerate.
2. $L_{(u,u_y)} \subset T_{(u,u_y)}M_g^{\mathbf{p}}$ is a maximal isotropic subspace with respect to the symplectic form $\Omega: T_{(u,u_y)}M_g^{\mathbf{p}} \times T_{(u,u_y)}M_g^{\mathbf{p}} \to \RR$, i.e. $L_{(u,u_y)}$ is Lagrangian.
<!-- -->
1. Let $[f] = [(f_0,\eta^*\bar{f}_0)] \in H^1_{\RR}(Y,\mathcal{O})$ be a cocycle with representative $f_0$ as defined in Lemma \[Lemma\_reality\_H1\]. Then, defining $A_{f_0}(x) := P_x(f_0) + \sigma^*P_x(f_0)$, we get $$A_{f_0}(x) = \sum_{i=0}^{g-1} c_i \lambda^{-i} (P_x(\lambda^{-1}\nu) + \sigma^*P_x(\lambda^{-1}\nu)) = \sum_{i=0}^{g-1} c_i \lambda^{-i} \zeta_{\lambda}(x)$$ and also $$\delta U_{\lambda}(x) = [A_{f_0}^+(x),L_{\lambda}(x)] = [L_{\lambda}(x),A_{f_0}^-(x)]$$ due to Theorem \[theorem\_vf\_isospectral\_U\]. Moreover, $$\delta v(x) = -A_{f_0}^-(x) v(x)$$ holds due to Remark \[remark\_Af\_minus\] with $A_{f_0}^-(x) = -\sum \frac{(\delta v_i(x)) w_i^t(x)}{w_i^t(x) v_i(x)}$. From Lemma \[general\_variation\_U\] we know that in general $$\delta U_{\lambda}(x) = \left[L_{\lambda}(x),-\sum_{i=1}^2 \frac{(\delta v_i(x)) w_i^t(x)}{w_i^t(x) v_i(x)}\right] + (P_x(\tfrac{\delta\ln\mu}{\mathbf{p}}) +\sigma^*P_x(\tfrac{\delta\ln\mu}{\mathbf{p}})).$$ Since $\delta U_{\lambda}(x) = [L_{\lambda}(x),A_{f_0}^-(x)]$ we see that $\delta\ln\mu(\delta u^{f_0}, \delta u_y^{f_0}) = 0$ and consequently $$(\delta u^{f_0}, \delta u_y^{f_0}) \in \ker(\omega).$$ Thus we have an injective map $d\Gamma_{(u,u_y)}: H^1_{\RR}(Y(u,u_y), \mathcal{O}) \to L_{(u,u_y)}$. Due to Remark \[remark\_dY\] we know that $\omega: T_{(u,u_y)} M_g^{\mathbf{p}} \to H^0_{\RR}(Y(u,u_y), \Omega)$ is surjective. Since $\dim H^0_{\RR}(Y(u,u_y), \Omega) = g$ there holds $\dim L_{(u,u_y)} = \dim \ker(\omega) = g$ and thus $d\Gamma_{(u,u_y)}: H^1_{\RR}(Y(u,u_y),\mathcal{O}) \to L_{(u,u_y)}$ is an isomorphism of vector spaces.
2. For an isospectral variation of $U_{\lambda}(x)$ we have $$\delta U_{\lambda}(x) = [L_{\lambda},B^-(x)] = \tfrac{d}{dx}B^-(x) - [U_{\lambda}(x),B^-(x)]$$ with a map $B^-(x): \RR \to \Lambda_r^+ \mathfrak{sl}_2(\CC)$. In the following $\oplus$ will denote the $\Lambda_r \mathfrak{su}_2(\CC)$-part of $\Lambda_r \mathfrak{sl}_2(\CC) = \Lambda_r \mathfrak{su}_2(\CC) \oplus \Lambda_r^+ \mathfrak{sl}_2(\CC)$ and $\ominus$ will correspond to the second summand $\Lambda_r^+ \mathfrak{sl}_2(\CC)$. Since $\delta U_{\lambda}(x)$ lies in the $\oplus$-part and $\frac{d}{dx}B^-(x)$ lies in the $\ominus$-part, we see that $\delta U_{\lambda}(x)$ is equal to the $\oplus$-part of the commutator expression $-[U_{\lambda}(x),B^-(x)]$. Writing $U$ for $U_{\lambda}(x)$ and $B^-$ for $B^-(x)$ we get $$\begin{aligned}
\delta U & = & \lambda^{-1}\delta U_{-1} + \lambda^0\delta U_0 + \lambda\delta U_1 \\
& \stackrel{!}{=} & \big(\lambda^{-1}[B_0^-,U_{-1}] + \lambda^0([B_1^-,U_{-1}] + [B_0^-,U_0]) \\
& & \hspace{1mm} +\, \lambda([B_2^-,U_{-1}]+[B_1^-,U_0]+[B_0^-,U_1])\big)_{\oplus}.\end{aligned}$$ Thus we arrive at three equations $$\begin{gathered}
\delta U_{-1} = [B_0^-,U_{-1}], \;\;\; \delta U_0 = \left([B_1^-,U_{-1}] + [B_0^-,U_0]\right)_{\oplus}, \\
\delta U_1 = \left([B_2^-,U_{-1}]+[B_1^-,U_0]+[B_0^-,U_1]\right)_{\oplus}.\end{gathered}$$ Recall that $U_{\lambda}$ is given by $$U_{\lambda} = \frac{1}{2}
\begin{pmatrix}
-i u_y & i\lambda^{-1}e^u + ie^{-u} \\
i\lambda e^u + ie^{-u} & iu_y
\end{pmatrix}$$ and consequently $$\delta U_{\lambda} = \frac{1}{2}
\begin{pmatrix}
-i\delta u_y & i\lambda^{-1} e^u \delta u -i e^{-u}\delta u \\
i\lambda e^u\delta u -i e^{-u}\delta u & i\delta u_y
\end{pmatrix}.$$ We can now use the above equations in order to obtain relations on the coefficients $B_0^-$ and $B_1^-$ of $B^- = \sum_{i\geq 0} \lambda^i B_i^-$ where $B_0^-$ is of the form $$B_0^- = \begin{pmatrix} h_0 & e_0 \\ 0 & -h_0 \end{pmatrix} \; \text{ with } \; h_0 \in \RR,\, e_0 \in \CC,$$ and $B_1^-$ is of the form $B_1^- = \left(\begin{smallmatrix} h_1 & e_1 \\ f_1 & -h_1 \end{smallmatrix}\right)$. Since $\delta U_{-1} = [B_0^-,U_{-1}]$ a direct calculation yields $$\begin{gathered}
B_0^-=\begin{pmatrix}\tfrac{1}{2}\delta u & e_0 \\ 0 & -\tfrac{1}{2}\delta u \end{pmatrix}.\end{gathered}$$ Moreover, the sum $[B_1^-,U_{-1}] + [B_0^-,U_0]$ is given by $$\begin{pmatrix}
\tfrac{i}{2}e_0 e^{-u} - \tfrac{i}{2}f_1 e^u & ih_1 e^u + ie_0 u_y + \tfrac{i}{2}e^{-u}\delta u \\
-\tfrac{i}{2}e^{-u}\delta u & -\tfrac{i}{2}e_0 e^{-u} + \tfrac{i}{2}f_1 e^u
\end{pmatrix}.$$ For the diagonal entry of $[B_1^-,U_{-1}] + [B_0^-,U_0]$ the $\oplus$-part is given by the imaginary part and therefore $$-\tfrac{i}{2}\delta u_y = \tfrac{i}{2}\Re(e_0) e^{-u} - \tfrac{i}{2}\Re(f_1) e^u.$$ Thus we get $$\delta u_y = \Re(f_1) e^u - \Re(e_0) e^{-u}.$$ For $B^-(x) := A_{f_0}^-(x)$ we obtain $$\begin{aligned}
\Im \left(\trace(\delta U_0 A_{f_0, 0}^{-}) + \trace(\delta U_{-1}A_{f_0 ,1}^{-})\right) & = & \Im\left(-i\delta u_y h_0 - \tfrac{i}{2}e_0 e^{-u}\delta u + \tfrac{i}{2}f_1 e^u\delta u\right) \\
& = & -\delta u_y \Re(h_0) +\tfrac{1}{2} \delta u \left(\Re(f_1) e^u- \Re(e_0) e^{-u} \right) \\
& = & \dfrac{1}{2}\left( \delta u \delta u_y^{f_0} - \delta u^{f_0} \,\delta u_y\right).\end{aligned}$$ Now a direct calculation gives $$\begin{aligned}
\frac{1}{2}\Omega(d\Gamma_{(u,u_y)}([f]),(\delta u,\delta u_y)) & = & \frac{1}{2}\int_0^{\mathbf{p}} (\delta u^{f_0}\delta u_y - \delta u \,\delta u_y^{f_0}) dx \\
& = & -\int_0^{\mathbf{p}} \Im \left(\trace(\delta U_0 A_{f_0, 0}^{-}) + \trace(\delta U_{-1}A_{f_0 ,1}^{-})\right) dx \\
& = & -\int_0^{\mathbf{p}} \langle \delta U_{\lambda}(x),A_{f_0}^-(x) \rangle_{\Lambda} dx \\
& \stackrel{\delta U_{\lambda} \in \oplus}{=} & -\int_0^{\mathbf{p}} \langle \delta U_{\lambda}(x),A_{f_0}(x) \rangle_{\Lambda} dx.\end{aligned}$$ Setting $\widehat{P}_x(\tfrac{\delta\ln\mu}{\mathbf{p}}) := P_x(\tfrac{\delta\ln\mu}{\mathbf{p}}) + \sigma^*P_x(\tfrac{\delta\ln\mu}{\mathbf{p}})$, we further obtain $$\begin{aligned}
\frac{1}{2}\Omega(d\Gamma_{(u,u_y)}([f]),(\delta u,\delta u_y)) & \stackrel{\text{Lem. } \ref{general_variation_U}}{=} & -\int_0^{\mathbf{p}} \langle [L_{\lambda}(x),B^-(x)],A_{f_0}(x) \rangle_{\Lambda} dx \\
& & -\, \int_0^{\mathbf{p}} \langle \widehat{P}_x(\tfrac{\delta\ln\mu}{\mathbf{p}}), A_{f_0}(x) \rangle_{\Lambda} dx \\
& = & \int_0^{\mathbf{p}} \langle [B^-(x),L_{\lambda}(x)], A_{f_0}(x) \rangle_{\Lambda} dx \\
& & -\, \int_0^{\mathbf{p}} \langle \widehat{P}_x(\tfrac{\delta\ln\mu}{\mathbf{p}}), A_{f_0}(x) \rangle_{\Lambda} dx.\end{aligned}$$ Recall that $\trace([B^-(x), L_{\lambda}(x)]\cdot A_{f_0}(x)) = \trace(B^-(x)\cdot[L_{\lambda}(x),A_{f_0}(x)])$. Moreover, $[L_{\lambda}(x), A_{f_0}(x)] = 0$ holds, and we obtain $$\begin{aligned}
\frac{1}{2}\Omega(d\Gamma_{(u,u_y)}([f]),(\delta u,\delta u_y)) & = & \int_0^{\mathbf{p}} \langle B^-(x), [L_{\lambda}(x), A_{f_0}(x)]\rangle_{\Lambda} dx \\
& & -\, \int_0^{\mathbf{p}} \langle \widehat{P}_x(\tfrac{\delta\ln\mu}{\mathbf{p}}), A_{f_0}(x) \rangle_{\Lambda} dx \\
& = & -\, \int_0^{\mathbf{p}} \langle \widehat{P}_x(\tfrac{\delta\ln\mu}{\mathbf{p}}), A_{f_0}(x) \rangle_{\Lambda} dx \\
& = & -\int_0^{\mathbf{p}} \langle A_{f_0}(x),\widehat{P}_x(\tfrac{\delta\ln\mu}{\mathbf{p}})\rangle_{\Lambda} dx.\end{aligned}$$ Writing out the last equation yields $$\begin{aligned}
\Omega(d\Gamma_{(u,u_y)}([f]),(\delta u,\delta u_y)) & = & -\dfrac{2}{\mathbf{p}}\int_0^{\mathbf{p}} \Im\left(\text{Res}_{y_0} \tfrac{d\lambda}{\lambda}\trace(A_{f_0}(x)\widehat{P}_x(\delta\ln\mu))\right) dx \\
& = & -2\Im\left(\text{Res}_{y_0} (f_0 \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda})\right) \\
& = & i\left(\text{Res}_{y_0} (f_0 \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda}) - \text{Res}_{y_0} \overline{(f_0 \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda})}\right) \\
& = & i\left(\text{Res}_{y_0} (f_0 \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda}) - \text{Res}_{y_{\infty}} \eta^*\overline{(f_0 \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda})}\right) \\
& = & i\left(\text{Res}_{y_0} (f_0 \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda}) + \text{Res}_{y_{\infty}} (f_{\infty} \cdot \delta \ln\mu \tfrac{d\lambda}{\lambda})\right)\end{aligned}$$ and thus $$\Omega(d\Gamma_{(u,u_y)}([f]),(\delta u, \delta u_y)) = i\, \text{Res}([f]\,\omega(\delta u,\delta u_y)).$$
3. In order to prove (iii) we have to show that $\Omega$ is non-degenerate on $M_g^{\mathbf{p}}$. From equation we know that $$\Omega((\delta u, \delta u_y), (\delta \widetilde{u}, \delta \widetilde{u}_y)) = 0 \; \text{ for } \; (\delta u, \delta u_y), (\delta \widetilde{u}, \delta \widetilde{u}_y) \in L_{(u,u_y)} = \ker(\omega).$$ Moreover, $\omega: T_{(u,u_y)}M_g^{\mathbf{p}} \to H^0_{\RR}(Y(u,u_y),\Omega)$ is surjective since $\dim T_{(u,u_y)}M_g^{\mathbf{p}} = 2g$ and $\dim\ker(\omega) = g = \dim H^0_{\RR}(Y(u,u_y),\Omega)$. Thus we have $$T_{(u,u_y)}M_g^{\mathbf{p}}/\ker(\omega) \simeq H^0_{\RR}(Y(u,u_y),\Omega)$$ and there exists a basis $\{ \delta a_1,\ldots,\delta a_g,\delta b_1,\ldots,\delta b_g\}$ of $T_{(u,u_y)}M_g^{\mathbf{p}}$ such that $$\text{span}\{\delta a_1,\ldots,\delta a_g\} = \ker(\omega) \; \text{ and } \; \omega[\text{span}\{\delta b_1,\ldots,\delta b_g\}] = H^0_{\RR}(Y(u,u_y),\Omega).$$ Now $L_{(u,u_y)} = \ker(\omega) \simeq H^1_{\RR}(Y(u,u_y),\mathcal{O})$ and since the pairing from Serre duality is non-degenerate we obtain with equation (after choosing the appropriate basis) $$\Omega(\delta a_i,\delta b_j) = \delta_{ij} \; \text{ and } \; \Omega(\delta b_i,\delta a_j) = -\delta_{ij}.$$ Summing up the matrix representation $B_{\Omega}$ of $\Omega$ on $T_{(u,u_y)}M_g^{\mathbf{p}}$ has the form $$B_{\Omega} = \begin{pmatrix} 0 & \unity \\ -\unity & (\ast) \end{pmatrix}$$ and thus $\Omega$ is of full rank. This shows (iii) and concludes the proof of Theorem \[pairing\].
|
---
author:
- 'F.F. Bauer'
- 'M. Zechmeister'
- 'A. Kaminski'
- 'C. Rodríguez López'
- 'J. A. Caballero'
- 'M. Azzaro'
- 'O. Stahl'
- 'D. Kossakowski'
- 'A. Quirrenbach'
- 'S. Becerril Jarque'
- 'E. Rodríguez'
- 'P.J. Amado'
- 'W. Seifert'
- 'A. Reiners'
- 'S. Sch[ä]{}fer'
- 'I. Ribas'
- 'V.J.S. Béjar'
- 'M. Cortés-Contreras'
- 'S. Dreizler'
- 'A. Hatzes'
- 'T. Henning'
- 'S.V. Jeffers'
- 'M. Kürster'
- 'M. Lafarga'
- 'D. Montes'
- 'J.C. Morales'
- 'J.H.M.M. Schmitt'
- 'A. Schweitzer'
- 'E. Solano'
date: 'Received 26 March 2020 / Accepted 26 May 2020'
subtitle: |
Measuring precise radial velocities in the near infrared:\
the example of the super-Earth CDCetb[^1]^,^[^2]
title: The CARMENES search for exoplanets around M dwarfs
---
Introduction
============
Our ability to detect exoplanets with radial velocities (RVs) has constantly grown with advances in technology and methodology. Hard-won early planetary discoveries, such as 51Pegb [@51Peg], have now become easily detectable, and the field steadily approaches the domain of small, rocky planets (e.g., [@Proxima]; [@Teegarden]; [@GJ1061]). The detection of Earth analogs orbiting in the habitable zone may currently be beyond our reach in G stars for a number of reasons. However, we can probe this interesting region of the exoplanet space with M dwarfs. Most of the closest neighbors of the Sun are M dwarfs, which are the most numerous stars in the Galaxy [@2006AJ....132.2360H]. M dwarfs seem to harbor at least one or two planets per star [@Ga16; @HU19]. Therefore, many, if not most, planets in the Galaxy orbit M dwarfs. If we want to understand planet formation processes, planet compositions, and habitability, M dwarfs in our vicinity are a good starting point. Moreover, small, rocky planets orbiting within the habitable zone of M stars are indeed already within our detection ability. Although habitability is still vaguely defined [@HZ_Kasting; @Ta07], if life develops under a broad range of conditions, inhabited worlds could be numerous, close by, and already detectable for us.
As a result, M dwarfs have become the focus of many exoplanetary studies. As M dwarfs are intrinsically faint and their spectral energy distribution peaks in the near infrared, instrumentation had to adapt over recent years. This drove the extension from blue-sensitive spectrographs such as HARPS [@Mayor2003Msngr.114...20M] to near-infrared instruments, such as CARMENES[^3] [@CARMENES_instrument_overview; @CARMENES_near_infra_read_spectra], GIANO [@GIANO], iSHELL [@iSHELL], the successor of CSHELL [@CSHELL][^4], SPIRou[^5] [@Spirou], IRD[^6] [@IRD14], and HPF[^7] [@HPF], which are already in operation, or to instruments that will become operational soon, such as NIRPS[^8] [@NIRPS17], CRIRES+[^9] [@CRIRES_plus], or Veloce Rosso [@Veloce_Rosso]. Making this move, however, requires solving several technical issues. Near-infrared spectrographs need to be actively thermally stabilized to a high degree of precision [@Carmenes_tank; @HPF_tank; @Spirou_tank]. Furthermore, HgCdTe hybrid near-infrared detectors work differently from optical charge-coupled devices (CCDs) [@Bec19]. Last but not least, the calibration procedure requires sources and methods that are adapted to the new wavelength regime [@NIR_FPs_Schaefer; @FP_Bauer; @Luis; @FP_Schaefer; @CARMENES_near_infra_read_spectra].
CARMENES was among the first operational near-infrared instruments to have taken on all these challenges. The system has finished its fourth year in operation and has proven its value for exoplanetary science through new detections and confirmations of planets, and through investigations of their atmospheres [e.g., @Sa18; @No18; @Barnard_c; @Teegarden; @GJ3512]. In this paper, we present our experience and provide insight for scientists outside the consortium working with CARMENES data. To showcase the capabilities of CARMENES, we also announce the discovery of a super-Earth orbiting in the temperate zone of an M5.0V star. Section \[sec:CARMENES\] summarizes the main characteristics of both spectrograph channels and presents their stability limits, along with insights about their causes. In Sect. \[sec:RV\_computation\] we show our recipe to derive precise RVs with the near-infrared channel of CARMENES, and how they compare to those of the visible. In Sect. \[sec:star\] we apply our RV methods and present the discovery of the temperate super-Earth CDCetb at visible and near-infrared wavelengths. We summarize and discuss the abilities of CARMENES in Sect. \[sec:discussion\].
CARMENES in operation {#sec:CARMENES}
=====================
CARMENES in a nutshell {#sec:CARMENES_summary}
----------------------
CARMENES [@CARMENES_instrument_overview; @CARMENES_near_infra_read_spectra] consists of two channels: the optical channel (from now on VIS), which covers the range 5200–9600Å at a resolution of $R=94\,600$, and the near-infrared channel (from now on NIR), which covers 9600–17100Å at $R=80\,400$. Both spectrograph channels are located at the 3.5m Calar Alto telescope at the Centro Astronómico Hispano-Alemán (CAHA). They are fed from the Cassegrain focus via fiber links containing sections with octagonal cross-sections, which improves the scrambling, that is, the stability of the fiber output when the input varies due to telescope tracking errors or variable seeing [@2014SPIE.9151E..52S]. The entrance apertures of the spectrographs are formed by images of the fibers, sliced into two halves to allow for the high resolving power [@2012SPIE.8446E..33S]. For maximum temperature stability, all optical elements are mounted on solid, aluminum optical benches and placed inside vacuum tanks, which in turn are located in three separate temperature-controlled rooms within the coudé room below the telescope (one for each spectrograph channel, and one for both calibration units).
A major difference between the two CARMENES spectrograph channels is their temperature stabilization concept. The VIS channel is passively stabilized. It thus follows the temperature of the ambient chamber, albeit slowly due to its thermal inertia and insulation. In contrast, the NIR channel is actively stabilized [@Carmenes_tank]. Efficient observations beyond $1\,\mu$m require minimizing the thermal background radiation by cooling the instrument. Thus, the NIR channel is fed by $\mathrm{N_2}$ gas at about 140K. This operating temperature is set by the requirement that the thermal background seen by the detector, which is sensitive to about 2.4$\mu$m, is lower than the dark current. Keeping instrumental changes small is crucial, as milli-Kelvin temperature fluctuations translate into absolute drifts on the order of a few ms$^{-1}$. Maintaining the instrumental temperature ultra-stable by a cooling circuit is, thus, among the main challenges of obtaining high-precision RVs in the near-infrared. Hence, substantial engineering efforts during the first year of operations (2016) with CARMENES were dedicated to optimizing the NIR cooling system (Sect. \[sec:NIR\_performance\]).
Wavelength calibration is another factor critical for obtaining high-precision RVs even with stabilized spectrographs. Hence, the VIS calibration unit is equipped with three types of hollow-cathode lamps (thorium-neon, uranium-neon, and uranium-argon). For the NIR channel we use only a uranium-neon lamp, because thorium lamps deliver too few lines [@2014ApJS..211....4R] and argon emits strong features between 0.96$\mu$m and 1.71$\mu$m. In addition, both spectrograph channels have their own dedicated Fabry-Pérot (FP) etalons, which are used for simultaneous drift measurements during the night [@FP_Schaefer], and also for the construction of the daily wavelength solution [@FP_Bauer]. Below, we examine the whole data set of calibration observations carried out during the past four years to present an overview of the VIS and NIR channel performance.
VIS channel performance {#sec:VIS_performance}
-----------------------
![[*Panel ($a$):*]{} absolute RV drift of CARMENES VIS over four years. Detector cryostat sorption pump re-generations are marked with black ticks. [*Panel ($b$):*]{} temperature of the CARMENES VIS room (orange) and of the VIS optical bench (blue). [*Panel ($c$):*]{} Time series of the VIS FP. Object fiber A (black) and reference fiber B (gray). [*Panel ($d$):*]{} relative drift between fiber A and fiber B. [*Panel ($e$):*]{} PTPS computed from panel ($c$). Shaded area: data taken prior to the FP coupling change. [*Panel ($f$):*]{} cumulative histogram of PTPS marked with the 25%, 50%, 75% quantiles (vertical dashed lines). The median precision of the VIS channel is $1.2$ ms$^{-1}$. []{data-label="fig:joint_VIS"}](abs_drift_VIS+ptps){width="\linewidth"}
Except for a few days when large maintenance programs are carried out, CARMENES is calibrated every afternoon a few hours before first observations take place at Calar Alto. The wavelength solution is recomputed for each individual calibration set, that is, daily. To measure the absolute instrument drift, we computed the median difference between each new CARMENES wavelength solution and the first one obtained at the beginning of the survey. We plot this absolute instrumental drift together with the VIS room temperature and the VIS optical bench temperature in Fig. \[fig:joint\_VIS\] (panels $a$ and $b$).
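The drift estimate itself is straightforward; a minimal sketch, assuming the wavelength solutions are available as per-order, per-pixel wavelength arrays (the function and variable names below are hypothetical and are not part of the CARMENES pipeline), could look like this:

```python
import numpy as np

C = 299792458.0  # speed of light in m/s

def absolute_drift(wave_ref, wave_new):
    """Median Doppler-like shift between two wavelength solutions.

    wave_ref, wave_new: arrays of identical shape holding the wavelength
    assigned to each (order, pixel) by the reference (first) and by the
    new daily calibration; the relative change is expressed as a velocity.
    """
    rel_shift = (wave_new - wave_ref) / wave_ref
    return np.median(rel_shift) * C
```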
Long-term instrumental drifts in the VIS channel are mainly caused by seasonal temperature variations inside the spectrograph room. These variations ($\pm 0.2$K) are well within the precision that the climatic control inside the room is able to achieve. These changes in room temperature then propagate toward the optical bench. Slight alignment changes due to thermal expansion cause seasonal instrumental drifts of about 400–600ms$^{-1}$ peak to peak. The translation from optical bench temperature change to instrumental drift is about 2ms$^{-1}$mK$^{-1}$. Larger excursions (indicated by black ticks in Fig. \[fig:joint\_VIS\], panel $a$) are related to infrequent vacuum re-generations inside the VIS channel detector cryostat. In general, the instrument quickly settles after such a maintenance action.
To test the performance of our wavelength calibration strategy, we investigated the RV drift of our FP spectra. While the FP spectra assist our wavelength solution during the calibration process, each individual wavelength solution is anchored to the standard hollow-cathode lamps. This means that slow intrinsic Doppler-like drifts in the FP spectra are calibrated out from night to night, but these drifts can be computed by treating the FP spectra like stellar spectra. For this exercise, we selected one FP spectrum taken during each daily calibration run and derived RVs as we do for the stars. We used [[[serval]{}]{}]{} to perform a least-squares matching between an observed spectrum and a high signal-to-noise ratio (S/N) template. In Fig. \[fig:joint\_VIS\] (panel $c$), we present the RV curve for the VIS FP. The shaded area indicates FP spectra taken prior to a change in the coupling of the FPs into the calibration units, which resulted in a more homogeneous illumination of the calibration fibers, but led to a jump in the FP drift curve of about 100.5ms$^{-1}$, which we corrected for in this plot.
The FP RVs show a slowly varying RV drift with a pattern of seasonal variations much smaller than the absolute spectrograph drift in panel ($a$) of Fig.\[fig:joint\_VIS\]. These variations are probably related to slow changes in the alignment of the FP system (see [@NIR_FPs_Schaefer] and [@FP_Schaefer] for more details). The long-term relative drift between both fibers is re-calibrated every night by a new wavelength solution. It can, however, be made visible by subtracting the FP RVs measured in both fibers (panel $c$) from each other as shown in panel ($d$). We do find a small long-term differential drift, but the seasonal temperature modulations observed in the absolute drift (top panel $a$ of Fig. \[fig:joint\_VIS\]) are similar for both fibers. Thus, there is no significant seasonal relative drift between the fibers. The day-to-day jitter around the smooth FP drift of panel ($c$) represents the precision of our VIS channel, including the calibration strategy. We quantified this predominantly day-to-day scatter by computing the point-to-point scatter (PTPS, see panel $e$ of Fig. \[fig:joint\_VIS\]) and the cumulative histogram of the PTPS (panel $f$ of Fig. \[fig:joint\_VIS\]). We conclude that the median RV precision reached with the current calibration and data reduction procedures of CARMENES VIS is $1.2$ ms$^{-1}$.
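The point-to-point scatter is a simple statistic; a minimal sketch, assuming PTPS is computed from differences of consecutive FP RVs (the exact windowing used for panel $e$ is an implementation detail we do not reproduce here), could read:

```python
import numpy as np

def ptps(rv):
    """Point-to-point scatter of an RV series (same units as the input).

    Differencing consecutive measurements removes the slow FP drift;
    dividing by sqrt(2) converts the scatter of the differences back
    into an equivalent single-measurement scatter.
    """
    rv = np.asarray(rv, dtype=float)
    return np.std(np.diff(rv)) / np.sqrt(2)

def running_ptps(rv, window=20):
    """PTPS in a sliding window (the window length is a placeholder)."""
    rv = np.asarray(rv, dtype=float)
    return np.array([ptps(rv[i:i + window])
                     for i in range(rv.size - window + 1)])
```

The median precision quoted above then corresponds to the 50% quantile of these PTPS values.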
NIR channel performance {#sec:NIR_performance}
-----------------------
![Same as Fig. \[fig:joint\_VIS\], but for CARMENES NIR. For comparison the VIS instrumental drift is over-plotted in green. The orange tick indicates a planned temperature rise of the NIR channel, while the black ticks mark larger maintenance operations. The gray shaded area indicates an initial phase of poor thermal stability. The median precision of the NIR channel is $3.7$ ms$^{-1}$. []{data-label="fig:joint_NIR"}](abs_drift_NIR+ptps){width="\linewidth"}
We begin the instrument performance analysis of CARMENES NIR by elucidating the instrumental drift with the aid of Fig. \[fig:joint\_NIR\]. As a result of an initial phase of poor thermal stability (gray shaded area in top panels $a$ and $b$ of Fig. \[fig:joint\_NIR\]), the nominal operations of CARMENES NIR began on 2016-11-01. We find the NIR channel response to optical bench temperature changes to be similar to that of the VIS channel (about 2ms$^{-1}$mK$^{-1}$), while its peak-to-peak instrumental drift of about 3200ms$^{-1}$ is substantially larger than that of the VIS channel. Most instrumental changes in the NIR channel occur, however, during maintenance operations of the complex active cryogenic cooling system (marked by black ticks in panel $a$ of Fig. \[fig:joint\_NIR\]). During nominal operations the instrumental drift of the VIS and NIR channels are comparable. We repeated the FP analysis of Sect. \[sec:VIS\_performance\] for NIR. The resulting RV curve, differential drift, PTPS, and cumulative histogram are plotted in panels ($c$) to ($f$) of Fig. \[fig:joint\_NIR\]. When comparing the FP RV curves between the VIS and NIR channels (panels $c$ of Figs. \[fig:joint\_VIS\] and \[fig:joint\_NIR\], respectively), one may notice the similarities in the trends. The reason for both FP systems drifting in a rather similar way is likely that they share the same temperature control unit. Thus, they are not completely independent, and any change in coolant temperature affects the VIS and NIR FPs alike. In the relative drift, shown in panel ($d$) of Fig. \[fig:joint\_NIR\], we do find indications for a link between absolute spectrograph and relative fiber drift in the NIR channel. Cooling system maintenance actions (marked by black ticks – and one orange tick) are typically the onset of differential fiber changes. The precision of the NIR channel, plotted in panels ($e$) and ($f$) in Fig. \[fig:joint\_NIR\], is about three times worse than that of the VIS channel. Over the 2017–2019 interval we obtain a median precision of $3.7$ ms$^{-1}$ for CARMENES NIR. However, having gained experience with the stabilization of a cryogenic instrument, we find that there is still room for improvements in CARMENES NIR. As seen around JD 2458400 in Fig. \[fig:joint\_NIR\], panel (e), the precision of the NIR instrument improved from a median of $4.48$ms$^{-1}$ to $2.36$ms$^{-1}$. The reason was a maintenance action improving the insulation of all pipes feeding the instrument with N$_2$ cooling gas. Thus, the coolant arriving in the instrument experiences less warm up and the feedback loop to thermally stabilize the instrument works more accurately.
Radial velocity computation {#sec:RV_computation}
===========================
{width="\linewidth"}
Reaching an RV precision at the ms$^{-1}$ level requires the use of thousands of spectral lines in order to collect sufficient information [@Bo01]. Hence, all state-of-the-art RV instruments employ the cross-dispersed échelle design [@ELODIE], which re-formats a large wavelength range at high resolution onto a square or rectangular detector. RVs are typically measured order-by-order and are then combined into a final measurement value. Particularly in the near-infrared range, however, spectral information is unevenly distributed over spectral orders owing to differences in the distribution of flux, stellar molecular absorption, and telluric (water) absorption, not counting physical gaps between units in the detector array (see [@Reiners2018_324_stars] and [@Reiners2020]). We expect spectral regions with low information content to yield RVs that are more strongly affected by systematics, such as weak and unmasked telluric lines or detector defects. In this section, we aim to optimize the CARMENES RV precision by identifying those spectral orders that provide the most precise RV data and those that exhibit small, unseen systematics.
We ran [[[serval]{}]{}]{} on all M stars in the CARMENES survey and started with an investigation of the orders. The NIR channel contains a mosaic of two 2k$\times$2k detectors arranged side by side with a gap, whose width corresponds to about $140$ pixels. RVs are, therefore, measured separately for the two detectors, that is, for half-orders, by [[[serval]{}]{}]{}. This provides 38 wavelength chunks (excluding those within the telluric bands), each delivering an independent RV measurement ([[[serval]{}]{}]{} rvo files). In contrast, the VIS channel contains 61 individual orders on a single monolithic 4k$\times$4k CCD. The six reddest orders are cut off by the dichroic beam splitter in the front end, which separates the light for the VIS and the NIR channels, and they are thus not used to derive RVs. We also rejected the last four orders before the beam splitter cutoff because of strong telluric bands and the ten bluest orders, because they are cropped and typically yield low signal-to-noise ratios (S/N) for M stars. Hence, the VIS channel provides 41 independent RV measurements.
The task at hand is now to identify which of these orders are the most useful to maximize the RV precision, and which ones systematically bias the final RV data. The systematics contributed by single orders are likely below the ms$^{-1}$ level. Thus, regions strongly affected by unmasked tellurics or other uncorrected effects can hardly be identified in the data of a single star. However, we expect the spectral orders contributing systematics to be the same in a large sample of similar stars. We used the following iterative approach, which makes use of the large CARMENES target sample, to identify the useful orders:
1. For the individual wavelength chunks, we compute the root-mean-square (rms) scatter for each star in the entire CARMENES guaranteed time observations sample [@Reiners2018_324_stars], after correction for nightly offsets using the nightly zero points [NZP; @CARMENES_one_Trifon; @HARPS_NZP; @Lev_NZPs]. We rank the wavelength chunks by the median rms scatter derived from the entire CARMENES sample.
2. We select the wavelength chunk with the lowest sample rms and remove it from the pool of available RV measurements.
3. Each remaining RV measurement in the pool is combined with the (half-) orders that were already selected. We use inverse squared RV errors as weights.
4. Again we draw the chunk from the pool that, in combination with those already drawn, results in the lowest sample rms.
5. We continue at step 3 until no (half-) orders are left in the pool.
It is important to note that we excluded stars with strong intrinsic signals (rms $>$ 100ms$^{-1}$) from the sample when examining the orders.
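For illustration, the greedy selection can be condensed into the following sketch (the data structures, array shapes, and names are hypothetical placeholders rather than the actual survey code, and the per-order RVs are assumed to be NZP-corrected already):

```python
import numpy as np

def combine_orders(rv, rv_err, orders):
    """Combine per-order RVs of one star with inverse-variance weights.

    rv, rv_err: arrays of shape (n_epochs, n_orders).
    """
    w = 1.0 / rv_err[:, orders] ** 2
    return np.sum(w * rv[:, orders], axis=1) / np.sum(w, axis=1)

def greedy_order_selection(stars):
    """Greedy forward selection of (half-)orders.

    stars: list of (rv, rv_err) tuples, one per star, with stars showing
    strong intrinsic signals (rms > 100 m/s) already removed.
    Returns the selected order indices and the median sample rms after
    each step of the selection.
    """
    n_orders = stars[0][0].shape[1]
    pool = list(range(n_orders))
    selected, history = [], []
    while pool:
        best = None
        for cand in pool:
            trial = selected + [cand]
            sample_rms = np.median(
                [np.std(combine_orders(rv, err, trial)) for rv, err in stars])
            if best is None or sample_rms < best[1]:
                best = (cand, sample_rms)
        selected.append(best[0])
        pool.remove(best[0])
        history.append(best[1])
    # The adopted set is the prefix that minimizes the median sample rms.
    n_keep = int(np.argmin(history)) + 1
    return selected[:n_keep], history
```

The first iteration of this loop automatically picks the chunk with the lowest individual sample rms, which reproduces steps 1 and 2 of the list above.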
Panel ($a$) of Fig. \[fig:orderselection\] illustrates graphically the entire process for the VIS and NIR channels. Our final selection of useful (half-) orders contains the set that minimizes the median CARMENES sample rms (bottom panel of Fig. \[fig:orderselection\]). For the VIS channel almost all orders were selected (35 of 41). The remaining six orders that the procedure rejects, however, do not impact the RV precision significantly (0.05ms$^{-1}$) and are, therefore, kept when computing the RVs. For the NIR channel the situation is different, and we obtain the best precision with only 19 half-orders, which are used in what follows to derive homogeneously the RVs for all our sample stars.
We suspect the rejected orders to contain systematics. Therefore, the identification of orders that significantly impact the RV performance should pass a number of consistency tests. First, we tested the orders selected in Fig. \[fig:orderselection\] against 10000 random order combinations. None of these random draws outperforms the selection of our algorithm and, on average, random selection reaches a sample rms of 14.2ms$^{-1}$ in the NIR channel (10.7ms$^{-1}$ with the selection algorithm). Hence, the presented order selection algorithm identifies contaminated orders reliably. For the second test we split our sample by survey year. There should be no significant change in rejected orders over time, and indeed the orders flagged as bad (above the 0.2ms$^{-1}$ tolerance zone) are consistent overall. In a third test, we split our sample into two equally sized subsamples and ran the algorithm separately on both. Again, we did not find significant differences.
The stable results likely indicate the presence of minuscule, hidden systematics in the orders that are rejected. In panels ($b$) to ($d$) in Fig. \[fig:orderselection\] we present the most likely candidates. Most of the NIR RV information in M dwarfs is located in the $Y$ and $H$ bands, while there exists a paucity of spectral features in the $J$ band (see panel $b$, as well as [@Reiners2018_324_stars] and [@Reiners2020]). Telluric contamination is also less severe in the $Y$ and $H$ bands as compared to the $J$ band so that more pixels are available for RV measurement after masking atmospheric lines of Earth (panel $c$). Thus, we expect the $Y$ and $H$ bands to be most reliable for RV determination, while we expect the $J$ band to contribute more systematics, possibly as a result of unmasked tellurics.
In addition to those astrophysical limitations, the red CARMENES NIR detector suffers from a hot corner that limits its performance in the $H$ band. In panel ($d$) we measured the pixel-to-pixel scatter in the spectra of a fast-rotating F5V star and related it to expectations from photon and readout noise. The F5V star was chosen because it delivers relatively constant S/N values (slightly decreasing toward the blue end of the VIS channel), allowing for a fair rating of underlying detector noise. Values close to unity indicate that the noise in the spectra is explained well by photon and readout noise. Values above unity hint toward additional, unaccounted-for noise contributions. The difference between the red and blue detector performance toward our reddest spectral orders in CARMENES NIR is clearly visible, explaining why half of the $H$ band is rendered unusable (with the current data reduction) for high-precision RV measurements.
Currently, our selection algorithm works with relatively wide wavelength grids. We investigated the stellar spectra to understand if a smaller grid would save parts of the rejected orders and further improve RV precision in CARMENES. In the VIS channel, no orders suffer strong contamination. In the NIR channel, there is very little stellar information to be saved in the $J$ band, and the increased detector noise on the red edge of the red CARMENES NIR detector is evenly distributed along the orders. Hence, we currently do not expect significantly improved RVs when using finer wavelength bins in either channel. In the future, however, when telluric contamination is modeled and removed from the spectra, a finer grid may help identify telluric residuals and thus lead to precision improvements. Other instruments, especially those aiming at sub-ms$^{-1}$ precision, may also benefit from a reduction of systematic biases on a suborder scale.
CARMENES currently measures the drift globally and we tested the impact of using all, or only the set of selected good orders, to derive and correct the instrument drift in the NIR channel. Both drift measurements are consistent, as the median difference (0.3ms$^{-1}$) is comparable to the median error of the drift (0.42ms$^{-1}$). Tellurics are weaker in the calibration fiber and detector noise is less problematic for the FP because it is brighter than most of our stars. Thus, drift measurements are less vulnerable to the systematics that impact stars.
We now proceed to the evaluation of the CARMENES RV performance on sky. In Fig. \[fig:rv\_stat\], we show the rms scatter histogram for our M dwarf sample for the VIS and NIR channels. In line with previous RV surveys, we find that the typical RV rms of our M dwarfs observed by the VIS channel is around 3–5ms$^{-1}$. This is significantly above our median photon-noise-limited RV precision of around 1.5ms$^{-1}$ (as computed by [[[serval]{}]{}]{}). Hence, we attribute most of this noise floor to stellar activity jitter. The rms distribution in the NIR channel peaks at significantly higher values, with its maximum located around 7–9ms$^{-1}$. This distribution can be explained by the lower RV content in the parts of the NIR channel spectra that are useful for precise RV determination (median limit of $6.9$ms$^{-1}$), the higher instrumental jitter ($3.7$ms$^{-1}$, see Sect. \[sec:NIR\_performance\]), and stellar activity. After the improved N$_2$ pipe insulation enhanced the NIR channel stability, the rms of the measured stellar RVs improved by about 1ms$^{-1}$. This is in line with expectations when combining all jitter terms. Thus, the relative importance of NIR RVs as compared to those of VIS has improved since September 2018.
![Histogram of rms scatter exhibited by the CARMENES sample stars observed by the CARMENES VIS channel ([*top*]{}) and the NIR channel ([*bottom*]{}). Vertical lines mark the instrument jitter (red long-dashed), the median photon noise of the sample (orange dotted), the average M dwarf activity level (green short-dashed) and the quadratically summed jitter (blue dash-dotted). []{data-label="fig:rv_stat"}](rv_statistics){width="\linewidth"}
An application to CD Cet {#sec:star}
========================
Host star
---------
CDCet (GJ 1057, Karmn J03133+047) is a bright [$J \approx 8.8$mag, @2MASS06], nearby [$d \approx$ 8.6pc, @Gaia18b], M5.0V star [@1984AJ.....89.1229R; @1994AJ....107..333K; @1996AJ....112.2799H; @2013AJ....145..102L; @2015ApJ...802L..10T] discovered in the Lowell proper motion survey of @1961LowOB...5...61G, who tabulate it as G 77–31. The mid-type M dwarf is relatively quiet, which is reflected by a low chromospheric activity level as indicated by measurements of $\log R'_{\rm HK}$ between $-5.16$ [@Activity_catalogue] and $-5.52$ [@HARPS_activity]. This is in good agreement with its long rotational period of about $126.2$d determined from MEarth photometry [@Newton_rotation], low rotational velocity [@Reiners2018_324_stars], absent H$\alpha$ emission, and its non-detection in volume-limited [*ROSAT*]{} surveys of extreme ultraviolet/soft X-ray emission from all stars within 10pc [@1994ApJS...93..287W; @2013MNRAS.431.2063S].
CDCet was the subject of high-resolution lucky imaging searches by @2012ApJ...754...44J and @multiplicity_Miriam, who imposed stringent upper limits on the presence of stellar and substellar companions down to 8mag fainter than CD Cet in the red optical at angular separations from 0.2 to 5.0arcsec. We also applied the same methodology and searched for wider common proper motion companions within the [*Gaia*]{} DR2 completeness and up to 2deg, but without success[^10]. To sum up, CD Cet is a single star.
Table \[tab:Parameters\] summarizes some basic stellar parameters mostly compiled from the CARMENES M dwarf input catalog [Carmencita, @Carmencita] and following addenda by @Pass19 and @Schw19. Our astrophysical stellar parameters ($T_{\rm eff}$, $\log{g}$, \[Fe/H\]), based on CARMENES VIS+NIR channel spectra, reasonably match previous determinations by @2013AJ....145..102L, @2014MNRAS.443.2561G, @2014AJ....147...20N, and @2015ApJS..220...16T. We recomputed the galactocentric space velocities as in @Cortes2016UCM-PhD with the latest absolute radial velocity of @Laf20, which is identical within uncertainties to that of @Reiners2018_324_stars and supersedes previous determinations [@2014AJ....147...20N; @2015ApJS..220...16T; @2015ApJ...812....3W]. With the tabulated $UVW$ values, CD Cet belongs to the Galactic thin disk population, from which we estimate an age between 1Ga and 5Ga [@Cortes2016UCM-PhD].
[@lcr@]{} Parameters & Value & Ref.\
Name & CD Cet &\
GJ & 1057 & GJ91\
Karmn & J03133+047 & Cab16\
$\alpha$ (J2000) & 03:13:22.92 & *Gaia* DR2\
$\delta$ (J2000) & +04:46:29.3 & *Gaia* DR2\
$G$ (mag) & $12.1232\pm0.0011$ & *Gaia* DR2\
$J$ (mag) & $8.775\pm0.020$ & 2MASS\
Spectral type & M5.0V& PMSU\
$d$ (pc) & $8.609\pm0.007$ & *Gaia* DR2\
$\mu_{\alpha}\cos{\delta}$ (mas a$^{-1}$) & $+1741.86 \pm0.17$ & *Gaia* DR2\
$\mu_{\delta}$ (mas a$^{-1}$) & $+86.02\pm0.15$ & *Gaia* DR2\
$\gamma$ (kms$^{-1}$) & $+28.16\pm0.02$ & Laf20\
$U$ (kms$^{-1}$) & $-60.32\pm0.05$ & This work\
$V$ (kms$^{-1}$) & $-43.01\pm0.06$ & This work\
$W$ (kms$^{-1}$) & $+19.27\pm0.07$ & This work\
$T_{\rm eff}$ (K) & $3130\pm51$ & Pass19\
$\log{g}$ (cgs) & 4.93$\pm0.04$ & Pass19\
$\rm{[Fe/H]}$ (dex) & $0.13\pm0.16$ & Pass19\
$L$ ($10^{-5}L_\odot$) & $293.4 \pm 5.3$ & Schw19\
$M$ ($M_{\odot}$) & $0.161\pm0.010$ & Schw19\
$R$ ($R_{\odot}$) & $0.175\pm0.006$ & Schw19\
$v\sin{i}$ (kms$^{-1}$) & $\le 2.0$ & Rein18\
$P_{\rm{rot}}$ (d) & 126.2 & New16\
$\log{R'_{\rm{HK}}}$ & $-5.16$ & Boro18\
Age (Ga) & 1–5 & This work\
CDCet has been observed by several spectroscopic and photometric instruments. Below, we briefly summarize the data available to us and analyzed in this work.
Data
----
### Precise radial velocities
#### CARMENES.
Since the start of CARMENES operations in January 2016 we have collected $112$ VIS and $111$ NIR channel spectra within the CARMENES guaranteed time observations. The spectra typically reach S/N of $40$ in the VIS channel (at 0.66$\mu$m) and $150$ in the NIR channel (at 1.3$\mu$m) using exposure times of $30$min. In the VIS channel, we discarded two spectra with low ${\rm S/N}<10$ and four without simultaneous FP drift measurement. In the NIR channel, we discarded two spectra with ${\rm S/N}<10$, one without FP reference, and four spectra taken prior to the NIR channel stabilization (see Sect. \[sec:NIR\_performance\]). This left us with 106 useful VIS and 105 NIR spectra. We extracted, calibrated, and drift-corrected all of them with the CARMENES pipeline [caracal]{} [@FOX; @FP_Bauer; @Caballero_data_flow]. RVs were derived from the useful orders selected in Sect. \[sec:RV\_computation\] using [[[serval]{}]{}]{} . Subsequently, we corrected for systematic night-to-night offsets [@CARMENES_one_Trifon; @HARPS_NZP; @Lev_NZPs]. Our final RV time series exhibit an rms scatter of $5.2$ms$^{-1}$ with median errors of $1.7$ms$^{-1}$ in the VIS channel, and an rms scatter of $6.5$ms$^{-1}$ and a median error of $5.4$ms$^{-1}$ in the NIR channel. Therefore, both data sets indicate the presence of excess RV variability, either activity or planetary (or both).
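As a rough, back-of-the-envelope check (not part of the formal analysis that follows), subtracting the median formal errors in quadrature from the measured rms leaves several ms$^{-1}$ of unexplained variability in both channels:

```python
import numpy as np

rms     = {"VIS": 5.2, "NIR": 6.5}   # m/s, rms of the final RV time series
med_err = {"VIS": 1.7, "NIR": 5.4}   # m/s, median formal RV uncertainties

for channel in rms:
    excess = np.sqrt(rms[channel] ** 2 - med_err[channel] ** 2)
    print(f"{channel}: excess scatter ~ {excess:.1f} m/s")
# VIS: ~4.9 m/s, NIR: ~3.6 m/s of variability beyond the formal errors.
```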
#### ESPRESSO.
We acquired 17 additional spectra with the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations [ESPRESSO; @Pepe2010SPIE.7735E..0FP], the new ultra-stable spectrograph at the Very Large Telescope, between 20 and 26 September 2019. Typical S/N values are $21$ (at 0.66$\mu$m) for $10$min exposure times. We used the ESO ESPRESSO pipeline data products and, for consistency, again derived RVs using [[[serval]{}]{}]{}. The final [[[serval]{}]{}]{} RV time series has an rms scatter of $5.6$ms$^{-1}$ with a median error of $0.8$ms$^{-1}$. RVs from the ESO pipeline exhibit a larger median uncertainty of $1.4$ms$^{-1}$, but both methods deliver mostly consistent RVs. We attribute the differences in RVs and precision to the fact that mid- to late-type M dwarfs have fewer isolated spectral features with which to perform cross-correlation, while the least-squares matching approach used by [[[serval]{}]{}]{} is optimized for such conditions [@Guillem_TERRA].
#### HARPS.
There are nine publicly available spectra of CDCet in the ESO archive[^11], taken with the High Accuracy Radial velocity Planet Searcher [HARPS, @Mayor2003Msngr.114...20M] between December 2003 and October 2010. Exposure times were $15$min in all cases. Typical S/N values are $16$ (at 0.66$\mu$m); as for CARMENES, we discarded two spectra with ${\rm S/N} < 10$. We computed the RVs of the remaining seven spectra using [[[serval]{}]{}]{}, but the resulting time series yields an rms scatter of $29$ms$^{-1}$, which is five times larger than those in the CARMENES and ESPRESSO data sets. Therefore, we did not use the HARPS data for the following analysis.
### Photometry
#### MEarth.
The 8th data release[^12] of the robotically-controlled telescope array MEarth tabulates photometric data of CDCet collected between October 2008 and March 2018, spanning a total of 3456d (approximately 9.5 years). Over 114d in the 2010/2011 season, MEarth used an interference filter with which 93 measurements of CDCet were collected. Because this data set covers only about one rotational period (following @Newton_rotation) and used a different filter from the one used in the rest of the observations, we disregarded it for this work. The available photometric time series taken with the broad RG715 filter includes 11614 individual measurements, with an rms of $17$mmag and a median photometric precision of $4.3$mmag.
#### TESS.
CDCet was observed from 19 October 2018 to 14 November 2018 by the [*Transiting Exoplanet Survey Satellite*]{} [[*TESS*]{}, @Ricker2015] in Sector S04. The photometric time series consists of 18684 measurements with 2min cadence and a total time span of 25.95d. This is only a fraction of the rotational period derived from MEarth data, which made the [*TESS*]{} light curve of limited use for deriving (or confirming) the rotation period. However, both the photometric rms and photometric precision of a single measurement are typically $0.16$%, that is, good enough to detect any transiting super-Earth companion to [CDCet]{}.
Analysis {#sec:analysis}
--------
### Radial velocities {#sec:RV_ana}
![GLS periodograms of the data from the VIS channel ($a$) and ($b$), the NIR channel ($c$) and ($d$), and ESPRESSO ($e$) and ($f$). Bottom panels always show the residual periodograms after subtracting the 2.3d signal. The 2.3d and the 134d signal are indicated by the vertical orange dotted and vertical red dashed line, respectively. FAPs are indicated by horizontal lines: 10% (dash-dotted red), 1% (dotted green), 0.1% (dashed blue). []{data-label="fig:gls_rv"}](J03133+047_VIS+NIR+HARPS+ESPRESSO_fou){width="\linewidth"}
We searched for periodic signals in the RV data by computing the generalized Lomb-Scargle (GLS) periodogram [@GLS], as illustrated by Fig. \[fig:gls\_rv\]. When we found a significant signal (i.e., false-alarm probability $\mathrm{FAP} < 0.1\,\%$), we made a Keplerian fit, removed it, and searched the residuals for more signals. The VIS (panel $a$) and NIR (panel $c$) channels both support the presence of a signal at $f_{\rm b}=0.435\,\mathrm{d}^{-1}$ ($\sim$2.3d) or $f'_{\rm b}=0.565\,\mathrm{d}^{-1}$ ($\sim$1.8d), which are related to each other via the one-day alias. Even though there are very few ESPRESSO RVs, we observed excess power at the same frequencies (panel $e$). Combining the data of CARMENES and ESPRESSO helped to resolve the daily alias issue (panel $g$), as both instruments are located at different geographical longitudes and the nightly duty cycle was thus improved. After the combination, the 2.3d signal is significantly preferred ($\mathrm{FAP_b} = 1.1 \times 10^{-15}$) over the 1.8d signal ($\mathrm{FAP'_b} = 5.6 \times 10^{-12}$).
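As an illustration of this search-and-subtract scheme, a minimal sketch based on astropy's floating-mean (generalized) Lomb-Scargle implementation, with the Keplerian fit approximated by the best-fitting sinusoid (adequate for a circular orbit), could look as follows:

```python
import numpy as np
from astropy.timeseries import LombScargle

def prewhiten(t, rv, rv_err, max_freq=1.0, fap_limit=1e-3, max_signals=5):
    """Iteratively search for and subtract periodic RV signals (a sketch)."""
    residuals = np.asarray(rv, dtype=float).copy()
    signals = []
    for _ in range(max_signals):
        ls = LombScargle(t, residuals, rv_err)       # floating mean = GLS
        freq, power = ls.autopower(maximum_frequency=max_freq)
        f_best = freq[np.argmax(power)]
        fap = ls.false_alarm_probability(power.max())
        if fap > fap_limit:                          # stop at FAP = 0.1%
            break
        signals.append((1.0 / f_best, fap))          # period in days, FAP
        residuals = residuals - ls.model(t, f_best)  # subtract best sinusoid
    return signals, residuals
```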
After subtraction of the 2.3d signal, we find a tentative signal at $f=0.0147$d$^{-1}$ or 68d ($\mathrm{FAP_{VIS}} = 5\,\%$) in the VIS channel (panel $b$). This signal is at approximately half of the photometric rotation period derived by [@Newton_rotation]. There is no indication of the 68d period in the NIR channel data (panel $d$), which may be explained by a lower activity amplitude at redder wavelengths and by the higher instrumental jitter of the NIR channel data. The baseline of ESPRESSO (panel $f$) is too short to detect such long-term modulations. The combination of the three data sets (panel $h$) yielded a peak close to significance (FAP = 0.11%) at $f=0.0074$d$^{-1}$ or 134d (about twice the period found with the VIS channel and very close to the photometric period). After removal of the 68–134d period, we did not find additional significant signals in the data.
[@lccc@]{} Model & $P$ (d) & $\ln Z$ & $\Delta \ln Z$\
Null & ... & $-743.80 \pm 0.12$ & 133.80\
1 circ.orbit& 2.29 & $-630.33 \pm 0.16$ & 20.30\
1 ecc.orbit & 2.29 & $-629.97 \pm 0.17$ & 19.97\
2 ecc.orbits & 2.29; 139.2 & $-617.42 \pm 0.19$ & 7.42\
1 circ.orbit+GP & 2.29; 152.5 & $-610.00 \pm 0.16$ & 0\
1 ecc.orbit+GP & 2.29; 147.9 & $-608.73 \pm 0.17$ & –1.27\
We modeled the combined RV data with [juliet]{} [@juliet] to carry out a comparison between models containing one planet, two planets and one planet plus activity. Activity was modeled by a Gaussian process (GP) using a quasi-periodic kernel of the form: $$k(\tau) = A^{2} \exp\left(-\alpha \tau^2 - \Gamma \sin^2 \frac{\pi \tau}{P_{{\rm rot}}}\right),
\label{eq:Qper}$$ as implemented in [george]{} [@Ambikasaran2015ITPAM..38..252A]. The parameter $A$ is the amplitude of the GP, $\alpha$ is the inverse of the exponential decay timescale, $\Gamma$ is the amplitude of the periodic (sine) part of the kernel, $P_{\rm rot}$ is its period, and $\tau=|t_i-t_j|$ is the time lag between any pair of RV measurements.
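For reference, a minimal sketch of how this kernel can be set up with [george]{} (the hyperparameter values below are placeholders, not the fitted values; note the mapping between $\alpha$ and george's metric):

```python
import numpy as np
import george
from george import kernels

# Placeholder hyperparameters: A in m/s, alpha in 1/d^2, P_rot in days.
A, alpha, Gamma, P_rot = 2.0, 1.0e-4, 1.0, 150.0

# george's ExpSquaredKernel is exp(-tau^2/(2*metric)), hence metric = 1/(2*alpha);
# ExpSine2Kernel is exp(-gamma*sin^2(pi*tau/P)), matching the Gamma term directly.
kernel = (A ** 2
          * kernels.ExpSquaredKernel(metric=1.0 / (2.0 * alpha))
          * kernels.ExpSine2Kernel(gamma=Gamma, log_period=np.log(P_rot)))

gp = george.GP(kernel)

# Hypothetical RV time series (days, m/s) just to show the calling sequence:
t = np.sort(np.random.uniform(0.0, 300.0, 50))
rv = np.random.normal(0.0, 2.0, t.size)
rv_err = np.full(t.size, 1.7)

gp.compute(t, rv_err)                       # factorize the covariance matrix
print("GP log-likelihood:", gp.log_likelihood(rv))
```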
All modeling results are summarized in Table \[tab:RV\_signals\]. As suggested by [@Bayesian_statistics_in_astrophysics], we considered a model significantly better than another if the difference in Bayesian log-evidence exceeded $\Delta\ln Z$ = 5. The models using GPs to fit the second signal are, thus, preferred over the two-planet solution, which is a first indication that the 134d period is caused by activity. More evidence for this conclusion is presented in Sects. \[sec:Act\_ana\] and \[sec:Phot\_ana\]. Allowing eccentric orbits in the fit did not significantly improve the Bayesian log-evidence ($\Delta\ln Z = -1.27$). Therefore, we adopted the circular orbit plus activity solution (“1 circ. orbit + GP” in Table \[tab:RV\_signals\]) for further analysis of the system.
### Chromatic and temporal coherence of the 2.3 d signal {#sec:coherence}
The RVs used in this paper were derived from observations over a wide range of different wavelengths, from the blue optical (ESPRESSO; 0.38$\mu$m) to the near infrared (CARMENES NIR; 1.7$\mu$m). We used the data of each instrument independently to test the chromaticity of the 2.3d signal. We removed the 134d signal for this analysis using the GP model of Sect. \[sec:RV\_ana\] and fit a circular orbit for the 2.3d signal to the data. The Keplerian amplitudes found with each instrument are $K_{\rm ESPRESSO} = 6.93\pm0.41$ms$^{-1}$, $K_{\rm VIS} = 6.33\pm0.23$ms$^{-1}$, and $K_{\rm NIR}=6.59\pm0.84$ms$^{-1}$, which are all consistent within their error bars.
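A quick way to quantify this agreement (a back-of-the-envelope sketch, not the procedure used for the published numbers) is to compare the three amplitudes with their weighted mean:

```python
import numpy as np

K     = np.array([6.93, 6.33, 6.59])   # m/s: ESPRESSO, VIS, NIR
K_err = np.array([0.41, 0.23, 0.84])

w = 1.0 / K_err ** 2
K_mean = np.sum(w * K) / np.sum(w)
chi2_red = np.sum(((K - K_mean) / K_err) ** 2) / (K.size - 1)
print(f"weighted mean K = {K_mean:.2f} m/s, reduced chi2 = {chi2_red:.2f}")
# ~6.5 m/s with reduced chi2 < 1, i.e. the amplitudes agree within errors.
```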
Most RV data for CD Cet were collected during the observing seasons 2018/19 and 2019/20. We also analyzed these seasons separately to probe period, phase, and amplitude changes of the 2.3d signal. Both data sets yield, however, consistent results, as shown in Table \[tab:RV\_seasons\]. Chromatic and temporal coherence tests provide strong evidence for the origin of RV signals, while activity-induced signals are expected to show variations over wavelength and time. As this is not the case, the 2.3d signal is consistent with a planetary companion orbiting CD Cet. However, the RV data gathered for this work were insufficient to test the 68–134d signal in the same way. Therefore, we resorted to spectral activity indicators (Sect. \[sec:Act\_ana\]) and photometry (Sect. \[sec:Phot\_ana\]) to resolve its origin.
[@lccc@]{} Season & $P$ (d) & $K$ (ms$^{-1}$) & $t_c-2458804$ (d)\
2018/19 & $2.2920 \pm 0.0022$ & $6.57\pm0.29$ & $0.530\pm0.052$\
2019/20 & $2.2917 \pm 0.0010$ & $6.40\pm0.30$ & $0.619\pm0.140$\
### Activity indicators {#sec:Act_ana}
![GLS periodograms for various activity indicators. Horizontal FAP lines and vertical lines as in Fig. \[fig:gls\_rv\]. []{data-label="fig:gls_activity"}](J03133+047_VIS+NIR_activity_fou_bigger){width="\linewidth"}
We evaluated a number of activity indicators already used by us in several previous publications. They included the differential line width (dLW; similar to the FWHM of the cross-correlation function) and the chromatic index (CRX; wavelength dependence of an RV measurement) from [[[serval]{}]{}]{}, as well as a number of line and band indices that track chromospheric activity from the VIS and NIR channel spectra. For a detailed explanation of all the activity indicators, see those publications. For signal searches we computed the GLS periodograms displayed in Fig. \[fig:gls\_activity\]. In two cases, namely the “a” component of the Ca [ii]{} infrared triplet (Ca IRT$_{\rm a}$) covered by the VIS channel and the He [i]{} triplet at 1.083$\mu$m in the NIR channel, the strongest signals are related to a year or half-year period and were, therefore, disregarded. The Ca IRT$_{\rm a}$ line in question is likely contaminated by a telluric line, while the He [i]{} triplet is located close to an OH sky emission line. Nonetheless, the dLW in the VIS ($141$d) and the NIR channel ($134$d), as well as H$\alpha$ ($129$d), exhibit signals close to $134$d, where we found the second signal in our RV data. This is yet another indication for stellar activity being the origin of this signal.
In contrast, none of the spectral activity indicators shows any power at 2.3d. Combined with the information from chromatic and temporal coherence tests, we favor a planetary companion as explanation for the discovered RV signal.
### Photometry {#sec:Phot_ana}
![MEarth photometry of CDCet. [*Top panels*]{}: ($a$) light curve used by @DA_rotation_periods [black circles], @Newton_rotation [adding blue squares], and this work (adding red triangles). [*Bottom panels:*]{} ($b$) GLS periodograms of data used by [@DA_rotation_periods], ($c$) data used by [@Newton_rotation], ($d$) 2017/18 season data, ($e$) all public RG715 MEarth data, and ($f$) of their residuals after subtracting the 170d period. Ticks in panels $d$ and $e$ mark a 170d period (solid black) and its one-year aliases (dashed black). The vertical red dashed line marks the 134d period found in RV data and some spectral activity indicators, and the vertical red dotted line marks the 68d period found in VIS channel RV data.[]{data-label="fig:gls_MEarth"}](MEarth_fou){width="\linewidth"}
Guided by the RV analysis results, we revisited the MEarth and [*TESS*]{} photometric data to confirm the rotation period of CDCet and search for transits. [@Newton_rotation] used the MEarth observations from 2008 to 2015 to derive a rotation period of about $126$d with a small amplitude of 6.2$\pm$2.2mmag. [@DA_rotation_periods], using MEarth data spanning only the initial 1.2years of the observations (i.e., seasons 2008/09 and 2009/10), could not confirm it.
In our analysis, we used all the RG715-filter CD Cet data available in the 8th MEarth data release, which covers the years from 2008 to 2018 except for the interference-filter 2010/11 season gap. We averaged multiple observations conducted within one night and derived a lightcurve consisting of 310 data points. In Fig. \[fig:gls\_MEarth\] we present the whole light curve together with the GLS periodograms of the data used by [@DA_rotation_periods], the data used by [@Newton_rotation], the 2017/18 season data, which has the highest cadence and lowest rms, and the full RG715-filter MEarth dataset.
Like [@DA_rotation_periods], we could not recover the 126.2d period found by [@Newton_rotation]. Furthermore, there is no indication of the tentative 68d and 134d periods found in the RVs and the spectral activity indicators H$\alpha$ and dLW of both channels. Although the MEarth photometry shows the presence of a long-period signal, its exact value cannot be pinned down. When using the entire public MEarth dataset, we find the most significant (FAP$=2.2 \times 10^{-12}$%) signal at about 170d. This period is dominated by the high-cadence data of the 2017/18 season (panel $d$), while earlier sections of the MEarth data show very wide peaks at low frequencies (panels $b$ and $c$).
[@llll@]{} Parameter & Prior value & Posterior & Unit\
\
$A_{\Delta\,\mathrm{RG715}}$ & ${\pazocal{U}}(0,300)$ & $7.2_{-0.7}^{+0.8}$ & mmag\
$\alpha$ & ${\pazocal{J}}(10^{-5},1)$ & $5.9_{-3.8}^{+77.4}$ & $10^{-5}$d$^{-2}$\
$\Gamma_{\Delta\,\mathrm{RG715}}$ & ${\pazocal{U}}(0,300)$ & $0.012_{-0.005}^{+0.010}$ & mmag\
$P_{\rm rot}$ & ${\pazocal{U}}(100,200)$ & $170_{-38}^{+19}$ & d\
$\sigma_{\mathrm{MEarth}}$ & ${\pazocal{U}}(0.1,300)$ & $3.7_{-0.3}^{+0.3}$ & mmag\
$\mu_{\mathrm{MEarth}}$ & ${\pazocal{U}}(-200,200)$ & $-0.8_{-0.9}^{+1.0}$ & $10^{-6}$mmag\
\
$A_{\Delta\,\mathrm{RG715}}$ & ${\pazocal{U}}(0,300)$ & $7.4_{-0.7}^{+0.9}$ & mmag\
$\rho$ & ${\pazocal{J}}(10^{-5},1000)$ & $13.8_{-2.7}^{+3.1}$ & $\mathrm{d}$\
$\sigma_{\mathrm{MEarth}}$ & ${\pazocal{U}}(0.1,300)$ & $3.4_{-0.2}^{+0.2}$ & mmag\
$\mu_{\mathrm{MEarth}}$ & ${\pazocal{U}}(-200,200)$ & $-0.4_{-1.2}^{+1.2}$ & $10^{-6}$mmag\
Because we observed phase and amplitude changes in the photometry (see panel $a$ of Fig. \[fig:gls\_MEarth\]), we fit the MEarth data with a GP. Table \[tab:photometry\_GP\] summarizes the two GP models. When fitting a GP with the quasi-periodic kernel of Eq. (\[eq:Qper\]), we found a very low amplitude for the periodic term, $\Gamma = 0.012$mmag. This is a hint that the photometric variability over a decade is not dominated by rotation, but by spot evolution (represented by the exponential decay term $\alpha$ in Eq. (\[eq:Qper\])). Hence, we reduced the number of parameters and used a stationary Matern kernel (with amplitude $A$ and a correlation length $\rho$ as parameters) to just fit for spot evolution with a GP. The Matern kernel gave a significantly higher Bayesian log-evidence than the quasi-periodic kernel ($\Delta\ln Z=13.27$). As a result, spot evolution explains the frequency changes observed between different sections of MEarth photometry. We could also expect to find period differences between photometry on the one hand and RVs and the spectral activity indicators on the other hand, as they span different epochs.

With an orbital period of 2.3d, the transit probability of CDCetb is 4.4%, and we would expect ten transits during the 25days of observation. No alert was issued for this target after the end of observations of [*TESS*]{} Sector 04 in November 2018. We re-investigated the light curve with the Transit Least Squares algorithm [@TLS]. As expected, we found no indication of a transit, as shown in Fig. \[fig:transit\_search\]. We used the winning RV model of Sect. \[sec:RV\_ana\] to estimate the transit center in Fig. \[fig:transit\_search\]. With an expected transit depth of $0.56\,\%$ (assuming a rocky and silicate composition; [@Zeng_2019]), CDCetb would have been easily detected by [[[*TESS*]{}]{}]{} (rms $=0.16$%). Thus, we concluded that CDCetb is not transiting.
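These numbers follow from simple geometry; in the sketch below, the planet radius of about 1.4$R_\oplus$ is an assumed value (roughly what a rocky mass-radius relation gives for a $\sim$4$M_\oplus$ planet), not an independently measured quantity:

```python
import numpy as np

R_SUN, R_EARTH, AU = 6.957e8, 6.371e6, 1.496e11   # all in meters

R_star = 0.175 * R_SUN
a = 0.0185 * AU

# Geometric transit probability of a circular orbit: p ~ R*/a
print(f"transit probability ~ {100 * R_star / a:.1f} %")              # ~4.4 %

# Expected transit depth for an assumed planet radius of ~1.4 R_Earth:
R_p = 1.43 * R_EARTH
print(f"expected transit depth ~ {100 * (R_p / R_star) ** 2:.2f} %")  # ~0.56 %
```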
![Transit search in the [[[*TESS*]{}]{}]{} lightcurve. [*Top:*]{} signal detection efficiency (SDE) as a function of frequency using the Transit Least Squares algorithm. Horizontal lines indicates false alarm probabilities of 10% (red), 1% (green), and 0.1% (blue). Vertical orange dotted line marks the planetary period $P_{\rm b}$ = 2.2907d present in the RVs. [*Bottom:*]{} [[[*TESS*]{}]{}]{} data (gray: two minute cadence, black: binned to 500 phase points) folded to $P_{\rm b}=2.2907$d guided by the RV solution along with the expected transit signal (red solid line). []{data-label="fig:transit_search"}](J03133+047_transit_search_TESS){width="\linewidth"}
### Combined {#sec:comb_ana}
In order to derive the planetary parameters of CDCetb, we took into account the activity signal. We combined photometric and RV information in a joint fit of CARMENES, ESPRESSO, and MEarth data. Activity was fit by a GP, which was constrained by RVs and photometry simultaneously. For completeness, we tested again quasi-periodic and Matern kernels, and concluded that indeed the Matern kernel was the best choice for the combined data set as well ($\Delta\ln Z=11.74$). Because the Bayesian model comparison in Sect. \[sec:RV\_ana\] favored assuming zero eccentricity, we fit a circular orbit for CDCetb. All model parameters are listed in Table \[tab:RV+Phot\_analysis\], while the resulting fits to the RVs and the photometric data can be seen in Fig. \[fig:GP\_phot\_rv\]. When subtracting the model from the data, we obtained a weighted rms consistent with the respective error bars in each instrument: $1.55$ms$^{-1}$ in the VIS channel, $4.66$ms$^{-1}$ in the NIR channel, and $1.13$ms$^{-1}$ for ESPRESSO. Assuming a variety of Bond albedos found among the rocky Solar System bodies, $A_0 \approx 0$ (P-type asteroids), $A_\oplus \approx 0.30$, or $A_{\rm Venus} \approx 0.76$, results in a wide range of equilibrium temperatures from $T_{{\rm eq},0} = 464\pm16$K to $T_{{\rm eq},\oplus} = 424\pm16$K down to $T_{\rm eq,Venus} = 325\pm16$K. We conclude that CDCet is orbited by a temperate super-Earth ($3.95\,M_\oplus$) on a 2.29d orbit.
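For reference, the instellation and the equilibrium temperatures quoted above follow from the standard relations $S/S_\oplus = (L_*/L_\odot)\,(a/{\rm au})^{-2}$ and $T_{\rm eq} = T_{\rm eff}\,\sqrt{R_*/(2a)}\,(1-A)^{1/4}$ (uniform heat redistribution); a short sketch with the stellar parameters from Table \[tab:Parameters\] reproduces them:

```python
import numpy as np

# Stellar and orbital parameters from the tables above.
T_eff  = 3130.0                 # K
R_star = 0.175 * 6.957e8        # m
L_star = 293.4e-5               # L_sun
a_au   = 0.0185                 # au
a      = a_au * 1.496e11        # m

# Instellation relative to Earth:
print(f"S = {L_star / a_au**2:.1f} S_Earth")                        # ~8.6

# Equilibrium temperature for different Bond albedos:
for label, A in [("A = 0", 0.0), ("Earth-like", 0.30), ("Venus-like", 0.76)]:
    T_eq = T_eff * np.sqrt(R_star / (2.0 * a)) * (1.0 - A) ** 0.25
    print(f"{label}: T_eq = {T_eq:.0f} K")                          # 464, 424, 325 K
```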
To compare CDCetb to the bulk of known exoplanets[^13], we show the period versus planet mass and stellar mass versus planet mass plots in Fig. \[fig:properties\]. CDCetb lines up well with other discoveries; it belongs to a population of super-Earths commonly found around M dwarfs. The discovery adds to 42 other known exoplanets in the regime of low-mass exoplanets ($<5 M_{\oplus}$) orbiting mid- to late-type M dwarfs ($<0.3 M_{\odot}$).
[@lccl@]{} Parameter & Prior & Posterior & Unit\
\
$P_{\rm b}$ & ${\pazocal{U}}(2.0,2.5)$ & $2.29070_{-0.00012}^{+0.00012}$ & d\
$K_{\rm b}$ & ${\pazocal{U}}(0,10)$ & $6.51_{-0.23}^{+0.22}$ & ms$^{-1}$\
$t_{c,{\rm b}}-2450000$ & ${\pazocal{U}}(8802.7,8805.0)$ & $8804.489_{-0.018}^{+0.017}$ & d\
\
$A_{\Delta\,\mathrm{RG715}}$ & ${\pazocal{U}}(0,300)$ & $7.46_{-0.72}^{+0.84}$ & mmag\
$A_{\mathrm{RV}}$ & ${\pazocal{U}}(0,40)$ & $2.05_{-0.34}^{+0.40}$ & ms$^{-1}$\
$\rho$ & ${\pazocal{J}}(10^{-5},1000)$ & $14.6_{-2.4}^{+2.9}$ & $\mathrm{d}$\
\
$\sigma_{\mathrm{MEarth}}$ & ${\pazocal{U}}(0.1,300)$ & $3.46_{-0.21}^{+0.22}$ & mmag\
$\mu_{\mathrm{MEarth}}$ & ${\pazocal{U}}(-200,200)$ & $-0.4_{-1.2}^{+1.2} \times 10^{-6}$ & mmag\
$\sigma_{\text{C.VIS}}$ & ${\pazocal{J}}(0.01,15)$ & $0.84_{-0.21}^{+0.27}$ & ms$^{-1}$\
$\mu_{\text{C.VIS}}$ & ${\pazocal{U}}(-10,10)$ & $-0.74_{-0.56}^{+0.54}$ & ms$^{-1}$\
$\sigma_{\text{C.NIR}}$ & ${\pazocal{J}}(0.01,20)$ & $0.97_{-0.34}^{+0.57}$ & ms$^{-1}$\
$\mu_{\text{C.NIR}}$ & ${\pazocal{U}}(-20,20)$ & $0.08_{-0.79}^{+0.76}$ & ms$^{-1}$\
$\sigma_{\mathrm{ESPRESSO}}$ & ${\pazocal{J}}(0.01,15)$ & $0.91_{-0.34}^{+0.41}$ & ms$^{-1}$\
$\mu_{\mathrm{ESPRESSO}}$ & ${\pazocal{U}}(-20,20)$ & $4.4_{-1.1}^{+1.1}$ & ms$^{-1}$\
\
$a_{\rm b}$ & & $0.0185_{-0.0013}^{+0.0013}$ & au\
$m_{\rm b} \sin{i}$ & & $3.95_{-0.43}^{+0.42}$ & $M_\oplus$\
$i_{\rm b}$ & &$<87.48_{-0.28}^{+0.25}$ & deg\
$S_{\rm b}$ & & $8.6_{-2.4}^{+2.4}$ & $S_{\oplus}$\
$T_{\rm{eq,b}}$ & & $464_{-18}^{+18}$ & K\
![Combined analysis of photometry and RV. [*Top panel:*]{} MEarth photometry (black points) and GP fit (gray line). [*Middle panel:*]{} RV time series of the VIS channel (green points), the NIR channel (red squares), ESPRESSO (blue triangles), and GP fit with Matern kernel (gray line). [*Bottom panel:*]{} activity-subtracted RVs folded to the Keplerian period $P_{\rm b}=2.2907$d and best-fit circular orbit (black line). []{data-label="fig:GP_phot_rv"}](J03133+047_phase_rvs+lc_2_Keplarians){width="\linewidth"}
![[*Top panel:*]{} orbital period versus planetary mass for known exoplanets orbiting stars $>0.6\,M_{\odot}$ (gray points), $\le 0.6\,M_{\odot}$ (black squares), and CD Cet b (red star). [*Bottom panel:*]{} stellar mass versus planetary mass plot. Colors and symbols as above.[]{data-label="fig:properties"}](J03133+047_properties){width="\linewidth"}
Summary and discussion {#sec:discussion}
======================
In this work, we presented an overview of how the optical and near-infrared dual-channel CARMENES spectrograph can be used to deliver high-precision RVs. Scientists who want to use CARMENES in the future should understand the instrument and its performance at the level discussed in this paper. This will be of importance for the community, as a considerable amount of observing time is available through open calls for proposals. Moreover, there are many other similar instruments that achieved first light after CARMENES and that may benefit from the concepts discussed in this work.
Currently, CARMENES VIS is capable of routinely measuring RVs with a median precision of 1.2ms$^{-1}$. The main instrumental limitations are currently imperfections in the calibration, which may be related to the sensitivity of the instrument to temperature changes in the climatic rooms, to drifts of the FP etalons, and to inconsistencies in the wavelength solutions computed every night. Improvements through higher thermal stability, second-generation etalons, and improved procedures for the calibration and wavelength solution would further reduce the instrumental errors.
On the other hand, CARMENES NIR delivers a median RV precision of 3.7ms$^{-1}$ (around three times that of CARMENES VIS). The key to improving the performance of this channel lies in optimizing its active thermal stabilization mechanism. This has already been partly achieved during the survey, and further options are currently under investigation.
We showed that carefully avoiding spectral regions with low RV content and strong telluric contamination or regions with increased detector effects can significantly improve the RV performance. For CARMENES NIR and with the currently implemented strategy of masking telluric contamination, optimal performance was reached if only half of the covered spectral range was used to measure RVs. However, in the future, efforts for telluric correction may result in larger useful spectral regions. The rejection strategy led to an average improvement of 2ms$^{-1}$ in the rms of our NIR channel RV curves. On the whole, we showed that the RV rms scatter in our M dwarf sample is consistent with expectations from the combined effects of photon noise, instrumental performance, and stellar activity.
As an example, we presented the case of CD Cet. This M5.0V star is a typical target for which the instrument was designed. With about 100 spectra taken, we discovered a highly significant signal of amplitude $K=6.5$ms$^{-1}$ independently in both CARMENES channels. The facts that the amplitude was consistent in the CARMENES VIS and NIR channels, and that none of our activity indicators showed a $2.3$d period, are strong indicators for the planetary (super-Earth) origin of the signal. The signal was confirmed by 17 ESPRESSO measurements complementing CARMENES at bluer wavelengths. The ability to investigate the consistency of an RV signal across large wavelength ranges and, thereby, to differentiate signals caused by stellar activity from those of Keplerian nature, is one of the main strengths of CARMENES. The use of chromaticity at wavelengths longer than 1$\mu$m is, however, hampered by the NIR channel photon limit, which is typically on the order of 5–7ms$^{-1}$ (assuming S/N=150). A second, low-amplitude ($\approx2$ms$^{-1}$) RV signal in CD Cet was, hence, only picked up in the VIS channel. However, this signal in the range of 130–140 days had counterparts in H$\alpha$ and the dLW, measured in both CARMENES VIS and NIR. CD Cet appears to be a relatively quiet star, which is also reflected by a low photometric variability of about 7mmag, again found over long time scales. The discovery of CD Cet b is, thus, an excellent example of the capabilities of CARMENES, which was designed for discovering and extending our statistical knowledge of planets orbiting mid-M dwarfs, and for discriminating their Keplerian signatures from activity-induced RV variations. Currently, planned upgrades have the potential of reducing the instrumental noise in both channels of CARMENES even further.
We thank the referee for her/his suggestions that have certainly improved this manuscript. We thank the whole team at CAHA, because their work during day and night time makes our science with CARMENES possible: Jesús Aceituno, Belén Arroyo, Antonio Barón Carreño, Daniel Benítez, Gilles Bergond, Enrique de Guindos, Enrique de Juan, Alba Fernández, Roberto Fernández González, David Galadí-Enriquez, Eulalia Gallego, Joaquín García de la Fuente, Vicente Gómez Galera, José Góngora Rueda, Ana Guijarro, Jens Helmling, Israel Hermelo, Luis Hernández Castaño, Francisco Hernández Hernando, Juan F. López Salas, Héctor Magán Medinabeitia, Julio Marń Molina, Pablo Martín, David Maroto Fernández, Francisco Márquez Sánchez, Santos Pedraz, Rubén P. Hedrosa, Miguel Á. Peñalver, Manuel Pineda Puente, Santiago Reinhart Franca, and José I. Vico Linares. CARMENES is an instrument for the Centro Astronómico Hispano-Alemán (CAHA) at Calar Alto (Almería, Spain), operated jointly by the Junta de Andalucía and the Instituto de Astrofísica de Andalucía (CSIC). CARMENES was funded by the German Max-Planck-Gesellschaft (MPG), the Spanish Consejo Superior de Investigaciones Científicas (CSIC), the European Union through FEDER/ERF FICTS-2011-02 funds, and the members of the CARMENES Consortium (Max-Planck-Institut für Astronomie, Instituto de Astrofísica de Andalucía, Landessternwarte Königstuhl, Institut de Ciències de l’Espai, Institut für Astrophysik Göttingen, Universidad Complutense de Madrid, Thüringer Landessternwarte Tautenburg, Instituto de Astrofísica de Canarias, Hamburger Sternwarte, Centro de Astrobiología and Centro Astronómico Hispano-Alemán), with additional contributions by the Spanish Ministry of Economy, the German Science Foundation through the Major Research Instrumentation Program and DFG Research Unit FOR2544 “Blue Planets around Red Stars”, the Klaus Tschira Stiftung, the states of Baden-Württemberg and Niedersachsen, and by the Junta de Andalucía. Based on data from the CARMENES data archive at CAB (INTA-CSIC). We acknowledge financial support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 83 24 28, the Deutsche Forschungsgemeinschaft through project RE 1664/14-1, the Agencia Estatal de Investigación of the Ministerio de Ciencia, Innovación y Universidades and the European FEDER/ERF funds through projects AYA2018-84089, ESP2016-80435-C2-1-R, AYA2016-79425-C3-1/2/3-P, AYA2015-69350-C3-2-P, the Centre of Excellence “Severo Ochoa” and “María de Maeztu” awards to the Instituto de Astrofísica de Canarias (SEV-2015-0548), Instituto de Astrofísica de Andalucía (SEV-2017-0709), and Centro de Astrobiología (MDM-2017-0737), and the Generalitat de Catalunya/CERCA program.
[95]{} natexlab\#1[\#1]{}
Alonso-Floriano, F., Morales, J., Caballero, J., [et al.]{} 2015, Astronomy and Astrophysics, 577
, F. J., [Caballero]{}, J. A., [Cort[é]{}s-Contreras]{}, M., [Solano]{}, E., & [Montes]{}, D. 2015, [, 583, A85](https://ui.adsabs.harvard.edu/abs/2015A&A...583A..85A)
, S., [Foreman-Mackey]{}, D., [Greengard]{}, L., [Hogg]{}, D. W., & [O’Neil]{}, M. 2015, [IEEE Transactions on Pattern Analysis and Machine Intelligence, 38, 252](https://ui.adsabs.harvard.edu/abs/2015ITPAM..38..252A)
, G., [Amado]{}, P. J., [Barnes]{}, J., [et al.]{} 2016, [, 536, 437](https://ui.adsabs.harvard.edu/abs/2016Natur.536..437A)
, G. & [Butler]{}, R. P. 2012, [, 200, 15](https://ui.adsabs.harvard.edu/abs/2012ApJS..200...15A)
, N., [Delfosse]{}, X., [Bonfils]{}, X., [et al.]{} 2017, [, 600, A13](https://ui.adsabs.harvard.edu/abs/2017A&A...600A..13A)
, A., [Queloz]{}, D., [Mayor]{}, M., [et al.]{} 1996, [, 119, 373](https://ui.adsabs.harvard.edu/abs/1996A&AS..119..373B)
, F. F., [Zechmeister]{}, M., & [Reiners]{}, A. 2015, [, 581, A117](https://ui.adsabs.harvard.edu/abs/2015A&A...581A.117B)
, S., [Mirabet]{}, E., [Lizon]{}, J. L., [et al.]{} 2016, [, 9912, 991262](https://ui.adsabs.harvard.edu/abs/2016SPIE.9912E..62B)
, E. B., [Bechter]{}, A. J., [Crepp]{}, J. R., & [Crass]{}, J. 2019, [Journal of Astronomical Telescopes, Instruments, and Systems, 5, 038004](https://ui.adsabs.harvard.edu/abs/2019JATIS...5c8004B)
, X., [Delfosse]{}, X., [Udry]{}, S., [et al.]{} 2013, [, 549, A109](https://ui.adsabs.harvard.edu/abs/2013A&A...549A.109B)
, S., [Marvin]{}, C. J., [Jeffers]{}, S. V., [et al.]{} 2018, [, 616, A108](https://ui.adsabs.harvard.edu/abs/2018A&A...616A.108B)
, F., [Pepe]{}, F., & [Queloz]{}, D. 2001, [, 374, 733](https://ui.adsabs.harvard.edu/abs/2001A&A...374..733B)
, J. A., [Cort[é]{}s-Contreras]{}, M., [Alonso-Floriano]{}, F. J., [et al.]{} 2016, [Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, 148](https://ui.adsabs.harvard.edu/abs/2016csss.confE.148C)
, J. A., [Gu[à]{}rdia]{}, J., [L[ó]{}pez del Fresno]{}, M., [et al.]{} 2016, [, 9910, 99100E](https://ui.adsabs.harvard.edu/abs/2016SPIE.9910E..0EC)
, B., [Plavchan]{}, P., [LeBrun]{}, D., [et al.]{} 2019, [, 158, 170](https://ui.adsabs.harvard.edu/abs/2019AJ....158..170C)
, Z., [Reshetov]{}, V., [Baratchart]{}, S., [et al.]{} 2018, [, 10702, 1070262](https://ui.adsabs.harvard.edu/abs/2018SPIE10702E..62C)
, M. 2016, [PhD Thesis, Universidad Complutense de Madrid, Madrid, Spain](https://carmenes.caha.es/ext/theses/phd.ucm.cortescontreras-pub.pdf)
, M., [B[é]{}jar]{}, V. J. S., [Caballero]{}, J. A., [et al.]{} 2017, [, 597, A47](https://ui.adsabs.harvard.edu/abs/2017A&A...597A..47C)
, E., [Caballero]{}, J. A., [Montes]{}, D., [et al.]{} 2019, [, 621, A126](https://ui.adsabs.harvard.edu/abs/2019A&A...621A.126D)
, J.-F., [Kouach]{}, D., [Lacombe]{}, M., [et al.]{} 2018, [Handbook of Exoplanets (Springer International Publishing), 107](https://ui.adsabs.harvard.edu/abs/2018haex.bookE.107D)
, S., [Jeffers]{}, S. V., [Rodr[í]{}guez]{}, E., [et al.]{} 2020, [, 235](https://ui.adsabs.harvard.edu/abs/2020MNRAS.tmp..235D)
, N., [Kossakowski]{}, D., & [Brahm]{}, R. 2019, [, 490, 2262](https://ui.adsabs.harvard.edu/abs/2019MNRAS.490.2262E)
. 2018, [VizieR Online Data Catalog, I/345](https://ui.adsabs.harvard.edu/abs/2018yCat.1345....0G)
, E., [Mann]{}, A. W., [Kraus]{}, A. L., & [Ireland]{}, M. 2016, [, 457, 2877](https://ui.adsabs.harvard.edu/abs/2016MNRAS.457.2877G)
, E., [Mann]{}, A. W., [L[é]{}pine]{}, S., [et al.]{} 2014, [, 443, 2561](https://ui.adsabs.harvard.edu/abs/2014MNRAS.443.2561G)
, H. L., [Burnham]{}, R., & [Thomas]{}, N. G. 1961, [Lowell Observatory Bulletin, 6, 61](https://ui.adsabs.harvard.edu/abs/1961LowOB...5...61G)
, J., [Bergmann]{}, C., [Bloxham]{}, G., [et al.]{} 2018, [, 10702, 107020Y](https://ui.adsabs.harvard.edu/abs/2018SPIE10702E..0YG)
, W. & [Jahrei[ß]{}]{}, H. [, [Preliminary Version of the Third Catalogue of Nearby Stars]{}, On: The Astronomical Data Center CD-ROM: Selected Astronomical Catalogs](https://ui.adsabs.harvard.edu/abs/1991adc..rept.....G)
, T. P., [Tokunaga]{}, A. T., [Toomey]{}, D. W., & [Carr]{}, J. B. 1993, [Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 1946, 313–324](https://ui.adsabs.harvard.edu/abs/1993SPIE.1946..313G)
, K. K., [Cushing]{}, M. C., [Muirhead]{}, P. S., & [Christiansen]{}, J. L. 2019, [, 158, 75](https://ui.adsabs.harvard.edu/abs/2019AJ....158...75H)
, A. & [CRIRES+ Team]{}. 2017, [American Astronomical Society Meeting Abstracts, 230, 117.02](https://ui.adsabs.harvard.edu/abs/2017AAS...23011702H)
, S. L., [Gizis]{}, J. E., & [Reid]{}, I. N. 1996, [, 112, 2799](https://ui.adsabs.harvard.edu/abs/1996AJ....112.2799H)
, T. J., [Jao]{}, W.-C., [Subasavage]{}, J. P., [et al.]{} 2006, [, 132, 2360](https://ui.adsabs.harvard.edu/abs/2006AJ....132.2360H)
, M. & [Heller]{}, R. [, [TLS: Transit Least Squares]{}](https://ui.adsabs.harvard.edu/abs/2019ascl.soft10007H)
, M., [Hormuth]{}, F., [Bergfors]{}, C., [et al.]{} 2012, [, 754, 44](https://ui.adsabs.harvard.edu/abs/2012ApJ...754...44J)
, S. V., [Sch[ö]{}fer]{}, P., [Lamert]{}, A., [et al.]{} 2018, [, 614, A76](https://ui.adsabs.harvard.edu/abs/2018A&A...614A..76J)
, H. 1946, [Proceedings of the Royal Society of London Series A, 186, 453](https://ui.adsabs.harvard.edu/abs/1946RSPSA.186..453J)
, A., [Trifonov]{}, T., [Caballero]{}, J. A., [et al.]{} 2018, [, 618, A115](https://ui.adsabs.harvard.edu/abs/2018A&A...618A.115K)
, J. F., [Whitmire]{}, D. P., & [Reynolds]{}, R. T. 1993, [, 101, 108](https://ui.adsabs.harvard.edu/abs/1993Icar..101..108K)
, J. D. & [McCarthy]{}, Donald W., J. 1994, [, 107, 333](https://ui.adsabs.harvard.edu/abs/1994AJ....107..333K)
, T., [Tamura]{}, M., [Suto]{}, H., [et al.]{} 2014, [, 9147, 914714](https://ui.adsabs.harvard.edu/abs/2014SPIE.9147E..14K)
, M., [Ribas]{}, I., [Lovis]{}, C., [et al.]{} 2020, [, 636, A36](https://ui.adsabs.harvard.edu/abs/2020A&A...636A..36L)
, S., [Baroch]{}, D., [Morales]{}, J. C., [et al.]{} 2019, [, 627, A116](https://ui.adsabs.harvard.edu/abs/2019A&A...627A.116L)
, S., [Hilton]{}, E. J., [Mann]{}, A. W., [et al.]{} 2013, [, 145, 102](https://ui.adsabs.harvard.edu/abs/2013AJ....145..102L)
, S., [Ramsey]{}, L. W., [Terrien]{}, R., [et al.]{} 2014, [, 9147, 91471G](https://ui.adsabs.harvard.edu/abs/2014SPIE.9147E..1GM)
, B. D., [Wycoff]{}, G. L., [Hartkopf]{}, W. I., [Douglass]{}, G. G., & [Worley]{}, C. E. 2001, [, 122, 3466](https://ui.adsabs.harvard.edu/abs/2001AJ....122.3466M)
, M., [Pepe]{}, F., [Queloz]{}, D., [et al.]{} 2003, [The Messenger, 114, 20](https://ui.adsabs.harvard.edu/abs/2003Msngr.114...20M)
, M. & [Queloz]{}, D. 1995, [, 378, 355](https://ui.adsabs.harvard.edu/abs/1995Natur.378..355M)
, J. C., [Mustill]{}, A. J., [Ribas]{}, I., [et al.]{} 2019, [Science, 365, 1441](https://ui.adsabs.harvard.edu/abs/2019Sci...365.1441M)
, E., [Czesla]{}, S., [Schmitt]{}, J. H. M. M., [et al.]{} 2019, [, 622, A153](https://ui.adsabs.harvard.edu/abs/2019A&A...622A.153N)
, E. R., [Charbonneau]{}, D., [Irwin]{}, J., [et al.]{} 2014, [, 147, 20](https://ui.adsabs.harvard.edu/abs/2014AJ....147...20N)
, E. R., [Irwin]{}, J., [Charbonneau]{}, D., [et al.]{} 2016, [, 821, 93](https://ui.adsabs.harvard.edu/abs/2016ApJ...821...93N)
, L., [Pall[é]{}]{}, E., [Salz]{}, M., [et al.]{} 2018, [Science, 362, 1388](https://ui.adsabs.harvard.edu/abs/2018Sci...362.1388N)
, L., [Oliva]{}, E., [Baffa]{}, C., [et al.]{} 2014, [, 9147, 91471E](https://ui.adsabs.harvard.edu/abs/2014SPIE.9147E..1EO)
, V. M., [Schweitzer]{}, A., [Shulyak]{}, D., [et al.]{} 2019, [, 627, A161](https://ui.adsabs.harvard.edu/abs/2019A&A...627A.161P)
, F. A., [Cristiani]{}, S., [Rebolo Lopez]{}, R., [et al.]{} 2010, [, 7735, 77350F](https://ui.adsabs.harvard.edu/abs/2010SPIE.7735E..0FP)
, A., [Amado]{}, P. J., [Caballero]{}, J. A., [et al.]{} 2014, [, 9147, 91471F](https://ui.adsabs.harvard.edu/abs/2014SPIE.9147E..1FQ)
, A., [Amado]{}, P. J., [Ribas]{}, I., [et al.]{} 2018, [, 10702, 107020W](https://ui.adsabs.harvard.edu/abs/2018SPIE10702E..0WQ)
, A. S., [Allard]{}, F., [Rajpurohit]{}, S., [et al.]{} 2018, [, 620, A180](https://ui.adsabs.harvard.edu/abs/2018A&A...620A.180R)
, A. S., [Reyl[é]{}]{}, C., [Allard]{}, F., [et al.]{} 2013, [, 556, A15](https://ui.adsabs.harvard.edu/abs/2013A&A...556A..15R)
, S. L., [Nave]{}, G., & [Sansonetti]{}, C. J. 2014, [, 211, 4](https://ui.adsabs.harvard.edu/abs/2014ApJS..211....4R)
, A. & [Zechmeister]{}, M. 2020, [, 247, 11](https://ui.adsabs.harvard.edu/abs/2020ApJS..247...11R)
, A., [Zechmeister]{}, M., [Caballero]{}, J. A., [et al.]{} 2018, [, 612, A49](https://ui.adsabs.harvard.edu/abs/2018A&A...612A..49R)
, I., [Tuomi]{}, M., [Reiners]{}, A., [et al.]{} 2018, [, 563, 365](https://ui.adsabs.harvard.edu/abs/2018Natur.563..365R)
, G. R., [Winn]{}, J. N., [Vanderspek]{}, R., [et al.]{} 2015, [Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003](https://ui.adsabs.harvard.edu/abs/2015JATIS...1a4003R)
, T. H. 1984, [, 89, 1229](https://ui.adsabs.harvard.edu/abs/1984AJ.....89.1229R)
, P., [Henning]{}, T., [K[ü]{}rster]{}, M., [et al.]{} 2018, [, 155, 257](https://ui.adsabs.harvard.edu/abs/2018AJ....155..257S)
, L. F., [Reiners]{}, A., [Huke]{}, P., [et al.]{} 2018, [, 618, A118](https://ui.adsabs.harvard.edu/abs/2018A&A...618A.118S)
, S., [Guenther]{}, E. W., [Reiners]{}, A., [et al.]{} 2018, [, 10702, 1070276](https://ui.adsabs.harvard.edu/abs/2018SPIE10702E..76S)
, S. & [Reiners]{}, A. 2012, [, 8446, 844694](https://ui.adsabs.harvard.edu/abs/2012SPIE.8446E..94S)
, P., [Jeffers]{}, S. V., [Reiners]{}, A., [et al.]{} 2019, [, 623, A44](https://ui.adsabs.harvard.edu/abs/2019A&A...623A..44S)
, A., [Passegger]{}, V. M., [Cifuentes]{}, C., [et al.]{} 2019, [, 625, A68](https://ui.adsabs.harvard.edu/abs/2019A&A...625A..68S)
, W., [S[á]{}nchez Carrasco]{}, M. A., [Xu]{}, W., [et al.]{} 2012, [, 8446, 844633](https://ui.adsabs.harvard.edu/abs/2012SPIE.8446E..33S)
, M. F., [Cutri]{}, R. M., [Stiening]{}, R., [et al.]{} 2006, [, 131, 1163](https://ui.adsabs.harvard.edu/abs/2006AJ....131.1163S)
, G., [Hearty]{}, F., [Robertson]{}, P., [et al.]{} 2016, [, 833, 175](https://ui.adsabs.harvard.edu/abs/2016ApJ...833..175S)
, B., [Marino]{}, A., [Micela]{}, G., [L[ó]{}pez-Santiago]{}, J., & [Liefke]{}, C. 2013, [, 431, 2063](https://ui.adsabs.harvard.edu/abs/2013MNRAS.431.2063S)
, J., [Stahl]{}, O., [Schwab]{}, C., [et al.]{} 2014, [, 9151, 915152](https://ui.adsabs.harvard.edu/abs/2014SPIE.9151E..52S)
, L., [Trifonov]{}, T., [Zucker]{}, S., [Mazeh]{}, T., & [Zechmeister]{}, M. 2019, [, 484, L8](https://ui.adsabs.harvard.edu/abs/2019MNRAS.484L...8T)
, J. C., [Backus]{}, P. R., [Mancinelli]{}, R. L., [et al.]{} 2007, [Astrobiology, 7, 30](https://ui.adsabs.harvard.edu/abs/2007AsBio...7...30T)
, R. C., [Mahadevan]{}, S., [Bender]{}, C. F., [Deshpande]{}, R., & [Robertson]{}, P. 2015, [, 802, L10](https://ui.adsabs.harvard.edu/abs/2015ApJ...802L..10T)
, R. C., [Mahadevan]{}, S., [Deshpande]{}, R., & [Bender]{}, C. F. 2015, [, 220, 16](https://ui.adsabs.harvard.edu/abs/2015ApJS..220...16T)
, T., [K[ü]{}rster]{}, M., [Zechmeister]{}, M., [et al.]{} 2018, [, 609, A117](https://ui.adsabs.harvard.edu/abs/2018A&A...609A.117T)
, T., [Tal-Or]{}, L., [Zechmeister]{}, M., [et al.]{} 2020, [, 636, A74](https://ui.adsabs.harvard.edu/abs/2020A&A...636A..74T)
, R. 2008, [Contemporary Physics, 49, 71](https://ui.adsabs.harvard.edu/abs/2008ConPh..49...71T)
, A. A., [Weisenburger]{}, K. L., [Irwin]{}, J., [et al.]{} 2015, [, 812, 3](https://ui.adsabs.harvard.edu/abs/2015ApJ...812....3W)
, F., [Blind]{}, N., [Reshetov]{}, V., [et al.]{} 2017, [, 10400, 1040018](https://ui.adsabs.harvard.edu/abs/2017SPIE10400E..18W)
, B. E., [Brown]{}, A., [Linsky]{}, J. L., [et al.]{} 1994, [, 93, 287](https://ui.adsabs.harvard.edu/abs/1994ApJS...93..287W)
, J. T. 2005, [, 117, 657](https://ui.adsabs.harvard.edu/abs/2005PASP..117..657W)
, M., [Anglada-Escud[é]{}]{}, G., & [Reiners]{}, A. 2014, [, 561, A59](https://ui.adsabs.harvard.edu/abs/2014A&A...561A..59Z)
, M., [Dreizler]{}, S., [Ribas]{}, I., [et al.]{} 2019, [, 627, A49](https://ui.adsabs.harvard.edu/abs/2019A&A...627A..49Z)
, M. & [K[ü]{}rster]{}, M. 2009, [, 496, 577](https://ui.adsabs.harvard.edu/abs/2009A&A...496..577Z)
, M., [K[ü]{}rster]{}, M., & [Endl]{}, M. 2009, [, 505, 859](https://ui.adsabs.harvard.edu/abs/2009A&A...505..859Z)
, M., [Reiners]{}, A., [Amado]{}, P. J., [et al.]{} 2018, [, 609, A12](http://adsabs.harvard.edu/abs/2018A%26A...609A..12Z)
, L., [Jacobsen]{}, S. B., [Sasselov]{}, D. D., [et al.]{} 2019, [Proceedings of the National Academy of Science, 116, 9723](https://ui.adsabs.harvard.edu/abs/2019PNAS..116.9723Z)
Cornerplot and RV table
=======================
{width="\linewidth"}
[@cccl@]{}
\
BJD & RV & $\delta$RV \[ms$^{-1}$\] & Instrument\
& \[ms$^{-1}$\] & \[ms$^{-1}$\] &\
\
BJD & RV & $\delta$RV \[ms$^{-1}$\] & Instrument\
& \[ms$^{-1}$\] & \[ms$^{-1}$\] &\
2457621.67888 & 3.22 & 1.86 & CARM-VIS\
2457642.68307 & 7.28 & 1.69 & CARM-VIS\
2457676.59512 & 3.76 & 1.49 & CARM-VIS\
2457677.59061 & -1.97 & 1.59 & CARM-VIS\
2457692.53769 & 3.19 & 1.28 & CARM-VIS\
2457703.50752 & -2.94 & 1.31 & CARM-VIS\
2457752.42983 & 5.13 & 1.28 & CARM-VIS\
2458001.57254 & -4.88 & 1.36 & CARM-VIS\
2458065.51706 & -6.79 & 1.19 & CARM-VIS\
2458078.44269 & -0.98 & 1.48 & CARM-VIS\
2458084.46363 & 8.32 & 1.91 & CARM-VIS\
2458109.44990 & 1.50 & 1.61 & CARM-VIS\
2458135.38699 & -0.66 & 1.83 & CARM-VIS\
2458355.59568 & 1.74 & 1.62 & CARM-VIS\
2458361.63224 & 7.15 & 1.48 & CARM-VIS\
2458385.64435 & -6.75 & 1.29 & CARM-VIS\
2458394.59634 & -6.33 & 1.42 & CARM-VIS\
2458397.62191 & -5.93 & 1.57 & CARM-VIS\
2458410.60644 & 1.40 & 5.71 & CARM-VIS\
2458413.58058 & -6.57 & 1.29 & CARM-VIS\
2458421.56441 & 6.21 & 3.66 & CARM-VIS\
2458424.67833 & -8.26 & 2.33 & CARM-VIS\
2458426.49175 & -5.30 & 1.70 & CARM-VIS\
2458433.54508 & -8.66 & 1.62 & CARM-VIS\
2458434.52675 & -3.67 & 1.42 & CARM-VIS\
2458435.59257 & -1.52 & 2.06 & CARM-VIS\
2458449.59827 & -6.92 & 1.63 & CARM-VIS\
2458450.48081 & -1.45 & 1.61 & CARM-VIS\
2458451.46339 & 3.37 & 1.54 & CARM-VIS\
2458454.48322 & -5.84 & 1.46 & CARM-VIS\
2458470.42316 & -7.54 & 1.57 & CARM-VIS\
2458474.42725 & 1.16 & 1.51 & CARM-VIS\
2458475.42822 & -4.47 & 1.56 & CARM-VIS\
2458476.41889 & 6.27 & 1.33 & CARM-VIS\
2458477.41990 & -6.84 & 1.49 & CARM-VIS\
2458478.40155 & 4.64 & 1.24 & CARM-VIS\
2458480.39332 & 0.75 & 1.52 & CARM-VIS\
2458481.38642 & 2.13 & 1.88 & CARM-VIS\
2458483.40355 & 4.68 & 1.54 & CARM-VIS\
2458484.37463 & -1.56 & 1.51 & CARM-VIS\
2458485.39910 & 6.57 & 1.16 & CARM-VIS\
2458486.38676 & -5.57 & 1.26 & CARM-VIS\
2458487.50047 & 4.54 & 1.45 & CARM-VIS\
2458488.37346 & 0.48 & 1.23 & CARM-VIS\
2458489.38051 & -3.03 & 1.09 & CARM-VIS\
2458490.38613 & 4.77 & 1.44 & CARM-VIS\
2458491.41350 & -3.64 & 1.45 & CARM-VIS\
2458492.38027 & 9.02 & 1.17 & CARM-VIS\
2458492.40442 & 6.87 & 1.11 & CARM-VIS\
2458493.38641 & 1.34 & 2.92 & CARM-VIS\
2458494.47998 & 10.92 & 2.84 & CARM-VIS\
2458495.38384 & -1.63 & 1.50 & CARM-VIS\
2458496.39060 & 2.20 & 1.26 & CARM-VIS\
2458497.37685 & 3.20 & 1.71 & CARM-VIS\
2458510.32361 & 5.17 & 1.23 & CARM-VIS\
2458518.29593 & 0.13 & 4.71 & CARM-VIS\
2458518.31966 & 1.03 & 6.37 & CARM-VIS\
2458518.39467 & 7.48 & 3.70 & CARM-VIS\
2458528.30334 & -3.74 & 0.97 & CARM-VIS\
2458529.29283 & 5.62 & 1.23 & CARM-VIS\
2458530.31263 & -7.25 & 1.39 & CARM-VIS\
2458532.31068 & -8.08 & 1.31 & CARM-VIS\
2458534.30066 & -4.26 & 1.60 & CARM-VIS\
2458535.30898 & -2.50 & 1.53 & CARM-VIS\
2458538.29198 & 3.39 & 1.35 & CARM-VIS\
2458539.31096 & -10.87 & 1.41 & CARM-VIS\
2458540.29356 & 1.70 & 1.48 & CARM-VIS\
2458541.31743 & -7.09 & 1.79 & CARM-VIS\
2458693.66944 & 3.18 & 1.30 & CARM-VIS\
2458709.66857 & 7.65 & 1.56 & CARM-VIS\
2458710.67818 & -1.07 & 1.34 & CARM-VIS\
2458714.67355 & 5.16 & 1.46 & CARM-VIS\
2458715.67651 & -5.36 & 1.33 & CARM-VIS\
2458721.68168 & 3.79 & 2.44 & CARM-VIS\
2458723.68616 & 9.43 & 1.66 & CARM-VIS\
2458725.68713 & 3.79 & 1.41 & CARM-VIS\
2458731.67728 & -6.95 & 1.33 & CARM-VIS\
2458735.66462 & -1.37 & 1.81 & CARM-VIS\
2458742.66198 & -6.07 & 1.79 & CARM-VIS\
2458744.64352 & 1.32 & 1.41 & CARM-VIS\
2458749.67624 & -4.62 & 2.37 & CARM-VIS\
2458756.59954 & -9.75 & 1.32 & CARM-VIS\
2458757.64116 & -0.25 & 1.37 & CARM-VIS\
2458758.61752 & -0.67 & 1.49 & CARM-VIS\
2458759.62519 & -1.73 & 1.69 & CARM-VIS\
2458760.62206 & 5.30 & 1.31 & CARM-VIS\
2458761.59804 & -4.53 & 1.35 & CARM-VIS\
2458762.61542 & 4.19 & 1.14 & CARM-VIS\
2458763.61099 & -3.39 & 1.58 & CARM-VIS\
2458764.61249 & 3.97 & 1.38 & CARM-VIS\
2458765.55554 & -2.36 & 1.37 & CARM-VIS\
2458766.55575 & -2.92 & 1.29 & CARM-VIS\
2458767.59472 & 9.57 & 1.50 & CARM-VIS\
2458769.59518 & 6.21 & 1.80 & CARM-VIS\
2458770.59142 & -3.15 & 1.77 & CARM-VIS\
2458774.58130 & 3.99 & 1.12 & CARM-VIS\
2458775.57588 & 0.43 & 1.57 & CARM-VIS\
2458777.56053 & -4.45 & 1.90 & CARM-VIS\
2458781.68039 & -0.59 & 1.83 & CARM-VIS\
2458783.53425 & 6.72 & 1.16 & CARM-VIS\
2458784.55096 & -3.91 & 1.43 & CARM-VIS\
2458785.53872 & 7.88 & 1.38 & CARM-VIS\
2458791.65437 & -10.12 & 3.37 & CARM-VIS\
2458796.53106 & 1.60 & 1.93 & CARM-VIS\
2458801.57402 & 9.55 & 2.00 & CARM-VIS\
2458804.52117 & 0.37 & 1.93 & CARM-VIS\
2457703.50776 & -5.59 & 4.35 & CARM-NIR\
2457752.42927 & 3.76 & 5.63 & CARM-NIR\
2458001.57240 & -2.16 & 6.73 & CARM-NIR\
2458043.56937 & 3.72 & 5.84 & CARM-NIR\
2458065.51751 & -8.03 & 6.32 & CARM-NIR\
2458078.44277 & 2.08 & 4.79 & CARM-NIR\
2458084.46385 & 15.44 & 5.31 & CARM-NIR\
2458109.44945 & 3.26 & 10.63 & CARM-NIR\
2458135.38740 & 10.34 & 5.35 & CARM-NIR\
2458355.59603 & -3.18 & 6.10 & CARM-NIR\
2458361.63291 & 10.05 & 9.58 & CARM-NIR\
2458379.62915 & 7.16 & 6.05 & CARM-NIR\
2458385.64406 & -7.43 & 4.43 & CARM-NIR\
2458394.59653 & -2.50 & 8.63 & CARM-NIR\
2458397.62206 & -0.87 & 5.17 & CARM-NIR\
2458410.60659 & -6.14 & 25.51 & CARM-NIR\
2458413.58128 & -2.61 & 5.46 & CARM-NIR\
2458421.56456 & 0.36 & 16.75 & CARM-NIR\
2458424.67752 & -0.46 & 9.16 & CARM-NIR\
2458426.49280 & -2.82 & 7.82 & CARM-NIR\
2458433.54451 & 2.10 & 5.00 & CARM-NIR\
2458434.52562 & 3.76 & 5.71 & CARM-NIR\
2458435.59277 & -13.85 & 15.38 & CARM-NIR\
2458449.59885 & -7.41 & 9.93 & CARM-NIR\
2458450.48087 & 1.14 & 5.72 & CARM-NIR\
2458451.46350 & 4.77 & 5.16 & CARM-NIR\
2458454.48317 & -9.99 & 7.38 & CARM-NIR\
2458470.42305 & -7.19 & 4.96 & CARM-NIR\
2458474.42722 & 11.65 & 10.78 & CARM-NIR\
2458475.42661 & -5.16 & 5.13 & CARM-NIR\
2458476.41720 & 12.88 & 3.99 & CARM-NIR\
2458477.41891 & 0.47 & 13.23 & CARM-NIR\
2458478.40329 & 2.85 & 6.48 & CARM-NIR\
2458480.39312 & 2.84 & 5.07 & CARM-NIR\
2458481.38660 & -5.09 & 10.12 & CARM-NIR\
2458483.40314 & -1.08 & 8.21 & CARM-NIR\
2458484.37506 & -3.56 & 5.73 & CARM-NIR\
2458485.40026 & 8.65 & 4.23 & CARM-NIR\
2458486.38572 & -4.78 & 5.71 & CARM-NIR\
2458487.49941 & 3.96 & 5.86 & CARM-NIR\
2458488.37376 & -1.29 & 4.74 & CARM-NIR\
2458489.37987 & -0.66 & 5.23 & CARM-NIR\
2458490.38516 & 9.30 & 6.46 & CARM-NIR\
2458491.41357 & -9.03 & 4.91 & CARM-NIR\
2458492.37997 & 9.75 & 4.61 & CARM-NIR\
2458492.40359 & 11.84 & 5.42 & CARM-NIR\
2458493.38592 & -20.43 & 8.08 & CARM-NIR\
2458494.47974 & -3.03 & 9.71 & CARM-NIR\
2458495.38245 & 2.30 & 5.43 & CARM-NIR\
2458496.35648 & -39.03 & 24.68 & CARM-NIR\
2458496.39021 & -0.62 & 5.57 & CARM-NIR\
2458497.37586 & 9.50 & 5.48 & CARM-NIR\
2458510.32343 & 1.48 & 14.66 & CARM-NIR\
2458518.29586 & -13.49 & 19.23 & CARM-NIR\
2458518.32058 & -14.25 & 22.85 & CARM-NIR\
2458518.34696 & -2.86 & 20.58 & CARM-NIR\
2458518.39461 & -9.08 & 23.16 & CARM-NIR\
2458528.30243 & -1.30 & 4.42 & CARM-NIR\
2458529.29228 & 8.03 & 4.13 & CARM-NIR\
2458530.31288 & -5.18 & 10.18 & CARM-NIR\
2458532.31050 & -5.71 & 5.31 & CARM-NIR\
2458534.30032 & -10.32 & 5.80 & CARM-NIR\
2458535.30817 & 2.28 & 5.50 & CARM-NIR\
2458538.29222 & 3.20 & 4.42 & CARM-NIR\
2458539.31097 & -18.41 & 5.25 & CARM-NIR\
2458540.29336 & -11.13 & 6.17 & CARM-NIR\
2458541.31701 & -5.54 & 6.73 & CARM-NIR\
2458693.67033 & 1.48 & 4.60 & CARM-NIR\
2458709.66834 & 5.66 & 6.12 & CARM-NIR\
2458710.67867 & -5.85 & 5.18 & CARM-NIR\
2458714.67398 & 4.15 & 4.84 & CARM-NIR\
2458715.67641 & -2.47 & 4.53 & CARM-NIR\
2458721.68158 & 29.70 & 15.35 & CARM-NIR\
2458723.68638 & 7.94 & 5.49 & CARM-NIR\
2458725.68696 & 0.28 & 6.22 & CARM-NIR\
2458731.67697 & -5.42 & 5.47 & CARM-NIR\
2458735.66467 & -0.67 & 6.70 & CARM-NIR\
2458742.66108 & -6.05 & 4.92 & CARM-NIR\
2458744.64351 & -0.73 & 4.47 & CARM-NIR\
2458749.67645 & -2.32 & 8.68 & CARM-NIR\
2458756.60037 & 0.33 & 5.08 & CARM-NIR\
2458757.64113 & 2.16 & 4.72 & CARM-NIR\
2458758.61806 & 2.00 & 7.48 & CARM-NIR\
2458759.62502 & -0.68 & 5.89 & CARM-NIR\
2458760.62214 & 2.79 & 4.49 & CARM-NIR\
2458761.59865 & -10.14 & 4.86 & CARM-NIR\
2458762.61566 & 1.87 & 4.47 & CARM-NIR\
2458763.61115 & -1.06 & 6.41 & CARM-NIR\
2458764.61238 & 4.06 & 4.52 & CARM-NIR\
2458765.55553 & -4.36 & 4.45 & CARM-NIR\
2458766.55566 & -4.20 & 4.43 & CARM-NIR\
2458767.59405 & 7.48 & 4.67 & CARM-NIR\
2458769.59525 & 6.90 & 5.92 & CARM-NIR\
2458770.59144 & 2.49 & 12.43 & CARM-NIR\
2458774.58063 & 9.77 & 3.99 & CARM-NIR\
2458775.57505 & 0.32 & 4.81 & CARM-NIR\
2458777.56011 & 8.96 & 12.66 & CARM-NIR\
2458781.67920 & 4.89 & 14.99 & CARM-NIR\
2458783.53402 & 8.36 & 4.64 & CARM-NIR\
2458784.55132 & -1.59 & 4.48 & CARM-NIR\
2458785.53831 & 12.58 & 4.39 & CARM-NIR\
2458791.65440 & -1.44 & 17.75 & CARM-NIR\
2458796.53116 & -2.64 & 6.39 & CARM-NIR\
2458801.57351 & 5.97 & 7.97 & CARM-NIR\
2458804.52074 & 1.17 & 8.00 & CARM-NIR\
2458747.72635 & -6.49 & 0.88 & ESPRESSO\
2458747.78703 & -6.31 & 0.74 & ESPRESSO\
2458747.84737 & -4.82 & 0.61 & ESPRESSO\
2458747.87837 & -5.01 & 0.71 & ESPRESSO\
2458747.88888 & -5.55 & 0.67 & ESPRESSO\
2458749.83137 & -5.20 & 0.58 & ESPRESSO\
2458749.86178 & -3.47 & 0.66 & ESPRESSO\
2458749.88207 & -4.27 & 0.58 & ESPRESSO\
2458750.73535 & 2.05 & 0.90 & ESPRESSO\
2458750.80646 & 3.80 & 0.75 & ESPRESSO\
2458750.83495 & 2.62 & 0.86 & ESPRESSO\
2458750.90479 & 6.75 & 0.81 & ESPRESSO\
2458751.68823 & 6.29 & 1.29 & ESPRESSO\
2458751.86500 & 1.61 & 1.04 & ESPRESSO\
2458753.79784 & 7.56 & 0.89 & ESPRESSO\
2458753.85050 & 7.93 & 0.83 & ESPRESSO\
2458753.87137 & 8.61 & 0.78 & ESPRESSO\
[^1]: Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, Almería, Spain, operated jointly by the Junta de Andalucía and the Instituto de Astrofísica de Andalucía (CSIC).
[^2]: Based on observations collected at the European Southern Observatory, Paranal, Chile, under program 0103.C-0152(A), and La Silla, Chile, under programs 072.C-0488(E) and 183.C-0437(A).
[^3]: Calar Alto high-Resolution search for M dwarfs with Exoearths with Near-infrared and optical Échelle Spectrographs.
[^4]: Cryogenic Echelle Spectrograph.
[^5]: SPectropolarimètre InfraROUge.
[^6]: InfraRed Doppler for the Subaru telescope.
[^7]: Habitable zone Planet Finder.
[^8]: Near Infra Red Planet Searcher.
[^9]: CRyogenic high-resolution InfraRed Echelle Spectrograph Plus.
[^10]: We verified our search with the recovery of AB (A 2032), the only star pair tabulated by the Washington Double Star catalog [@2001AJ....122.3466M] in several degrees around CD Cet.
[^11]: <http://archive.eso.org/eso/eso_archive_main.html>
[^12]: <https://www.cfa.harvard.edu/MEarth/DataDR8.html>
[^13]: Taken from [exoplanet.eu](exoplanet.eu).
|
---
abstract: 'The human braingraph, or connectome is a description of the connections of the brain: the nodes of the graph correspond to small areas of the gray matter, and two nodes are connected by an edge if a diffusion MRI-based workflow finds fibers between those brain areas. We have constructed 1015-vertex graphs from the diffusion MRI brain images of 392 human subjects and compared the individual graphs with respect to several different areas of the brain. The inter-individual variability of the graphs within different brain regions was discovered and described. We have found that the frontal and the limbic lobes are more conservative, while the edges in the temporal and occipital lobes are more diverse. Interestingly, a “hybrid” conservative and diverse distribution was found in the paracentral lobule and the fusiform gyrus. Smaller cortical areas were also evaluated: precentral gyri were found to be more conservative, and the postcentral and the superior temporal gyri to be very diverse.'
address:
- 'PIT Bioinformatics Group, Eötvös University, H-1117 Budapest, Hungary'
- 'Uratim Ltd., H-1118 Budapest, Hungary'
author:
- Csaba Kerepesi
- Balázs Szalkai
- Bálint Varga
- Vince Grolmusz
title: 'Comparative Connectomics: Mapping the Inter-Individual Variability of Connections within the Regions of the Human Brain'
---
Introduction
============
Large co-operative research projects, such as the Human Connectome Project [@McNab2013], produce high-quality MRI-imaging data of hundreds of healthy individuals. The comparison of the connections of the brains of the subjects is a challenging problem that may open numerous research directions. In the present work we map the variability of the connections within different brain areas in 392 human subjects, in order to discover brain areas with higher variability in their connections or other brain regions with more conservative connections.
The braingraphs or connectomes are the well-structured discretizations of the diffusion MRI imaging data that yield new possibilities for the comparison of the connections between distinct brain areas in different subjects [@Ingalhalikar2014b; @Szalkai2015] or for finding common connections in distinct cerebra [@Szalkai2015a], forming a common, consensus human braingraph.
Here, by using the data of the Human Connectome Project [@McNab2013], we describe, by their distribution functions, the inter-individual diversity of the braingraph connections in separate brain areas in 392 healthy subjects of ages between 22 and 35 years.
Since every brain is unique, the workflow that produces the braingraphs consists of several steps, including a diffeomorphism [@Hirsch1997] of the brain atlas to the brain-image processed. After the diffeomorphism, corresponding areas of different human brains are pairwise identified through the atlas and, consequently, can be compared with one another. The braingraphs, with nodes in the corresponded brain areas, are prepared from the diffusion MRI images of the individual cerebra through a workflow detailed in the “Methods” section. Every braingraph studied contains 1015 nodes (or vertices). The vertices correspond to the subdivision of anatomical gray matter areas in cortical and subcortical regions. For the list of the regions and the number of nodes in each region, we refer to Table S1 and Figure S1 in the Appendix.
Next, we describe the variability, or the distribution of the graph edges in each brain region, and also in each lobe. Figure 1 contains a simplified example with three small graphs (1, 2, 3), each with only two regions (A & B). The example clarifies the method, the way the results are presented through a distribution function, and the diagrams describing these functions.
For any fixed brain area, and for any $x: 0\leq x\leq 1$, let $F(x)$ denote the fraction of the edges[^1] in the fixed area[^2] that are present in at most the fraction $x$ of all braingraphs, (for a more exact definition of $F(x)$ we refer to the “Methods” section). We note that $F(x)$ is a cumulative distribution function [@Feller2008] of a random variable described in the “Methods” section.
![A simple example of computing the edge distribution between brain areas. In the example, there are three “braingraphs”, each with two areas: $A$ and $B$. We intend to count the edges that are present in all three graphs, only in two graphs and only in a single graph, respectively (between the same nodes, but in different graphs). For example, the copies of edge $e$ are present in all three $A$ areas, copies of edge $h$ in all three $B$ areas, copies of edge $g$ in two $B$ areas and edge $f$ is present only in $B1$. The edges crossing the boundary of $A$ and $B$ (colored green) are ignored when counting the edge distribution within the areas $A$ and $B$. In area $A$, two edges are present once, two edges twice and also two edges (including edge $e$) exactly three times. In area $B$, two edges (including $f$) are present once, four edges (including $g$) twice and one edge – $h$ – three times. In the diagram on the bottom, we give the $F(x)$ distribution functions for both areas. On axis $x$, the fractions of the graphs are given, 1/3 correspond to one graph, 2/3 for two and 1.0 for all three graphs. $F(x)$ is defined as the fraction of the edges in the fixed area that are present in at most the fraction $x$ of all braingraphs. Data points corresponding to area $A$ are on the same blue line (1/3, 2/3, 1) and those, corresponding to area $B$ are on the broken, red line (2/7, 6/7, 1). We remark that if all three graphs are the same, then the data points are (0,0,1) (the extremely conservative case, orange line). Similarly, if no two graphs have the same edges, the data points are (1,1,1) (that is the extremely diverse case, green line). This type of diagram is used for the presentation of the results of the distribution of the edges in separate areas of the brain: The faster the line reaches the top $F(x)=1$ value, the more diverse is the edge set in the corresponding brain area. We also note that in the diagram the lines connect the data points corresponding to the discrete values on axis $x$, and [*do not*]{} describe the step-function $F(x)$ [*between*]{} the data points: we have chosen this visualization method because of its clarity even if a higher number of areas are shown (c.f. Figures 2 and 3 with numerous crossing lines).[]{data-label="egy"}](Figure_1.png){width="140mm"}
Results and Discussion
======================
Table 1 summarizes the edge diversity results for the 392 graphs for the lobes of the brain, described by the distribution functions $F(x)$. The last column contains the data for the whole brain with 1015 nodes and 70,652 edges. The sum of the edges of the lobes in Table 1 is 30,326: these edges have both endpoints in the same lobe. More than forty thousand edges are present and accounted for only in the last column, because these edges connect nodes from different lobes. Therefore, the values in the last column cannot be derived from the other columns, since that column contains the contribution of edges that do not contribute to any other columns.
We want to find out which brain areas are more conservative and which are more diverse than the others. We suggest designating an area as “conservative” if, for most $x$ values, its $F(x)$ distribution function is less than the $F(x)$ of the whole brain, given in the last column. Likewise, we designate an area as “diverse” if, for most $x$ values, its $F(x)$ distribution function is greater than the $F(x)$ of the whole brain.
The most conservative lobes are the smallest ones: the brainstem, the thalamus and the basal ganglia contain only 1, 2 and 8 nodes, resp., and most of the edges in those regions are present in almost all braingraphs. If we take the average number of the braingraphs containing an edge from those regions, we get 316, 390 and 213 graphs, resp.
It is much more interesting to review the diversity of the connections in larger areas. The frontal and the limbic lobes are conservative for most values of $x$ (i.e., their $F(x)$ values are less than those of the last column), while the temporal and the occipital lobes are diverse for larger $x$’s. The distribution of the edges in the fusiform gyrus is particularly interesting: 46% of its edges are present in more than 10% of the graphs, which means that this is a conservative brain area in that parameter domain, compared to the other lobes. The fusiform gyrus remains conservative for $x=0.2$ and even for $x=0.3$, but only 0.7% of its edges are present in more than 50% of the graphs. That means that some edges of the fusiform gyrus are well conserved, while other parts of it are very diverse.
{width="130mm"}
{width="130mm"}
Table 2 summarizes the diversity results for those cortical areas which have more than 222 edges (see Table S2 in the Appendix for the edge numbers).
{width="130mm"}
{width="130mm"}
Methods
=======
We have worked with a subset of the anonymized 500 Subjects Release published by the Human Connectome Project [@McNab2013]: (<http://www.humanconnectome.org/documentation/S500>) of healthy subjects between 22 and 35 years of age. Data were downloaded in October, 2014.
We have applied the Connectome Mapper Toolkit [@Daducci2012] (<http://cmtk.org>) for brain tissue segmentation, partitioning, tractography and the construction of the graphs. The fibers were identified in the tractography step. The program FreeSurfer was used to partition the images into 1015 cortical and sub-cortical structures (Regions of Interest, abbreviated: ROIs), and was based on the Desikan-Killiany anatomical atlas [@Daducci2012](see Figure 4 in [@Daducci2012]). Tractography was performed by the Connectome Mapper Toolkit [@Daducci2012], using the MRtrix processing tool [@Tournier2012] and choosing the deterministic streamline method with randomized seeding.
The graphs were constructed as follows: the 1015 nodes correspond to the 1015 ROIs, and two nodes were connected by an edge if there exists at least one fiber connecting the ROIs corresponding to the nodes.
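A minimal sketch of this construction is given below; it is our illustration rather than code from the Connectome Mapper Toolkit, and it assumes that the tractography output has already been reduced to a list of fiber endpoint ROI labels (a hypothetical input format).

```python
def build_edge_set(fibers):
    """Collapse a tractography result into an unweighted, undirected edge set.

    fibers -- iterable of (roi_a, roi_b) pairs, one pair of ROI labels per
              reconstructed fiber (hypothetical format).
    Returns a set of frozensets, one per connected ROI pair: an edge exists
    as soon as at least one fiber connects the two ROIs, and parallel fibers
    do not create additional edges.
    """
    edges = set()
    for roi_a, roi_b in fibers:
        if roi_a != roi_b:                  # skip fibers staying inside one ROI
            edges.add(frozenset((roi_a, roi_b)))
    return edges
```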
The distribution function
-------------------------
The variability of the edges in regions or lobes are described by cumulative distribution functions (CDF) (also called just the “distribution function”) of the edges [@Feller2008]. The general definition of the CDF is as follows:
Let $Y$ be a real-valued random variable. Then $$F(x)=Pr(Y\leq x)$$ defines the cumulative distribution function of $Y$ for real $x$ values.
For example, if $a$ is the maximum value of $Y$ then $F(a)=1$, and if $b$ is less than the minimum value of $Y$, then $F(b)=0$.
CDFs are used in the following way: Suppose that our cohort consists of $n$ persons’ braingraphs (in the present work $n=392$). For a given, fixed brain area, our random variable $Y$ takes on values $Y=u/n, u=0,1,\ldots,n$. The equation $Y=u/n$ corresponds to the event that a uniformly, randomly chosen edge is in exactly $u$ graphs from the $n$ possible ones, and the probability $Pr(Y=u/n)$ gives the probability of this event. Or, in other words, the equation $Y=u/n$ corresponds to the set of edges — with both nodes in the fixed brain area — which are present in exactly $u$ braingraphs, and the probability $Pr(Y=u/n)$ gives the fraction of the edges that are present in exactly $u$ braingraphs. Therefore, $F(x)=Pr(Y\leq x)$ gives the fraction (i.e., the probability) of the edges that are present in at most a fraction $x$ of all the graphs.
The numbers of nodes and edges in each brain region are given in supporting Tables S1 and S2 in the Appendix. We remark that we counted the edges without multiplicities: that is, whether an edge $e$ was present in, say, 42 copies or in just 1 copy in a braingraph, in both cases we counted it only once.
The distributions were computed by counting the number of appearances of each edge in all the 392 braingraphs. Then the distribution of these numbers were evaluated in lobes and smaller cortical areas.
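The counting itself is straightforward; the following sketch (our illustration, with hypothetical variable names) computes the region-restricted distribution function $F$ from the per-subject edge sets produced, for example, by the `build_edge_set` helper sketched in the previous subsection.

```python
from collections import Counter

def edge_cdf(edge_sets, region):
    """Data points of F(x) for the edges lying inside one brain region.

    edge_sets -- one edge set per subject (n = 392 in this work); every edge
                 is a frozenset of the two ROI labels it connects
    region    -- set of ROI labels defining the brain area of interest
    Returns the points (u/n, F(u/n)) for u = 1, ..., n.
    """
    n = len(edge_sets)
    counts = Counter()
    for edges in edge_sets:
        for e in edges:
            if e <= region:            # both endpoints lie inside the region
                counts[e] += 1         # each edge is counted once per braingraph
    total = len(counts)                # number of distinct edges in the region
    if total == 0:
        return []
    return [(u / n, sum(1 for c in counts.values() if c <= u) / total)
            for u in range(1, n + 1)]
```

On the toy data of Figure \[egy\] (three graphs with areas $A$ and $B$) this reproduces the data points $(1/3, 2/3, 1)$ and $(2/7, 6/7, 1)$ quoted in the caption, and the mean of `counts.values()` gives the average occurrence numbers cited for the brainstem, the thalamus and the basal ganglia.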
Conclusions
============
To the best of our knowledge, we have mapped for the first time the inter-individual variability of the braingraph edges in different cortical areas. We have found more and less conservative areas of the brain: for example, the frontal lobes are conservative, while the superior temporal and the postcentral gyri are very diverse. The fusiform gyrus and the paracentral lobule have shown both conservative and diverse distributions, depending on the range of the parameters.
Data availability: {#data-availability .unnumbered}
==================
The unprocessed and pre-processed MRI data are available at the Human Connectome Project’s website:
http://www.humanconnectome.org/documentation/S500 [@McNab2013].
The assembled graphs that were analyzed in the present work can be accessed and downloaded at the site
<http://braingraph.org/download-pit-group-connectomes/>.
Acknowledgments {#acknowledgments .unnumbered}
===============
Data were provided in part by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University.
[8]{} \[1\][\#1]{} \[1\][`#1`]{} urlstyle \[1\][doi: \#1]{}
Jennifer A. McNab, Brian L. Edlow, Thomas Witzel, Susie Y. Huang, Himanshu Bhat, Keith Heberlein, Thorsten Feiweier, Kecheng Liu, Boris Keil, Julien Cohen-Adad, M Dylan Tisdall, Rebecca D. Folkerth, Hannah C. Kinney, and Lawrence L. Wald. The [H]{}uman [C]{}onnectome [P]{}roject and beyond: initial applications of 300 m[T]{}/m gradients. *Neuroimage*, 80:0 234–245, Oct 2013. [doi: ]{}[10.1016/j.neuroimage.2013.05.074]{}. URL <http://dx.doi.org/10.1016/j.neuroimage.2013.05.074>.
Madhura Ingalhalikar, Alex Smith, Drew Parker, Theodore D. Satterthwaite, Mark A. Elliott, Kosha Ruparel, Hakon Hakonarson, Raquel E. Gur, Ruben C. Gur, and Ragini Verma. Sex differences in the structural connectome of the human brain. *Proc Natl Acad Sci U S A*, 1110 (2):0 823–828, Jan 2014. [doi: ]{}[10.1073/pnas.1316909110]{}. URL <http://dx.doi.org/10.1073/pnas.1316909110>.
Bal[á]{}zs Szalkai, B[á]{}lint Varga, and Vince Grolmusz. Graph theoretical analysis reveals: Women’s brains are better connected than men’s. *PLOS One*, 100 (7):0 e0130045, July 2015. [doi: ]{}[doi:10.1371/journal.pone.0130045]{}. URL <http://dx.plos.org/10.1371/journal.pone.0130045>.
Bal[á]{}zs Szalkai, Csaba Kerepesi, B[á]{}lint Varga, and Vince Grolmusz. The [B]{}udapest [R]{}eference [C]{}onnectome [S]{}erver v2. 0. *Neuroscience Letters*, 595:0 60–62, 2015.
Morris Hirsch. *Differential Topology*. Springer-Verlag, 1997. ISBN 978-0-387-90148-0.
Willliam Feller. *An introduction to probability theory and its applications*. John Wiley & Sons, 2008.
Alessandro Daducci, Stephan Gerhard, Alessandra Griffa, Alia Lemkaddem, Leila Cammoun, Xavier Gigandet, Reto Meuli, Patric Hagmann, and Jean-Philippe Thiran. The connectome mapper: an open-source processing pipeline to map connectomes with [MRI]{}. *PLoS One*, 70 (12):0 e48121, 2012. [doi: ]{}[10.1371/journal.pone.0048121]{}. URL <http://dx.doi.org/10.1371/journal.pone.0048121>.
J Tournier, Fernando Calamante, Alan Connelly, et al. Mrtrix: diffusion tractography in crossing fiber regions. *International Journal of Imaging Systems and Technology*, 220 (1):0 53–66, 2012.
Appendix {#appendix .unnumbered}
========
Abbreviations: ctx-rh: cortex right-hemisphere ctx-lh: cortex left-hemisphere
[ | l | l | ]{} Area name & No. Of nodes\
& \
ctx-lh-superiorfrontal & 45\
ctx-rh-superiorfrontal & 42\
ctx-rh-precentral & 36\
ctx-lh-precentral & 35\
ctx-lh-postcentral & 31\
ctx-rh-postcentral & 30\
ctx-lh-superiorparietal & 29\
ctx-rh-superiorparietal & 29\
ctx-rh-rostralmiddlefrontal & 27\
ctx-lh-superiortemporal & 26\
ctx-lh-rostralmiddlefrontal & 26\
ctx-rh-inferiorparietal & 26\
ctx-rh-superiortemporal & 25\
ctx-rh-lateraloccipital & 23\
ctx-rh-precuneus & 23\
ctx-lh-lateraloccipital & 23\
ctx-lh-precuneus & 22\
ctx-lh-inferiorparietal & 22\
ctx-lh-supramarginal & 21\
ctx-rh-supramarginal & 20\
ctx-rh-middletemporal & 19\
ctx-lh-fusiform & 18\
ctx-rh-lateralorbitofrontal & 17\
ctx-rh-fusiform & 17\
ctx-rh-lingual & 17\
ctx-lh-insula & 17\
ctx-lh-lingual & 17\
ctx-lh-inferiortemporal & 16\
ctx-rh-insula & 16\
ctx-rh-inferiortemporal & 16\
ctx-lh-middletemporal & 16\
ctx-lh-lateralorbitofrontal & 16\
ctx-lh-caudalmiddlefrontal & 13\
ctx-rh-paracentral & 12\
ctx-lh-paracentral & 11\
ctx-rh-caudalmiddlefrontal & 11\
ctx-rh-medialorbitofrontal & 11\
ctx-lh-medialorbitofrontal & 10\
ctx-lh-parsopercularis & 10\
ctx-lh-posteriorcingulate & 9\
ctx-rh-posteriorcingulate & 9\
ctx-rh-parsopercularis & 9\
ctx-rh-parstriangularis & 8\
ctx-rh-cuneus & 8\
ctx-rh-pericalcarine & 8\
ctx-lh-cuneus & 7\
ctx-lh-pericalcarine & 7\
ctx-lh-isthmuscingulate & 7\
ctx-lh-parstriangularis & 7\
ctx-rh-parahippocampal & 6\
ctx-lh-bankssts & 6\
ctx-rh-caudalanteriorcingulate & 6\
ctx-rh-isthmuscingulate & 6\
ctx-lh-parahippocampal & 6\
ctx-rh-bankssts & 6\
ctx-lh-rostralanteriorcingulate & 5\
ctx-lh-caudalanteriorcingulate & 5\
ctx-rh-parsorbitalis & 4\
ctx-lh-transversetemporal & 4\
ctx-lh-parsorbitalis & 4\
ctx-rh-rostralanteriorcingulate & 4\
ctx-lh-entorhinal & 3\
ctx-lh-temporalpole & 3\
ctx-rh-temporalpole & 3\
ctx-rh-transversetemporal & 3\
ctx-lh-frontalpole & 2\
ctx-rh-entorhinal & 2\
ctx-rh-frontalpole & 2\
Left-Thalamus-Proper & 1\
Left-Amygdala & 1\
Right-Hippocampus & 1\
Right-Amygdala & 1\
Right-Putamen & 1\
Right-Accumbens-area & 1\
Left-Hippocampus & 1\
Left-Pallidum & 1\
Right-Pallidum & 1\
Right-Thalamus-Proper & 1\
Left-Putamen & 1\
Right-Caudate & 1\
Left-Caudate & 1\
Left-Accumbens-area & 1\
Brain-Stem & 1\
& \
Sum of nodes & 1015\
{width="150mm"}
[ | l | l | ]{} ’all’ & 70652\
’ctx-lh-superiorfrontal’ & 910\
’ctx-rh-superiorfrontal’ & 774\
’ctx-rh-precentral’ & 500\
’ctx-lh-precentral’ & 448\
’ctx-rh-rostralmiddlefrontal’ & 352\
’ctx-rh-inferiorparietal’ & 340\
’ctx-lh-rostralmiddlefrontal’ & 331\
’ctx-lh-superiorparietal’ & 317\
’ctx-rh-superiorparietal’ & 314\
’ctx-lh-postcentral’ & 305\
’ctx-rh-postcentral’ & 273\
’ctx-rh-lateraloccipital’ & 263\
’ctx-lh-lateraloccipital’ & 254\
’ctx-lh-superiortemporal’ & 250\
’ctx-lh-inferiorparietal’ & 242\
’ctx-rh-superiortemporal’ & 228\
’ctx-rh-precuneus’ & 227\
’ctx-lh-precuneus’ & 222\
’ctx-lh-supramarginal’ & 209\
’ctx-rh-supramarginal’ & 206\
’ctx-rh-middletemporal’ & 176\
’ctx-lh-fusiform’ & 157\
’ctx-rh-lateralorbitofrontal’ & 144\
’ctx-lh-inferiortemporal’ & 135\
’ctx-rh-insula’ & 131\
’ctx-lh-lingual’ & 131\
’ctx-rh-fusiform’ & 130\
’ctx-rh-inferiortemporal’ & 130\
’ctx-lh-lateralorbitofrontal’ & 127\
’ctx-lh-insula’ & 125\
’ctx-lh-middletemporal’ & 119\
’ctx-rh-lingual’ & 114\
’ctx-lh-caudalmiddlefrontal’ & 91\
’ctx-rh-paracentral’ & 76\
’ctx-rh-caudalmiddlefrontal’ & 65\
’ctx-lh-paracentral’ & 64\
’ctx-rh-medialorbitofrontal’ & 59\
’ctx-lh-parsopercularis’ & 55\
’ctx-lh-medialorbitofrontal’ & 54\
’ctx-lh-posteriorcingulate’ & 45\
’ctx-rh-parsopercularis’ & 45\
’ctx-rh-posteriorcingulate’ & 43\
’ctx-rh-parstriangularis’ & 36\
’ctx-rh-cuneus’ & 35\
’ctx-rh-pericalcarine’ & 35\
’ctx-lh-cuneus’ & 28\
’ctx-lh-pericalcarine’ & 28\
’ctx-lh-isthmuscingulate’ & 28\
’ctx-lh-parstriangularis’ & 28\
’ctx-lh-bankssts’ & 21\
’ctx-rh-caudalanteriorcingulate’ & 21\
’ctx-lh-parahippocampal’ & 21\
’ctx-rh-parahippocampal’ & 20\
’ctx-rh-isthmuscingulate’ & 20\
’ctx-rh-bankssts’ & 20\
’ctx-lh-rostralanteriorcingulate’ & 15\
’ctx-lh-caudalanteriorcingulate’ & 15\
’ctx-rh-parsorbitalis’ & 10\
’ctx-lh-parsorbitalis’ & 10\
’ctx-rh-rostralanteriorcingulate’ & 10\
’ctx-lh-transversetemporal’ & 8\
’ctx-lh-entorhinal’ & 6\
’ctx-rh-transversetemporal’ & 5\
’ctx-lh-temporalpole’ & 4\
’ctx-rh-entorhinal’ & 3\
’ctx-rh-temporalpole’ & 3\
’Left-Thalamus-Proper’ & 1\
’Left-Amygdala’ & 1\
’ctx-lh-frontalpole’ & 1\
’Right-Hippocampus’ & 1\
’Right-Amygdala’ & 1\
’ctx-rh-frontalpole’ & 1\
’Right-Putamen’ & 1\
’Right-Accumbens-area’ & 1\
’Left-Hippocampus’ & 1\
’Left-Pallidum’ & 1\
’Right-Pallidum’ & 1\
’Right-Thalamus-Proper’ & 1\
’Left-Putamen’ & 1\
’Right-Caudate’ & 1\
’Left-Caudate’ & 1\
’Left-Accumbens-area’ & 1\
’Brainstem’ & 1\
[^1]: i.e., the number of the edges in question, divided by the number of all edges in the fixed area;
[^2]: i.e., with both vertices in the fixed area;
|
---
author:
- 'Maan A. Rasheed and Miroslav Chlebik'
title: 'The Blow-up Rate Estimates for a System of Heat Equations with Nonlinear Boundary Conditions'
---
This paper deals with the blow-up properties of positive solutions to a system of two heat equations $u_t= \Delta u,~v_t=\Delta v$ in $B_R \times (0,T)$ with Neumann boundary conditions $\frac{\partial u}{\partial \eta} =e^{v^p},~ \frac{\partial v}{\partial \eta}=e^{u^q}$ on $\partial B_R \times (0,T),$ where $p,q>1,$ $B_R$ is a ball in $R^n,$ and $\eta$ is the outward normal. Upper bounds for the blow-up rate estimates are obtained. It is also proved that the blow-up occurs only on the boundary.
**Introduction**
================
In this paper, we consider the system of two heat equations with coupled nonlinear Neumann boundary conditions, namely $$\label{c1}\left. \begin{array}{lll}
u_t= \Delta u,& v_t=\Delta v,& (x,t)\in B_R \times (0,T),\\
\frac{\partial u}{\partial \eta} =e^{v^p}, & \frac{\partial v}{\partial \eta}=e^{u^q},& (x,t)\in \partial B_R \times (0,T),\\
u(x,0)=u_0(x),& v(x,0)=v_0(x),& x \in {B}_R,
\end{array} \right\}$$ where $p,q>1,$ $B_R$ is a ball in $R^n,$ $\eta$ is the outward normal, $u_0,v_0$ are smooth, radially symmetric, nonzero, nonnegative functions satisfy the condition $$\label{Mo}\Delta u_0,\Delta u_0\ge 0,\quad u_{0r}(|x|),v_{0r}(|x|)\ge 0,\quad x \in \overline{B}_R.$$
The problem of a system of two heat equations with nonlinear Neumann boundary conditions, defined in a ball,
$$\label{3}\left. \begin{array}{lll}
u_t= \Delta u,& v_t=\Delta v,& (x,t)\in B_R \times (0,T),\\
\frac{\partial u}{\partial \eta} =f(v), & \frac{\partial v}{\partial \eta}=g(u),& (x,t)\in \partial B_R \times (0,T),\\
u(x,0)=u_0(x),& v(x,0)=v_0(x),& x \in {B}_R,
\end{array} \right\}$$
was introduced in [@19; @7; @20; @18]. For instance, in [@19] the blow-up solutions to the system (\[3\]) were studied, where $$\label{c11}
f(v) =v^p, \quad g(u) =u^q, \quad p,q>1.$$ It was proved that for any nonzero, nonnegative initial data $(u_0,v_0),$ the finite time blow-up can only occur on the boundary; moreover, it was shown in [@20] that the blow-up rate estimates take the following form $$c\le \max_{x\in\overline{\Omega}}u(x,t)(T-t)^{\frac{p+1}{2(pq-1)} }\le C,\quad t\in(0,T),$$ $$c\le\max_{x\in\overline{\Omega}}v(x,t)(T-t)^{\frac{q+1}{2(pq-1)} }\le C,\quad t\in(0,T).$$
In [@7; @18], the solutions of the system (\[3\]) were considered with the exponential Neumann boundary conditions, namely $$\label{c12}
f(v) =e^{pv}, \quad g(u) =e^{qu},\quad p,q>0.$$ It was proved that for any nonzero, nonnegative initial data $(u_0,v_0),$ the solution blows up in finite time and the blow-up occurs only on the boundary; moreover, the blow-up rate estimates take the following forms $$C_1\le e^{qu(R,t)}(T-t)^{1/2} \le C_2, \quad C_3\le e^{pv(R,t)}(T-t)^{1/2} \le C_4.$$
In this paper, we prove that the upper blow-up rate estimates for problem (\[c1\]) take the following form $$\begin{aligned}
\max_{ \overline{B}_R}u(x,t)\le \log C_1-\frac{\alpha}{2} \log(T-t),\quad 0<t<T, \\ \max_{ \overline{B}_R}v(x,t)\le \log C_2-\frac{\beta}{2} \log (T-t),\quad 0<t<T, \end{aligned}$$ where $\alpha=\frac{p+1}{pq-1},\beta=\frac{q+1}{pq-1}.$ Moreover, the blow-up occurs only on the boundary.
Preliminaries
=============
The local existence and uniqueness of classical solutions to problem (\[c1\]) are well known from [@47]. On the other hand, every nontrivial solution blows up simultaneously in finite time; this follows from the known blow-up results for problem (\[3\]) with (\[c11\]) and the comparison principle [@47].
In the following lemma we study some properties of the classical solutions of problem (\[c1\]). We denote for simplicity $u(r,t)=u(x,t).$
\[c\] Let $(u,v)$ be the unique classical solution of (\[c1\]). Then
1. $u, v$ are positive, radial. Moreover, $u_r ,v_r \ge 0$ in $[0,R]\times (0,T).$
2. $u_t,v_t>0$ in $\overline B_R\times (0,T).$
Rate Estimates
==============
In order to study the upper blow-up rate estimates for problem (\[c1\]), we need to recall some results from [@23; @20].
[**[@20]**]{}\[d\] Let $A(t)$ and $B(t)$ be positive $C^1$ functions in $[0,T)$ and satisfy $$A^{'} (t)\ge c \frac{B^p(t)}{\sqrt{T-t}},\quad B^{'} (t)\ge c \frac{A^q(t)}{\sqrt{T-t}} \quad \mbox{for} \quad t \in [0,T),$$ $$A(t) \longrightarrow + \infty \quad \mbox{or} \quad B(t) \longrightarrow + \infty \quad \mbox{as} \quad t \longrightarrow T^{-},$$ where $p,q>0,c>0 $ and $pq>1.$ Then there exists $C>0$ such that $$A(t) \le C(T-t)^{-\alpha /2}, \quad B(t) \le C(T-t)^{-\beta /2}, \quad t \in[0,T),$$ where $\alpha=\frac{p+1}{pq-1},\beta=\frac{q+1}{pq-1}.$
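As a quick consistency check of the exponents in Lemma \[d\] (this remark is ours and is not part of the cited result), note that the bounds $A(t)\sim C(T-t)^{-\alpha /2}$ and $B(t)\sim C(T-t)^{-\beta /2}$ exactly balance the differential inequalities: $$A^{'}(t)\sim (T-t)^{-\frac{\alpha}{2}-1},\qquad \frac{B^p(t)}{\sqrt{T-t}}\sim (T-t)^{-\frac{p\beta}{2}-\frac{1}{2}},$$ and the two exponents coincide if and only if $\alpha+1=p\beta$; indeed, $$\alpha+1=\frac{p+1}{pq-1}+1=\frac{p(q+1)}{pq-1}=p\beta, \qquad \beta+1=\frac{q+1}{pq-1}+1=\frac{q(p+1)}{pq-1}=q\alpha.$$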
\[pk\][**[@23]**]{} Let $x \in \overline{B}_R.$ If $0\le a <n-1.$ Then there exist $C>0$ such that $$\int_{S_R} \frac{ds_y}{|x-y|^a }\le C.$$
\[Jump\][**(Jump relation, [@23])**]{} Let $\Gamma(x,t)$ be the fundamental solution of the heat equation, namely $$\label{op}\Gamma(x,t)=\frac{1}{(4\pi t)^{(n/2)}}\exp[-\frac {|x|^2}{4t}] .$$ Let $\varphi$ be a continuous function on $S_R \times[0,T].$ Then for any $x \in {B}_R,x^0 \in S_R,0<t_1<t_2\le T,$ for some $T>0,$ the function $$U(x,t)=\int_{t_1}^{t_2} \int_{S_R} \Gamma (x-y,t-\tau)\varphi(y,\tau)ds_yd\tau$$ satisfies the jump relation $$\frac{\partial}{\partial \eta}U(x,t) \rightarrow -\frac{1}{2}\varphi(x^0,t)+\frac{\partial}{\partial \eta}U(x^0,t), \quad \mbox{as}\quad x \rightarrow x^0.$$
\[theorem d\] Let $(u,v)$ be a solution of (\[c1\]), which blows up in finite time T. Then there exist positive constants $C_1,C_2$ such that $$\begin{aligned}
\max_{ \overline{B}_R}u(x,t)\le \log C_1-\frac{\alpha}{2} \log(T-t),\quad 0< t<T, \\ \max_{ \overline{B}_R}v(x,t)\le \log C_2-\frac{\beta}{2} \log (T-t),\quad 0< t<T. \end{aligned}$$
We follow the idea of [@20] and define the functions $M$ and $M_b$ as follows: $$M(t)=\max_{\overline{B}_R}u(x,t), \quad \mbox{and}\quad M_b(t)=\max_{S_R}u(x,t).$$ Similarly, $$N(t)=\max_{ \overline{B}_R}v(x,t), \quad \mbox{and}\quad N_b(t)=\max_{S_R}v(x,t).$$ By Lemma \[c\], both $M$ and $M_b$ are monotone increasing functions, and since $u$ is a solution of the heat equation, it cannot attain an interior maximum without being constant; therefore, $$M(t)=M_b(t).\quad \mbox{Similarly,}\quad N(t)=N_b(t).$$ Moreover, since $u,v$ blow up simultaneously, we have $$\label{Nobal} M(t)\longrightarrow +\infty, \quad N(t)\longrightarrow +\infty \quad \mbox{as} \quad t\longrightarrow T^{-}.$$ As in [@22; @20], for $0<z_1<t<T $ and $ x\in B_R,$ using the second Green’s identity with the Green function $$G(x,y;z_1,t)=\Gamma(x-y,t-z_1),$$ where $\Gamma$ is defined in (\[op\]), the integral equation for problem (\[c1\]) with respect to $u$ can be written as follows: $$\begin{aligned}
u(x,t)&=&\int_{B_R} \Gamma (x-y,t-z_1)u(y,z_1)dy+\int_{z_1}^t \int_{S_R}e^{v^p(y,\tau)}
\Gamma (x-y,t-\tau)ds_y d\tau\\ &&-\int_{z_1}^t \int_{S_R}
u(y,\tau ) \frac{\partial \Gamma}{\partial \eta _y} (x-y,t-\tau )ds_y d \tau,\end{aligned}$$ As in [@22], letting $x\rightarrow S_R$ and using the jump relation (Theorem \[Jump\]) for the third term on the right hand side of the last equation, it follows that $$\begin{aligned}
\frac{1}{2}u(x,t)&=&\int_{B_R} \Gamma (x-y,t-{z_1})u(y,z_1)dy+\int_{z_1}^t \int_{S_R}e^{v^p(y,\tau)}
\Gamma (x-y,t-\tau)ds_y d\tau\\ &&-\int_{z_1}^t \int_{S_R}
u(y,\tau ) \frac{\partial \Gamma}{\partial \eta _y} (x-y,t-\tau )ds_y d \tau,\end{aligned}$$ for $x\in S_R, 0<z_1<t<T.$
By Lemma \[c\], $u,v$ are positive and radial. Thus $$\begin{aligned}
&&\int_{B_R} \Gamma (x-y,t-z_1)u(y,z_1)dy >0,\\ &&\int_{z_1}^t \int_{S_R}e^{v^p(y,\tau)}
\Gamma (x-y,t-\tau)ds_y d\tau=\int_{z_1}^t e^{v^p(R,\tau)}[\int_{S_R}\Gamma (x-y,t-\tau)ds_y]d\tau. \end{aligned}$$ This leads to $$\begin{aligned}
\frac{1}{2}M(t) &\ge& \int_{z_1}^t e^{N^p(\tau)}[\int_{S_R}\Gamma (x-y,t-\tau)ds_y]d\tau\\
&&-\int_{z_1}^t M(\tau)[\int_{S_R} | \frac{\partial \Gamma}{\partial \eta _y} (x-y,t-\tau )| ds_y] d \tau, \quad x\in S_R, 0<z_1<t<T.\end{aligned}$$ It is known (see [@23]) that there exists $C_0>0$ such that $\Gamma$ satisfies $$|\frac{\partial \Gamma}{\partial \eta_{y}}(x-y,t-\tau)|\le \frac{C_0}{(t-\tau )^\mu}\cdot \frac{1}{|x-y|^{(n+1-2\mu- \sigma ) }},\quad x,y \in S_R, ~\sigma\in(0,1).$$ Choosing $1-\frac{\sigma}{2} < \mu <1,$ by Lemma \[pk\] there exists $C^*>0$ such that $$\int_{S_R}\frac{ds_y}{|x-y|^{(n+1-2\mu- \sigma ) }} <C^*.$$ Moreover, for $0<t_1<t_2$ with $t_1$ close to $t_2,$ there exists $c>0$ such that $$\int_{S_R} \Gamma (x-y,t_2-t_1)ds_y \ge \frac{c}{\sqrt{t_2-t_1}}.$$
Thus $$\frac{1}{2} M(t) \ge c \int_{z_1}^t \frac{e^{N^p(\tau)}}{\sqrt{t-\tau}}d\tau-C\int_{z_1}^t \frac{M(\tau)}{|t-\tau|^{\mu}}d \tau.$$ Since $M(t_0) \le M(t)$ for $0<z_1< t_0< t <T,$ the last inequality becomes $$\label{d8}
\frac{1}{2}M(t)\ge c \int_{z_1}^t \frac{e^{N^p(\tau)}}{\sqrt{T-\tau}}d\tau-C^*_1 M(t)|T-z_1|^{1-\mu}.$$ Similarly, for $0<z_2< t <T,$ we have $$\frac{1}{2}N(t)\ge c \int_{z_2}^t \frac{e^{M^q(\tau)}}{\sqrt{T-\tau}}d\tau-C^*_2 N(t)|T-z_2|^{1-\mu}.$$ Taking $z_1,z_2$ so that $$C^*_1|T-z_1|^{1-\mu}\le1/2,\quad C^*_2|T-z_2|^{1-\mu}\le 1/2,$$ it follows $$\label{mol}
M(t)\ge c \int_{z_1}^t \frac{e^{N^p(\tau)}}{\sqrt{T-\tau}}d\tau, \quad
N(t)\ge c \int_{z_2}^t \frac{e^{M^q(\tau)}}{\sqrt{T-\tau}}d\tau.$$
Since both $M$ and $N$ are increasing and by (\[Nobal\]), we can find $T^*$ in $(0,T)$ such that $$M(t)\ge q^{\frac{1}{(q-1)}},\quad N(t)\ge p^{\frac{1}{(p-1)}}, \quad \mbox{for}\quad T^*\le t<T.$$ Since $M\ge q^{1/(q-1)}$ implies $M^{q-1}\ge q$ and hence $M^q\ge qM$ (and similarly $N^p\ge pN$), it follows that $$e^{M^q(t)}\ge e^{qM(t)}, \quad e^{N^p(t)}\ge e^{pN(t)},\quad T^*\le t<T.$$
Therefore, if we choose $z_1,z_2$ in $(T^*,T),$ then (\[mol\]) becomes $$e^{M(t)}\ge c \int_{z_1}^t \frac{e^{pN(\tau)}}{\sqrt{T-\tau}}d\tau\equiv I_1(t), \quad
e^{N(t)}\ge c \int_{z_2}^t \frac{e^{qM(\tau)}}{\sqrt{T-\tau}}d\tau\equiv I_2(t).$$ Clearly, $$I_1^{'}(t)=c\frac{e^{pN(t)}}{\sqrt{T-t}}\ge \frac{cI_2^p}{\sqrt{T-t}},\quad I_2^{'}(t)=c\frac{e^{qM(t)}}{\sqrt{T-t}}\ge \frac{cI_1^q}{\sqrt{T-t}}.$$ By Lemma \[d\], it follows that $$\label{ef} I_1(t)\le \frac{C}{(T-t)^{\frac{\alpha}{2}}},\quad I_2(t)\le \frac{C}{(T-t)^{\frac{\beta}{2}}},\quad t \in (\max\{z_1,z_2\},T).$$ On the other hand, for $t^*=2t-T$ (assuming that $t$ is close to $T$), $$I_1(t)\ge c\int_{t^*}^t \frac{e^{pN(\tau)}}{\sqrt{T-\tau}}d\tau\ge c e^{pN(t^*)} \int_{2t-T}^t \frac{1}{\sqrt{T-\tau}}d\tau=2c(\sqrt{2}-1)\sqrt{T-t} e^{pN(t^*)}.$$ Combining the last inequality with (\[ef\]) yields $$e^{N(t^*)} \le \frac{C}{2c(\sqrt{2}-1)(T-t)^{\frac{p+1}{2p(pq-1)}+\frac{1}{2p} } }= \frac{2^{\frac{q+1}{2(pq-1)}}C}{2c(\sqrt{2}-1)(T-t^*)^{\frac{q+1}{2(pq-1)} } }.$$ Thus, there exists a constant $c_1>0$ such that $$e^{N(t^*)} (T-t^*)^{\frac{q+1}{2(pq-1)}} \le c_1.$$ In the same way we can show $$e^{M(t^*)} (T-t^*)^{\frac{p+1}{2(pq-1)}}\le c_2.$$ It follows that there exist $C_1,C_2>0$ such that $$\begin{aligned}
\label{yas}
\max_{ \overline{B}_R}u(x,t)\le \log C_1-\frac{\alpha}{2} \log(T-t),\quad 0< t<T, \\ \label{sad}\max_{ \overline{B}_R}v(x,t)\le \log C_2-\frac{\beta}{2} \log (T-t),\quad 0< t<T.\end{aligned}$$
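For the reader's convenience, the exponent appearing in the bound for $e^{N(t^*)}$ above simplifies as $$\frac{p+1}{2p(pq-1)}+\frac{1}{2p}=\frac{(p+1)+(pq-1)}{2p(pq-1)}=\frac{p(q+1)}{2p(pq-1)}=\frac{q+1}{2(pq-1)}=\frac{\beta}{2},$$ and the factor $2^{\frac{q+1}{2(pq-1)}}$ arises from the substitution $T-t=\frac{T-t^*}{2}.$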
Blow-up Set
===========
In order to show that the blow-up of problem (\[c1\]) occurs only on the boundary, we need to recall the following lemma from [@18].
\[power\] Let $w$ be a continuous function on the domain $\overline{B}_R\times[0,T)$ satisfying $$\left.\begin{array}{ll}
w_t=\Delta w,&\quad (x,t) \in B_R \times (0,T),\\
w(x,t) \le \frac{C}{(T-t)^m},& \quad (x,t) \in S_R \times(0,T),\quad m>0.
\end{array} \right\}$$ Then for any $0<a<R$ $$\sup\{ w(x,t): 0 \le|x|\le a,~0\le t < T \} < \infty.$$
Set $$h(x)=(R^2-r^2)^2 ,~ r=|x|,$$ $$z(x,t)=\frac{C_1}{[h(x)+C_2(T-t)]^m}.$$
We can show that: $$\begin{aligned}
\Delta h-\frac{(m+1)|\nabla h|^2}{h}&=& 8r^2-4n(R^2-r^2)-(m+1)16r^2\\ &\ge&-4nR^2-16R^2(m+1),\\
z_t-\Delta z&=&\frac{C_1m}{[h(x)+C_2(T-t)]^{m+1}}(C_2+\Delta h-\frac{(m+1)|\nabla h|^2}{h+C_2(T-t)})\\
&\ge& \frac{C_1m}{[h(x)+C_2(T-t)]^{m+1}}(C_2-4nR^2-16R^2(m+1)).\end{aligned}$$ Let $$C_2=4nR^2+16R^2(m+1)+1$$ and take $C_1$ large enough that $$z(x,0)\ge w(x,0),\quad x\in B_R,$$ and moreover $C_1\ge C(C_2)^m,$ which implies that $$z(x,t)\ge w(x,t) \quad \mbox{on}\quad S_R \times [0,T).$$ Then from the maximum principle [@21], it follows that $$z(x,t)\ge w(x,t),\quad (x,t) \in \overline{B}_R \times (0,T)$$ and hence $$\sup\{w(x,t): 0\le|x|\le a,0\le t <T\} \le C_1(R^2-a^2)^{-2m} < \infty, \quad 0\le a <R.$$
Let the assumptions of Theorem \[theorem d\] be in force. Then $(u,v)$ blows up only on the boundary.
Using (\[yas\]) and (\[sad\]), we obtain $$u(R,t) \le \frac{c_1}{(T-t)^{\frac{\alpha}{2}}}, \quad v(R,t) \le \frac{c_2}{(T-t)^{\frac{\beta}{2}}}, \quad t \in (0,T).$$ From Lemma \[power\], it follows that $$\sup\{u(x,t): (x,t)\in B_a\times [0,T)\}\le C_1(R^2-a^2)^{-\alpha} <\infty,$$ $$\sup\{v(x,t): (x,t)\in B_a\times [0,T)\}\le C_2(R^2-a^2)^{-\beta} <\infty,$$ for $a<R.$
Therefore, $u,v$ blow up simultaneously and the blow-up occurs only on the boundary.
[99]{}
K. Deng, *Global existence and blow-up for a system of heat equations with nonlinear boundary conditions,* [Math. Methods Appl. Sci.]{} 18, 307-315, (1995).
K. Deng, *Blow-up rates for parabolic systems,* [Z. Angew. Math. Phys.]{} 47,132-143, (1996).
A. Friedman, *Partial Differential Equations of Parabolic Type*, Prentice-Hall, Englewood Cliffs, N.J., (1964).
B. Hu and H. M. Yin, *The profile near blow-up time for solution of the heat equation with a non-linear boundary condition,* *Trans. Amer. Math. Soc.* 346, 117-135, (1994).
Z. Lin, Ch. Xie and M. Wang, *The blow-up property for a system of heat equations with nonlinear boundary conditions,* [Appl. Math. -JCU.]{} 13, 181-288, (1998).
Z. Lin and C. Xie, *The blow-up rate for a system of heat equations with Neumann boundary conditions,* [Acta Math. Sinica]{} 15, 549-554, (1999).
C. V. Pao., *Nonlinear Parabolic and Elliptic Equations,* New York and London: Plenum Press, (1992).
J. D. Rossi and N. Wolanski, *Global existence and nonexistence for a parabolic system with nonlinear boundary conditions,* [Differential Integral Equations]{} 11, 179-190, (1998).
|
---
abstract: 'We cross-correlate the new 3 year Wilkinson Microwave Anisotropy Probe (WMAP3) cosmic microwave background (CMB) data with the NRAO VLA Sky Survey (NVSS) radio galaxy data, and find further evidence of the integrated Sachs-Wolfe (ISW) effect taking place at late times in cosmic history. Our detection makes use of a novel statistical method based on a new construction of spherical wavelets, called needlets. The null hypothesis (no ISW) is excluded at more than 99.7% confidence. When we compare the measured cross-correlation with the theoretical predictions of standard, flat cosmological models with a generalized dark energy component parameterized by its density, $\omde$, equation of state $w$ and speed of sound $\cs2$, we find $0.3\leq\omde\leq0.8$ at 95% c.l., independently of $\cs2$ and $w$. If dark energy is assumed to be a cosmological constant ($w=-1$), the bound on density shrinks to $0.41\leq\omde\leq 0.79$. Models without dark energy are excluded at more than $4\sigma$. The bounds on $w$ depend rather strongly on the assumed value of $\cs2$.'
address:
- |
Dipartimento di Fisica, Università di Roma “Tor Vergata”,\
V. della Ricerca Scientifica 1, I-00133 Roma, Italy\
- |
Dipartimento di Fisica, Università di Roma “Tor Vergata” and\
INFN Sezione di Roma “Tor Vergata”, V. della Ricerca Scientifica 1, I-00133 Roma, Italy\
- |
Dipartimento di Matematica, Università di Roma “Tor Vergata”,\
V. della Ricerca Scientifica 1, I-00133 Roma, Italy\
author:
- Davide Pietrobon
- Amedeo Balbi
- Domenico Marinucci
title: DARK ENERGY CONSTRAINTS FROM NEEDLETS ANALYSIS OF WMAP3 AND NVSS DATA
---
Addressing the problem {#intro}
======================
The most outstanding problem in modern cosmology is understanding the mechanism that led to a recent epoch of accelerated expansion of the universe. The evidence that we live in an accelerating universe is now compelling. The most recent cosmological data sets (luminosity distances measured from type Ia supernovae [@Riessetal.2004], the amount of clustered matter in the universe detected from redshift surveys, clusters of galaxies, etc. [@Springeletal.2006], and the pattern of anisotropy in the CMB radiation [@Spergeletal.2006]) have shown that the total density of the universe is very close to its critical value, suggesting a flat universe geometry. Matter, characterized by ordinary attractive gravity, accounts for only $\sim 1/3$ of the total density, while the remaining $\sim 2/3$, responsible for the epoch of acceleration, exhibits repulsive gravity. The precise nature of this cosmological term, however, remains mysterious. The favoured working hypothesis is to consider a dynamical, almost homogeneous component (termed [*dark energy*]{}) [@Caldwelletal.1998].
One key indication of an accelerated phase in cosmic history is the signature of the integrated Sachs-Wolfe (ISW) effect in the CMB angular power spectrum. A detection of an ISW signal in a flat universe is, in itself, direct evidence of dark energy. Furthermore, the details of the ISW contribution depend on the physics of dark energy, in particular on its clustering properties, and are therefore a powerful tool to better understand its nature. Unfortunately, because of cosmic variance, extracting the ISW signal is a difficult task. A useful way to separate the ISW contribution from the total signal is to cross-correlate the CMB anisotropy pattern with tracers of the large scale structure (LSS) in the local universe.
In our analysis [@Pietrobonetal.2006] we improve previous studies in two directions. On one side, we applied a new type of spherical wavelets, the so-called [*needlets*]{} [@Baldietal.2006b], to extract the cross-correlation signal between the WMAP3 CMB sky maps and the NVSS radio galaxy catalogue. Needlets have the great advantage of double localization, both in multipole space and in pixel space, due to the analytic properties of the shape of the window function. This makes our analysis almost insensitive to the cuts present in the maps, increasing our ability to detect the ISW effect. On the other side, we treated dark energy as a generic fluid characterized by three physical parameters: its overall density $\omde$, its equation of state $w=p/\rho$, and the sound speed $c_s^2= \delta p/\delta \rho$. This parameterization has the advantage of being model independent, allowing one to encompass a rather broad set of fundamental models, and of giving a more realistic description of the dark energy fluid.
The first step in our work was to build the two maps at the resolution appropriate for our purpose, paying particular attention to removing noise and systematic errors; secondly, we extracted the cross-correlation coefficients from the maps. To compare with the theoretical predictions, we numerically integrated the Boltzmann equation for the photon brightness coupled to the other relevant equations, suitably modifying the CMBFast code.
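As an illustration of the cross-correlation estimator in wavelet space, the following minimal Python sketch band-filters two HEALPix maps and averages the product of the filtered maps scale by scale. It is not our analysis pipeline: the exact needlet window $b(\ell/B^j)$ of [@Baldietal.2006b] is replaced here by a simplified cosine band, sky cuts and noise weighting are ignored, and the map names are placeholders.

```python
import numpy as np
import healpy as hp

def band_window(lmax, B, j):
    """Smooth band window supported on B^(j-1) < l < B^(j+1), a simplified
    stand-in for the exact needlet profile b(l / B^j)."""
    ell = np.arange(lmax + 1)
    x = ell / float(B) ** j
    w = np.zeros(lmax + 1)
    inside = (x > 1.0 / B) & (x < B)
    w[inside] = np.cos(0.5 * np.pi * np.log(x[inside]) / np.log(B)) ** 2
    return w

def needlet_cross_correlation(map1, map2, B=2.0, j_values=range(1, 12)):
    """Return the estimator beta_j = <beta_jk(map1) * beta_jk(map2)>_k."""
    nside = hp.get_nside(map1)
    lmax = 3 * nside - 1
    alm1 = hp.map2alm(map1, lmax=lmax)
    alm2 = hp.map2alm(map2, lmax=lmax)
    beta = []
    for j in j_values:
        w = band_window(lmax, B, j)
        f1 = hp.alm2map(hp.almxfl(alm1, w), nside)  # needlet-filtered map 1
        f2 = hp.alm2map(hp.almxfl(alm2, w), nside)  # needlet-filtered map 2
        beta.append(np.mean(f1 * f2))
    return np.array(beta)
```

In such a scheme the null distribution of the estimated coefficients would be obtained from Monte Carlo simulations of uncorrelated CMB skies, as done in our analysis.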
Results
=======
In the left hand panel of the figure, we show the cross-correlation signal in wavelet space extracted from the WMAP3 and NVSS data, superimposed on the predicted cross-correlation for some dark energy models. The excess signal peaks at $7<j<11$, corresponding to angular scales between $2^\circ$ and $10^\circ$, as expected from theoretical studies. We checked that the observed signal was not produced by chance alignments of sources in the NVSS catalogue with the CMB pattern at decoupling ($z\simeq1100$), and the errors, calculated through a Monte Carlo procedure, are close to the analytical estimates, which confirms the consistency of our approach.
![Cross-correlation signal in needlet space extracted from WMAP3 CMB data and the NVSS radio galaxy survey (left hand panel). Note that a simple Cold Dark Matter model does not reproduce the signal (dotted line). In the right hand panel, constraints at 68%, 95% and 99% confidence level in the $\omde$–$w$ plane for a scalar field model (speed of sound $\cs2=1$). \[fig:contours\]](figure1.ps "fig:"){width="0.45\columnwidth"} ![Cross-correlation signal in needlet space extracted from WMAP3 CMB data and the NVSS radio galaxy survey (left hand panel). Note that a simple Cold Dark Matter model does not reproduce the signal (dotted line). In the right hand panel, constraints at 68%, 95% and 99% confidence level in the $\omde$–$w$ plane for a scalar field model (speed of sound $\cs2=1$). \[fig:contours\]](figure2.ps "fig:"){width="0.45\columnwidth"}
The first conclusion we can draw from our analysis is that the evidence for non-zero dark energy density is rather robust: we find $0.32\leq\omde\leq 0.78$ at 95% confidence level. A null value of $\omde$ is excluded at more than $4\sigma$, independently of $\cs2$. When we model the dark energy as a cosmological constant (i.e. we assume the value $w=-1$ for its equation of state), the bound on its density shrinks to $0.41\leq\omde\leq 0.79$ at 95% confidence level.
Quite interestingly, we find that although the case for a non-zero dark energy contribution to the total density is compelling, the constraints on $w$ do depend on the assumed clustering properties of the dark energy component, namely its sound speed $\cs2$. Phantom models, and also the ordinary cosmological constant case $w=-1$, perform worse when a quintessence behaviour $\cs2=1$ is assumed. However, we emphasize that, for values of $\omde\sim 0.7$, models with $w=-1$ are a good fit to the data, as is evident from the right hand panel of the figure.
Clearly, the observation of the ISW effect is proving quite promising as a tool to answer the questions arising from the mysterious nature of dark energy. While the CMB data have reached a great degree of accuracy on the angular scales that are most relevant for the detection of the ISW effect, deeper redshift surveys and better catalogues can, in the future, improve the tracing of the local matter distribution, thus allowing a reduction of the errors in the cross-correlation determination.
Riess, A. G., et al., ApJ [**607**]{}, 665 (2004).
Springel, V., C. S. Frenk, & S. D. M. White, Natur [**440**]{}, 1137 (2006).
Spergel, D. N., et al., arXiv:astro-ph/0603449 (2006).
Caldwell, R. R., R. Dave, & P. J. Steinhardt, PhRvL [**80**]{}, 1582 (1998).
Sachs, R. K. & A. M. Wolfe, ApJ [**147**]{}, 73 (1967).
Pietrobon, D., A. Balbi & D. Marinucci, PhRvD [**74**]{}, 043524 (2006).
Baldi, P., G. Kerkyacharian, D. Marinucci, D. Picard, arXiv:math.ST/0606599.
|
---
abstract: |
Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages.
This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only.
The segmentation method is applied to five different data sets: coronal T~2~-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T~2~-weighted images of preterm infants acquired at 40 weeks PMA, axial T~1~-weighted images of ageing adults acquired at an average age of 70 years, and T~1~-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86 and 0.91.
The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.
author:
- |
Pim Moeskops^a,b^, Max A. Viergever^a^, Adriënne M. Mendrik^a^, Linda S. de Vries^b^, Manon J.N.L. Benders^b^ and Ivana Išgum^a^\
^a^Image Sciences Institute, University Medical Center Utrecht, The Netherlands\
^b^Department of Neonatology, University Medical Center Utrecht, The Netherlands [^1]
bibliography:
- 'references.bib'
title: |
Automatic segmentation of MR brain images\
with a convolutional neural network
---
Deep learning, convolutional neural networks, automatic image segmentation, preterm neonatal brain, adult brain, MRI.
Introduction {#sec:introduction}
============
Automatic brain image segmentation in magnetic resonance (MR) images is a prerequisite for the quantitative assessment of the brain in large-scale studies with images acquired at all ages [@Sala04; @Dubo08b; @Rodr08; @Tham10; @Hogs13; @Li14; @Lyal14; @Wrig14; @Moes15a]. Especially the field of neonatal brain image segmentation has developed rapidly in the last ten years. A popular approach for brain image segmentation is the use of (population specific) atlases [@Fisc02; @Pras05; @Xue07; @Weis09; @Haba10; @Shi10b; @Card13; @Makr14], but also pattern recognition methods are used [@Wang15; @Moes15], sometimes in combination with an atlas-based approach [@Vroo07; @Srho13; @Anbe13]. To obtain anatomically correct segmentations, these methods need explicit spatial and intensity information. For atlas-based methods, spatial information is provided in the form of an atlas which is deformed to match the image at hand. For methods based on pattern recognition, spatial information is included in the feature set as distances within an atlas space [@Anbe13], distances to a brain mask [@Moes15], probabilistic results of a previous segmentation step [@Wang15; @Moes15], or by imposing anatomical constraints [@Wang15]. Intensity information is, in pattern recognition methods, included as a set of features based on (local) intensity, and atlas-based methods are typically performed by matching intensity information between the atlas and target images.
The explicit definition of such spatial and intensity features could be avoided by using convolutional neural networks (CNNs). CNNs have been shown to be successful in several computer vision tasks including the recognition of handwriting [@LeCu98] and, more recently, the ImageNet challenge, where 1.2 million 2D pictures are classified into 1000 different classes [@Kriz12]. In recent years, CNNs also gained popularity in the field of medical image analysis [@Pras13; @Roth14; @Veta15; @Roth15; @Roth15a; @Bar15; @Breb15; @Zhan15; @Ciom15; @Wolt15]. In contrast to classical machine learning methods, CNNs do not require a set of hand-crafted features for the classification, but learn sets of convolution kernels that are specifically trained for the classification problem at hand. While classical machine learning methods applied to image segmentation would use e.g. Gaussian or Haar-like kernels to acquire appearance information, CNNs optimise sets of kernels based on the provided training data. In this way, the system can automatically extract information that is relevant for the task. If spatial and intensity information within the image is important to distinguish between classes, this can be learned from the provided information, much like a human observer would recognise objects within a (medical) image.
CNNs have also been used for brain image segmentation. Zhang et al. [@Zhan15] presented a method using CNNs for the segmentation of three tissue types: white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), in MR brain images of 6–8 months old infants, which have low contrast between WM and GM. The method uses 2D patches of a single size from one image plane in T~1~-weighted, T~2~-weighted and fractional anisotropy images as input for a CNN. Prior to the classification, all voxels that do not belong to either of the three tissue types, i.e. skull, cerebellum and brain stem are excluded using in-house tools. De Brébisson et al. [@Breb15] presented a method for the segmentation of adult T~1~-weighted MR brain images in 134 regions as provided by the MICCAI challenge on multi-atlas labelling [^2]. The method uses multiple parallel networks of 2D patches in orthogonal planes, a 3D patch, and distances to a previous segmentation step.
Recent MICCAI challenges in neonatal [@Isgu15] and adult [@Mend15] MR brain image segmentation show that various segmentation methods achieve accurate results, but also that different methods are better at different aspects of brain image segmentation. The best results per tissue type in both the NeoBrainS12 [^3] and the MRBrainS13 [^4] challenges were not obtained by a single best performing method. These challenges also show that, despite the overall accurate segmentations achieved by these automatic methods, various inaccuracies are still present.
This paper presents a method for the automatic segmentation of anatomical MR brain images into a number of classes based on a multi-scale CNN. The multi-scale approach allows the method to obtain accurate segmentation details as well as spatial consistency. Unlike previous work that employs CNNs for a brain image segmentation task [@Breb15], the proposed method allows omitting the explicit definition of spatial features. Furthermore, unlike previous work used for brain image segmentation [@Zhan15], the method uses multiple patch and kernel sizes combined. This approach allows the method to learn multi-scale features that estimate both intensity and spatial characteristics. In contrast to using these multiple patch and kernel sizes, other approaches to multi-scale CNNs, used in different applications, provide multi-scale features by directly using the feature maps after the first convolution layer as additional input for a fully connected layer [@Serm11; @Li12].
Additionally, unlike previous work on brain image segmentation, the method is applied to the segmentation of images of developing neonates at different ages as well as young adults and ageing adults, and to coronal as well as axial images. This allows demonstrating that the method is able to adapt to the segmentation task at hand based on training data.
The paper is organised as follows. Section \[sec:method\] provides a general description of the method. Section \[sec:data\] describes the five test data sets that were used. Section \[sec:preprocessing\] describes the preprocessing that was performed. Section \[sec:parameters\] provides the parameter settings of the CNN that were determined experimentally with a subset of the data. Section \[sec:experiments\] lists the evaluation experiments and their results. Section \[sec:othermethods\] compares the obtained results with the results in the recent literature and in the NeoBrainS12 and MRBrainS13 challenges. Section \[sec:multiscale\] illustrates the influence of the multi-scale approach, and Section \[sec:singleplane\] illustrates the influence of the single-plane approach. Finally, Section \[sec:discussion\] discusses the results.
![Schematic overview of the convolutional neural network. The number of output classes, N, was set to 9 (8 tissue classes and background) for the neonatal images, to 8 (7 tissue classes and background) for the ageing adult images, and to 7 (6 tissue classes and background) for the young adult images. After the third convolution layer, max-pooling is only performed for the two largest patch sizes.[]{data-label="fig:network"}](figures/moesk1){width="45.00000%"}
Method {#sec:method}
======
Each voxel in the image is classified to one of the brain tissue classes using a CNN. Information about each voxel is provided in the form of image patches where the voxel of interest is in the centre. To allow the method to use multi-scale information about each voxel, multiple patch sizes are used. The larger scales implicitly provide spatial information, because they are large enough to recognise where in the image the voxel is located, while the smaller scales provide detailed information about the local neighbourhood of the voxel. For each of these patch sizes, different kernel sizes are trained, i.e. larger kernel sizes are used for larger patches. This multi-scale approach allows the network to incorporate local details as well as global spatial consistency.
For each of the patch sizes, a separate network branch is used; only the output layer is shared. This allows the weights and biases in the CNN to be specifically optimised for each patch size and corresponding kernel size. Multiple convolution layers are used; after each convolution, the resulting images are sub-sampled by extracting the highest responses with max-pooling. While the dimensions of the patches decrease in each layer, the number of kernels that are trained increases.
After the convolution layers, separate fully connected layers are used for each input patch size. To perform the final classification, these layers are connected to a single softmax output layer with one node for each class (including a background class).
Because the number of samples per tissue class might vary between the classes, a defined number of samples is extracted per class from each training image. In this way, the number of samples used in the training procedure is the same for each class, which avoids a bias towards larger tissue classes. To provide the network with as many training samples as possible, in every epoch, a new random selection of samples is made and provided to the network in a randomised order. Rectified linear units [@Nair10] are used for all nodes because of their speed in training CNNs [@Kriz12]. Drop-out [@Sriv14] is used on the fully connected layers to decrease the effect of overfitting on the training set. Mini-batch learning and RMSprop [@Tiel12] are used to train the network and cross-entropy is used as cost function to optimise the weights and biases.
A schematic of the CNN that is used in this study is shown in Figure \[fig:network\]. The parameters that are used in the experiments are described in Section \[sec:parameters\].
**Cor. 30 wks** **Cor. 40 wks** **Ax. 40 wks** **Ageing adults** **Young adults**
------------------------------------- --------------------------------- --------------------------------- --------------------------------- --------------------------------- -------------------------------------
Age 30 weeks PMA 40 weeks PMA 40 weeks PMA 70 years 23 years
Acquisition protocol Coronal T~2~-weighted Coronal T~2~-weighted Axial T~2~-weighted Axial T~1~-weighted Sagittal T~1~-weighted
Number of images 10 5 7 20 15
Reconstruction matrix 384 $\times$ 384 $\times$ 50 512 $\times$ 512 $\times$ 110 512 $\times$ 512 $\times$ 50 240 $\times$ 240 $\times$ 48 256 $\times$ 256 $\times$ (261–334)
Reconstructed voxel sizes \[mm^3^\] 0.34 $\times$ 0.34 $\times$ 2.0 0.35 $\times$ 0.35 $\times$ 1.2 0.35 $\times$ 0.35 $\times$ 2.0 0.96 $\times$ 0.96 $\times$ 3.0 1.0 $\times$ 1.0 $\times$ 1.0
[|c|l|l|c c c c c c c c|]{}\
& **Image set** & **Experiment** & **CB** & **mWM** & **BGT** & **vCSF** & **(u)WM** & **BS** & **cGM** & **eCSF**\
1 & Cor. 30 wks & 5 training, 5 test & 0.92 $\pm$ 0.02 & 0.69 $\pm$ 0.06 & 0.91 $\pm$ 0.02 & 0.88 $\pm$ 0.05 & 0.96 $\pm$ 0.00 & 0.87 $\pm$ 0.02 & 0.84 $\pm$ 0.01 & 0.91 $\pm$ 0.02\
2 & & LOSO 10, NBS12 & 0.91 $\pm$ 0.02 & 0.74 $\pm$ 0.06 & 0.89 $\pm$ 0.02 & 0.91 $\pm$ 0.01 & 0.96 $\pm$ 0.00 & 0.88 $\pm$ 0.02 & 0.83 $\pm$ 0.02 & 0.91 $\pm$ 0.02\
3 & Cor. 40 wks & LOSO 5, NBS12 & 0.93 $\pm$ 0.01 & 0.56 $\pm$ 0.05 & 0.86 $\pm$ 0.02 & 0.85 $\pm$ 0.04 & 0.92 $\pm$ 0.00 & 0.78 $\pm$ 0.05 & 0.82 $\pm$ 0.01 & 0.86 $\pm$ 0.02\
4 & Ax. 40 wks & LOSO 7 & 0.93 $\pm$ 0.01 & 0.55 $\pm$ 0.05 & 0.91 $\pm$ 0.02 & 0.81 $\pm$ 0.03 & 0.92 $\pm$ 0.01 & 0.84 $\pm$ 0.02 & 0.88 $\pm$ 0.02 & 0.84 $\pm$ 0.04\
5 & & LOSO 7, NBS12 & 0.93 $\pm$ 0.01 & 0.56 $\pm$ 0.06 & 0.91 $\pm$ 0.02 & 0.81 $\pm$ 0.03 & 0.93 $\pm$ 0.01 & 0.85 $\pm$ 0.02 & 0.87 $\pm$ 0.02 & 0.83 $\pm$ 0.04\
6 & Ageing adults & 5 training, 15 test & 0.90 $\pm$ 0.03 & - & 0.81 $\pm$ 0.03 & 0.92 $\pm$ 0.02 & 0.88 $\pm$ 0.02 & 0.90 $\pm$ 0.02 & 0.84 $\pm$ 0.01 & 0.76 $\pm$ 0.04\
7 & Young adults & 5 training, 10 test & 0.95 $\pm$ 0.01 & - & 0.85 $\pm$ 0.01 & 0.85 $\pm$ 0.04 & 0.94 $\pm$ 0.01 & 0.92 $\pm$ 0.01 & 0.91 $\pm$ 0.01 & -\
8 & & 15 training, 20 test & 0.95 $\pm$ 0.01 & - & 0.86 $\pm$ 0.02 & 0.86 $\pm$ 0.05 & 0.93 $\pm$ 0.02 & 0.93 $\pm$ 0.01 & 0.91 $\pm$ 0.03 & -\
\
& **Image set** & **Experiment** & **CB** & **mWM** & **BGT** & **vCSF** & **(u)WM** & **BS** & **cGM** & **eCSF**\
1 & Cor. 30 wks & 5 training, 5 test & 0.42 $\pm$ 0.32 & 0.90 $\pm$ 0.70 & 0.49 $\pm$ 0.20 & 0.23 $\pm$ 0.07 & 0.12 $\pm$ 0.01 & 0.29 $\pm$ 0.02 & 0.13 $\pm$ 0.02 & 0.10 $\pm$ 0.02\
2 & & LOSO 10, NBS12 & 0.40 $\pm$ 0.11 & 0.59 $\pm$ 0.34 & 0.49 $\pm$ 0.16 & 0.22 $\pm$ 0.08 & 0.12 $\pm$ 0.01 & 0.25 $\pm$ 0.08 & 0.11 $\pm$ 0.01 & 0.10 $\pm$ 0.02\
3 & Cor. 40 wks & LOSO 5, NBS12 & 0.72 $\pm$ 0.32 & 0.94 $\pm$ 0.44 & 0.96 $\pm$ 0.50 & 0.45 $\pm$ 0.11 & 0.15 $\pm$ 0.01 & 0.87 $\pm$ 0.56 & 0.12 $\pm$ 0.01 & 0.17 $\pm$ 0.02\
4 & Ax. 40 wks & LOSO 7 & 1.04 $\pm$ 0.38 & 0.82 $\pm$ 0.30 & 0.49 $\pm$ 0.20 & 0.67 $\pm$ 0.62 & 0.13 $\pm$ 0.02 & 0.39 $\pm$ 0.12 & 0.11 $\pm$ 0.02 & 0.19 $\pm$ 0.04\
5 & & LOSO 7, NBS12 & 1.14 $\pm$ 0.30 & 0.68 $\pm$ 0.24 & 0.46 $\pm$ 0.19 & 0.43 $\pm$ 0.14 & 0.12 $\pm$ 0.02 & 0.35 $\pm$ 0.08 & 0.11 $\pm$ 0.02 & 0.19 $\pm$ 0.05\
6 & Ageing adults & 5 training, 15 test & 2.01 $\pm$ 2.10 & - & 0.67 $\pm$ 0.17 & 0.43 $\pm$ 0.67 & 0.28 $\pm$ 0.05 & 2.41 $\pm$ 2.21 & 0.23 $\pm$ 0.02 & 0.48 $\pm$ 0.10\
7 & Young adults & 5 training, 10 test & 1.17 $\pm$ 0.80 & - & 0.75 $\pm$ 0.07 & 0.52 $\pm$ 0.12 & 0.21 $\pm$ 0.05 & 0.64 $\pm$ 0.28 & 0.28 $\pm$ 0.06 & -\
8 & & 15 training, 20 test & 0.61 $\pm$ 0.25 & - & 0.72 $\pm$ 0.23 & 0.52 $\pm$ 0.25 & 0.28 $\pm$ 0.17 & 0.46 $\pm$ 0.19 & 0.39 $\pm$ 0.23 & -\
**Source** **Acquisition** **Age** **Test images** **CB** **mWM** **BGT** **vCSF** **uWM** **BS** **cGM** **eCSF** **WM** **GM** **CSF**
------------------------------------ --------------------------------------- -------------- ----------------- -------- --------- --------- ---------- --------- -------- --------- ---------- -------- -------- ---------
Gui et al., 2012 [@Gui12] 3T coronal T~1~ and T~2~ 40 weeks PMA 10 0.87 0.78 0.88 - 0.94 0.90 0.92 - - - 0.84
Cardoso et al., 2013 [@Card13] 1.5T T~1~ 40 weeks PMA 5 0.87 - 0.84 0.94 0.91 0.74 0.76 - - - -
Makropoulos et al., 2014 [@Makr14] 3T axial T~1~ and T~2~ 40 weeks PMA 20 0.93 - 0.84 0.84 - 0.92 - - - - -
Wang et al., 2015 [@Wang15] 3T isotropic T~1~, T~2~ and FA 40 weeks PMA - - - - - - - - - 0.92 0.89 0.84
Moeskops et al., 2015 [@Moes15] 3T coronal T~2~ 30 weeks PMA 5 - - - - 0.95 - 0.81 0.89 - - -
3T coronal T~2~ 40 weeks PMA 5 - - - - 0.92 - 0.80 0.85 - - -
NeoBrainS12 3T axial T~1~ and T~2~ 40 weeks PMA 5 0.92 0.47 0.92 0.86 0.90 0.83 0.86 0.79 0.92 - 0.80
3T coronal T~1~ and T~2~ 30 weeks PMA 5 0.88 0.69 0.84 0.88 0.91 0.76 0.71 0.83 0.90 - 0.84
3T coronal T~1~ and T~2~ 40 weeks PMA 5 0.92 0.48 0.88 0.84 0.89 0.75 0.77 0.77 0.84 - 0.79
MRBrainS13 3T axial T~1~, T~1~ IR and T~2~ FLAIR 70 years 15 - - - - - - - - 0.89 0.86 0.84
[|l|c|l|c|c|l|c|l|c|c|l|c|l|c|]{} **Image set** & **Rank** & **Method** & **Dice** & & **Image set** & **Rank** & **Method** & **Dice** & & **Image set** & **Rank** & **Method** & **Dice**\
Cor. 30 wks & **1** & **Proposed** & **0.827** & & Ax. 40 wks & **1** & **Proposed** & **0.805** & & Cor. 40 wks & **1** & **Proposed** & **0.819^\*^**\
& 2 & Picsl\_upenn & 0.737 & & & 2 & DTC & 0.789 & & & 2 & DTC & 0.765\
& 3 & UCL & 0.728 & & & 3 & UCL & 0.756 & & & 3 & UCL & 0.680\
& 4 & DTC & 0.721 & & & 4 & Picsl\_upenn & 0.737 & & & 4 & Picsl\_upenn & 0.649\
Data {#sec:data}
====
The method was applied to the segmentation of 3 different sets of volumetric T~2~-weighted MR brain images of preterm infants and 2 sets of volumetric T~1~-weighted MR brain images of adults, hence, 5 different sets of images in total. Because of the inverted contrast between WM and GM in neonatal images as compared with adult images, T~2~-weighted images of neonates provide a similar contrast between these tissue types as T~1~-weighted images of adults.
Basic information about the images is listed in Table \[tab:acquisition\].
Neonatal images {#sec:neonatalimages}
---------------
In total, 22 images of preterm infants were acquired on a Philips Achieva 3T scanner in accordance with standard clinical practice in the neonatal intensive care unit of the University Medical Center Utrecht (UMCU), The Netherlands. Permission from the medical ethical review committee of the UMCU and informed parental consent were obtained. 10 coronal images were acquired at an age of 30.9 $\pm$ 0.6 (mean $\pm$ standard deviation) weeks postmenstrual age (PMA). For 5 of these patients a coronal follow-up image was acquired at 41.3 $\pm$ 1.3 weeks PMA. Furthermore, 7 axial images (of different patients) were acquired at an age of 41.3 $\pm$ 0.5 weeks PMA. No brain pathology was visible on these images.
The neonatal images were manually segmented by experts into 8 classes: cerebellum (CB), myelinated white matter (mWM), basal ganglia and thalami (BGT), ventricular cerebrospinal fluid (vCSF), unmyelinated white matter (uWM), brain stem (BS), cortical grey matter (cGM), and extracerebral cerebrospinal fluid (eCSF).
Detailed information about the acquisition and the manual segmentation protocol can be found in the paper by Išgum et al. [@Isgu15] and on the NeoBrainS12 website [^5]. Išgum et al. also evaluated the inter-observer variability for the manual segmentations of the data in the NeoBrainS12 challenge.
Ageing adult images {#sec:ageingadultimages}
-------------------
20 axial images of adults were acquired on a Philips Achieva 3T scanner at an age of 70.5 $\pm$ 4.0 years, as provided by the MRBrainS13 challenge. Permission from the medical ethical review committee of the UMCU was obtained and all participants signed an informed consent form. The patients had a varying degree of atrophy and white matter lesions.
The ageing adult images were manually segmented by experts into the same classes as the neonatal images. However, because all WM is myelinated in adults, only one WM class was made, resulting in 7 different classes. Possible white matter lesions were included in the WM segmentation.
Detailed information about the acquisition and the manual segmentation protocol can be found in the paper by Mendrik et al. [@Mend15] and on the MRBrainS13 website [^6].
Young adult images {#sec:youngadultimages}
------------------
15 images from the OASIS project [@Marc07] were acquired on a Siemens Vision 1.5T scanner at an age of 23.0 $\pm$ 4.1 years, as provided by the MICCAI challenge on multi-atlas labelling [@Land12]. The images were acquired in the sagittal plane with a voxel size of 1.0 $\times$ 1.0 $\times$ 1.25 mm^3^ and were subsequently resized to an isotropic resolution of 1.0 $\times$ 1.0 $\times$ 1.0 mm^3^ [^7].
The images were manually segmented, in the coronal plane, into 134 classes. To match the tissue definitions used in this paper, these classes were merged into the same tissue classes as used for the other images. Because no eCSF was segmented in these images, this resulted in 6 tissue classes.
Detailed information about the acquisition can be found in the paper by Marcus et al. [@Marc07] and on the OASIS website [^8]. Detailed information about the manual segmentation protocol can be found on the NeuroMorphometrics website [^9] [^10].
Evaluation {#sec:evaluation}
----------
The automatic segmentations were evaluated in 3D using the Dice coefficient and the mean surface distance between the manual and automatic segmentations.
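As a reference for these two measures, a minimal sketch is given below, assuming binary segmentation masks as boolean NumPy arrays with known voxel spacing; the exact surface-distance implementation used for the reported numbers may differ in details such as the surface definition.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Dice = 2|A n B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance between the surfaces of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)          # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)          # boundary voxels of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())
```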
Experiments and results {#sec:experimentsandresults}
=======================
The parameter settings of the CNN were defined in preliminary leave-one-subject-out experiments with 5 of the images acquired at 30 weeks PMA. This set of images was selected such that it included none of the patients with a follow-up image acquired at 40 weeks PMA. This allowed independent evaluation on the remaining 5 images acquired at 30 weeks PMA as well as on the images of the other 4 test sets by training with representative training data.
Preprocessing {#sec:preprocessing}
-------------
Prior to the segmentation, the neonatal images were bias corrected using the method of Likar et al. [@Lika01]. The adult images were bias corrected as provided in the MRBrainS13 challenge and the multi-atlas labelling challenge. To limit the number of voxels considered in the classification, brain masks were generated with BET [@Smit02a]. In each of the experiments, the samples from the training images were only selected from within the brain mask volumes. For each test image, only voxels within the brain mask volume were considered in the classification. The intensities were scaled such that for all images, the range of intensities within the brain mask was between 0 and 1023.
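A minimal sketch of the intensity scaling step is shown below; it assumes a NumPy image volume and a boolean brain mask, and uses a simple min-max mapping of the within-mask intensities to the target range, which is one straightforward way to realise the scaling described above.

```python
import numpy as np

def scale_intensities(image, brain_mask, max_value=1023.0):
    """Map the intensity range inside the brain mask to [0, max_value]."""
    vals = image[brain_mask]
    lo, hi = float(vals.min()), float(vals.max())
    scaled = (image - lo) / (hi - lo) * max_value
    return np.clip(scaled, 0.0, max_value)
```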
CNN parameters {#sec:parameters}
--------------
For all voxels within the brain mask, three in-plane patches with sizes of 25 $\times$ 25, 51 $\times$ 51 and 75 $\times$ 75 voxels are extracted, where the voxel of interest is in the centre. Because of the large slice thickness relative to the in-plane voxel size in 4 of the 5 image sets, orthogonal or 3D patches are not used, i.e. only information from the imaging plane is used for the segmentation of volumetric images.
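The in-plane patch extraction can be sketched as follows; this is an illustration under assumptions, in particular that the slice is zero-padded so that patches centred near the image border remain well defined, which is one possible way to handle border voxels.

```python
import numpy as np

PATCH_SIZES = (25, 51, 75)

def extract_patches(slice_2d, row, col, patch_sizes=PATCH_SIZES):
    """Return the multi-scale in-plane patches centred on voxel (row, col)."""
    pad = max(patch_sizes) // 2
    padded = np.pad(slice_2d, pad, mode='constant')
    r, c = row + pad, col + pad
    patches = []
    for size in patch_sizes:
        h = size // 2
        patches.append(padded[r - h:r + h + 1, c - h:c + h + 1])
    return patches  # 25x25, 51x51 and 75x75 arrays
```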
A CNN with multiple convolution layers is used; a schematic of the network is shown in Figure \[fig:network\]. In the first layers 24 kernels are trained for each patch size. For the patches of 25 $\times$ 25 voxels, kernels of 5 $\times$ 5 voxels are used, for the patches of 51 $\times$ 51 voxels, kernels of 7 $\times$ 7 voxels are used, and for the patches of 75 $\times$ 75 voxels, kernels of 9 $\times$ 9 voxels are used. Max-pooling with kernels of 2 $\times$ 2 voxels is performed on the convolved images. The output images are input for the second layer, where 32 kernels of 3 $\times$ 3, 5 $\times$ 5 and 7 $\times$ 7 voxels are used. Again, max-pooling with kernels of 2 $\times$ 2 voxels is performed on the convolved images. In the third layer, 48 kernels of 3 $\times$ 3, 3 $\times$ 3 and 5 $\times$ 5 voxels are used. After this layer, max-pooling is only performed for the two largest patches, because for the smallest patch only an image of 3 $\times$ 3 voxels is left after three convolution layers. The contiguous even-sized max-pooling kernels cannot exactly cover the odd-sized patches. Therefore, before all max-pooling operations, the values are mirrored along the borders. The output of the third layer is input to fully connected layers with 256 nodes for each patch size. These nodes are subsequently connected to the softmax output layer with 9 nodes (8 tissue classes and background) for the neonatal images, 8 nodes (7 tissue classes and background) for the ageing adult images, and 7 nodes (6 tissue classes and background) for the young adult images. An example of the trained kernels in the first layer and an example of the convolution responses for images acquired at 30 weeks PMA is shown in Figure \[fig:kernels\].
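A minimal PyTorch sketch of this three-branch architecture is given below. It is not the original implementation: the dropout rate is an assumed value, the mirrored-border pooling is approximated with ceil-mode pooling, and the fully connected layers are sized lazily rather than computed explicitly.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One branch of the multi-scale network: three conv layers with 24/32/48 kernels."""
    def __init__(self, kernel_sizes, pool_after_third):
        super().__init__()
        k1, k2, k3 = kernel_sizes
        layers = [
            nn.Conv2d(1, 24, k1), nn.ReLU(), nn.MaxPool2d(2, ceil_mode=True),
            nn.Conv2d(24, 32, k2), nn.ReLU(), nn.MaxPool2d(2, ceil_mode=True),
            nn.Conv2d(32, 48, k3), nn.ReLU(),
        ]
        if pool_after_third:                        # only for the two largest patches
            layers.append(nn.MaxPool2d(2, ceil_mode=True))
        layers += [nn.Flatten(), nn.LazyLinear(256), nn.ReLU(), nn.Dropout(0.5)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class MultiScaleCNN(nn.Module):
    """Three patch-size branches with a shared output layer."""
    def __init__(self, n_classes=9):                # 9 for the neonatal images
        super().__init__()
        self.branch25 = Branch((5, 3, 3), pool_after_third=False)
        self.branch51 = Branch((7, 5, 3), pool_after_third=True)
        self.branch75 = Branch((9, 7, 5), pool_after_third=True)
        self.output = nn.Linear(3 * 256, n_classes)

    def forward(self, p25, p51, p75):
        feats = torch.cat(
            [self.branch25(p25), self.branch51(p51), self.branch75(p75)], dim=1)
        return self.output(feats)   # raw scores; softmax is applied in the loss
```

Training such a network with `nn.CrossEntropyLoss` and `torch.optim.RMSprop` would mirror the cross-entropy cost and RMSprop optimisation described in Section \[sec:method\].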
The tissue classes consist of a very different number of voxels in these images. For example, in the images acquired at 30 weeks PMA, mWM consists of about 200 voxels, while uWM consists of 400 000 voxels. Therefore, to better balance the number of training samples of each class a fixed number of samples are extracted per class from each training image. If a class contains less than this number of samples, all samples are used. Multiple epochs are used in the training process, where a new set of random samples is extracted in each epoch. The performance on the test images acquired at 30 weeks PMA for each epoch using 25 000 and 50 000 training samples is shown in Figure \[fig:DiceEpochs\]. Based on this evaluation, in all experiments, 10 epochs and 50 000 training samples per class from each image are used.
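The class-balanced sampling can be sketched as follows, assuming an integer label volume and a boolean brain mask as NumPy arrays; voxel coordinates rather than patches are returned, and patch extraction then follows the earlier sketch.

```python
import numpy as np

def sample_voxels_per_class(labels, brain_mask, n_per_class=50000, rng=None):
    """Draw up to n_per_class voxel positions for every tissue class."""
    rng = rng or np.random.default_rng()
    samples = {}
    for c in np.unique(labels[brain_mask]):
        coords = np.argwhere((labels == c) & brain_mask)   # (z, y, x) positions
        n = min(n_per_class, len(coords))
        samples[int(c)] = coords[rng.choice(len(coords), size=n, replace=False)]
    return samples      # redrawn at every epoch to refresh the training set
```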
Evaluation experiments {#sec:experiments}
----------------------
The performance for the images acquired at 30 weeks PMA was evaluated on the independent set of 5 images not included in the parameter estimation, using the other 5 images as training data. For the other image sets, the method was retrained using representative training data. Because of the limited number of available training images, the performance on the coronal and axial images acquired at 40 weeks PMA was evaluated in leave-one-subject-out cross-validation for each set. The average Dice coefficients and mean surface distances for each tissue class are shown per set in Table \[tab:diceresults\]: Experiments 1, 3, and 4. The segmentation results in one slice of one image of each of the test sets are shown in Figure \[fig:segmentations\].
A subset of the neonatal images used in this paper is used as test data in the NeoBrainS12 challenge. This includes 5 of the coronal images acquired at 30 weeks PMA, all 5 images acquired at 40 weeks PMA, and 5 of the axial images acquired at 40 weeks PMA. Therefore, to allow an *indication* of the performance in comparison with the results in the challenge, the average results for the axial images acquired at 40 weeks are presented separately for the 5 test images in the challenge (Table \[tab:diceresults\]: Experiment 5). For the coronal images acquired at 30 weeks PMA an additional experiment is done in the same fashion as for the other images: leave-one-subject-out cross-validation with all ten images where the average over the test images in the challenge is listed in Table \[tab:diceresults\]: Experiment 2.
Given that a larger set of manually annotated images of ageing adults was available, their performance was evaluated using 5 images as training data and 15 images to test the performance (Table \[tab:diceresults\]: Experiment 6, and Figure \[fig:segmentations\]). This furthermore allowed (indirect) comparison with the results of the MRBrainS13 challenge, which is performed using the same training and test images, except that only 3 combined tissue classes, namely WM, GM, and CSF are evaluated in the challenge.
Similar to the images of ageing adults, the images of young adults were evaluated using 5 images as training data and 10 images to test the performance (Table \[tab:diceresults\]: Experiment 7, and Figure \[fig:segmentations\]). Additionally, the performance on the independent test set of 20 images [@Land12] is evaluated using all 15 images as training data (Table \[tab:diceresults\]: Experiment 8).
Comparison with previous methods {#sec:othermethods}
--------------------------------
For comparison purposes, Table \[tab:diceresultsliterature\] lists segmentation results in terms of average Dice coefficients reported in the recent literature on neonatal MR brain image segmentation. Note that these methods used different scans that were segmented possibly using different tissue definitions. Hence, these results can only be used as an indication. Furthermore, Table \[tab:diceresultsliterature\] lists the best results per tissue class at the time of writing in the NeoBrainS12 and MRBrainS13 challenges. Note that in these challenges, the best results for different tissue types are not necessarily obtained by a single method.
To directly compare the method with previous methods, it was also evaluated in the NeoBrainS12 challenge using only the training images provided by the challenge. Table \[tab:comparisonneobrains\] compares the results of the proposed method with other methods that have segmented the same tissue classes in the same images. Comparison has been performed by averaging the Dice coefficients over all tissue classes in all test images. Detailed comparison per tissue class is available on the NeoBrainS12 website [^11].
Multi-scale approach {#sec:multiscale}
--------------------
To show the influence of combining different patch sizes in the network, the method was additionally trained using each of the three patch sizes separately. A segmentation result for the lateral sulcus and the hippocampus in an image acquired at 30 weeks PMA using only the patches of 25 $\times$ 25, 51 $\times$ 51 and 75 $\times$ 75 voxels, as well as those three patches combined, is shown in Figure \[fig:zoomedsegmentations\]. The results in terms of Dice coefficients and mean surface distance over all 5 test images acquired at 30 weeks PMA are shown in Table \[tab:diceseparatepatches\]: Experiments 1, 3, 5 and 7.
In addition, the proposed multi-scale approach was compared with the multi-scale approach of Sermanet and LeCun [@Serm11]. To allow fair comparison with the other experiments, we used the largest patch size, i.e. 75 $\times$ 75 voxels, as input. To acquire multi-scale information in this approach, the output of the first, second and third layers provide combined input for the fully connected layer (Table \[tab:diceseparatepatches\]: Experiment 6).
Single-plane approach {#sec:singleplane}
---------------------
To motivate the choice of providing the network with patches from the acquisition planes only, the method was additionally evaluated using patches from consecutive slices, patches from the orthogonal planes, and 3D patches.
First, the method was trained for the images acquired at 30 weeks PMA using three consecutive slices for each of the three patch sizes (25 $\times$ 25, 51 $\times$ 51 and 75 $\times$ 75). Instead of a network with three branches, i.e. one for each of the three patch sizes (Figure \[fig:network\]), now a network with nine branches, i.e. three for each of the three patch sizes, was used (Table \[tab:diceseparatepatches\]: Experiment 8). Because of the large anatomical variation between slices due to the large slice thickness in 4 of the 5 image sets, separate branches allow the network to extract the relevant information by optimising dedicated kernels for each of the nine provided input patches.
Second, the method was trained for the images acquired at 30 weeks PMA using patches from the orthogonal planes (coronal, sagittal and axial) for each of the three patch sizes. Because these images are highly anisotropic, they were first interpolated to isotropic resolution using an interpolation factor of 6 in the out-of-plane direction. This resulted in voxel sizes of 0.34 $\times$ 0.34 $\times$ 0.33. Again, these orthogonal patches of three sizes were input for a network with nine branches (Table \[tab:diceseparatepatches\]: Experiment 9).
Third, two networks with interpolated 3D input patches and 3D convolutions were evaluated for the images acquired at 30 weeks PMA, one with input patches of 25 $\times$ 25 $\times$ 25 voxels, and one with input patches of 51 $\times$ 51 $\times$ 51 voxels (Table \[tab:diceseparatepatches\]: Experiments 2 and 4).
Fourth, because the images of the young adults were already acquired with a (nearly) isotropic resolution, the performance of the network with three orthogonal patches for each of the three patch sizes was evaluated for this set as well. This network was evaluated for the segmentation of the 6 combined tissue classes (Table \[tab:diceseparatepatches\]: Experiment 11), as well as for the segmentation of all 134 classes. The average Dice coefficient for the segmentation of 134 classes in the 20 test images was 0.7353 overall, 0.7170 for the cortical regions, and 0.7850 for the non-cortical regions. This performance would result in the 9th overall ranking out of 26 participants on the MICCAI challenge on multi-atlas labelling [@Land12].
[|c|l|l|c c c c c c c c|]{}\
& **Image set** & **Experiment** & **CB** & **mWM** & **BGT** & **vCSF** & **(u)WM** & **BS** & **cGM** & **eCSF**\
1 & Cor. 30 wks & 25$\times$25 patch & 0.69 $\pm$ 0.04 & 0.33 $\pm$ 0.13 & 0.70 $\pm$ 0.05 & 0.76 $\pm$ 0.06 & 0.91 $\pm$ 0.02 & 0.58 $\pm$ 0.03 & 0.78 $\pm$ 0.02 & 0.86 $\pm$ 0.02\
2 & & 25$\times$25$\times$25 patch & 0.78 $\pm$ 0.03 & 0.60 $\pm$ 0.03 & 0.77 $\pm$ 0.04 & 0.78 $\pm$ 0.02 & 0.92 $\pm$ 0.01 & 0.79 $\pm$ 0.02 & 0.75 $\pm$ 0.02 & 0.85 $\pm$ 0.02\
3 & & 51$\times$51 patch & 0.90 $\pm$ 0.02 & 0.57 $\pm$ 0.08 & 0.87 $\pm$ 0.02 & 0.84 $\pm$ 0.04 & 0.94 $\pm$ 0.01 & 0.85 $\pm$ 0.01 & 0.78 $\pm$ 0.02 & 0.88 $\pm$ 0.02\
4 & & 51$\times$51$\times$51 patch & 0.82 $\pm$ 0.02 & 0.48 $\pm$ 0.14 & 0.77 $\pm$ 0.06 & 0.70 $\pm$ 0.07 & 0.90 $\pm$ 0.01 & 0.77 $\pm$ 0.02 & 0.67 $\pm$ 0.03 & 0.80 $\pm$ 0.01\
5 & & 75$\times$75 patch & 0.91 $\pm$ 0.01 & 0.62 $\pm$ 0.08 & 0.88 $\pm$ 0.02 & 0.85 $\pm$ 0.06 & 0.94 $\pm$ 0.01 & 0.85 $\pm$ 0.01 & 0.78 $\pm$ 0.02 & 0.88 $\pm$ 0.02\
6 & & 75$\times$75, multi-scale & 0.89 $\pm$ 0.02 & 0.61 $\pm$ 0.07 & 0.89 $\pm$ 0.01 & 0.83 $\pm$ 0.07 & 0.94 $\pm$ 0.01 & 0.86 $\pm$ 0.02 & 0.74 $\pm$ 0.03 & 0.86 $\pm$ 0.01\
7 & & **3 patch sizes** & 0.92 $\pm$ 0.02 & 0.69 $\pm$ 0.06 & 0.91 $\pm$ 0.02 & 0.88 $\pm$ 0.05 & 0.96 $\pm$ 0.00 & 0.87 $\pm$ 0.02 & 0.84 $\pm$ 0.01 & 0.91 $\pm$ 0.02\
8 & & 3 patch sizes, slices & 0.92 $\pm$ 0.01 & 0.68 $\pm$ 0.08 & 0.90 $\pm$ 0.01 & 0.89 $\pm$ 0.05 & 0.96 $\pm$ 0.00 & 0.87 $\pm$ 0.02 & 0.84 $\pm$ 0.01 & 0.91 $\pm$ 0.02\
9 & & 3 patch sizes, ortho & 0.93 $\pm$ 0.01 & 0.68 $\pm$ 0.06 & 0.90 $\pm$ 0.01 & 0.89 $\pm$ 0.03 & 0.96 $\pm$ 0.00 & 0.88 $\pm$ 0.02 & 0.83 $\pm$ 0.01 & 0.91 $\pm$ 0.01\
10 & Young adults & **3 patch sizes** & 0.95 $\pm$ 0.01 & - & 0.85 $\pm$ 0.01 & 0.85 $\pm$ 0.04 & 0.94 $\pm$ 0.01 & 0.92 $\pm$ 0.01 & 0.91 $\pm$ 0.01 & -\
11 & & 3 patch sizes, ortho & 0.96 $\pm$ 0.01 & - & 0.88 $\pm$ 0.01 & 0.84 $\pm$ 0.04 & 0.94 $\pm$ 0.01 & 0.93 $\pm$ 0.02 & 0.91 $\pm$ 0.01 & -\
\
& **Image set** & **Experiment** & **CB** & **mWM** & **BGT** & **vCSF** & **(u)WM** & **BS** & **cGM** & **eCSF**\
1 & Cor. 30 wks & 25$\times$25 patch & 8.12 $\pm$ 1.25 & 2.83 $\pm$ 0.94 & 3.82 $\pm$ 0.99 & 1.53 $\pm$ 0.65 & 0.40 $\pm$ 0.07 & 2.09 $\pm$ 0.74 & 0.21 $\pm$ 0.03 & 0.18 $\pm$ 0.04\
2 & & 25$\times$25$\times$25 patch & 4.76 $\pm$ 0.95 & 1.40 $\pm$ 0.62 & 2.12 $\pm$ 0.55 & 0.85 $\pm$ 0.58 & 0.31 $\pm$ 0.04 & 0.91 $\pm$ 0.09 & 0.21 $\pm$ 0.03 & 0.19 $\pm$ 0.04\
3 & & 51$\times$51 patch & 0.47 $\pm$ 0.23 & 1.41 $\pm$ 0.64 & 0.79 $\pm$ 0.18 & 0.33 $\pm$ 0.09 & 0.19 $\pm$ 0.01 & 0.53 $\pm$ 0.21 & 0.19 $\pm$ 0.04 & 0.14 $\pm$ 0.03\
4 & & 51$\times$51$\times$51 patch & 1.82 $\pm$ 0.71 & 1.97 $\pm$ 0.70 & 2.66 $\pm$ 1.30 & 1.58 $\pm$ 0.65 & 0.36 $\pm$ 0.02 & 0.88 $\pm$ 0.13 & 0.25 $\pm$ 0.04 & 0.26 $\pm$ 0.05\
5 & & 75$\times$75 patch & 0.32 $\pm$ 0.16 & 1.33 $\pm$ 0.53 & 0.63 $\pm$ 0.16 & 0.29 $\pm$ 0.11 & 0.16 $\pm$ 0.01 & 0.36 $\pm$ 0.09 & 0.15 $\pm$ 0.02 & 0.15 $\pm$ 0.02\
6 & & 75$\times$75, multi-scale & 0.46 $\pm$ 0.18 & 1.19 $\pm$ 0.51 & 0.56 $\pm$ 0.13 & 0.33 $\pm$ 0.11 & 0.19 $\pm$ 0.02 & 0.29 $\pm$ 0.04 & 0.19 $\pm$ 0.03 & 0.18 $\pm$ 0.03\
7 & & **3 patch sizes** & 0.42 $\pm$ 0.32 & 0.90 $\pm$ 0.70 & 0.49 $\pm$ 0.20 & 0.23 $\pm$ 0.07 & 0.12 $\pm$ 0.01 & 0.29 $\pm$ 0.02 & 0.13 $\pm$ 0.02 & 0.10 $\pm$ 0.02\
8 & & 3 patch sizes, slices & 0.33 $\pm$ 0.15 & 0.72 $\pm$ 0.37 & 0.55 $\pm$ 0.15 & 0.25 $\pm$ 0.08 & 0.12 $\pm$ 0.01 & 0.35 $\pm$ 0.04 & 0.11 $\pm$ 0.01 & 0.10 $\pm$ 0.02\
9 & & 3 patch sizes, ortho & 0.19 $\pm$ 0.05 & 0.65 $\pm$ 0.56 & 0.70 $\pm$ 0.38 & 0.21 $\pm$ 0.05 & 0.12 $\pm$ 0.01 & 0.25 $\pm$ 0.04 & 0.12 $\pm$ 0.02 & 0.10 $\pm$ 0.02\
10 & Young adults & **3 patch sizes** & 1.17 $\pm$ 0.80 & - & 0.75 $\pm$ 0.07 & 0.52 $\pm$ 0.12 & 0.21 $\pm$ 0.05 & 0.64 $\pm$ 0.28 & 0.28 $\pm$ 0.06 & -\
11 & & 3 patch sizes, ortho & 0.65 $\pm$ 0.19 & - & 0.64 $\pm$ 0.20 & 0.77 $\pm$ 0.67 & 0.20 $\pm$ 0.05 & 0.52 $\pm$ 0.35 & 0.27 $\pm$ 0.06 & -\
Discussion {#sec:discussion}
==========
We have presented a method for the automatic segmentation of MR brain images, which has been evaluated on manually segmented preterm neonatal and adult images. Accurate segmentation results were obtained in images acquired at different ages (preterm neonatal vs. ageing adult) and with different acquisition protocols (coronal vs. axial, T~2~- vs. T~1~-weighted). The method uses CNNs to automatically extract the relevant information from the training data to learn convolution kernels, which allows adaptation to the problem at hand. Post-processing of the segmentation results, which is typically performed in brain image segmentation tasks, was not necessary here.
The method achieved accurate segmentations in terms of Dice coefficients for all tissue classes except mWM in the neonatal images. mWM consists, especially in the images acquired at 30 weeks PMA, of very few voxels, hence each mislabelled voxel strongly influences the Dice coefficient. Furthermore, this tissue is poorly visible in T~2~-weighted images and slightly better visible in T~1~-weighted images, which makes the manual annotation very difficult and prone to inter-observer variability [@Isgu15]. This consequently has influence on the performance of the neighbouring tissue classes, which can be seen from e.g. the performance on BS for the coronal images acquired at 40 weeks PMA (Table \[tab:diceresults\]: Experiment 3). Even though the performance in terms of Dice coefficients is lowest for mWM, the location of the tissue class is quite well recognised by the method, even based on T~2~-weighted images only. This can be seen from the mean surface distances (Table \[tab:diceresults\]) and from Figure \[fig:segmentations\]: e.g. for the coronal images acquired at 40 weeks PMA, where mWM is recognised but does not follow the exact shape of the manual segmentation.
The method uses patches of three different sizes. Because of the highly anisotropic voxel sizes in 4 of the 5 evaluated image sets, the use of orthogonal or 3D patches would result in anisotropic patches, non-square/cubic patches, or strongly resampled data. Therefore, only in-plane information was used, i.e. the volumetric images were segmented using information from the imaging plane only. This corresponds to the way a human observer performs annotation of these images: by selecting voxels that belong to each of the tissue types in a single image plane based on trained anatomical knowledge. This approach is further motivated by additional experiments using patches from consecutive slices and using (resampled) orthogonal patches (Table \[tab:diceseparatepatches\]: Experiments 8, 9 and 11), which did not provide information that allowed more accurate classification. The performance in terms of average Dice coefficient over all tissue classes remained 0.87 for the anisotropic images acquired at 30 weeks PMA, and improved slightly, from 0.90 to 0.91, for the (nearly) isotropic images of young adults. The limited increase in performance for the images of young adults was possibly influenced by the manual segmentations, which have been performed in the coronal plane and are therefore less accurate in the other planes. Furthermore, experiments with interpolated 3D patches of single sizes suggested that 3D convolutions also do not necessarily result in increased performance (Table \[tab:diceseparatepatches\]: Experiments 2 and 4), possibly influenced by the large increase in number of weights and biases in the optimisation. The performance of a multi-scale 3D CNN was not evaluated because training a network with three large 3D input patches would become computationally infeasible.
The results using each of the three patches separately show that each patch focusses on a different aspect of the segmentation problem. The smallest patch allows detailed analysis of local texture but misses spatial consistency. The largest patch results in a smooth segmentation but misses small details. This can especially be seen for the segmentation of cGM; the average Dice coefficient for each of the patch sizes separately is 0.78, while the combined patches result in an average Dice coefficient of 0.84 (Table \[tab:diceseparatepatches\]: Experiments 1, 3, 5 and 7). This can also be observed in Figure \[fig:zoomedsegmentations\]: the internal CSF in the lateral sulcus is recognised using only the smallest patch size, but not with the two larger sizes. On the other hand, spatially inconsistent results are obtained for the segmentation of the hippocampus using only the smallest patch size, whereas the largest patch size shows better consistency in that respect. In both cases, the result with the three patches combined shows the most accurate segmentation. The kernels and responses in the first layer (Figure \[fig:kernels\]) show that different kernels enhance different parts of the image and the larger kernels operate on a larger scale. In addition, the proposed multi-scale approach is compared with the multi-scale approach of Sermanet and LeCun [@Serm11] and shows better performance for every tissue class.
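As an illustration of this multi-scale, in-plane sampling, the sketch below extracts patches of three sizes around a single voxel; it is only a schematic reconstruction (the patch sizes and the zero-padding at the image borders are assumptions, not necessarily the exact implementation evaluated here).

```python
import numpy as np

def multiscale_patches(volume, z, y, x, sizes=(25, 51, 75)):
    """Return in-plane 2D patches of several sizes, all centred on voxel (z, y, x)."""
    slice_2d = volume[z]
    patches = []
    for s in sizes:
        h = s // 2
        # zero-pad so that patches near the image border keep the requested size
        padded = np.pad(slice_2d, h, mode="constant")
        patches.append(padded[y:y + s, x:x + s])
    return patches

# Example with a dummy volume: one patch per scale, each analysed at its own scale.
vol = np.random.rand(50, 256, 256).astype(np.float32)
for p in multiscale_patches(vol, z=25, y=128, x=100):
    print(p.shape)   # (25, 25), (51, 51), (75, 75)
```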
The method is based on T~2~-weighted images for the neonatal images and on T~1~-weighted images in adults. Furthermore, the segmentation of both coronal and axial images was evaluated. This shows that the method is able to adapt to different acquisition protocols based on representative training data. Furthermore, being able to segment the images based on a single T~1~- or T~2~-weighted image, instead of relying on multiple acquisitions, allows omitting registration between images and thus precludes possible registration errors.
The segmentation method was applied to MR images of the developing, the young adult, and the ageing brain. No images acquired between these three age ranges were evaluated, but the method can likely be straightforwardly applied to images acquired at other ages. The method might however be limited in the application to images with abnormalities that are not included in the training set.
Additional evaluation on the segmentation of 134 classes, as provided in the MICCAI challenge on multi-atlas labelling, demonstrated that the approach can straightforwardly be applied to a different segmentation task, without any parameter tuning. This resulted in an overall average Dice coefficient of 0.7353, as compared with 0.725 reported by de Brébisson et al. [@Breb15].
A brain mask [@Smit02a] is used to limit the number of samples considered in the classification. Because the results of this brain mask were not always consistent for the images of ageing adults, a very conservative setting for the fractional intensity parameter was chosen for these images such that almost the whole head was included in the mask. It might be possible to completely omit the use of a brain mask, but this would also increase the computational load.
CNNs are known to need a large number of training samples. Because we applied CNNs in a voxel classification task, many training samples were available. They were, however, extracted from a limited number of training images. More training images could therefore increase the performance. In addition, including diverse training data from different data sets in a single training step could make the method more general and would thereby avoid the need to retrain the network for each new data set. This generalisability over a more diverse data set might, however, be achieved at the cost of a decreased performance on each of the data sets that are included.
Many adjustments to the architecture of the network are possible, including more or different patch and kernel sizes, a different number of convolution layers, or a different number of nodes or layers in the fully connected network. Several settings were evaluated in preliminary experiments, but future work could focus on additional optimisation in this respect. Note that the network architecture provides a search space for the method and all weights and biases are optimised by the system. Small variations in the architecture are therefore not likely to have a large effect on the performance.
Conclusion {#sec:conclusion}
==========
The presented CNN method for the automatic segmentation of MR brain images shows accurate segmentation results in images acquired at different ages and with different acquisition protocols.
Acknowledgment {#sec:acknowledgement .unnumbered}
==============
The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used in this research.
The authors thank Bennett Landman for providing the data of the MICCAI challenge on multi-atlas labelling.
[^1]: This paper has been published in May 2016 as: Moeskops, P., Viergever, M. A., Mendrik, A. M., de Vries, L. S., Benders, M. J., and Išgum, I. (2016). Automatic segmentation of MR brain images with a convolutional neural network. *IEEE Transactions on Medical Imaging,* 35(5), 1252–1261.
[^2]: <https://masi.vuse.vanderbilt.edu/workshop2012>
[^3]: <http://neobrains12.isi.uu.nl/mainResults.php>
[^4]: <http://mrbrains13.isi.uu.nl/results.php>
[^5]: \[fn:nbs12\]<http://neobrains12.isi.uu.nl/reference.php>
[^6]: \[fn:mrbs13\]<http://mrbrains13.isi.uu.nl/details.php>
[^7]: <https://masi.vuse.vanderbilt.edu/workshop2012>
[^8]: <http://www.oasis-brains.org/>
[^9]: <http://neuromorphometrics.org:8080/Seg/>
[^10]: <http://braincolor.mindboggle.info/protocols/>
[^11]: <http://neobrains12.isi.uu.nl/mainResults.php>
|
---
author:
- 'Didier Caenepeel$^1$ and Martin Leblanc$^{1,2}$'
title: 'Effective potential for nonrelativistic non-Abelian Chern-Simons matter system in constant background fields$^*$'
---
[^1][This work is supported in part by funds provided by the Natural Sciences and Engineering Research Council of Canada and the Fonds pour la Formation de Chercheurs et l’Aide à la Recherche.]{}
***$^1$ Laboratoire de Physique Nucléaire,***
***$^2$ Centre de Recherches Mathématiques,***
***Université de Montréal***
***Case postale 6128, succ. centre-ville***
***Montréal, (Qc), Canada***
***H3C-3J7***
Chern-Simons theories have been studied in many contexts in the last decade, from general relativity to condensed matter systems. An important line of development occurred when it was shown that classical relativistic charged scalars minimally coupled to an Abelian Chern-Simons gauge field in (2+1) spacetime dimensions have vortex (soliton) solutions of self-dual equations when the coupling constant takes special values in a $\phi^6$-theory \[1,2\]. The presence of vortex solutions permits the emergence of new mechanisms for anyon superconductivity \[3\]. Evidence has been found showing that the existence of such systems possessing vortex solutions is due to the presence of an $N=2$ supersymmetry obtained by adding fermion fields in an appropriate way \[4,5\].
It is more reasonable to think that the physics of superconductors takes place at lower energies and should be described by a nonrelativistic system. It turns out that the same statements as above can be made for the corresponding nonrelativistic field theory. Specifically, by taking the limit $c\rightarrow\infty$ ($c$ being the speed of light), one obtains a field theory of interacting nonrelativistic scalar fields minimally coupled to an Abelian Chern-Simons gauge field \[6,7\]. This theory also contains self-dual vortex (soliton) solutions when the coupling constant takes a special value \[7\]. Perhaps more surprisingly in the nonrelativistic case, the self-duality originates also from $N=2$ supersymmetry \[8\].
Much work has already been done in generalizing these ideas to non-Abelian theories. Relativistic and nonrelativistic models of matter fields coupled to a non-Abelian Chern-Simons field \[9-11\] have been studied at the classical level; however, the relation between them in the limit $c\rightarrow\infty$ has never been analysed as above. Nevertheless, non-Abelian self-dual solitons exist in the corresponding nonrelativistic Chern-Simons field theory \[11\]. Supersymmetric extensions of the relativistic system exhibit the same relation between supersymmetry and self-duality as in the Abelian theories \[12,13\], and it could be interesting to see if a supersymmetric extension of the nonrelativistic non-Abelian Chern-Simons matter system is possible.
The quantization of the above models has been discussed in various contexts. In the case of the pure non-Abelian Chern-Simons theory, Pisarski [*et al*]{} \[14\] have shown, using a perturbative analysis with dimensional regularization, that a one-loop radiative correction to the Chern-Simons coupling constant $\kappa\to \kappa + {c_2(G)\over 2}$ (a shift by the Casimir of the group) occurs. The same result was then obtained by Witten \[15\] with a saddle point quantization around pure gauge vector potentials. Their calculations were confirmed by using a modified Pauli-Villars method \[16\], an $F^2$-type regulator \[10\] and a modified operator regularization method for determining phases of determinants \[17\]. However, this shift of the Chern-Simons coupling constant may be absent if a variant of dimensional regularization \[10\] or a BRST-invariant regulator is used \[18\].
When relativistic matter fields are included, Chen [*et al.*]{} showed that infinite renormalization for the matter fields as well as for the Chern-Simons gauge field is necessary at two loops and therefore that the fields obtain nontrivial anomalous dimensions. Also, the $\beta$-function for the gauge coupling constant is zero to two-loop order \[10\].
In the case when nonrelativistic matter fields are coupled to the Abelian Chern-Simons field, we know that the theory experiences conformal symmetry breaking at the quantum level unless the coupling constant takes the self-dual value, and that this result holds up to three-loop order \[19-22\]. Only recently was a perturbative analysis performed for the scattering of scalars in the nonrelativistic non-Abelian theory using Feynman diagrams \[23\]. Again, the conformal symmetry is restored upon choosing the self-dual point. All these computations were performed either with conventional Feynman diagrammatics or within a functional method.
Our goal in this paper is to complement the above discussion and to compute the scalar field effective potential of the nonrelativistic non-Abelian Chern-Simons system with the help of a functional method.
We start with an SU(2) non-Abelian Chern-Simons system action \[${\rm diag}\;\;\eta~=(+,-,-)$\]
$$\eqalignno {
S= \int dt d^2{\bf x} \;\;
&\Bigl \{- \kappa \epsilon^{\alpha\beta\gamma}{\rm Tr}
(A_\alpha \partial_\beta A_\gamma + {2\over 3}A_\alpha A_\beta A_\gamma)
+i\phi^\dagger D_t \phi
- {1 \over 2 } |{\bf D}\phi|^2
-{\lambda_{pqrs}\over 4}\phi^\dagger_p\phi^\dagger_q \phi_r \phi_s
\Bigr \} \cr
&
&(1.1) \cr }$$ where the gauge fields belong to the su(2) Lie algebra $A_\mu=i{A_\mu^a \tau^a\over 2} $, and $ D_t = \partial_t + i A_0^a {\tau^a\over 2}$ and ${\bf D}={\bf \nabla }-i{\bf A}^a {\tau^a\over 2}$ are the time and space covariant derivatives respectively. $\phi_p$ is the two component nonrelativistic scalar field, $p=1,2$. The self-interaction coupling constants satisfy $\lambda_{pqrs}= \lambda_{qpsr}$ since the fields are bosonic and $\lambda^*_{pqrs}=\lambda_{rspq}$ for the Lagrangian to be real. $\tau^a$ are the Pauli matrices which satisfy the usual commutation relations $[{\tau^a\over 2},{\tau^b\over 2}]=\epsilon^{abc}{\tau^c\over 2}$ and trace relation ${\rm Tr} \bigl( {\tau^a\over 2}{\tau^b\over 2}\bigr)
= {1\over 2} \delta^{ab}$. We have omitted the mass parameter since in nonrelativistic systems, it is always possible to set it equal to unity. \[We will use a vector notation: for instance, in the plane the cross product is ${\bf V}\times {\bf W} = \epsilon^{ij}V^iW^j$, the curl of a vector is ${\bf\nabla}\times {\bf V}=\epsilon^{ij}\partial_iV^j$, the curl of a scalar is $({\bf\nabla}\times S)^i=\epsilon^{ij}\partial_jS$ and we shall introduce the notation $\bigl ({\bf A}\times {\bf {\hat z}}\bigr)^i =\epsilon^{ij}A^j$. The notation $x=(t, {\bf x})$ will also be used unless stated otherwise.\]
The action (1.1) enjoys several invariances at the classical level. It is obvious that the matter part of this action is Galilean invariant and conformally invariant \[6,7\]. The presence of the Chern-Simons term as the only kinematical term for gauge fields does not spoil these two sets of invariances as this term is topologically invariant [*i.e.*]{} it is invariant upon any space and time transformations \[9,24\]. Nevertheless, the interesting symmetry is gauge invariance. Let us for a moment forget the self-interacting part of the matter sector. The matter fields are minimally coupled, hence this part is gauge invariant. The self-interacting part however is gauge invariant only for $$\lambda_{1111}=2\lambda_{1212}=\lambda_{2222} \equiv \lambda \eqno (1.2)$$ with the other constants vanishing. The Chern-Simons term is not invariant against gauge transformation; rather it changes by total derivatives. Under special circumstances the total derivatives can be set to zero. However, if we need to consider large gauge transformations, only the exponential of $i\times {\rm action}$ need be gauge invariant. In this case, we speak of “gauge invariant action" if the quantization condition $4\pi\kappa={\rm integer}$ applies \[9\]. We will use these assessments in the course of the calculation.
To compute the scalar field effective potential of the action (1.1), we proceed with the functional method of Jackiw \[25\], which is a useful way to evaluate the effective potential without having to use a classical background field, in conjunction with the operator regularization method \[26,27\]. The difference here with other evaluations of effective potentials is that the electromagnetic field is linearly related to the matter field through Gauss’s law. The procedure involves shifting the fields present in the action by constants. Shifting the matter field by a constant implies that the magnetic field must be constant; consequently, the background gauge field could contribute quantum effects to the scalar field effective potential. In the Abelian version of this model, the same procedure was used to compute the effective potential for scalars \[22\]. However in that case, the magnetic field $B=\nabla\times {\bf A}(x)= {\rm constant}$ was satisfied by a vector potential, which depended linearly on $x$. In the non-Abelian case, it is possible to satisfy Gauss’s law with a constant vector potential \[see below\].
The paper is organised as follows: In the next section, we set up the problem and provide the classical equations of motion to show how we satisfy those for the electromagnetic fields with constant background fields. We show that although local gauge invariance is lost through such a choice of background fields, global gauge invariance is retained; we gauge fix in a Galilean fashion in a gauge reminiscent of the $R_\xi$-gauge, which is globally gauge invariant. In the third section, we present the results of the calculation and in the last section, we summarize the work and conclude.
The functional evaluation of the effective potential starts with the definition of a new shifted action: $$\eqalignno {
S_{\rm new}&=S\Bigl\{\phi_p(x)=\varphi_p+\pi_p(x);
A^a_\mu (x)=a_\mu^a + Q^a_\mu (x)\Bigr \}\cr
&\quad -S\Bigl \{ \varphi_p, a^a_\mu \Bigr\}
- {\rm terms\;linear\;in\;quantum\;fields} &(2.1)\cr }$$ where we shift the scalar field and the vector potential by constant fields in such a way that the classical equations of motion for the electromagnetic fields are satisfied. It is the action (2.1) that enters the path integral for the evaluation of the effective potential.
Let us for a moment look at the classical equations of motion of the action (1.1) to see how they respond to the constant shift. The gauge covariant classical equations of motion for arbitrary fields are $$\eqalignno {
B^a&\equiv {\bf \nabla}\times {\bf a}^a + {1\over 2}\epsilon^{abc}
{\bf a}^b\times {\bf a}^c
=-{1\over 2\kappa} \phi^\dagger\tau^a \phi &(2.2a) \cr
{\bf E}^a &\equiv - {\bf \nabla} a_0^a - \partial_t {\bf a}^a
+\epsilon^{abc}a_0^b{\bf a}^c
= {1\over 4\kappa} {\bf J}^a\times {\bf {\hat z}}
&(2.2b) \cr
i(D_t&\phi)_p+{1\over 2} ({\bf D}\cdot{\bf D}\phi)_p
-{1\over 2}\lambda_{pqrs}\phi^*_q\phi_r\phi_s = 0 &(2.2c) \cr }$$ where the current is given by ${\bf J}^a = {1\over 2i}
\bigl [\phi^\dagger({\tau^a\over 2}) ({\bf D}\phi) -
({\bf D}\phi)^\dagger({\tau^a\over 2}) \phi \bigr ] $ and ${\bf D}$ is the covariant derivative with respect to the background gauge field.
The equation for the magnetic field (2.2a) is recognized as Gauss’s law. Since scalar fields are shifted by constants, Eqs.(2.2) have to be read with $\phi_p=\varphi_p={\rm constant}$. To maintain consistency with Gauss’s law, we need to choose a background vector potential ${\bf a}^a$ such that the magnetic field is constant throughout the plane. The simplest choice is to take a constant background vector potential \[28\]. We can also choose $a_0^a$ constant with the help of Eq.(2.2b). Since Eqs.(2.2) with constant background fields are now globally gauge covariant, without fear of losing generality, we can find an explicit solution to Eqs.(2.2a,b). If we choose, for instance, $\varphi_p=(v,0)$ and if we label the SU(2) group structure with colors $a=(1,2,3)\equiv (Y,B,R)$ then we find that ${\bf a}_Y=(\sqrt {{v^*v\over 2\kappa}},0)$, ${\bf a}_B=(0,-\sqrt {{v^*v\over 2\kappa}})$, $a^0_R={v^*v\over 16\kappa}$ and the other components vanishing, is a solution to Eqs.(2.2a,b). Of course, this particular solution does not satisfy the equation of motion for the scalar field, Eq.(2.2c), unless $\varphi_p=0$ or if the coupling constants satisfy $\lambda=-{5\over 16\kappa}$. We will use the above solution for $a_\mu^a$ and $\varphi_p$ in the definition of $S_{\rm new}$ in Eq.(2.1) and extrapolate at the end of the calculation the form of the effective potential since it must be globally gauge invariant \[see below\].
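As a quick cross-check of this particular background (not part of the original derivation; the numerical values of $v$ and $\kappa$ below are arbitrary test inputs), one can verify numerically that the constant fields quoted above satisfy Eqs. (2.2a) and (2.2b):

```python
import numpy as np

kappa, v = 0.7, 1.3                     # arbitrary test values, v taken real
rho = v * v
c = np.sqrt(rho / (2 * kappa))

tau = [np.array([[0, 1], [1, 0]], dtype=complex),      # tau^1  (Y)
       np.array([[0, -1j], [1j, 0]], dtype=complex),   # tau^2  (B)
       np.array([[1, 0], [0, -1]], dtype=complex)]     # tau^3  (R)

phi = np.array([v, 0], dtype=complex)
a_vec = [np.array([c, 0.0]), np.array([0.0, -c]), np.array([0.0, 0.0])]  # a_Y, a_B, a_R
a0 = [0.0, 0.0, rho / (16 * kappa)]                                      # only a^0_R nonzero

eps = np.zeros((3, 3, 3))               # su(2) structure constants
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

cross = lambda V, W: V[0] * W[1] - V[1] * W[0]          # planar cross product

# Gauss's law (2.2a): for constant a^a the curl term drops out.
for a in range(3):
    lhs = 0.5 * sum(eps[a, b, d] * cross(a_vec[b], a_vec[d])
                    for b in range(3) for d in range(3))
    rhs = -(phi.conj() @ tau[a] @ phi).real / (2 * kappa)
    assert np.isclose(lhs, rhs)

# Eq. (2.2b): E^a = eps^{abc} a_0^b a^c must equal (J^a x z_hat)/(4 kappa),
# with D phi = -i a^b (tau^b/2) phi for constant phi and constant a^b.
Dphi = [-1j * sum(a_vec[b][i] * (tau[b] / 2) @ phi for b in range(3)) for i in range(2)]
for a in range(3):
    E = sum(eps[a, b, d] * a0[b] * a_vec[d] for b in range(3) for d in range(3))
    J = np.array([(phi.conj() @ (tau[a] / 2) @ Dphi[i]).imag for i in range(2)])
    assert np.allclose(E, np.array([J[1], -J[0]]) / (4 * kappa))

print("Constant background satisfies Eqs. (2.2a) and (2.2b).")
```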
We now turn to a proof that global gauge invariance is retained upon quantizing this theory around constant background fields. We follow the discussion of Abbott \[29\]. Under local gauge transformations $$\eqalignno {
A^\prime_\mu &= U^{-1}\partial_\mu U + U^{-1}A_\mu U &(2.3a) \cr
\phi^\prime &= U^{-1} \phi &(2.3b)\cr}$$ or infinitesimally with $U=\exp i\omega^a \tau^a/2$ $$\eqalignno {
\delta A^a_\mu&=\partial_\mu\omega^a-\epsilon^{abc}A_\mu^b \omega^c &(2.4a)\cr
\delta\phi_p&=-i\omega^a(\tau^a/2)_{pq}\phi_q &(2.4b)\cr }$$ the action of Eq.(1.1) transforms according to $$\delta S = (4\pi\kappa)(2\pi)w(U) \eqno (2.5)$$ where $w(U)$ is the winding number and the usual quantization condition over $4\pi\kappa~={\rm integer}$ follows if we want $\exp \{i\times {\rm action}\}$ to be gauge invariant under (large) gauge transformation.
In Jackiw’s approach the generating functional is defined as $$\eqalignno {
Z(\varphi_p; a^a_\mu)&= \int \delta \pi \delta Q \;\;det\bigl [
{\delta G^a\over \delta \omega^b } \bigr ] &(2.6) \cr
\exp i \int dt\;d^2{\bf x}
&\Bigl [ {\cal L}
(\varphi_p+\pi_p; a^a_\mu+Q^a_\mu)+{1\over 2\xi} G_a^\dagger G^a
-{\cal L}(\varphi_p, a^a_\mu)
- {\delta {\cal L}\over \delta A}|_{a,\varphi}\cdot Q
-{\delta {\cal L}\over \delta \phi}|_{a,\varphi}\cdot \pi \Bigr] \cr}$$ where $\varphi_p$ and $a^a_\mu$ are constants, and ${\delta G^a\over \delta \omega^b } $ is the ghost contribution and is given by the functional derivative of the gauge-fixing term under the infinitesimal quantum gauge transformation $\delta Q^a_\mu~=\partial_\mu\omega^a-\epsilon^{abc}(a_\mu^b+Q_\mu^b)\omega^c$. Then, just as in the conventional approach, the effective potential at vanishing external current and vanishing quantum field argument is $$\eqalignno {
V_{\rm eff}[\varphi_p, a^a_\mu]&= {i\over \int d^3x}
\ln Z [\varphi_p, a^a_\mu] &(2.7) \cr}$$
It remains to choose the background field gauge condition, which turns out to be rather difficult for the problem at hand. The reason is as follows: when matter is not present, the Chern-Simons theory is defined without the introduction of a metric. Upon choosing the gauge-fixing condition the theory could lose its topological character \[15\]. Indeed, Witten chose a family of Lorentz-type gauges in his derivation of the one-loop quantum correction to the pure non-Abelian Chern-Simons theory. He found, however, that the topological property of the action remained unaffected. In the case where matter is coupled to the Chern-Simons theory, we have already chosen a metric and we must preserve as many of the symmetries present as possible. In our case, we have to preserve the Galilean symmetry. We therefore choose $$\eqalignno {
G^a&= \nabla\cdot{\bf Q^a}+{i\over 2}\xi
\pi^\dagger \tau^a \varphi &(2.8)\cr }$$ and note that the gauge-fixing resembles the $R_\xi$-type gauge-fixing conditions.
Now, by making the following change of variables for the quantum fields $$\eqalignno {
Q_\mu &\to Q^\prime_\mu = U^{-1}Q_\mu U \cr
\pi &\to \pi^\prime = U^{-1}\pi &(2.9) \cr }$$ where $U$ is a gauge transformation with constant $\omega^a$, it is easy to show that $Z[\varphi_p,a^a_\mu]$ and hence the effective potential are invariant under the constant background gauge transformation $$\eqalignno {
a_\mu &\to a^\prime_\mu=U^{-1}a_\mu U \cr
\varphi &\to \varphi^\prime=U^{-1}\varphi &(2.10)\cr }$$ since each term is invariant. It is interesting to note that in retaining only global gauge invariance, the gauge-fixing condition becomes simpler since it is not written in an explicit background gauge covariant form \[29\]. This will of course be advantageous in the course of the explicit calculation since it will enable us to integrate out the gauge and matter fields by performing determinants as they are now diagonal in Fourier space \[see below\]. We now turn to the calculation of the effective potential in the $R_\xi$-gauge.
We perform the calculation of the scalar field effective potential following the procedure set up in the previous section. The quadratic part in quantum fields of the action appearing in Eq. (2.6) upon using the gauge-fixing condition of Eq. (2.8) and $\varphi_p=(v,0)$, ${\bf a}_Y=(\sqrt {{v^*v\over 2\kappa}},0)$, ${\bf a}_B=(0,-\sqrt {{v^*v\over 2\kappa}})$, $a^0_R={v^*v\over 16\kappa}$ and the other components vanishing is $$\eqalignno {
S=\int dt \; d^2{\bf x} \; &
\Bigl\{ {\kappa\over 2}(\partial_t{\bf Q}_a)\times {\bf Q}_a
-\kappa Q_a^0 {\bf \nabla}\times {\bf Q}_a
+ {1\over 2\xi}({\bf \nabla}\cdot {\bf Q}_a)^2 - {\rho\over 8}{\bf Q}_a\cdot
{\bf Q}_a\cr
&+i\pi^\dagger(D_t)\pi - {1\over 2}|{\bf D}\pi|^2 + {\cal L}_{\rm S.I}
+{\xi\over 8} (\pi^\dagger\tau^a \varphi)(\varphi^\dagger\tau^a\pi) \cr
&+ R^0Q^0_R + B^0Q^0_B + Y^0Q^0_Y +
{\bf R}\cdot {\bf Q}_R + {\bf B}\cdot {\bf Q}_B +
{\bf Y}\cdot {\bf Q}_Y \Bigr\} &(3.1)\cr }$$ where $\rho =v^*v$, ${\cal L}_{\rm S.I.}$ stands for the quadratic self-interacting part in $\pi$-fields, which will be treated later, and the currents are given by $$\eqalignno{
R^0&=[ j_R + \kappa ({\bf a}_B\times {\bf Q}_Y -
{\bf a}_Y \times {\bf Q}_B) ]\cr
B^0&= j_B \cr
Y^0&=j_Y &(3.2) \cr
{\bf R} &= \kappa [ {\bf a}_B Q^0_Y - {\bf a}_Y Q^0_B ] \times {\bf {\hat z}}
\cr
{\bf B} &= {1\over 2}{\bf a}_B j_R + \kappa a^0_R {\bf Q}_Y
\times {\bf {\hat z}} \cr
{\bf Y} &= {1\over 2}{\bf a}_Y j_R \cr }$$ with the useful definition for matter-currents $$\eqalignno{
j_R &= -{1\over 2}(\pi^*_1 v + v^*\pi_1 ) \cr
j_B &= -{i\over 2}(\pi^*_2 v - v^*\pi_2 ) &(3.3) \cr
j_Y &= -{1\over 2}(\pi^*_2 v + v^*\pi_2 ) \quad .\cr }$$
We are now ready to proceed with the functional integration in the $\xi\to 0$ limit and up to include ${\cal O}(v^4)$ contributions \[we refer to ${\cal O}(v^4)$ whenever we have ${\cal O}(\lambda^2)$, ${\cal O}({\lambda\over \kappa})$ or ${\cal O}({1\over \kappa^2})$\]. The contribution coming from the ghosts to one-loop order is given by the determinant of the functional derivative of the gauge-fixing term Eq. (2.8) with respect to an infinitesimal quantum gauge transformation as above Eq. (2.7) without terms having quantum fields $$\eqalignno {
{\rm det} {\delta G^a\over \delta \omega^b} &=
{\rm det} \Bigl [-\nabla^2 \delta^{ab} -\epsilon^{acb}{\bf a}_c\cdot\nabla
-{\xi\over 4}(v^*, 0) \tau^b\tau^a {v\choose 0} \Bigr ]\quad . &(3.4) \cr }$$ This contribution is easily calculated since it factorizes from the path integral and the determinant is performed on a $3\times 3$ matrix. The result to ${\cal O}(\xi)$ is $$\eqalignno {
V_{\rm ghosts}
&= -\tr \ln \Bigl ( 1- {( {\bf a}_B\cdot {\bf p})^2\over {\bf p}^4}
- {( {\bf a}_Y\cdot {\bf p})^2\over {\bf p}^4} \Bigr ) &(3.5) \cr }$$ where the trace is now taken only on energy/momentum space.
Next, we integrate out the quantum gauge fields by integrating first over the R-color. The first line in Eq. (3.1) is diagonal in the (R,B,Y) colors, however, the last line in Eq. (3.1) mixes the Q’s with different colors. For instance in the R-sector, the structure of the exponent in the functional integration is $-{1\over 2}Q^\mu_R \Delta^{-1}_{\mu\nu}Q^\nu_R +Q^\mu_R \; R_\mu$ where in Fourier space (i$\partial_\mu = p_\mu$) $${\Delta}^{-1}(v ;\omega, {\bf p})=
\pmatrix {0& -m & n \cr
m &{\rho\over 4}-{1\over \xi}p^1p^1 &-i\kappa\omega-{1\over\xi}
p^1p^2\cr
-n &i\kappa\omega-{1\over \xi}p^1p^2 &
{\rho\over 4}-{1\over \xi}p^2p^2 \cr }\quad ,\eqno (3.6)$$ with $m=ic\kappa p^2 $ and $n=ic\kappa p^1$. Upon the usual change of variable, one obtains a contribution to the effective potential of the type $\ln \det^{-1/2}\Delta^{-1}_{\mu\nu}$ and a modification to the original action by the amount ${1\over 2}R^\mu \Delta_{\mu\nu}R^\nu$, which does not contain any $Q^\mu_R$-dependence with $${\Delta}(v ;\omega, {\bf p})= -{1\over c^2\kappa^2 {\bf p}^2 }
\pmatrix {{\rho\over 4} & -m & n \cr
m &0 & 0 \cr
-n &0 & 0\cr }\quad +{\cal O}(\xi). \eqno (3.7)$$ The contribution to the effective potential vanishes in the limit $\xi\to 0$, however the amount ${1\over 2}R^\mu \Delta_{\mu\nu}R^\nu$ modifies the B-sector and the Y-sector and provides also contributions exclusive to the matter sector. For instance in the B-sector, the structure of the exponent in the functional integration is now $-{1\over 2}Q^\mu_B \Delta^{-1}_{\mu\nu}Q^\nu_B
-{1\over 2}Q^\mu_B \Theta_{\mu\nu}Q^\nu_B + Q^\mu_B\; B^\prime_\mu$ with currents given by $$\eqalignno {
B^\prime_0 &= B_0 + j_R { (i{\bf p}\cdot {\bf a}_Y)\over {\bf p}^2 } +
{ (i{\bf p}\cdot {\bf a}_Y)\over {\bf p}^2}
(\kappa {\bf a}_B\times {\bf Q}_Y) &(3.8)\cr
{\bf B}^\prime &= {\bf B} + (\kappa {\bf a}_Y\times {\bf {\hat z}})
{ (i{\bf p}\cdot {\bf a}_B)\over {\bf p}^2 } Q^0_Y -
{\rho\over 4\kappa {\bf p}^2} j_R {\bf a}_Y\times {\bf {\hat z}}
+{\rho \over 4 {\bf p}^2 } ({\bf a}_B\cdot{\bf a}_Y) {\bf Q}_Y
- {\rho\over 4 {\bf p}^2} {\bf a}_B ({\bf a}_Y\cdot{\bf Q}_Y) \cr }$$ and a matrix $\Theta$ which depends only on background fields. Upon integrating the B-sector, one gets two contributions to the effective potential, one that modifies the structure of the action in the Y-sector, and one which contributes only to the matter sector. The first contribution is $\ln \det^{-1/2}\Delta^{-1}_{\mu\nu}$, which vanishes in the $\xi\to 0$ limit, and the second is $\ln \det^{-1/2}\bigl ( 1 + \Delta\times \Theta \bigr )=
-{1\over 2} \ln \bigl ( 1- {({\bf p}\cdot{\bf a}_Y)^2\over {\bf p}^4}\bigr
)^2$. The modification to the action in the Y-sector is ${1\over 2}B^{\prime\mu} \Bigl \{
\bigl [ 1 + \Delta\times \Theta \bigr]^{-1}\Delta\Bigr
\}_{\mu\nu} B^{\prime\nu}$. Upon collecting all terms that depend on the $Q_Y^\mu$ variable, we can integrate the Y-sector in the same way. Finally, the result of the Q-integration divides into a contribution to the effective potential and a part that modifies the matter sector. We get $$\eqalignno {
V_{\rm eff}(v, a_\mu^a) &= {1\over 2} \tr \ln \bigl (
1- {({\bf p}\cdot{\bf a}_Y)^2\over {\bf p}^4}\bigr )^2
+{1\over 2} \tr \ln \bigl ( 1- {({\bf p}\cdot{\bf a}_B)^2\over {\bf p}^4}
-{({\bf p}\cdot{\bf a}_B)^2({\bf p}\cdot{\bf a}_Y)^2\over {\bf p}^8}
\bigr )^2 \cr
&-\tr \ln \bigl ( 1- {( {\bf a}_B\cdot {\bf p})^2\over {\bf p}^4}
- {( {\bf a}_Y\cdot {\bf p})^2\over {\bf p}^4} \bigr )
+{i\over \int d^3x }\ln \int \delta\pi \exp iS_{\rm matter} &(3.9) \cr }$$ where the first contribution in Eq.(3.9) comes from integrating the B-sector while the second term comes from the Y-sector. The third contribution originates from the ghosts sector. Although the limit $\xi \to 0$ should be taken at the end of the calculation, we have carefully dropped terms of ${\cal O}(\xi)$ to clarify the expressions. The modified action $S_{\rm matter}$ is given by $$\eqalignno{
S_{\rm matter} = \int dt\;d^2{\bf x}\Bigl \{ &
i\pi^\dagger(D_t)\pi - {1\over 2}|{\bf D}\pi|^2 +{\cal L}_{{\rm S.I}}
+{\xi\over 8} (\pi^\dagger\tau^a \varphi)(\varphi^\dagger\tau^a\pi) \cr
&-{\rho \over 8\kappa^2} j_R{1\over {\bf p}^2}j_R \cr
&-{\rho \over 8\kappa^2} j_B{1\over {\bf p}^2}j_B
+ {1\over 2\kappa}(j_B-i{ {\bf p}\cdot{\bf a}_Y\over {\bf p}^2 } j_R)
{1\over {\bf p}^2}
(i{\bf p}\times {\bf a}_B) j_R \cr
&-{\rho \over 8\kappa^2} j_Y{1\over {\bf p}^2}j_Y
+{1\over 2\kappa}(j_Y-i{ {\bf p}\cdot{\bf a}_B\over {\bf p}^2 } j_R)
{1\over {\bf p}^2}
(i{\bf p}\times {\bf a}_Y) j_R \Bigr \}&(3.10) \cr }$$ where all expressions have the operators ${\bf p}$ and $\omega$ acting on the right, and the covariant derivatives read $D_t= -i(\omega-{1\over 2}a_0^R\tau^R)$ and ${\bf D}= i({\bf p}-{1\over 2}{\bf a}^a\tau^a)$. The first line comes from the original action while the second is from the R-integration. The third and fourth lines are from B and Y-integration respectively.
Some comments are in order at this point. The ghost contribution in Eq. (3.9) cancels against the gauge field contributions to ${\cal O}(v^4)$, leaving only the remaining functional integration over the matter sector. Indeed, if we had not introduced any matter fields, we would have obtained a vanishing answer, in contrast with refs. \[14-17\] but in agreement with \[10,18,23\].
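This cancellation can be checked directly by expanding the logarithms in Eq. (3.9); the short symbolic sketch below (an illustration added here, with $t$ a bookkeeping parameter of order $v^2$, $y=({\bf p}\cdot{\bf a}_Y)^2/{\bf p}^4$ and $z=({\bf p}\cdot{\bf a}_B)^2/{\bf p}^4$) confirms that the gauge-field and ghost integrands sum to zero through ${\cal O}(v^4)$:

```python
import sympy as sp

y, z, t = sp.symbols('y z t', positive=True)

term_Y     = sp.Rational(1, 2) * sp.log((1 - t*y)**2)             # Y-sector term of Eq. (3.9)
term_B     = sp.Rational(1, 2) * sp.log((1 - t*z - t**2*z*y)**2)  # B-sector term of Eq. (3.9)
term_ghost = -sp.log(1 - t*z - t*y)                               # ghost contribution

total = sp.series(term_Y + term_B + term_ghost, t, 0, 3).removeO()
print(sp.simplify(total))   # 0: the sum vanishes through second order, i.e. O(v^4)
```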
In any case, when matter is present, the effective potential is given by the remaining functional integration over the matter fields. It is not too difficult to see that the structure of the action (3.10) is $$\eqalignno{
\int dt d^2{\bf x}\;\Bigl \{
& {1\over 2} \pi^{*a}_1(x) {\cal D}^{-1}_{ab}(x-x') \pi^b_1(x') +
{1\over 2} \pi^{*a}_2(x) {\cal E}^{-1}_{ab}(x-x') \pi^b_2(x') +
J \pi^*_2 + J^*\pi_2 \Bigr \}\cr
& &(3.11) \cr}$$ where the notation for the scalar fields is $\pi^a_i=(\pi, \pi^*)$ for each $i=1,2$, the current mixing the $\pi$’s is given by $J= {i\over 2} ({\bf a}_-\cdot {\bf p}) \pi_1
+{v\over 4\kappa} { ({\bf p}\times {\bf a}_-)\over {\bf p^2}} j_R$, and the matrix for the $\pi_1$ field in Fourier space to ${\cal O}(\xi)$ is $${\cal D}^{-1}(\varphi_p, a^\mu;\omega, {\bf p})
= \pmatrix { \omega - {1\over 2}{\bf p}^2 + A + {B\over {\bf p}^2}
+ {C\over {\bf p}^4}
&\bigl (-{1\over 2} \lambda -{\rho\over 16\kappa^2{\bf p}^2}
+{f \over 4\kappa{\bf p}^4}\bigr )v^2 \cr
\bigl ( -{1\over 2} \lambda -{\rho\over 16\kappa^2{\bf p}^2}
+ {f\over 4\kappa{\bf p}^4 } \bigr ) (v^*)^2
&-\omega - {1\over 2}{\bf p}^2 + A + {B\over {\bf p}^2} + {C\over {\bf p}^4}
\cr}\eqno (3.12)$$with ${\bf a}_\pm = {\bf a}_B \pm i {\bf a}_Y$, $A= {1\over 8}({\bf a}_+\cdot{\bf a}_-) -a^0_R/2 - \lambda\rho$, $B= -{\rho^2\over 16\kappa^2}$, $C={\rho\over 4\kappa}f$ and $f=-[({\bf p}\cdot{\bf a}_B)({\bf p}\times{\bf a}_Y) +
({\bf p}\cdot{\bf a}_Y)({\bf p}\times{\bf a}_B)]$.
Similarly, the matrix for the $\pi_2$ field is $${\cal E}^{-1}(\varphi_p, a^\mu;\omega, {\bf p})
= \pmatrix { \omega - {1\over 2}{\bf p}^2 + E + {F\over {\bf p}^2}
&0\cr
0
&-\omega - {1\over 2}{\bf p}^2 + E + {F \over {\bf p}^2} \cr
}\eqno (3.13)$$ with $E= {1\over 8}({\bf a}_+\cdot{\bf a}_-) + a^0_R/2 - {1\over
2}\lambda\rho$, and $F= -{1\over 8}{\rho^2\over \kappa^2}$. To perform the functional integration over $\pi_2$ is not difficult. Upon doing it, there remains only to perform the integration over $\pi_1$ and the result, keeping in mind that we are computing up to include ${\cal O}(v^4)$ in the limit $\xi \to 0$, is $$\eqalignno {
V_{\rm eff} (v, a_\mu^a)\int d^3x &=
i\ln \int \delta\pi \exp iS_{\rm matter}
=-{i\over 2}\ln {\rm Det} {\cal E}^{-1}_{ab}
-{i\over 2}\ln {\rm Det} \Bigl \{ {\cal D}^{-1}_{ab} + {\cal M}_{ab}\Bigr \}
\quad
&(3.14) \cr }$$ where the easily found ${\cal M}_{ab}$ matrix appears as a consequence of the mixing between the $\pi$’s and the determinant are taken functionally. Since the operators ${\cal E}^{-1}$ and ${\cal D}^{-1}$ are diagonal in Fourier space, we can write the operatorial form of the final contribution to the effective potential to ${\cal O}(v^4)$ and in the limit $\xi \to 0$ as $$\eqalignno {
V_{\rm eff} (v, a_\mu^a)=&
-{i\over 2} \tr \ln {1\over \mu^{\prime 4}}
\Bigl \{ -\omega^2 + ({\bf p}^2/2-E)^2 -F \Bigr \} \cr
&-{i\over 2} \tr \ln {1\over \mu^{\prime 4}}
\Bigl \{ -\omega^2 + ({\bf p}^2/2- A - {B\over {\bf p^2}} -{C\over {\bf
p^2}})^2
-{1\over 4}\lambda^2 \rho^2 \cr
& \quad\quad\quad +\omega (X_+ - X_-) + (A - {{\bf p^2}\over 2})
(X_+ + X_-) + X_+ X_- \Bigr \}
&(3.15) \cr}$$ where the trace is performed in energy/momentum space, the parameter $\mu^\prime$ of mass dimension one is introduced for dimensional reasons \[26\], and $$\eqalignno{
X_\pm =& {1\over 4}\Delta^{-1}_E\Bigl \{
({\bf p}\cdot {\bf a_+}) (\pm\omega-{\bf p^2}/2 + E)({\bf p}\cdot {\bf a_-})
\cr
&-i{\rho\over 4\kappa} ({\bf p}\times{\bf a}_+) (\pm\omega-{\bf p^2}/2)
({\bf p}\cdot {\bf a_-})
+i {\rho\over 4\kappa} ({\bf p}\times{\bf a}_-) (\pm\omega-{\bf p^2}/2)
({\bf p}\cdot {\bf a_+}) \Bigr \}
&(3.16)\cr }$$ with $\Delta_E\equiv -\omega^2+({\bf p^2}/2-E)^2$.
We pause for a moment to note that so far we have not used any form of regulator to extract the information we have in Eq.(3.15). This is because we have not encountered any ultraviolet divergences so far. However, Eq.(3.15) is divergent in the ultraviolet regime and therefore requires a regulator in order to evaluate its contribution to the effective potential. We will use operator regularization \[22,26,27\] to perform the computation since it preserves all symmetries present at the classical level modulo anomalies.
For each logarithm in Eq.(3.15), it is necessary to identify an operator $H_0$ and an operator $H_I$. Upon using operator regularization, the n-point function is easily identified as the n-th $H_I$ insertion with $H_0$ acting as the propagator for each internal line. Following Ref.\[22\], we define $H_0=\{ -\omega^2 + ({\bf p}^2/2-A_1)^2\}/\mu^{\prime 4}$ for the first logarithm and $H_0=\{ -\omega^2 + ({\bf p}^2/2-E_1)^2\}/\mu^{\prime 4}$ for the second one, where $A=A_1+A_2$, $E=E_1+E_2$, $A_1=-\lambda\rho$ and $E_1=-{1\over 2}\lambda\rho$. In $H_I$, we collect the rest of the expressions for each logarithm.
Both logarithms are easily regulated via $${\rm Tr} \ln H=-\lim_{s\to0}{d \over ds}{\rm Tr}
{1\over \Gamma(s)}\int_0^\infty dt\; t^{s-1}
\Bigl\{e^{-H_0t}+e^{-H_0t}(-t)H_I+e^{-H_0t}{(-t)^2\over 2}H_I^2+\dots\Bigr\}$$ and upon using the useful integral over the energy $d\omega$ $$\ick {
I\equiv& \int_{-\infty}^{\infty}{d\omega\over 2\pi}{1\over \Delta_a^{1+s}}
=i{(2s)!\over s!s!}({\bf p}^2+2a)^{-(1+2s)}\quad ,&(3.17) \cr }$$ the contribution to the effective potential from the first log is $$\ick{
&-{i\over 2}\int {d^2{\bf p}\over (2\pi)^2}\Bigl ({d\over ds}s\Bigr )
\Bigl\{[2E_1E_2+E_2^2-F]-{\bf p^2}E_2\Bigr\}i{(2s)!\over s!s!}
{(\mu^{\prime})^{4s}\over
({\bf p}^2-2E_1)^{1+2s}} \cr
&+{i\over 4}\int {d^2{\bf p}\over (2\pi)^2}\Bigl ({d\over ds}s(s+1)\Bigr )
{\bf p}^4E_2^2 i{(2s+2)!\over (s+1)!(s+1)!}
{(\mu^{\prime})^{4s}\over ({\bf p}^2-2E_1)^{3+2s}} &(3.18)\cr }$$ where the first term is a one-point function while the second is a two-point function. Upon symmetric integration over momentum integrals, (3.18) becomes $$\ick {
&-{1\over 8\pi}F\ln\Bigl({\mu^{\prime 2}\over -2E_1}\Bigr) \quad .
&(3.19) \cr }$$ The second logarithm is more tedious to compute as it involves many one- and two-point functions. We rewrite the second logarithm of Eq.(3.15) as $$\ick{
-{i\over 2}\int d\omega d^2{\bf p} \ln{1\over \mu^{\prime 4} } \Bigl\{
&-\omega^2+\Bigl({{\bf p}^2\over 2}-A_1\Bigr)^2-A_2{\bf p}^2
+2A_1A_2+A_2^2-{1\over 4}\lambda^2\rho^2-B-{C\over {\bf p}^2} \cr
&-{i\over 2\kappa} \Bigl( \omega^2 + {{\bf p}^4\over 4} \Bigr)
{\rho\over 4{\bf p}^2} [({\bf p}\times{\bf a}_+)({\bf p}\cdot{\bf a}_-)-
({\bf p}\times{\bf a}_-)({\bf p}\cdot{\bf a}_+)]{1\over \Delta_{E_1}}\cr
&+\Bigl(\omega^2+{{\bf p}^4\over 4}\Bigr)
{1\over 2}({\bf p}\cdot{\bf a}_+)({\bf p}\cdot{\bf a}_-)
{{\bf p}^2E_2\over \Delta_{E_1}^2}-{{\bf p}^2\over 4}(A+E)
{({\bf p}\cdot {\bf a}_+)({\bf p}\cdot {\bf a}_-)\over \Delta_{E_1}^2} \cr
&+\Bigl(\omega^2+{{\bf p}^4\over 4}\Bigr){1\over 2}
{({\bf p}\cdot {\bf a}_+)({\bf p}\cdot {\bf a}_-)\over \Delta_{E_1}}
+{1\over 16}{({\bf p}\cdot {\bf a}_+)^2({\bf p}\cdot
{\bf a}_-)^2\over \Delta_{E_1}}
\Bigr\} &(3.20) \cr}$$ Upon using the regulated form of the logarithm, Eq. (3.17) and symmetric integration over $d^2{\bf p}$, we obtain for the non-vanishing one-pts functions $$\ick{
-{1\over 32\pi}\Bigl\{&8A_1A_2 - 4[2A_1A_2+A_2^2-{1\over 4}\lambda^2\rho^2-B]
-E_2({\bf a}_+\cdot{\bf a}_-) \cr
&+(A+E)({\bf a}_+\cdot{\bf a}_-)
-(A_1+E_1)({\bf a}_+\cdot{\bf a}_-)
-{1\over 16}{\cal P}\Bigr\}\ln{\mu^{\prime 2}\over -2A_1} &(3.21) \cr}$$ where ${\cal P}=\bigl\{({\bf a}_+\cdot{\bf a}_+)({\bf a}_-\cdot{\bf
a}_-)+2({\bf
a}_+\cdot{\bf a}_-)^2\bigr\}$ and from the non-vanishing two-point function $$\ick {
&-{1\over 32\pi}\Bigl\{4A_2^2-A_2({\bf a}_+\cdot{\bf a}_-)+{1\over 16}{\cal P}
\Bigr\}\ln{\mu^{\prime 2}\over -2A_1}&(3.22) \cr }$$ where each term in Eq. (3.22) arises separately from squaring the $A_2{\bf p}^2$-term of Eq. (3.20), from crossing the $A_2{\bf p}^2$-term with the penultimate term of Eq. (3.20), and from squaring the penultimate term of Eq. (3.20), respectively.
Upon collecting all contributions of Eq. (3.19,21,22), we obtain for the unnormalized effective potential $$\ick{
V_{\rm eff}(\rho,a_\mu^a)
&={1\over 4}\lambda\rho^2 + c_1\rho^2 -
{1\over 8\pi}\Bigl (F+({1\over 4}\lambda^2\rho^2+B)\Bigr )
\ln{\mu^{\prime 2}\over -2A_1}\cr
&={1\over 4}\lambda\rho^2+c_2\rho^2+{1\over 8\pi}\Bigl (4\lambda^2-{3\over
\kappa^2}\Bigr ){\rho^2\over 16}\ln{\rho \over \mu^{\prime 2} } &(3.23) \cr }$$ where now global gauge invariance is restored with $\rho=v_p^{\ast}v_p$. In obtaining Eq. (3.19) and Eqs. (3.21-22), we drop an unimportant (const.$\rho^2$)-term arising from the first term independent of $H_I$ in the regulated form of the logarithm. We have inserted this contribution in the $c_1\rho^2$-term in Eq. (3.23) together with a term of the same form which arises from Eq. (3.19) because $A_1=2E_1$. The $c_2\rho^2$-term collects the $c_1\rho^2$-term with the term proportional to $\rho^2\ln 2\lambda$. In any case, the $c_2\rho^2$-term disappears upon normalizing the effective potential. Note that no ultraviolet divergences occur in Eq. (3.23) as expected upon using operator regularization.
After imposing the normalization condition $${d^2\over d\rho^2}V_{\rm eff}|_{\rho=\mu^2}={1\over 2}\lambda(\mu),\eqno (3.24)$$ the normalized effective scalar field potential in the ${\rm R}_\xi$-gauge in the $\xi\to 0$ limit, up to and including ${\cal O}(v^4)$ contributions, is $$\ick {
V_{\rm eff}(\rho,a_\mu^a)&={1\over 4}\rho^2\Bigl[\lambda(\mu)+{1\over
8\pi}\bigl(\lambda^2(\mu)-{4\over \kappa^2}\alpha^2 \bigr)
\bigl( \ln{\rho \over\mu^2} - {3\over 2}\bigr )\Bigr].&(3.25)\cr }$$ where the appearance of the group theoretical factor $\alpha^2=3/16$ is a consequence of the su(2) Lie algebra: 3 corresponds to the number of generators and 1/16 to a normalization of the generators. Note that the background gauge fields do not contribute to the scalar field effective potential.
We computed the scalar field effective potential of a nonrelativistic non-Abelian Chern-Simons field theory possessing various classical symmetries such as Galilean, conformal and gauge symmetries. We applied the traditional functional method using constant background gauge and matter fields in order to satisfy Gauss’s law. Simplifications in the course of the calculation are manifest when constant background gauge and matter fields are used, since the determinants are taken on $3\times 3$ constant matrices, which are diagonal in Fourier space, and when an $R_\xi$ gauge-fixing condition is imposed, which respects Galilean invariance and global gauge invariance. We have regulated the divergences in the matter sector using operator regularization. We note that the scalar field effective potential does not depend, to the order considered, on the background gauge field, which satisfies Gauss’s law, and that our result is in agreement with a diagrammatic analysis of the non-Abelian Aharonov-Bohm scattering. As a spin-off of our calculation, we find that there is neither infinite nor finite renormalization of the Chern-Simons coupling constant $\kappa$ in our method, in contrast to the results of refs. \[14-17\] but in agreement with \[10,18,23\].
We note that the effective potential presented in Eq. (3.25) is a generalization of the effective potential found in the Abelian version of the model (1.1), which can be retrieved by setting $\alpha^2=1$ \[22\].
We did not discuss here the gauge parameter dependence of our result Eq. (3.25). However, in the Abelian version of the model (1.1), the effective potential was also computed with the $R_\xi$ gauge-fixing condition and with a Coulomb gauge with arbitrary $\xi$. We found in that case that the effective potential was the same in either gauge-fixing condition and was independent of the gauge parameter $\xi$ \[22\]. We therefore expect our result for the effective potential presented in Eq. (3.25) to be gauge-parameter independent.
Finally, we analyse the scale anomaly. Conformal symmetry is related to the $\beta$-function. A non-vanishing $\beta$-function indicates conformal symmetry breaking. Using the renormalization group equation $$0=\mu{d\over d\mu}V_{\rm eff}(\rho)=\Bigl [ \mu {\partial\over{\partial\mu}}
+\beta(\lambda(\mu)){\partial\over {\partial\lambda(\mu)}}
\Bigr ] V_{\rm eff} (\rho) \eqno(4.1)$$ the $\beta$-function reads $$\beta(\lambda(\mu))={1 \over 4\pi}\Bigl (\lambda^2(\mu) -
{4 \over \kappa^2}{3\over 16} \Bigr )\quad .\eqno(4.2)$$ For generic, unrelated values of the coupling constants the theory loses conformal symmetry. At the self-dual point $\lambda(\mu)=-{{\sqrt 3}\over 2\kappa}$ and at $\lambda(\mu)={{\sqrt 3}\over 2\kappa}$ the $\beta$-function vanishes; hence, the theory is conformally symmetric, recovering the result of Bak and Bergman \[23\].
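As a consistency check (a sketch added here, not part of the original derivation), one can verify symbolically that with this $\beta$-function the renormalization group equation (4.1) is satisfied to the one-loop order at which (3.25) was computed; the leftover piece is of two-loop order, proportional to $\beta\lambda$:

```python
import sympy as sp

rho, mu, lam, kappa, alpha2 = sp.symbols('rho mu lambda kappa alpha2', positive=True)

# effective potential (3.25) and beta function (4.2); alpha2 = 3/16 for su(2)
V = sp.Rational(1, 4) * rho**2 * (lam + (lam**2 - 4*alpha2/kappa**2) / (8*sp.pi)
                                  * (sp.log(rho/mu**2) - sp.Rational(3, 2)))
beta = (lam**2 - 4*alpha2/kappa**2) / (4*sp.pi)

RG = mu * sp.diff(V, mu) + beta * sp.diff(V, lam)   # l.h.s. of Eq. (4.1)
print(sp.simplify(RG / (beta * rho**2)))            # leftover ~ lambda*(log(rho/mu^2) - 3/2)/(16*pi), i.e. two-loop order
```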
We thank R.B. MacKenzie, D.G.C. McKeon and M.B. Paranjape for useful comments.
[1.]{} J. Hong, Y. Kim and P.Y. Pac, Phys. Rev. Lett. [**64**]{}, 2230 (1990).
[2.]{} R. Jackiw and E.J. Weinberg, Phys. Rev. Lett. [**64**]{}, 2234 (1990).
[3.]{} See for a review, J. D. Lykken, J. Sonnenschein and N. Weiss, Int. J. of Mod. Phys. [**A6**]{}, 5155 (1991).
[4.]{} C. Lee, K. Lee and E.J. Weinberg, Phys. Lett. [**B243**]{}, 105 (1990).
[5.]{} M. Leblanc and M.T. Thomaz, Phys. Lett. [**B281**]{}, 259 (1992).
[6.]{} C.R. Hagen, Phys. Rev. [**D31**]{}, 848 (1985).
[7.]{} R. Jackiw and S.Y. Pi, Phys. Rev. [**D49**]{}, 3500 (1990).
[8.]{} G. Lozano, M. Leblanc and H. Min, Ann. of Phys.[**219**]{}, 328 (1992).
[9.]{} S. Deser, R. Jackiw and S. Templeton, Phys. Rev. Lett. [**48**]{}, 975 (1982); Ann. Phys. (N.Y.) [**140**]{}, 372 (1982).
[10.]{} W. Chen, G.W. Semenoff and Y. Wu, Phys. Rev. [**D 46**]{}, 5521 (1992) \[and references therein\].
[11.]{} G. Dunne, R. Jackiw, S. Pi and C. Trugenberger, Phys. Rev. [**D 43**]{}, 1332 (1991).
[12.]{} E.A. Ivanov, Phys. Lett. [**268B**]{}, 203 (1991); S.J. Gates and H. Nishino, [*ibid.*]{}[**281**]{}, 72 (1991).
[13.]{} K. Lee, Phys. Rev. Lett. [**66**]{}, 553 (1991); K. Lee, Phys. Lett. [**255B**]{}, 381 (1991).
[14.]{} R. Pisarski and S. Rao, Phys. Rev. [**D32**]{}, 2081 (1985).
[15.]{} E. Witten, Commun. Math. Phys. [**121**]{}, 351 (1989).
[16.]{} L. Alvarez-Gaumé, J.M.F. Labastida and A.V. Ramallo, Nucl. Phys. [**B334**]{},103 (1990).
[17.]{} D. Birmingham, R. Kantowski and M. Rakowski, Phys. Lett. [**B251**]{}, 121 (1990).
[18.]{} G. Giavarini, C.P. Martin and F. Ruiz Ruiz, preprint FTUAM94/8, NIKHEF-H 94/14, UPRF94/395, hep-th/9406034.
[19.]{} G. Lozano, Phys. Lett. [**B283**]{}, 70 (1992).
[20.]{} O. Bergman and G. Lozano, Ann. Phys. (N.Y.) [**229**]{}, 416 (1994).
[21.]{} D. Freedman, G. Lozano and N. Rius, Phys. Rev. [**D49**]{}, 1054 (1994).
[22.]{} D. Caenepeel, F. Gingras, M. Leblanc and D.G.C. McKeon, Phys. Rev. [**D49**]{}, (1994) (in press).
[23.]{} D. Bak and O. Bergman, preprint MIT-CTP-2283, (1994).
[24.]{} R. Jackiw, Ann. Phys. (N.Y.) [**201**]{}, 83 (1990).
[25.]{} R. Jackiw, Phys. Rev. [**D9**]{}, 1686 (1974).
[26.]{} D.G.C. McKeon, T.N. Sherry, Phys. Rev. [**59**]{}, 532 (1987); Phys. Rev. [**D35**]{}, 3584 (1987); Can. J. Phys. [**66**]{}, 268 (1988).
[27.]{} L. Culumovic, M. Leblanc, R.B. Mann, D.G.C. McKeon, and T.N. Sherry, Phys. Rev. [**D41**]{}, 514 (1990).
[28.]{} L.S. Brown and W.I. Weisberger, Nucl. Phys. [**B157**]{}, 285 (1979).
[29.]{} L.F. Abbott, Nucl. Phys. [**B185**]{}, 189 (1981).
**RÉSUMÉ**
We present the effective potential for a nonrelativistic scalar field coupled to a non-Abelian Chern-Simons gauge field. To do so, we use a functional method based on constant fields, chosen so as to satisfy Gauss's law and to simplify the calculation. We integrate out the gauge field and the matter field with the help of an $R_\xi$ gauge condition in the limit $\xi\to 0$. The divergences that appear in the matter sector are regularized via the operator regularization method. We find no radiative correction to the Chern-Simons coupling constant, and the system exhibits conformal symmetry breaking at the order considered, except at the self-dual critical point. These results are in agreement with a recent analysis of non-Abelian Aharonov-Bohm scattering.
[^1]: \*
|
---
abstract: |
A static, spherically symmetric spacetime with negative pressures is conjectured inside a star. The gravitational field is repulsive and so a central singularity is avoided. The positive energy density and the pressures of the imperfect fluid are finite everywhere. The Tolman-Komar energy of the space is negative, as for a de Sitter geometry. From the Darmois-Israel junction conditions on the star surface one finds the constant length $b$ from the metric and the expression of the surface tension $\sigma$ of the thin shell separating the interior from the Schwarzschild exterior. Some properties of the timelike and null geodesics in the Painleve-Gullstrand coordinates are investigated.\
**Keywords** : negative pressures, Komar energy, matching conditions, regular star, geodesics.
author:
- |
Hristu Culetu,\
Ovidius University, Dept.of Physics and Electronics,\
B-dul Mamaia 124, 900527 Constanta, Romania,\
e-mail : [email protected]
title: Pattern for a star filled with imperfect fluid
---
Introduction
============
Current data supports the view that the matter content of the Universe consists of two basic components, namely dark matter (DM) and dark energy (DE), with ordinary matter playing a minor role. The nature and composition of DM and DE are still not understood. The entire motivation for the existence of DM and DE is based on the assumed validity of the standard Newton-Einstein gravitational theory at all distance scales [@PM] and on the experimental fact that our Universe is accelerating. Mazur and Mottola [@MM1] extended Bose-Einstein condensation to a gravitational system and constructed a dark, compact object, with a de Sitter interior and an exterior Schwarzschild geometry, separated by a thin shell [@MM2; @EM]. More recently, Danila et al. [@BD] considered the physical properties of some classes of neutron and Bose-Einstein condensate stars in the hybrid metric-Palatini gravitational theory, which is very successful in observed phenomenology, unifying local constraints at the Solar system level and the late-time cosmic acceleration.
Brandenberger and Frohlich [@BF] investigated the possibility that DM and DE would have a common origin. They introduced a complex axion field whose radial component gives rise to DE while the angular component plays the role of DM. Lemos and Zaslavskii [@LZ] studied a general relativistic solution of gravitational equations, composed of a Zel’dovich-Letelier star interior with a static configuration coupled to a Schwarzschild exterior through a spherical thin shell. The interior solution encloses some matter in a spacetime pit, dubbed the pit solution. These string pits resemble Wheeler’s bags of gold. Comer and Katz [@CK] stated that there is no reason to suppose that the energy-momentum tensor always takes the perfect fluid form inside a star, admitting anisotropic pressures (when radial and tangential pressures are different), with some of them being even negative. A system with such features might be applied to the relativistic regime (for example, for neutron stars).
Motivated by the previous studies on the interior properties of a relativistic star, we propose in Sec. 2 that the spherically symmetric inner space is composed of an imperfect fluid with negative pressures but positive energy density. The geometry is regular throughout the star interior, being flat at the origin $r = 0$ and presenting a horizon at infinity. The classical field $\Phi$ we introduce is repulsive, having the transversal pressures as the sources on the r.h.s. of the Poisson equation. In Sec. 3 we apply the above model to a star interior, paying attention to the Darmois-Israel junction conditions at a thin shell separating the inner metric from the outer Schwarzschild geometry and focusing especially on the shell properties - the surface tension and its energy density. Sec. 4 is devoted to the timelike and null geodesics for the spacetime inside the star, with the metric written in Painleve-Gullstrand (PG) coordinates. We end with a few comments in Sec. 5. Geometrical units $G = c = 1$ are used throughout the paper, unless otherwise specified.
Regular imperfect fluid
=======================
We investigate a spacetime of the following form $$ds^{2} = -(1- e^{-\frac{b}{r}}) ~dt^{2} + \frac {dr^{2}}{1- e^{-\frac{b}{r}}} + r^{2} d \Omega^{2}
\label{2.1}$$ where $b$ is a positive constant and $d \Omega^{2}$ stands for the metric on the unit 2-sphere. The coordinates ($t, r, \theta, \phi$) have the standard meaning. As we shall see, the term $e^{-\frac{b}{r}}$ plays the role of a regulator [@HC1]. We have, indeed, $f(r) \rightarrow 1$ if $r \rightarrow 0$ and $f(r) \rightarrow 0$ at infinity, where $f(r) = -g_{tt} = 1- e^{-\frac{b}{r}}$. The function $f(r)$ has an inflexion point at $r = b/2$, with $0<f(r)<1$. The line-element (2.1) is Minkowskian at the origin and has a horizon at infinity.
Let us consider a static observer in the geometry (2.1) with a velocity vector field $$u^{a} = \left(\frac{1}{\sqrt{1- e^{-\frac{b}{r}}}}, 0, 0, 0\right)
\label{2.2}$$ which gives us the corresponding covariant acceleration $$a^{b} = \left(0, -\frac{b}{2r^{2}} e^{-\frac{b}{r}}, 0, 0\right),
\label{2.3}$$ with the only nonzero component $a^{r} = f'(r)/2$, where $f' \equiv df/dr$. One also notes that $a^{r}<0$, that means our observer should accelerate towards the origin for to preserve the same position. In other words, the gravitational field is repulsive. The same effect undergoes a static observer in de Sitter geometry, in static coordinates. The function $a^{r}(r)$ vanishes at $r = 0$ and at infinity. In addition, it takes a minimum value $a^{r}_{min} = -4/e^{2}b$ (lne = 1), at $r = b/2$. In the region $r>>b$ one obtains $f(r)\approx b/r$ (first order in $b/r$), a situation analysed by Vaz [@CV] who identified a process by which an energy extraction from the center occurs, leaving behind a negative point mass at the center to which corresponds an energy density $\epsilon (r) \propto 1/r^{2}$ and a radial pressure $p_{r} = -\epsilon $, with zero transversal pressures (see also [@HC2; @DNY; @HC3]).
For the geometry (2.1) the scalar curvature is given by $$R^{b}_{~b} = \frac{1}{r^{2}}\left[1 + \left(1 + \frac{b}{r}\right)^{2}\right] e^{-\frac{b}{r}},
\label{2.4}$$ that is finite in the whole space (same is valid for the Kretschmann scalar). Moreover, (2.2) leads to a vanishing shear tensor $\sigma^{a}_{b}$. If we wrote formally $f(r)$ as $f(r) = 1 - \frac{2m(r)}{r}$, where $m(r)$ is the mass up to the radius $r$ (or the Misner-Sharp mass in our case), we would get $m(r) = \frac{r}{2} e^{-\frac{b}{r}}$, which shows clearly that, when $r \rightarrow \infty$ (or, more physically, if $r>>b$), we have $r \approx 2m(r)$, which is the horizon location. As a function of $r$, $m(r) \rightarrow 0$ when $r \rightarrow 0$ and, at infinity, $m(r)$ approaches its asymptote $m(r) = \frac{r}{2} - \frac{b}{2}$.
Let us find now the source of the metric (2.1), namely the stress tensor we need on the r.h.s. of Einstein’s equation $G_{ab} = 8\pi T_{ab}$ in order to have (2.1) its exact solution. We write down firstly the general expression of the energy-momentum tensor for an imperfect fluid [@KM; @HC4; @FC] $$T_{~a}^{b} = (\rho + p_{t})u_{a} u^{b} + p_{t} \delta_{a}^{b}+ (p_{r} - p_{t}) n_{a} n^{b} + u_{a}q^{b} + u^{b}q_{a},
\label{2.5}$$ where $\rho = T_{ab}u^{a}u^{b}$ is the energy density, $p_{t}$ represents the transversal pressures ($p_{t} = p_{\theta} = p_{\phi}$), $n^{b}$ is a spacelike vector with $ n_{a} n^{a} = 1$ and $n_{a} u^{a} = 0$, $q_{a}$ is the energy flux density four vector, $p_{r}$ is the radial pressure and $u^{b}$ is given by (2.2). From its properties we get $n^{a} = (0, \sqrt{1 - e^{-\frac{b}{r}}}, 0, 0)$. On the grounds of the above informations, one finds that $$\rho = \frac{1}{8\pi r^{2}}\left(1 + \frac{b}{r}\right) e^{-\frac{b}{r}} = -p_{r},~~~p_{t} = -\frac{b^{2}}{8\pi r^{4}}e^{-\frac{b}{r}},~~~q^{a} = 0.
\label{2.6}$$ It is worth noting that $\rho$ is always positive but all pressures are negative. In the region $r>>b$ (or $b/r \rightarrow 0$) we have $\rho = -p_{r} \approx 1/8\pi r^{2}$ and $p_{\theta} \approx 0$, values obtained previously in [@CV; @HC2; @DNY]. Moreover, $\rho$ and $p_{\theta}$ are finite everywhere, being zero at the origin and at infinity. The function $\rho (r)$ acquires a maximum value $\rho_{max} \approx 1.8/8\pi b^{2}$ at $r_{max} = b(\sqrt{3} - 1)/2$. In addition, $d\rho/dr>0$ before $\rho$ reaches its maximum value, due to the repulsive forces inside the star. That prevents the formation of a singularity at the center, where actually the geometry is Minkowskian.
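These statements are easy to verify symbolically; the following sketch (added here for illustration) recovers the energy density of Eq. (2.6) from the Misner-Sharp mass $m(r)=\frac{r}{2}e^{-b/r}$ via $\rho = m'(r)/4\pi r^{2}$ and confirms that $\rho$ is stationary at $r=b(\sqrt{3}-1)/2$:

```python
import sympy as sp

r, b = sp.symbols('r b', positive=True)

m = sp.Rational(1, 2) * r * sp.exp(-b / r)          # Misner-Sharp mass of the metric (2.1)
rho = sp.simplify(sp.diff(m, r) / (4 * sp.pi * r**2))
print(rho)                                          # (1 + b/r) * exp(-b/r) / (8*pi*r**2)

r_max = b * (sp.sqrt(3) - 1) / 2                    # claimed location of the maximum
print(sp.simplify(sp.diff(rho, r).subs(r, r_max)))  # 0, so rho is indeed stationary there
```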
We now look for a Newtonian potential given by $f(r) = 1 + 2\Phi$, whence $\Phi = -(1/2)e^{-\frac{b}{r}}$. We may write the Poisson equation for $\Phi$ $$\nabla^{2}\Phi = \frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{d\Phi}{dr}\right) = 4\pi \cdot 2p_{\theta},
\label{2.7}$$ with $d\Phi /dr = -|a^{r}|$. Note that the sources on the r.h.s. of (2.7) are the transversal pressures, not the energy density $\rho$. This is a consequence of the fact that, relativistically, the contribution to the gravitational energy comes from $\rho_{Komar} = \rho + 3p$ [@TP] (for a perfect fluid) or $\rho_{Komar} = \rho + p_r + 2p_{\theta} = 2p_{\theta}$ (for our imperfect fluid) [@HC5]. We can check this by calculating the Tolman-Komar energy [@TP] $$W = 2 \int(T_{ab} - \frac{1}{2} g_{ab}T^{c}_{~c})u^{a} u^{b} N\sqrt{h} d^{3}x ,
\label{2.8}$$ where $u^{a}$ is given by (2.2), $N = \sqrt{-g_{tt}}$ is the lapse function and $h$ is the determinant of the spatial 3 - metric, $h_{ab} = g_{ab} + u_{a} u_{b}$. Eq.(2.8) yields $$W = 4\pi \int_{0}^{\infty}(\rho + p_{r} + 2p_{\theta}) N r^{2}\sqrt{h}~ dr = -\frac{b}{2}.
\label{2.9}$$ One observes in (2.9) the decisive contribution to W from the negative pressure. It is not surprising that $W<0$; it is a consequence of the repulsive character of the forces created by the source of curvature. A negative W is also obtained for the de Sitter spacetime in static coordinates, where the Komar energy is proportional to the radius of curvature.
Star interior geometry
======================
Let us apply the above recipe inside a relativistic star. We assume the star is filled with an imperfect fluid having negative pressures up to a radius $R$, i.e. for $r\leq R$, so that the fluid is confined to a sphere of radius $R$ and the metric inside is given by (2.1). In all the previous equations we have to replace the limit at infinity with $r \rightarrow R$. For example, the Komar energy will no longer be given by (2.9) but by $W(R,b) = -\frac{b}{2}e^{-\frac{b}{R}}$.
As the exterior spacetime should be Schwarzschild, the Darmois-Israel junction conditions have to be satisfied at $r = R$. From the first junction condition one finds that the relation $1 - e^{-\frac{b}{R}} = 1 - \frac{2M}{R}$ must be obeyed. Hence, $$b = R \ln\frac{R}{2M},
\label{3.1}$$ where $M$ is the Schwarzschild mass ($R>2M$ is mandatory in order to obtain $b>0$). Using (3.1), the Komar energy becomes $W = -M\ln\frac{R}{2M}$, with $W = -M$ if $R = 2eM$. In addition, when, say, $R = e^{10}\cdot 2M$, $W = -10 M$, and if $R = 2M + 10^{-4}\cdot 2M$, $W \approx -10^{-4}M$. In other words, when $R\rightarrow 2M$ from above, the Komar energy tends to zero, and if $R>>2M$, $W$ is negative and large in magnitude, i.e., $|W|>>M$.
We notice that the constant $b$ depends on two parameters, $R$ and $M$. For instance, an $R$ very close to $2M$ gives a $b$ very small w.r.t. $R$. Take $R = 2M + \epsilon$ ($\epsilon$ small); then $r>>b$ means $r>>R\ln(1 + \frac{\epsilon}{2M}) \approx \frac{R}{2M}\epsilon$. If $\epsilon = 2M\cdot 10^{-3}$, we get $r>>10^{-3}R$, a possible situation. In other words, all situations are possible, i.e., $r\geq b$ or $r<b$ (when $R>2eM$). To get quasi-flat space inside, the condition $R>>2M$ should be observed. Anyway, when the star, viewed from outside, tends to become a black hole ($R \rightarrow 2M$), $b/R$ becomes very small and $1 - e^{-\frac{b}{R}} \approx 0$, such that $r = R$ becomes a horizon even for an inner observer. This can also be seen from the new form of the interior metric $$ds^{2} = -\left[1- \left(\frac{2M}{R}\right)^{\frac{R}{r}}\right] ~dt^{2} + \frac {dr^{2}}{1- \left(\frac{2M}{R}\right)^{\frac{R}{r}}} + r^{2} d \Omega^{2}
\label{3.2}$$ We wish now to compare our geometry (3.2) with the Schwarzschild interior geometry [@MM2], which is known to be conformally flat. It is derived by supposing that the energy density is constant, a hypothesis presumed to be unphysical [@MM2]. Moreover, the pressure and the scalar curvature are divergent at the horizon $r_{H} = 3R\sqrt{1 - \frac{4R}{9M}}$. In addition, the central pressure becomes divergent when the star radius reaches the Buchdahl value $9M/4$.
In contrast, the spacetime (3.2) is free of singularities (no infinite curvature) and $\rho$ and the pressures are finite everywhere. However, the weak, null and dominant energy conditions are satisfied only in the region $r\geq b(\sqrt{5} - 1)/2$, a condition which can hold inside the star only for $R\leq 10M$, namely when the star is not too far from becoming a black hole. Nevertheless, the strong energy condition is not obeyed because $\rho + p_{r} + 2p_{\theta} <0$ everywhere.
As an example, we apply the previous results to the case of a neutron star (NS). We take the radius of the NS to be $R = 15~km$ and its mass of the order of the solar mass. With $2 M_{\odot} \approx 3~km$, we get $b = R\ln 5>R$ and $r_{max} \approx 8.76~km<R$, so that $\rho_{max} = 1.36\cdot 10^{15} g/cm^{3}$, a value very close to the known densities of a neutron star. If we evaluate $\rho$ at, say, $r = R/2 = 7.5~km <r_{max}$, we get $\rho (R/2) = 2.2\cdot 10^{14} g/cm^{3}$, about one sixth of the value of $\rho_{max}$.
Let us now turn to the problem of the matching conditions at the interface. To write the second junction conditions, we need the extrinsic curvature $K_{ab}$ at the boundary $r = R$. Thanks to spherical symmetry and staticity, we may use the expressions [@KKP] $$K_{ab} = -\frac{f'}{\sqrt{f}} u_{a} u_{b} + \frac{\sqrt{f}}{r} q_{ab},
\label{3.3}$$ whence $$K = \gamma^{ab} K_{ab} = \frac{f'}{2\sqrt{f}} + 2\frac{\sqrt{f}}{r},
\label{3.4}$$ where $f = -g_{tt}, u_{a} = (\sqrt{f}, 0, 0, 0)$, $\gamma_{ab} = g_{ab} - n_{a} n_{b}$ is the induced metric on $r = const.$ surface, with $n^{a} = (0, \sqrt{f}, 0, 0)$ its normal vector and $q_{ab} = \gamma_{ab} + u_{a} u_{b}$ is the induced metric on a two-surface of constant $t$ and $r$. One obtains, in terms of the coordinates inside (-) the star $$K^{-}_{tt} = \frac{b}{2r^{2}} \sqrt{f} e^{-\frac{b}{r}},~~~K^{-}_{\theta \theta} = r \sqrt{f},~~~K^{-} = -\frac{b}{2r^{2}\sqrt{f}} e^{-\frac{b}{r}} + \frac{2}{r} \sqrt{f},
\label{3.5}$$ where all quantities are evaluated at $r = R$. For the Schwarzschild exterior (+) region we have [@HC4] $$K^{+}_{tt} = -\frac{M}{r^{2}} \sqrt{1 - \frac{2M}{r}} ,~~~K^{+}_{\theta \theta} = r \sqrt{1 - \frac{2M}{r}},~~~K^{+} = \frac{2r - 3M}{r^{2}\sqrt{1 - \frac{2M}{r}}} ,
\label{3.6}$$ evaluated at $r = R$. The discontinuity of the extrinsic curvature $[K_{ab}] = K_{ab}^{+} - K_{ab}^{-}$ is related to the stress tensor $S_{ab}$ on the hypersurface $r = R$ through the Lanczos equation $$[K_{ab}] - \gamma_{ab}[K] = -8\pi S_{ab}
\label{3.7}$$ Let us suppose $S_{ab}$ to have the perfect fluid form [@CL] $$S_{ab} = (\rho_{s} + p_{s}) u_{a} u_{b} + p_{s}\gamma_{ab},
\label{3.8}$$ where $\rho_{s}$ stands for the surface energy density, $p_{s}$ is the surface pressure, with $\sigma = -p_{s}$ the surface tension. By means of the expression of $b$ from (3.1) we get $$[K_{tt}] - \gamma_{tt}[K] = \frac{2R - 4M}{R^{2}} \sqrt{1 - \frac{2M}{R}} - \frac{2}{R} \left(1 - \frac{2M}{R}\right)^{3/2} = 0.
\label{3.9}$$ Therefore, we have $S_{tt} = 0$, which yields $\rho_{s} = 0$. Now, from the $\theta \theta$-components of (3.7) and (3.8), one finds that $$M \left(1 + \frac{b}{R}\right) = -8\pi R^{2} p_{s} \sqrt{1 - \frac{2M}{R}},
\label{3.10}$$ which shows that the surface tension $\sigma = -p_{s}$ is positive. We may express $\sigma$ in terms of the difference of the normal accelerations $a^{b}n_{b}$ for a static observer sitting at $r = R$ $$(a^{b}n_{b})_{+} - (a^{b}n_{b})_{-} = \frac{M}{R^{2}\sqrt{1 - \frac{2M}{R}}} + \frac{b e^{-\frac{b}{R}}}{2R^{2} \sqrt{1 - \frac{2M}{R}}} = \frac{M \left(1 + \frac{b}{R}\right)}{R^{2}\sqrt{1 - \frac{2M}{R}}} ,
\label{3.11}$$ which gives the same expression for $\sigma$ as (3.10) (see also [@MM2], where the authors worked instead with surface gravities and got the same result). Even though the surface tension is cohesive ($\sigma>0$) the repulsive gravity in the interior creates the possibility to have a balance of forces required for establishing a hydrostatic equilibrium [@CK].
Our aim now is to compare the above expression of $\sigma$ with that given by the Young-Laplace equation [@HC4] $$\rho = -p_{r} = \frac{2\sigma}{r},
\label{3.12}$$ with $\rho$ and $p_{r}$ given by (2.6). From (3.12) one obtains $$\sigma = \frac{M \left(1 + \frac{b}{R}\right)}{8\pi R^{2}}
\label{3.13}$$ which is not equal to the expression of $\sigma$ obtained from (3.10), because of the relativistic factor $\sqrt{1 - \frac{2M}{R}}$. This mismatch is due, in our view, to the classical (non-relativistic) nature of the Young-Laplace relation (3.12).
PG coordinates. Geodesics
=========================
We follow the standard procedure to pass to Painleve-Gullstrand (PG) coordinates, starting from the Schwarzschild-like form (2.1) of the geometry, supposed to be valid inside the star, and introduce a new time coordinate $$T = t - g(r).
\label{4.1}$$ When we put (4.1) in (2.1) and choose the function $g(r)$ such that $$g'(r) = \frac{1}{f(r)} e^{-\frac{b}{2r}},
\label{4.2}$$ we get $g_{rr} = 1$ and therefore, the metric (2.1) acquires the PG form $$ds^{2} = -(1 - e^{-\frac{b}{r}}) dT^{2} - 2~ e^{-\frac{b}{2r}} dTdr +dr^{2} + r^{2} d \Omega^{2},
\label{4.3}$$ which, at $r = R$, matches the standard Schwarzschild line element in PG coordinates. The geometry (4.3) is an exact solution of the gravitational equations with the same energy-momentum tensor $T_{~a}^{b}$ from (2.5), with the same expressions for the energy density and pressures, because the metric is static and the transformation (4.1) changes only the time variable.
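The algebra behind (4.2) and (4.3) can also be verified symbolically; the following sketch (Python with SymPy, in our own notation) checks that the choice (4.2) indeed yields $g_{rr}=1$ and the cross term $g_{Tr}=-e^{-\frac{b}{2r}}$:

\begin{verbatim}
from sympy import symbols, exp, simplify

r, b = symbols('r b', positive=True)

f  = 1 - exp(-b/r)           # -g_tt of the metric (2.1)
gp = exp(-b/(2*r))/f         # g'(r) from Eq. (4.2)

# substituting dt = dT + g'(r) dr into -f dt^2 + dr^2/f gives
#   -f dT^2 - 2 f g' dT dr + (1/f - f g'^2) dr^2
g_rr = 1/f - f*gp**2
g_Tr = -f*gp

print(simplify(g_rr))                    # -> 1
print(simplify(g_Tr + exp(-b/(2*r))))    # -> 0, i.e. g_Tr = -exp(-b/(2r))
\end{verbatim}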
Let us take the following congruence of observers with the velocity vector field $$u^{a} = (1, e^{-\frac{b}{2r}}, 0, 0),~~~u^{a}u_{a} = -1.
\label{4.4}$$ One easily finds that the form (4.4) of the velocity field gives us $a^{b}= 0$, such that $u^{a}$ is tangent to the timelike geodesics, comoving with the fluid. The normal vector orthogonal to $u^{a}$ is given by $n_{a} = (-e^{-\frac{b}{2r}}, 1, 0, 0)$. From the general expression (2.5) of the stress tensor one obtains again zero energy flux density, namely $q^{a} = 0$. In contrast, we get a nonzero shear tensor and expansion scalar in the PG coordinates, inside the star $$\sigma^{r}_{~t} = -\sigma^{r}_{~r} = 2\sigma^{\theta}_{~\theta} = 2\sigma^{\phi}_{~\phi} = \frac{2}{3r}\left(1 - \frac{b}{2r}\right) e^{-\frac{b}{r}}
\label{4.5}$$ with $\sigma^{a}_{~a} = 0$ and $\Theta = \frac{2}{r}(1 + \frac{b}{4r}) e^{-\frac{b}{2r}}$.\
**Timelike radial geodesics**\
From the expression (4.4) of $u^{a}$ we observe that $T$ represents the proper time and therefore $$v \equiv u^{r} = \frac{dr}{dT} = e^{-\frac{b}{2r}}.
\label{4.6}$$ However, the trajectories $r(T)$ cannot be obtained in closed form; nevertheless, from (4.6) we see that $v \rightarrow 0$ at the origin and $v(R) = e^{-\frac{b}{2R}} = \sqrt{\frac{2M}{R}}$, the Newtonian escape velocity. Consequently, $0<v\leq \sqrt{2M/R}$ inside the star. It is worth noting that $v_{max} = \sqrt{2M/R}$ matches exactly the escape velocity on the surface, measured from outside the star, using the exterior PG coordinates. If we took $g_{rT} >0$ in the line element (4.3), the timelike geodesics would become ingoing, with $v = -e^{-\frac{b}{2r}}$ and $-\sqrt{2M/R} \leq v <0$.\
**Null radial geodesics**\
The null radial geodesics are directly obtained from (4.3), with $ds^{2} = 0$ $$-(1 - e^{-\frac{b}{r}}) - 2~ e^{-\frac{b}{2r}} \frac{dr}{dT} + \left(\frac{dr}{dT}\right)^{2} = 0
\label{4.7}$$ which yields $\frac{dr}{dT} = e^{-\frac{b}{2r}} \pm{1}$. Only the minus sign is convenient and we have $$V \equiv \frac{dr}{dT} = e^{-\frac{b}{2r}} - 1,
\label{4.8}$$ which are ingoing null geodesics, with $-1 \leq V < \sqrt{2M/R} - 1$. In contrast, $g_{rT} >0$ will lead us to $$V = 1 - e^{-\frac{b}{2r}} >0,
\label{4.9}$$ (outgoing geodesics), whence one finds that $1>V>1 - \sqrt{2M/R}$. We notice also that the scalar expansion $\Theta$ is positive when the timelike geodesics are outward and it changes sign if $g_{rT} >0$ (ingoing timelike geodesics).
Conclusions
===========
It is a known fact that the interior Schwarzschild solution, found by Schwarzschild in 1916, has several drawbacks, such as a divergent pressure and scalar curvature at the horizon. In addition, the assumption of constant density is presumed unphysical.
In this paper we looked for a regular star interior, with nonsingular energy density and pressures, by means of an exponential regulator. Because of the negative pressures the gravitational field inside is repulsive, as for a de Sitter geometry in static coordinates. As a consequence, the Komar energy is negative. With the help of the matching conditions at the star surface we determined the constant length $b$ entering the metric and the surface tension $\sigma$ of the boundary with the exterior Schwarzschild geometry. The timelike and null geodesics were studied in Painleve-Gullstrand coordinates.
[17]{}
P. Mannheim, Prog. Part. Nucl. Phys. 56, 340 (2006).
P. O. Mazur and E. Mottola, Proc. Nat. Acad. Sci. 101, 9545 (2004); arXiv: gr-qc/0407075.
P. O. Mazur and E. Mottola, arXiv: 1501.03806.
E. Mottola, Acta Phys. Pol. B41, 2031 (2010).
B. Danila et al., Phys. Rev. D95, 044031 (2017).
R. Brandenberger and J. Frohlich, arXiv: 2004.10025.
J. Lemos and O. Zaslavskii, arXiv: 2004.06117.
G. L. Comer and J. Katz, Mon. Not. R. Astron. Soc. 267, 51 (1994).
H. Culetu, Phys. Dark. Univ. 14 (2016) 1-3; arXiv: 1508.01102.
C. Vaz, Nucl. Phys. B891, 558 (2015); arXiv: 1407.3823.
H. Culetu, arXiv: 1407.7119.
N. Dadhich, K. Narayan and V. A. Yajuik, Pramana 50, 307 (1998); gr-qc/9703034.
H. Culetu, arXiv: 1303.7376.
T. Koivisto and D. F. Motta, JCAP 0806: 018 (2008); arXiv: 0801.3676 \[astro-ph\].
H. Culetu, IJMPA Conf. Ser. 3, 455 (2011); arXiv: 1101.2980.
V. Faraoni and J. Cote, Phys. Rev. D98, 084019 (2018); arXiv: 1808.02427.
T. Padmanabhan, Phys. Rev. D81, 124040 (2010); arXiv: 1003.5665.
H. Culetu, arXiv: 1305.5964.
S. Kolekar, D. Kothawala and T. Padmanabhan, Phys. Rev. D85, 064031 (2012); arXiv: gr-qc/1111.0973.
C. A. Lopez, Phys. Rev. D38, 3662 (1988).
|
---
abstract: 'We revisit the construction of the elliptic class given by Borisov and Libgober for singular algebraic varieties. Assuming a torus action, we adjust the theory to the local equivariant situation. We study theta function identities having a geometric origin. In the case of quotient singularities $\C^n/G$, where $G$ is a finite group, the theta identities arise from the McKay correspondence. The symplectic singularities are of special interest. The Du Val surface singularity $A_n$ leads to a remarkable formula.'
address:
- |
Warsaw University of Technology\
ul. Koszykowa 75, 00-662, Warszawa, Poland
- |
Institute of Mathematics, University of Warsaw\
Banacha 2, 02-097 Warszawa, Poland
author:
- 'Ma[ł]{}gorzata Mikosz'
- Andrzej Weber
title: |
Elliptic classes, McKay correspondence\
and theta identities
---
The theory of theta functions is a classical subject of analysis and algebra. It had a prominent role in 19$^{\rm th}$ century mathematics, as one can see reading the monograph on Algebra [@We]. Nowadays it seems that the intriguing combinatorics related to the theta function has been put aside. Nevertheless there are modern sources treating the subject of theta functions in a wider context. For example the three volume book of Mumford [@Mu] is devoted to the theta function. The mysterious pattern of the Riemann relations is described in its initial chapters. This pattern recurs, and it is present in our formulas described in Theorem \[A1main\].
At the same time, since the mid-seventies the theta function has found applications in algebraic topology, in particular in cobordism theory. Formal group laws arising from elliptic curves, together with the associated elliptic genera, allowed one to define elliptic cohomology by a Conner-Floyd type theorem. The collection of articles in LNM 1326, and in particular [@Lan], is a good reference for that approach. The article of Segal written for the Bourbaki seminar [@SegalEll] is an accessible survey of the beginnings of this theory.
In the early 2000s the theta function was applied by Borisov and Libgober [@BoLi0] to construct an elliptic genus of singular complex algebraic varieties. Totaro [@Totaro] links this construction with previous work on cobordisms. The elliptic genus is defined in terms of a resolution of singularities, but it does not depend on the particular resolution. The elliptic genus is the degree (i.e. the integral) of a cohomology characteristic class called the [*elliptic class*]{}, denoted by $\Ell(-)$. That class is the main protagonist of our paper. The elliptic class is defined in terms of the theta function applied to the Chern roots of the tangent bundle of the resolution. Independence of the resolution translates into relations involving the theta function. For example, the Fay trisecant relation [@Fay; @GTL] corresponds to a blowup of a surface at one point.
The idea to study global invariants via contributions of singularities is common in mathematics and originates from Poincaré-Hopf theorem. It re-appeared in the work of Atiyah and Singer on the equivariant index theorem. In the presence of a torus action the Atiyah-Bott-Berline-Vergne localization techniques apply. Each fixed point component gives a local summand of the global invariant. The local equivariant Chern-Schwartz-MacPherson classes were studied in [@WeChern] and the local contributions to the Hirzebruch class were described in [@WeSel]. The role of the local contributions to the elliptic class in the Landau-Ginzburg model is demonstrated in [@LibLG]. Also, local computation is the key ingredient of the work on elliptic classes of Schubert varieties in the generalized flag variety, [@RW].
In the present paper we adjust the theory of Borisov-Libgober to the local equivariant situation. The initial step was done by Waelder [@Wae], but we believe that our approach and formalism make the theory accessible and convenient for further applications. Recently the theta function has served as a basic building block in the construction of diverse objects, such as representations of quantum groups [@GTL], weight functions (in integrable systems) [@RTV] and stable envelopes [@AO]. The works [@RW; @KRW] form a bridge between these theories and the strictly geometric theory of characteristic classes. Here we present the Borisov-Libgober elliptic class in a way which fits the context of the mentioned papers. In §\[explanation\] we also present a conceptual approach to the elliptic class as the equivariant Euler class in a version of elliptic cohomology. This point of view appeared in [@RW]. We take the opportunity to extend and clarify this approach. We explain the normalization constants, which allow us to place the elliptic class in the common formalism used in algebraic topology.
Further we study the elliptic classes of quotient singularities. In the local context these are the quotients $V/G$, where $V$ is a vector space and $G\subset GL(V)$ is a finite subgroup. The quotient $V/G$ is considered as a variety with the $\C^*$–action, coming from the scalar multiplication on $V$. Occasionally the quotient admits an action of a larger torus. If $G$ acts diagonally on $V=\C^n$, then whole torus $\T=(\C^*)^{n}$ acts on the quotient. The elliptic classes of global quotients were described in the Borisov-Libgober paper on McKay correspondence [@BoLi]. The elliptic genus is equal to the [*orbifold elliptic genus*]{} which is defined in terms of the action of $G$ on $V$.
We wish to discuss the McKay correspondence by means of examples. In its full generality the definitions are quite involved. If $G\subset SL(V)$, then the canonical divisor of $X$ is trivial. Sometimes $X$ admits a crepant resolution, i.e. a map $Y\to X$ from a smooth variety with (in this case) trivial canonical bundle. The McKay correspondence is a general rule which says that geometric invariants of $Y$ can be expressed in terms of group properties of $G$ and its representation $V$. The equivariant version of the McKay correspondence was established in [@Wae Theorem 7]. It becomes particularly apparent when applied to the quotients of representations $V/G$, see [@DBW §6]. The original proof of [@BoLi] is based on the analysis of toric singularities. Lemma 8.1 of [@BoLi] states the strongest form of an identity involving the theta function, but its formulation is considerably complicated. The formula becomes much simpler for symplectic quotients. For du Val singularities it can be related to the combinatorics of the Dynkin diagram. We would like to state explicitly what the McKay correspondence says for symplectic quotient singularities. Among other examples we study symplectic quotients of $\C^2$, that is du Val singularities. Suppose $\Z_n\subset SL_2(\C)$ consists of the diagonal matrices ${\rm diag}(\xi^k,\xi^{-k})$, $\xi= e^{2\pi \ii/n}$. The equality of the elliptic classes of $\C^2/\Z_n$ (computed via a resolution and via the orbifold construction) implies Theorem \[A1main\], which is an identity involving the Jacobi theta function. The formula specializes to the following simplified form $$\label{wstepid}n\, \Delta(n x,z)\Delta(-nx,z)=\tfrac1n\sum_{k,\ell=0}^{n-1}\Delta\left(\tfrac{k-\ell\tau}n+x,z\right)
\Delta\left(\tfrac{\tau\ell-k}n-x,z\right)\,,$$ where $$\Delta(a,b)=\frac{\theta_\tau(a+b)\theta'_\tau(0)}{\theta_\tau(a)\theta_\tau(b)}\,.$$ Similar identities can be obtained for various quotient singularities. Any resolution of a quotient singularity gives rise to a theta identity. But except in a few cases the resulting formulas are too complex to be presented in a compact form.
The example of $A_{n-1}$ singularity given above is particularly appealing. The related theta function identity is interesting on its own. In addition the elliptic class is self-dual, in the sense that if we exchange the ,,equivariant parameter” $x$ with the ,,dynamical parameter” $z$ the class only changes its sign. A similar effect of duality appears in the work on stable envelopes, [@RSVZ].
The identities we consider here involve the Jacobi theta function $\theta_\tau$, but can be specialized (taking $\tau\to \ii\infty$) to some trigonometric identities. One of these is the following one: $$\frac 1n\sum_{k=0}^{n-1}\frac{1}{1-2\cos(\frac{2k\pi }n) T+T^2}=\frac{1-T^{2 n}}{\left(1-T^2\right)
\left(1-T^{n}\right)^2}\,.$$ We encourage the reader to give an elementary proof of this formula.
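A quick numerical test is immediate; the following sketch (Python; the chosen values of $n$ and $T$ are arbitrary) compares the two sides of the last identity:

\begin{verbatim}
import cmath

def lhs(T, n):
    return sum(1/(1 - 2*cmath.cos(2*k*cmath.pi/n)*T + T**2)
               for k in range(n))/n

def rhs(T, n):
    return (1 - T**(2*n))/((1 - T**2)*(1 - T**n)**2)

T = 0.3 + 0.4j
for n in (2, 3, 5, 7):
    print(n, abs(lhs(T, n) - rhs(T, n)))   # ~0 for every n
\end{verbatim}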
The second author would like to thank Jaros[ł]{}aw Wi[ś]{}niewski for multiple conversations on quotient singularities and for his good spirit in general.
Basic functions
===============
Theta function
--------------
Let us introduce the notation $$\operatorname{e}(x)=e^{2\pi \ii x}\,.$$ For $\upsilon,\, \tau\in \C$, ${\rm im}(\tau)>0$ let $q_0=\operatorname{e}(\tau/2)=e^{\pi \ii \tau}$. We define the theta function $\theta_\tau(\upsilon)$ as in [@Cha]: $$\begin{aligned}
\theta_\tau(\upsilon)&=\frac 1\ii\sum_{n=-\infty}^\infty(-1)^n q_0^{ (n+\frac12)^2}\operatorname{e}((2n+1)\upsilon/2)=\\
&=2\sum_{n=0}^\infty(-1)^n q_0^{ (n+\frac12)^2}\sin((2n+1)\pi \upsilon)
\,,\end{aligned}$$ see [@We §24 (4)] or [@Cha Ch. V.1]. According to the Jacobi product formula [@Cha Ch V.6] $$\theta_\tau(\upsilon)=2q_0^{\frac14} \sin (\pi \upsilon)
\prod_{l=1}^{\infty}(1-q_0^{2l})
\big(1-q_0^{2l} \operatorname{e}(\upsilon )\big)\big(1-q_0^{2l} /\operatorname{e}(\upsilon)\big)\,.$$ In [@BoLi] the variable $q=q_0^2=e^{2\pi\ii \tau}$ is used, it also fits to the convention of [@RTV; @RW]. Therefore further we will express our formulas in that variable. $$\label{Jacobi-prod-for}
\theta_\tau(\upsilon)=2q^{\frac18} \sin (\pi \upsilon)
\prod_{l=1}^{\infty}(1-q^{l})
\big(1-q^{l} \operatorname{e}(\upsilon)\big)\big(1-q^{l}/ \operatorname{e}(\upsilon)\big)\,.$$ Here the symbol $q^{\frac18}$ simply means $\operatorname{e}(\tau/8)=e^{\pi \ii \tau/4}$. The function theta $\theta_\tau$ is odd $$\theta_\tau(-\upsilon)=-\theta_\tau(\upsilon)\,,$$ and it satisfies quasi-periodic identities $$\theta_\tau(\upsilon+1)=-\theta_\tau(\upsilon)\,,$$ $$\label{deltatau0}\theta_\tau(\upsilon+\tau)=-q^{-1/2}\, \operatorname{e}(- \upsilon)\, \theta_\tau(\upsilon)\,.$$ Moreover $$\theta_{\tau+1}(\upsilon)=\sqrt{\ii}\,\theta_\tau(\upsilon)\,,$$ where $\sqrt{\ii}=e^{\pi \ii/4}$, $$\theta_{-1/\tau}\left(\tfrac\upsilon\tau\right)=\sqrt{\tfrac\tau \ii} \,\operatorname{e}\!\left(\tfrac{\upsilon^2}{ 2 \tau}\right) \theta_\tau(\upsilon)\,.$$
The variable $\tau$ is treated as a parameter and we will often omit it. It is convenient to use multiplicative notation for the variables. Let $$\begin{aligned}
\vt(x)&=x^{1/2}(1-x^{-1})\prod_{\ell=1}^\infty(1-qx)(1-q/x)\\
&=x^{1/2}(1-x)^{-1}\, (x;q)_\infty\, (x^{-1},q)_\infty\,.\end{aligned}$$ Here $$(x;q)_\infty=\prod_{\ell=0}^\infty(1-q^\ell x)=(1-x)(1-qx)(1-q^2x)\dots$$ is the $q$-Pochhammer symbol. The function $\vt$ is defined on the double cover of $\C^*$, since we have $x^{1/2}$ in the formula. We have $$\label{constant}\theta(\upsilon)=c\; \vt(\operatorname{e}(\upsilon))\,,\qquad c=\tfrac 1 \ii\,q^{\tfrac18}(q;q)_\infty$$ The constant term of $\theta$ is irrelevant for us. In the formula for the elliptic class we prefer to use $\vt$ rather than $\theta$ notation. The function $\theta$ can be treated as a section of the line bundle ${\mathcal O_E}([0])$ over the elliptic curve $E=\C/\langle 1,\tau\rangle$ defined by $\tau$.
The Wolfram Mathematica package implements the Jacobi theta function with the following convention: $$\theta_\tau(\upsilon)=\ii\; \rm{EllipticTheta}[1,\pi\upsilon,q_0]\,,$$ where $q_0=e^{\pi \ii\tau}$. In the MAGMA package $$\theta_\tau(\upsilon)=\rm{JacobiTheta}(q_0,\pi\upsilon)\,.$$ The identities discussed in this paper were checked numerically to make sure that the exponents and the conventions agree.
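Independently of any package conventions, one may evaluate $\theta_\tau$ directly from the series and from the Jacobi product and compare the two. The sketch below (Python; the truncation orders and sample values are ad hoc choices) also tests the quasi-periodicity in the $\tau$ direction stated above:

\begin{verbatim}
import cmath

def e(x):                                    # e(x) = exp(2 pi i x)
    return cmath.exp(2j*cmath.pi*x)

def theta_series(v, tau, N=20):
    return 2*sum((-1)**k*cmath.exp(1j*cmath.pi*tau*(k + 0.5)**2)
                 *cmath.sin((2*k + 1)*cmath.pi*v) for k in range(N))

def theta_product(v, tau, N=40):
    q = e(tau)
    val = 2*cmath.exp(1j*cmath.pi*tau/4)*cmath.sin(cmath.pi*v)
    for l in range(1, N):
        val *= (1 - q**l)*(1 - q**l*e(v))*(1 - q**l/e(v))
    return val

v, tau = 0.31 + 0.12j, 0.7j
print(abs(theta_series(v, tau) - theta_product(v, tau)))              # ~0
# theta(v + tau) = -q^{-1/2} e(-v) theta(v)
print(abs(theta_series(v + tau, tau)
          + cmath.exp(-1j*cmath.pi*tau)*e(-v)*theta_series(v, tau)))  # ~0
\end{verbatim}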
The function $\operatorname{\Delta}_\tau(a,b)$ and $\delta_\tau(a,b)$
---------------------------------------------------------------------
In the definition of elliptic characteristic classes a quotient of the form $$\operatorname{\Delta}_\tau(a,b)=\frac{\theta'_\tau(0)\,\theta_\tau(a+b)}{\theta_\tau(a)\,\theta_\tau(b)}$$ appears. The normalizing factor, which is independent of $\upsilon$, cancels out. This factor is omitted e.g. in [@AO; @RTV; @RW; @KRW]. The meromorphic function $\operatorname{\Delta}_\tau$ is odd $$\operatorname{\Delta}_\tau(-a,-b)=-\operatorname{\Delta}_\tau(a,b)\,,$$ it satisfies the following quasi-periodic identities: $$\operatorname{\Delta}_\tau(a,b)=\operatorname{\Delta}_\tau(b,a)\,,$$ $$\label{per1}\operatorname{\Delta}_\tau(a+1,b)=\operatorname{\Delta}_\tau(a,b)\,,$$ $$\label{per2}\operatorname{\Delta}_\tau(a+\tau,b)=\operatorname{e}(-b)\, \operatorname{\Delta}_\tau(a,b)\,.$$ Moreover $$\operatorname{\Delta}_{\tau+1}(a,b)=\operatorname{\Delta}_\tau(a,b)\,,$$ $$\operatorname{\Delta}_{-1/\tau}(a,b)=\tau \operatorname{e}\left(\tfrac{ab}\tau\right)\,\operatorname{\Delta}_\tau(a,b)\,.$$ The function $\operatorname{\Delta}_\tau$ defines a section of a bundle over $E^2$.
We prefer the multiplicative notation. We will write $\delta(\tfrac AB,\tfrac CD)$ instead of $\operatorname{\Delta}(a-b,c-d)$: $$\theta(a-b)=c\vt(\tfrac AB)\,,\qquad\tfrac1{2\pi \ii}\Delta(a-b,c-d)=\delta(\tfrac AB,\tfrac CD)$$ when $$A=\operatorname{e}(a)=e^{2\pi \ii a},\quad B=\operatorname{e}(b)=e^{2\pi \ii b},\quad C=\operatorname{e}(c)=e^{2\pi \ii c},\quad D=\operatorname{e}(d)=e^{2\pi \ii d}.$$ The function $\delta$ is well defined on $\C^*\times \C^*$ since the fractional powers cancel out. It has the following quasi-periodicity property $$\label{deltatau}\delta(qA,B)=B^{-1}\delta(A,B)\,,\qquad \delta(A,qB)=A^{-1}\delta(A,B)\,.$$
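When setting up computations it is convenient to realize $\delta$ through $\Delta$ and a logarithm; the quasi-periodicity can then be tested directly. A small self-contained sketch (Python, with an ad hoc truncation of the theta series) checks the two relations of the last display:

\begin{verbatim}
import cmath

def theta(v, tau, N=20):
    return 2*sum((-1)**k*cmath.exp(1j*cmath.pi*tau*(k + 0.5)**2)
                 *cmath.sin((2*k + 1)*cmath.pi*v) for k in range(N))

def Delta(a, b, tau, N=20):
    th1 = 2*cmath.pi*sum((-1)**k*(2*k + 1)*cmath.exp(1j*cmath.pi*tau*(k + 0.5)**2)
                         for k in range(N))
    return th1*theta(a + b, tau, N)/(theta(a, tau, N)*theta(b, tau, N))

def delta(A, B, tau):
    a = cmath.log(A)/(2j*cmath.pi)     # any branch works: Delta is 1-periodic
    b = cmath.log(B)/(2j*cmath.pi)
    return Delta(a, b, tau)/(2j*cmath.pi)

tau = 0.7j
q = cmath.exp(2j*cmath.pi*tau)
A, B = 1.3 + 0.4j, 0.8 - 0.2j
print(abs(delta(q*A, B, tau) - delta(A, B, tau)/B))   # ~0
print(abs(delta(A, q*B, tau) - delta(A, B, tau)/A))   # ~0
\end{verbatim}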
The $A_{n-1}$ identity
======================
Resolutions of quotient singularities give rise to certain identities for theta function. We will explain this mechanism in the subsequent sections, but first we would like to give an example of the cyclic symplectic quotient $\C^2/\Z_n$, i.e. the singularity $A_{n-1}$. It leads to the following interesting identity:
\[A1main\] Let $n>0$ be a natural number. Let $t_1$, $t_2$, $h_1$ and $h_2$ be indeterminate. The following two meromorphic functions in four variables (and $\tau$) are equal: $$\begin{aligned}
(*)&\quad A_n(t_1,t_2,h_1,h_2)=
\sum_{k=1}^{n}\operatorname{\delta\!}\left( \tfrac{t_1^{n-k+1}}{t_2^{k-1}},h_1^{k}h_2^{n-k}\right)
\operatorname{\delta\!}\left(\tfrac{t_2^{k}}{t_1^{n-k}},{h_1^{k-1}}{h_2^{n-k+1}}\right),\\
(**)&\quad B_n(t_1,t_2,h_1,h_2)=\frac1n \sum_{k=0}^{n-1}
\sum_{\ell=0}^{n-1}\big(\tfrac{h_2}{h_1}\big)^\ell\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{k-\ell\tau}n\right)t_1,h_1^n\right)
\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{\tau\ell-k}n\right)t_2,h_2^n\right).\end{aligned}$$
In particular when we specialize the identity to the case $t_1=t_2^{-1}=t$, $h_1=h_2$, and setting $\h=h_1^n$, we obtain $$n\, \delta(t^n,\h)\delta(t^{-n},\h)=\tfrac1n\sum_{k,\ell=0}^{n-1}\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{k-\ell\tau}n\right)t,\h\right)
\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{\tau\ell-k}n\right)t^{-1},\h\right)\,,$$ which is the identity written in the multiplicative notation. To visualize the sum (\*) we can use the following quiver, which is the Goresky-Kottwitz-MacPherson (GKM) graph of the minimal resolution of the quotient singularity $A_{n-1}$. The quiver (oriented graph) is the following:
- $n$ vertices indexed by $V=\{1,2,\dots,n\}$,
- $n+1$ arrows indexed by ,,weights”. For a vertex $k\in V$ there is one arrow outgoing with the weight ${\rm out}(k)=(n-k,k)$ and one incoming with the weight ${\rm in}(k)=(n-k+1,k-1)$. The external arrows have loose ends.
$$\label{GKM}\strzk{(n,0)}\boxed{1}
\strzd{(n-1,1)}\boxed{2}\strzd{(n-2,2)}
~~\dots~~~\strzd{(2,n-2)}\boxed{n-1}\strzd{(1,n-1)}
\boxed{n}\strzk{(0,n)}$$
Let $${\bf n}=(n,n)\,,\quad\tt=\{t_1,t_2^{-1}\}\,,\quad \tt^{(a,b)}=t_1^{a}t_2^{-b}\,,\quad\hh=\{h_1,h_2\}\,,\quad \hh^{(a,b)}=h_1^{a}h_2^b\,.$$ Then $$(*)=
\sum_{k\in V}\delta(\tt^{{\rm in}(k)},\hh^{{\bf n}-{\rm out}(k)})\cdot\delta(\tt^{-{\rm out}(k)},\hh^{{\bf n}-{\rm in}(k)})$$ The second sum (\*\*) is clearly the summation over the lattice points contained in a parallelogram $$\Lambda=\big\{\tfrac kn-\tfrac \ell n \tau\in\C\;\big|\;k\in\{0,1,\dots,n-1\}\,,\;\ell\in\{0,1,\dots,n-1\}\big\}\,.$$ The set $\Lambda$ is in bijection with $n$-torsion points of $E=\C/\langle 1,\tau\rangle$. Note that in $(*)$ the variables are mixed in arguments of $\delta$, while in $(**)$ they are separated.
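Identities of this kind are straightforward to test numerically. Passing from $\delta$ to $\Delta$ multiplies both sides by the same power of $2\pi\ii$, so it suffices to compare the additive versions. The sketch below (Python; the truncation order and the sample values are ad hoc choices of ours) evaluates both sides of the specialized identity for a chosen $n$:

\begin{verbatim}
import cmath

def theta(v, tau, N=20):
    return 2*sum((-1)**k*cmath.exp(1j*cmath.pi*tau*(k + 0.5)**2)
                 *cmath.sin((2*k + 1)*cmath.pi*v) for k in range(N))

def Delta(a, b, tau, N=20):
    th1 = 2*cmath.pi*sum((-1)**k*(2*k + 1)*cmath.exp(1j*cmath.pi*tau*(k + 0.5)**2)
                         for k in range(N))
    return th1*theta(a + b, tau, N)/(theta(a, tau, N)*theta(b, tau, N))

n, tau = 3, 0.8j
x, z = 0.17 + 0.05j, 0.23 - 0.11j    # additive variables; z is the second (dynamical) slot
lhs = n*Delta(n*x, z, tau)*Delta(-n*x, z, tau)
rhs = sum(Delta((k - l*tau)/n + x, z, tau)*Delta((tau*l - k)/n - x, z, tau)
          for k in range(n) for l in range(n))/n
print(abs(lhs - rhs))                # expect ~0 up to truncation error
\end{verbatim}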
\[A1\] For $n=2$ we have $$\begin{aligned}
(*)\quad&
\delta(t_1^2,h_1h_2)\delta(t_2/t_1,h_2^2)\;+
\delta(t_1/t_2,h_1^2)\delta(t_2^2,h_1h_2)\,\\
(**)\quad&\frac12 \Big(
\delta(t_1,h_1^2)\;\delta(t_2,h_2^2)\\
&\phantom{aaaa}+\delta(-t_1,h_1^2)\;\delta(-t_2,h_2^2)\\
&\phantom{aaaaaaa}+\delta(t_1q^{-1/2},h_1^2)\;\delta(t_2q^{1/2},h_2^2)\\
&\phantom{aaaaaaaaaa}+\delta(-t_1q^{-1/2},h_1^2)\;\delta(-t_2q^{1/2},h_2^2)\Big)\,.\end{aligned}$$ Note that for $t_1=t_2$ only $(**)$ makes sense since $\delta(x,y)$ has a pole at $x=1$. In the additive notation the formula reads $$\begin{aligned}
(*)\quad
&\operatorname{\Delta}(2t_1,h_1+h_2)\operatorname{\Delta}(- t_1+t_2,2h_2)+
\operatorname{\Delta}(t_1-t_2,2h_1)\operatorname{\Delta}(2t_2,h_1+h_2)
\,,\\
(**)\quad&\frac1n\sum_{k,\ell=0}^{n-1}\operatorname{\Delta}\left(t_1+\tfrac kn -\tau\tfrac\ell n,2h_1\right)
\cdot\operatorname{\Delta}\left(t_2-\tfrac kn +\tau\tfrac\ell n,2h_2\right)=\\&=\frac12 \Big(\operatorname{\Delta}(t_1,2h_1)\operatorname{\Delta}(t_2,2h_2)\\
&\phantom{aaaaaa}+\operatorname{\Delta}(t_1+\tfrac12,2h_1)\operatorname{\Delta}(t_2-\tfrac12,2h_2)\\
&\phantom{aaaaaaaaa}+\operatorname{\Delta}(t_1-\tfrac12\tau,2h_1)\operatorname{\Delta}(t_2+\tfrac12\tau,2h_2)\\
&\phantom{aaaaaaaaaaaa}+\operatorname{\Delta}(t_1+\tfrac12-\tfrac12\tau,2h_1)\operatorname{\Delta}(t_2-\tfrac12+\tfrac12\tau,2h_2)\Big)\,.\end{aligned}$$
The Theorem \[A1main\] is a consequence of the equality of elliptic classes: the class computed from the resolution and the orbifold elliptic class. In the next sections we will recall the necessary definitions and transform them into a more convenient form, specific for symplectic singularities. The proof of Theorem \[A1main\] is given in §\[A1proof\].
Elliptic class
==============
Smooth variety
--------------
Suppose that $X$ is a smooth complex variety. Then the elliptic class in cohomology is defined by the formula $$\Ell_0(X)=\prod_{k=1}^{\dim X} x_k \frac{\theta(x_k-z)}{\theta(x_k)}\in H^*(X)[[q,z]]\,,$$ where the $x_k$’s are the Chern roots of $TX$. In the multiplicative notation $$\Ell_0(X)=\prod_{k=1}^{\dim X} x_k \frac{\vt(\xi_k\h)}{\vt(\xi_k)}\in H^*(X)[[q,z]]\,,$$ where $\xi_k=e^{x_k}$, $\h=\operatorname{e}(-z)$. The normalized elliptic class is defined by $$\label{normalization}\Ell(X)=\left(\tfrac{\vt'(1)}{\vt(h)}\right)^{\dim X}\Ell_0(X)=eu(TX)\prod_{k=1}^{\dim X}\delta(e^{x_k},\h)\,.$$
Here $eu(TX)$ denotes the Euler class in $H^*(X;\Q)$. The constant $\tfrac{\vt'(1)}{\vt(h)}$ appearing in the definition of $\Delta$ is chosen to have $$\lim_{x\to 0}\left(x\delta(e^x,h)\right)=1\,.$$ (We note that for the normalization used in [@BoLi0] the analogous limit is equal to $\frac1{2\pi \ii}$.) The multiplicative notation has a deeper sense. Instead of the elliptic class in cohomology we can consider ,,elliptic bundle” in the K-theory, see [@BoLi Formula (3)].
The nonequivariant elliptic genus of the projective line is equal to $$Ell(\P^1)=\int_{\P^1}\Ell(\P^1)=\lim_{t\to 1}\left(\delta(t,\h)+\delta(t^{-1},\h)\right)=2\frac{h \vt '(h)}{\vt(h)}\,.$$
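In additive notation, writing $t=\operatorname{e}(x)$ and $\h=\operatorname{e}(w)$, the cancellation of the two simple poles gives $\lim_{x\to 0}\big(\operatorname{\Delta}(x,w)+\operatorname{\Delta}(-x,w)\big)=2\,\theta'(w)/\theta(w)$, which, after dividing by $2\pi\ii$, is the right hand side above. This is easy to confirm numerically; a short sketch (Python, reusing the functions theta and Delta from the listing in the previous section, with ad hoc sample values):

\begin{verbatim}
x, w, tau = 1e-4, 0.23 - 0.11j, 0.8j
s = Delta(x, w, tau) + Delta(-x, w, tau)
eps = 1e-6       # crude numerical derivative of theta at w
logder = (theta(w + eps, tau) - theta(w - eps, tau))/(2*eps*theta(w, tau))
print(abs(s - 2*logder))   # ~0
\end{verbatim}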
Interpretation of the elliptic class as the equivariant Euler class in elliptic cohomology {#explanation}
------------------------------------------------------------------------------------------
Naturally the elliptic class belongs to a version of equivariant elliptic cohomology $E_{\T\times \C^*}(X)$. We present below an explanation. We assume that $E$ is an equivariant, complex oriented generalized cohomology theory, with a map $\Theta$ to equivariant cohomology extended by a formal parameter $q$, such that the Euler class in $E$ of a line bundle $L$ is mapped to the theta function: $$\begin{matrix}\Theta:&E_\T(X)&\to&H_\T(X;\Q)\otimes \C((q))\\ \\
&eu^E_{\T}(L)&\mapsto &\tfrac{2\pi \ii}{\theta'(0)}\, \theta\!\left(\frac{c_1(L)}{2\pi \ii}\right)&=&\tfrac{1}{\vt'(1)}\, \vt\!\left(e^{c_1(L)}\right)\,.\end{matrix}$$ For the notion of the orientation and Euler class in generalized cohomology theories see [@Stong Chapter 5], [@FF §42]. A version of the theta function as the choice for the Euler class (or equivalently the choice of the related formal group law) appears in the literature, see [@SegalEll p.197] and in [@Totaro]. For our purposes it is enough to take as the equivariant elliptic cohomology the usual (completed) Borel equivariant cohomology $E_\T(-)=\hat H_\T(-;\Q)\otimes \C((q))$ with the formal group law $F(x,y)=\theta(\theta^{-1}(x)+\theta^{-1}(y))$. Suppose a torus $\T$ (possibly trivial) acts on $X$. We consider a bigger torus $\wTT=\T\times \C^*$, where $\C^*$ acts on $X$ trivially. The bundle $TX$ is an $\wTT$-equivariant bundle, with $\C^*$ acting via scalar multiplication. Formally we write $TX\otimes \h$, where $TX$ denotes the tangent bundle with the trivial action of $\C^*$. Let $$z=-\tfrac{c_1(\h)}{2\pi \ii }\in H^*_\wTT(\{{\rm pt}\})\,,\qquad \h=e^{-z}\,.$$ The elliptic class of $X$ is defined as the elliptic Euler class of the equivariant bundle $TX\otimes \h$. The elliptic genus of $X$ is defined as the push forward to the point of $eu^E_{\wTT}(TX\otimes \h)$. By the generalized Riemann-Roch theorem [@FF 42.1.D] the push-forward in $E$ can be replaced by the push forward of the cohomology class $$\begin{gathered}
\tfrac{eu_\wTT(TX)}{\Theta(eu_\wTT^E)}\cdot\Theta\!\left(eu^E_\wTT(TX\otimes \h)\right)=
\tfrac{\prod_{k=1}^{\dim X}x_k}{\prod_{k=1}^{\dim X}\vt(e^{x_k})}\cdot\prod_{k=1}^{\dim X}\vt(e^{x_k} \h)=\\
=eu_\wTT(TX)\cdot\prod_{k=1}^{\dim X}\frac{\vt(e^{x_k} \h)}{\vt(e^{x_k})}\;\;\in\;\; H_\wTT(X;\Q)\otimes
\C((q))
\,.\end{gathered}$$ Normalization by the factor $\left(\frac{\vt'(1)}{\vt(h)}\right)^{\dim X}$ might be interpreted as computing the ,,virtual” Euler class of the bundle $$TX\otimes \h-\h^{\oplus \dim X}\,.$$ The price we have to pay is that we invert $q$. The result belongs to $\hat H^*_\wTT(X;\Q)((q))$.
It is natural to write the analogous transformation $\Theta^K$ to K-theory. Nevertheless the requirement $eu^E(L)\mapsto \vt(L)\in K_\T(-)\otimes\C((q))$ does not make sense, since in the definition of $\vt$ the square root of the argument appears. On the other hand the formula $$eu^K(L)\,\delta(L,\h)=(1-L^{-1})\,\delta(L,\h)$$ does make sense. Therefore the (image of the) elliptic class is defined in the equivariant K-theory. If the fixed point $x\in X^\T$ is isolated, then the localized class $$\Theta^K\big(eu^E_\T(TX\otimes \h-\h^{\oplus \dim X})\big)_x= \prod_{k=1}^{\dim X}\vt({\xi_k})\delta({\xi_k}, \h)$$ (where $\xi_k$ are the Grothendieck roots of $TX$) is understood formally in a completion of the ring ${\rm R}(\T)\otimes \C((q))$.
Elliptic cohomology as a generalized cohomology theory was constructed in several setups; still, the construction of an equivariant version does not seem to be fully satisfactory. Of course rationally the theory is much easier. In the modern approach the elliptic cohomology ring $E_\T(X)$ is replaced by a scheme, and the Euler class is a section of some line bundle over that scheme, see [@GKV], [@Ga §7.2], [@ZZ §3.2].
Elliptic class of a singular variety admitting a crepant resolution
-------------------------------------------------------------------
The elliptic class of a singular variety was defined by Borisov and Libgober. In their original paper [@BoLi] they define a cohomology class for a suitable resolution of singularities $Y\to X$. The classes agree whenever one resolution is dominated by another. The push forward of that class to $X$ defines a homology class of the singular variety itself, which does not depend on the choice of the resolution. The equivariant version of the theory works equally well: it is treated in [@Wae; @DBW; @RW]. We will assume that a torus $\T$ acts on $X$ and $Y$ and that the resolution map $f:Y\to X$ is equivariant. For convenience suppose that $X$ is embedded in a smooth ambient space $M$ and the set of fixed points $M^\T$ is finite. Let us assume that the fixed point set $Y^\T$ is finite as well. Then, using the Lefschetz-Riemann-Roch localization theorem for the push forward (see e.g. [@ChGi Th. 5.11.7], [@RW Prop. 2.7]), we can express the elliptic characteristic class as a sum of expressions depending on the local data coming from the fixed points $Y^\T$. If the resolution is crepant, then the localized elliptic class of $X$ restricted to a fixed point $x\in X^\T$ satisfies $$\label{rescrepformula}\frac{\Ell^\T(X)_{|x}}{eu(x,M)}=
\sum_{\tx\in f^{-1}(x) }\frac{\Ell^\T(Y)_{|\tx}}{eu(\tx,Y)}=
\sum_{\tx\in f^{-1}(x) }
\prod_{k=1}^{\dim Y} \delta(\tt^{w_k(\tx)},\h)
\,,$$ where $w_1(\tx),w_2(\tx),\dots,w_{\dim Y}(\tx)$ are the weights of the torus action on $T_\tx Y$ and $eu(x,M)$ is the product of the Chern roots of $T_xM$.
The original notation, given by Borisov-Libgober, is additive and uses an indeterminate $z$. Our $\h$ translates to $\operatorname{e}(-z)$. Also the equivariant variables $\tt=(t_1,t_2,\dots, t_n)$ should be understood as the exponents of the generators of $H^2_\T({{\rm pt}})$, or the elements of the representation ring ${\rm R}(\T)$.
The relative elliptic class
---------------------------
If the resolution $f:Y\to X$ is not crepant, then the formula should be corrected by the discrepancy divisor. It is natural to consider the pairs $(X,D_X)$ from the beginning, where $D_X$ is a Weil $\Q$-divisor, such that $K_X+D_X$ is $\Q$-Cartier. Then $\Ell^\T(X,D_X)$ is equal to $f_*\Ell^\T(Y,D_Y)$ whenever $f^*(K_X+D_X)=K_Y+D_Y$ and the pair $(Y,D_Y)$ is a resolution of $(X,D_X)$. The formula for $\Ell^\T(Y,D_Y)$ makes sense only when the coefficients of $D_Y$ are not equal to 1, and to show independence of the resolution Borisov and Libgober assume that the coefficients are smaller than 1 (equivalently, that the pair $(X,D_X)$ is Kawamata log-terminal). We give the formula for $\Ell^\T(Y,D_Y)$ in the equivariant case with isolated fixed point set. We assume that in some equivariant coordinates $$D_Y=\{ z_1^{a_1}\, z_2^{a_2}\dots z_n^{a_n}=0\}$$ and the weight of the coordinate $z_k$ is equal to $w_k$ for $k=1,2,\dots, n=\dim Y$. Then $$\label{resolutionformula}\frac{\Ell^\T(Y,D_Y)_{|\tx}}{eu(\tx,Y)}=
\prod_{k=1}^n \delta(
\tt^{w_k},\h^{1-a_k})
\,.$$ The symbol $\delta(
\tt^{w_k},\h^{1-a_k})$ is understood formally. It is a Laurent power series belonging to a suitable extension of the representation ring ${\rm R}(\T\times \C^*)={\rm R}(\T)[\h^{\pm1}]$, or via the Chern character to the completion of $H^*(B\T)[z]$, see [@DBW; @RW] for further explanations.
If the fixed point set $Y^\T$ is not finite, as happens for the singularity $D_4$, then the summation runs over the components of the fixed point set. In that case one has to apply the Atiyah-Bott-Berline-Vergne localization theorem in its full form. In concrete cases one can find a neighbourhood of the fixed component on which a bigger torus acts with finitely many fixed points. In the case of $D_4$ it is possible to proceed that way.
We can extend the point of view presented in §\[explanation\]. Locally we compute the equivariant elliptic Euler characteristic of the bundle $TX$ twisted by $h$ in some power, not in the homogeneous manner, but the twisting depends on the direction. The exponents are encoded in the divisor $D_Y$. The relative elliptic class is the image of the Euler class in elliptic cohomology of the virtual bundle $$\bigoplus_{k=1}^{\dim Y} \xi_k\otimes \h^{1-a_k}\,- \,\bigoplus_{k=1}^{\dim Y} \h^{1-a_k}\,\,,$$ where $\xi_k$ are the Grothendieck roots of $T_\tx Y$ adapted to the divisor $D_Y$. To make sense of that formula we extend the coefficients by the roots of line generators. Equivalently, we replace the torus $\T$ by its finite cover. The localized elliptic classes considered in [@RW] belong to that ring.
Resolution of singularities and theta identities
================================================
Before discussing theta identities related to the quotient singularities let us give examples of identities which can be deduced from the invariance of the elliptic class with respect to the change of the resolution.
Blowup and the Fay trisecant relation {#Fay}
-------------------------------------
Let $X=\C^2$, $D_X=a_1 D_1+a_2 D_2$, where $D_i=\{z_i=0\}$. Let $Y$ be the blowup of $\C^2$ at 0. The exceptional divisor is denoted by $E$. Then $$D_Y=a_1\widetilde D_1+a_2\widetilde D_2+(a_1+a_2-1)E$$ where $\widetilde D_i$ is the strict transform of $D_i$. The torus $\T=(\C^*)^2$ acts coordinatewise on $X$. By Lefschetz-Riemann-Roch [@ChGi Th. 5.11.7] for the blow down map $Y\to X$ we have an equality of the equivariant elliptic classes $$\begin{aligned}
\frac{\Ell^\T(X,D_X)_{|0}}{eu(0,X)_{|0}}&=\delta(t_1,\h^{1-a_1})\delta(t_2,\h^{1-a_2})=\\
&=\delta\big(t_1,\h^{2-a_1-a_2}\big)\delta\big(\tfrac{t_2}{t_1},\h^{1-a_2}\big)+
\delta\big(t_2,\h^{2-a_1-a_2}\big)\delta\big(\tfrac{t_1}{t_2},\h^{1-a_1}\big)\,.\end{aligned}$$ Setting $h_i=\h^{1-a_i}$ and expanding the definition of $\delta$ we obtain $${\frac{\vt\big(t_1 h_1\big) } {\vt\big(t_1\big)\vt\big(h_1\big)}}
{\frac{\vt\big(t_2h_2\big) } {\vt\big(t_2\big)\vt\big(h_2\big)}}
=
{\frac{\vt\big(t_1 h_1h_2\big) } {\vt\big(t_1\big)\vt\big(h_1h_2\big)}}
{\frac{\vt\big(\tfrac{t_2}{t_1} h_2\big) } {\vt\big(\tfrac{t_2}{t_1}\big)\vt\big(h_2\big)}}
+
{\frac{\vt\big(t_2 h_1h_2\big) } {\vt\big(t_2\big)\vt\big(h_1h_2\big)}}
{\frac{\vt\big(\tfrac{t_1}{t_2} h_1\big) } {\vt\big(\tfrac{t_1}{t_2}\big)\vt\big(h_1\big)}}\,.$$ Additively, with $t_i=\operatorname{e}(x_i)$ and $h_i=\operatorname{e}(\lambda_i)$, $${\frac{\theta(x_1 +\lambda_1) }{\theta(x_1)\theta(\lambda_1)}}
{\frac{\theta(x_2 +\lambda_2) }{\theta(x_2)\theta(\lambda_2)}}
=
{\frac{\theta(x_1 +\lambda_1+\lambda_2) }{\theta(x_1)\theta(\lambda_1+\lambda_2)}}
{\frac {\theta(x_2-x_1 +\lambda_2) }{\theta(x_2-x_1)\theta(\lambda_2)}}
+
{\frac{\theta(x_2 +\lambda_1+\lambda_2) }{\theta(x_2)\theta(\lambda_1+\lambda_2)}}
{\frac{\theta(x_1-t_2 +\lambda_1) }{\theta(x_1-x_2)\theta(\lambda_1)}}\,.$$ This equality is equivalent to Fay’s trisecant identity (see [@GTL]) $$\begin{gathered}
\theta(a+c)\theta(a-c)\theta(b+d)\theta(b-d)=\\
\theta(a+b)\theta(a-b)\theta(c+d)\theta(c-d)+ \theta(a+d)\theta(a-d)\theta(b+c)\theta(b-c)
\label{trisecant}\end{gathered}$$ by setting $$a =\frac12
( 2 x_1 - x_2+\lambda_1)\,,\;
b =\frac12(x_2+\lambda_1 )\,,\;
c =\frac12(x_2+\lambda_1 + 2 \lambda_2)\,,\;
d=\frac12(x_2-\lambda_1)\,.$$
In general, the blowup of $\C^n$ at 0 gives rise to the following identity:
For $n\geq 2$ we have $$\prod_{i=1}^n\delta\big(t_i,h_i\big)=\sum_{i=1}^n\Big(\delta\big(t_i,{\textstyle\prod_{k=1}^n h_k}\big)\cdot\prod_{j\neq i}\delta\big(\tfrac{t_j}{t_i},h_j\big)
\Big)\,.$$
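Such identities can again be confirmed numerically. The following sketch (Python; it reuses the function delta from the sketch in the section on basic functions, and the test values are ours) compares the two sides for $n=3$:

\begin{verbatim}
from functools import reduce
from operator import mul

def prod(xs):
    return reduce(mul, xs, 1)

tau, n = 0.7j, 3
t = [1.3 + 0.4j, 0.6 - 0.3j, 0.9 + 0.9j]
h = [0.8 - 0.2j, 1.1 + 0.5j, 0.4 + 0.7j]
H = prod(h)

lhs = prod(delta(t[i], h[i], tau) for i in range(n))
rhs = sum(delta(t[i], H, tau)
          *prod(delta(t[j]/t[i], h[j], tau) for j in range(n) if j != i)
          for i in range(n))
print(abs(lhs - rhs))    # expect ~0
\end{verbatim}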
Braid relation
--------------
The following expressions are equal: $$\label{s1s2s1}
\delta\left(\frac{t_2}{t_1},\frac{\mu_3}{\mu_2}\right)
\delta\left(\frac{t_3}{t_2},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{t_2}{t_1},\frac{\mu_2}{\mu_1}\right)
+
\delta\left(\frac{t_1}{t_2},h\right)
\delta\left(\frac{t_3}{t_1},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{t_2}{t_1},h\right)$$ $$\label{s2s1s2}
\delta\left(\frac{t_3}{t_2},\frac{\mu_2}{\mu_1}\right)
\delta\left(\frac{t_2}{t_1},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{t_3}{t_2},\frac{\mu_3}{\mu_2}\right)
+
\delta\left(\frac{t_2}{t_3},h\right)
\delta\left(\frac{t_3}{t_1},\frac{\mu_3}{\mu_1}\right)
\delta\left(\frac{t_3}{t_2},h\right),$$ [@RW §9]. Geometrically the above expressions come from two natural resolutions of the singularity $$z_{31}(z_{31}-z_{21}z_{32})=0$$ describing the boundary of the big cell in the complete flag variety $GL_3(\C)/B$ intersected with the opposite cell, see [@WeSel Ex. 16.1]. The equality is equivalent to the [*four term identity*]{} [@RTV eq. (2.7)].
Strangely, the Braid relation for the group $Sp_2(\C)$ is trivial and for $G_2$ it can be deduced from $SL_3(\C)$.
Lehn–Sorger example {#Lehn-Sor}
-------------------
Let $G\to Sp_2(\C)\subset GL_4(\C)$ be the example described in [@LeSo; @Grab]. Here $G$ is the bi-tetrahedral group. The example is the direct sum $W\oplus W^*$, where $W$ is one of two-dimensional non-self-conjugate representations of $G$. The quotient $(W\oplus W^*)/G$ admits two crepant resolutions. The computation of the tangent weights can be found in [@Grab Th. 4.5]. The equivariant elliptic class computed from these resolutions give the same result, but the expressions for the elliptic class differ by the switch of variables. After subtraction of the identical summands on both sides, the resulting identity can be written as: $$F(t_1,t_2)=F(t_2,t_1)\,,$$ where $$\begin{gathered}
\label{F-def}F(t_1,t_2)=\delta\left(\tfrac{t_1}{t_2},h\right)\delta\left(\tfrac{t_1^3}{t_2},h\right)\delta\left(t_2^2,h\right)\delta\left(\tfrac{t_2^2}{t_1^2},h\right)+\\+\delta\left(t_1^2,h\right)\delta\left(\tfrac{t_1^4}{t_2^2},h\right)\delta\left(\tfrac{t_2}{t_1},h\right)\delta\left(\tfrac{t_2^3}{t_1^3},h\right)+\\+\delta\left(\tfrac{t_1^3}{t_2^3},h\right)\delta\left(\tfrac{t_1^2}{t_2^2},h\right)\delta\left(\tfrac{t_2^3}{t_1},h\right)\delta\left(\tfrac{t_2^4}{t_1^2},h\right)\,.\end{gathered}$$ Our formula is nothing but plugging in the exponents (of $t$-variables) given by [@Grab Th. 4.5]. This example is continued in §\[LScont\].
The orbifold elliptic class
===========================
The orbifold elliptic class is defined in the presence of an action of a finite group $G$. Again, for a singular equivariant pair it is defined as the image of the orbifold elliptic class of a resolution: $$\Ell_{orb}(X,D_X,G)=f_*\Ell_{orb}(Y,D_Y,G)\,.$$ Let us quote the original definition of [@BoLi Def. 3.2] in a precise form. Let $(Y,D_Y)$ be a Kawamata log-terminal $G$-normal pair (cf [@BoLi Def. 3.1]) with $D_Y=-\sum_\ell d_\ell E_\ell$. We define *orbifold elliptic class* of the triple $(Y,D_Y,G)$ as an element of $H^*(Y)$ (or the Chow group) by the formula $$\Ell_{orb}(Y,D_Y,G):=
\frac 1{\vert G\vert }\sum_{g,h,gh=hg}\sum_{Z\subset Y^{g,h}}(i_{Y^Z})_*
\Bigl(
\prod_{k:\;\lambda^Z_k=\mu^Z_k=0} x_k
\Bigr)$$ $$\times\prod_{k} \frac{ \theta(\frac{x_k}{2 \pi \ii }+
\lambda^Z_k-\tau \mu^Z_k-z )}
{ \theta(\frac{x_k}{2 \pi \ii }+
\lambda^Z_k-\tau \mu^Z_k)} e^{2 \pi \ii \mu^Z_kz}$$ $$\times\prod_{\ell}
\frac
{\theta(\frac {e_\ell}{2\pi\ii}+\epsilon_\ell^Z-\zeta_\ell^Z\tau-(d_\ell+1)z)}
{\theta(\frac {e_\ell}{2\pi\ii}+\epsilon_\ell^Z-\zeta_\ell^Z\tau-z)}
\,\frac{\theta(-z)}{\theta(-(d_\ell+1)z)} e^{2\pi\ii d_\ell\zeta_\ell^Zz}.$$ Here $Z\subset Y^{g,h}$ is an irreducible component of the fixed set of the commuting elements $g$ and $h$ and $i_{Z} : Z\to Y$ is the corresponding embedding. The restriction of $TY$ to $Z$ splits as a sum of one dimensional representations on which $g$ (respectively $h$) acts with the eigenvalues $\operatorname{e}(\lambda^Z_k)$ (resp. $\operatorname{e}(\mu^Z_k)$), $\lambda^Z_k,\mu^Z_k\in \Q \cap [0, 1)$. The Chern roots of $(TY)_{|Z}$ are denoted by $x_k$. In addition, $e_\ell = c_1 (E_\ell )$ and $\operatorname{e}(\epsilon_\ell^Z)$, $\operatorname{e}(\zeta_\ell^Z)$ with $\epsilon_\ell^Z,\zeta_\ell^Z \in \Q \cap [0, 1)$ are the eigenvalues of $g$ and $h$ acting on ${\mathcal O}(E_\ell)$ restricted to $Z$ if $E_\ell$ contains $Z$ and is zero otherwise.
We assume that the torus action commutes with the action of $G$. The formula for the $\T$-equivariant orbifold elliptic class simplifies significantly when the fixed point set is discrete. For a normal crossing divisor situation with isolated fixed points we can assume that (at the fixed points) the divisor classes coincide with the Chern roots. Then in the multiplicative notation (and after correcting by the factor $(2\pi \ii)^{\dim Y}$) the local formula for the orbifold class takes form $$\label{elldefor}\frac{\Ell_{orb}^\T(Y,D_Y,G)}{eu(\tx,Y)}\;=\;
\frac1{|G|}\sum_{g,h,gh=hg}
\prod_{k=1}^n \delta\left(\,\operatorname{e}(\lambda^{g,h}_k- \mu^{g,h}_k\tau)\tt^{w_k}\,,\,\h^{1-a_k}\,\right)\,\h^{(a_k-1)\mu^{g,h}_k}$$ $$\label{mckformula}
=\frac1{|G|}\sum_{g,h,gh=hg}
\prod_{k=1}^n \delta\left(\,
\zeta^{g,h}_k q^{-\mu^{g,h}_k}\tt^{w_k}\,,\,\h^{1-a_k}\,\right)\,\h^{(a_k-1)\mu^{g,h}_k}
\,,$$ where $
\zeta^{g,h}_k=\operatorname{e}(\lambda^{g,h}_k)$ for $k=1,2,\dots,\dim Y$ are the eigenvalues of $g$ acting on $(TY)_{|Y^{g,h}}$. The fixed point sets $Y^{g,h}$ are considered locally, therefore we do not use $Z$ as the superscript of the eigenvalues, but the pair $(g,h)$. The main result of [@BoLi] states that
\[BoLimK\] Let $(X;D_X)$ be a Kawamata log-terminal pair which is invariant under an effective action of $G$ on $X$. Let $\psi: X\to X/G $ be the quotient morphism. Then $$\psi_* \Ell_{orb}(X,D_X,G)=\Ell(X/G,D_{X/G}),$$ provided that $\psi^*(K_{X/G} + D_{X/G}) = K_X + D_X$.
For the equivariant version see [@Wae; @DBW]. If $X$ is a vector space with the linear action of a finite group $G$ commuting with $\T$, then the $\T$-equivariant version of the equality above is equivalent to the equality of Laurent power series $$\frac{\Ell_{orb}^\T(X,D_X,G)_{|0}}{eu(0,X)}=\frac{\Ell^\T(X/G,D_{X/G})_{|\psi(0)}}{eu(\psi(0),M)}\,.$$ If $D_{X/G}=0$ and $Y\to X/G$ is a resolution then the right hand side is given by and the left hand side is the sum over the fixed points $Y^\T$ of the expressions given in the formula . If the resolution is crepant, then the formula becomes simpler, the summands are of the form .
Interpretation of the orbifold elliptic class in the spirit of §\[explanation\] is somehow ambiguous. The conjugate elements $h\in G$ give the same contribution to the sum . Therefore this sum can be reorganized, so that we sum over the conjugacy classes $[h]\in Conj(G)$ and $g$ belongs to the centralizer $C(h)$: $$\label{inertia}\frac{\Ell_{orb}^\T(Y,D_Y,G)}{eu(\tx,Y)}\;=\;
\sum_{[h]\in Conj(G)}\frac1{|C(h)|}\sum_{g\in C(h)}
\prod_{k=1}^n \delta\left(\,
\zeta^{g,h}_k q^{-\mu^{g,h}_k}\tt^{w_k}\,,\,\h^{1-a_k}\,\right)\,\h^{(a_k-1)\mu^{g,h}_k}
\,.$$ In the limit with $q\to 1$ we obtain the formula [@DBW Th. 13, Cor. 14], which can be interpreted as the summation over components of the *extended quotient* $$\bigsqcup_{[h]\in Conj(G)}X^h/C(h)\,.$$ For the elliptic orbifold class this interpretation is only partial: the formula depends on the normal bundle of $X^h$ in $X$. The normal factor becomes trivial only in the limit, due to [@DBW equation (21)]. The summand corresponding to $h=1$ is equal to $$\frac1{|G|}\sum_{g\in G}
\prod_{k=1}^n \delta
\left(\,\zeta^{g}_k \tt^{w_k}\,,\,\h^{1-a_k}\,\right)\,,$$ where $\zeta^g_k=\zeta^{g,1}_k$ for $k=1,\dots n$ are the eigenvalues of $g$ acting on $T_\tx Y$. The formula can be treated as the averaged elliptic class of $(Y,D_Y)$. If $Y=V$ is a vector space, $a_k=0$, $\tt^{w_k}=t$, then the formula is a deformation of the expression for the classical Molien series, see [@DBW §9]. Precisely, it is the weighted dimension of the invariants of the ,,elliptic representation”
$\left(\h^{\dim V/2}\, \bigotimes_{n \ge 1}
\Bigl(\bigwedge_{-q^{n-1}\h^{-1}}V^* \otimes \bigwedge_{- q^n\h}
V \otimes S_{q^{n-1}}V^* \otimes
S_{q^n}V \Bigr)\right)^G\,.$
This elliptic representation only differs from [@BoLi Formula (3)] by the factor $S_1V^*=Sym(V^*)$ playing the role of the inverse of K-theoretic Euler class. The remaining summands of the formula for $h\neq 1$ are more complicated.
Orbifold elliptic class of symplectic singularities {#genus}
===================================================
Let us concentrate on the case of quotient singularities. Let $X\simeq \C^{2n}$ be a complex vector space with the standard symplectic structure and let $G\subset Sp_n(\C)
$ be a finite subgroup. Then the eigenvalues $\lambda^{g,h}_k$ and $\mu^{g,h}_k$ come in pairs. We can assume that $$\mu^{g,h}_{k+n}=\begin{cases}0&\text{if }\mu^{g,h}_{k}=0\\
1-\mu^{g,h}_{k}&\text{if }\mu^{g,h}_{k}>0\,.\end{cases}$$ and similarly for $\lambda^{g,h}_k$. For $\mu^{g,h}_{k}>0$ the pair of factors can be transformed: $$\begin{gathered}
\delta\left(\zeta^{g,h}_k q^{-\mu^{g,h}_k} \tt^{w_k},\h^{1-a_k}\right)\h^{(a_{k}-1)\mu^{g,h}_{k}}\cdot
\delta\left(\zeta^{g,h}_{n+k}q^{-\mu^{g,h}_{n+k}} \tt^{w_{n+k}},\h^{1-a_{n+k}}\right)
\h^{(a_{n+k}-1)\mu^{g,h}_{n+k}}\\=
\delta\left(\zeta^{g,h}_k q^{-\mu^{g,h}_k} \tt^{w_k},\h^{1-a_k}\right)\cdot
\delta\left((\zeta^{g,h}_{k})^{-1}q^{\mu^{g,h}_{n+k}-1} \tt^{w_{n+k}},\h^{1-a_{n+k}}\right)\cdot
\h^{(a_{k}-1)\mu^{g,h}_{k}+(a_{n+k}-1)(1-\mu^{g,h}_{k})}\end{gathered}$$ By we obtain $$\begin{gathered}
\label{sympmckay}=
\delta\left(\zeta^{g,h}_k q^{-\mu^{g,h}_k} \tt^{w_k},\h^{1-a_k}\right)\cdot\delta\left((\zeta^{g,h}_{k})^{-1}q^{\mu^{g,h}_{k}} \tt^{w_{n+k}},\h^{1-a_{n+k}}\right)\cdot \h^{1-a_{n+k}}\cdot
\h^{(a_{k}-1)\mu^{g,h}_{k}+(a_{n+k}-1)(1-\mu^{g,h}_{k})} \\
=
\delta\left(\zeta^{g,h}_k q^{-\mu^{g,h}_k} \tt^{w_k},\h^{1-a_k}\right)\cdot\delta\left((\zeta^{g,h}_{k})^{-1}q^{\mu^{g,h}_{k}} \tt^{w_{n+k}},\h^{1-a_{n+k}}\right)\cdot
\h^{\mu^{g,h}_{k}(a_k-a_{n+k})} \,.\end{gathered}$$ The same holds for $\mu^{g,h}_{k}=0$.
If the torus acts via the scalar multiplication and the divisor is empty (i.e. $a_k=0$), then the formula reduces to $$\delta\left(\zeta^{g,h}_k q^{-\mu^{g,h}_k} t,\h\right)\cdot\delta\left((\zeta^{g,h}_{k})^{-1}q^{\mu^{g,h}_{k}} t,\h\right)$$ or equivalently setting $\lambda=\lambda^{g,h}_k-\mu^{g,h}_k\tau$ we obtain the factor $$\label{Phi-def}\Phi(\lambda):=\delta\left(\operatorname{e}(\lambda) t,\h\right)\cdot\delta\left(\operatorname{e}(-\lambda) t,\h\right)$$
If the divisor is empty, and $\T=(\C^*)^2$ acts via the scalar multiplication on each summand in $V=W\oplus W^*$, then we obtain the factor $$\label{Psi-def}\Psi(\lambda):=\delta\left(\operatorname{e}(\lambda) t_1,\h\right)\cdot\delta\left(\operatorname{e}(-\lambda) t_2,\h\right) \,.$$ The elliptic class of this kind of singularity has a symmetry property:
Suppose $V=W\oplus W^*$, $G\subset GL(W)$, and $\T=(\C^*)^2$ acts via the scalar multiplication on each summand, as in the example of §\[Lehn-Sor\]. Then the elliptic class of $V/G$ is a symmetric function with respect to the coordinate characters $t_1,t_2$. Indeed, since $\delta(a,b)=-\delta(a^{-1},b^{-1})$, the factor $\Psi$ in the orbifold elliptic class has the property $$\Psi(\lambda)(t_1,t_2,h)=\Psi(\lambda)(t_2^{-1},t_1^{-1},h^{-1})\,.$$ Therefore $$\frac{\Ell^\T_{orb}(V,G,\emptyset)}{eu(0,V)}(t_1,t_2,h)=\frac{\Ell^\T_{orb}(V,G,\emptyset)}{eu(0,V)}(t_2^{-1},t_1^{-1},h^{-1})\,
.$$ In the expression for the elliptic class coming from a resolution there is no shift of variables therefore $$\frac{\Ell^\T(V/G,\emptyset)}{eu([0],M)}(t_2^{-1},t_1^{-1},h^{-1})=\frac{\Ell^\T(V/G,\emptyset)}{eu([0],M)}(t_2,t_1,h)\,
.$$ By Theorem \[BoLimK\] we obtain the conclusion. The symmetry may also be deduced geometrically.
Examples
========
The singularity $D_4$
---------------------
The singularity $D_4$ is the quotient of $\C^2$ by the bi-dihedral group of 8 elements which is isomorphic to the quaternionic group generated by the matrices $${\bf i}=\left(\begin{matrix}\ii& 0\\0& -\ii\end{matrix}\right)\,,\quad{\bf j}=
\left(\begin{matrix}0& 1\\-1& 0\end{matrix}\right)\quad\text{and}\quad {\bf k}={\bf i\,j}=
\left(\begin{matrix}0& \ii\\\ii& 0\end{matrix}\right)\,.$$ We consider the one dimensional torus $\T=\C^*$ acting via the scalar multiplication.
### The elliptic class computed via resolution
The following quiver represents the resolution of the singularity $D_4$ $$\begin{matrix}\nwarrow \\
^4\phantom{a}&\!\!\!\bullet&\\
&&\!\!\!\!\!_2\!\!\nwarrow&\\
&&&\boxed{\P^1}&\stackrel{2}{\longrightarrow}&\bullet&\stackrel{4}{\longrightarrow}\\
&&\!\!\!\!\!^2\!\!\swarrow\\
_4\phantom{a}&\!\!\!\bullet&\\
\swarrow
\end{matrix}$$
The internal arrows represent the exceptional divisors with nontrivial torus action. The divisor $\boxed{\P^1}$ is fixed by $\C^*$. The external arrows represent the normal directions pointing out from the exceptional divisor at the isolated fixed points. The number at each arrow stands for the weight of the action along the corresponding divisor. The localized elliptic class computed from this resolution equals $$\label{*d4}3\delta(t^{-2},\h)\cdot\delta(t^4,\h)+\int_{\P^1}\frac{\Ell^\T(Y)_{|\P^1}}{eu(\nu(\P^1))}\,.$$
\Phi(\tfrac 12 )+
\Phi(-\tfrac1 2\tau)+
\Phi(\tfrac 12 -\tau\tfrac1 2)
\right)
\,,$$ where $\Phi(\lambda)$ is defined by (\[Phi-def\]).
### The computation of the orbifold elliptic class
The sub-sum of the orbifold elliptic class indexed by the pairs $(g,1)$ is equal to $$S_{1}=\Phi(0)+\Phi(-\tfrac12)+6\Phi(-\tfrac14)\,.$$ The sub-sum indexed by the pairs $(g,-1)$ is equal to $$S_{-1}=\Phi(-\tfrac12\tau)+\Phi(\tfrac12-\tfrac12\tau)+6\Phi(\tfrac14-\tfrac12\tau)\,.$$
The remaining six possibilities $h\in\{\pm {\bf i},\pm {\bf j},\pm {\bf k}\}$ give $$S_{{\bf i}}=\Phi(-\tfrac14\tau)+
\Phi(\tfrac14-\tfrac14\tau)
+\Phi(\tfrac12-\tfrac14\tau)
+\Phi(\tfrac34-\tfrac14\tau)\,.$$ Therefore $$\frac{\Ell^\T_{orb}(\C^2,\emptyset,G)}{eu(0,\C^2)}=\tfrac18\left(S_{1}+S_{-1}+6S_{{\bf i}}\right)\,.$$ After simplification we obtain $$\begin{gathered}
\tfrac68\left(\Phi(\tfrac14)+
\Phi(\tfrac14-\tfrac14\tau)
+\Phi(\tfrac14-\tfrac12\tau)
+\Phi(\tfrac14-\tfrac34\tau)
+\Phi(-\tfrac12\tau)
+\Phi(\tfrac12-\tfrac12\tau)\right)-\\
-\tfrac38
\left(\Phi(0)
+
\Phi(\tfrac 12 )+
\Phi(-\tfrac1 2\tau)+
\Phi(\tfrac 12 -\tau\tfrac1 2)
\right)=3\delta(t^{-2},\h)\cdot\delta(t^4,\h)\,.\end{gathered}$$ The formula can be further transformed: since $\Phi(\lambda)=\Phi(1+\tau-\lambda)$ we can write $6\Phi(\tfrac14-\tfrac14\tau)=3\Phi(\tfrac34-\tfrac34\tau)+3\Phi(\tfrac14-\tfrac14\tau)$. We break up the other terms with coefficient $6$ in the same way and obtain (after dividing by 3) a remarkable formula $$\delta(t^{-2},\h)\cdot\delta(t^4,\h)=\tfrac18\sum_{k,\ell=0}^3
(-1)^{(k+1)(\ell+1)}\,\Phi(\tfrac k4-\tfrac\ell4\tau)\,,$$ where $\Phi(\lambda)$ is given by (\[Phi-def\]). This is a nontrivial relation involving theta functions.
Lehn–Sorger example continued
-----------------------------
We compute the orbifold elliptic class for the example considered in §\[Lehn-Sor\]. \[LScont\] The Lehn-Sorger group $G\subset GL_2(\C)=GL(W)$ is generated by $$h_1=
\tfrac{-1+\ii
\sqrt{3}}{2}
\left(
\begin{array}{cr}
\frac{1+\ii}{2} & -\frac{1-\ii}{2} \\
\frac{1+\ii}{2} & \frac{1-\ii}{2} \\
\end{array}
\right)
\,,\qquad h_2=\left(\begin{array}{cr}\ii&0\\0&-\ii\\\end{array}\right)
\,.$$ It is important to know that $$h_1^3=h_2^2=-id\in Z(G)\,,\qquad C(h_1)=C(h_1^2)=\langle h_1\rangle\simeq\Z_6 \,,\qquad C(h_2)=\langle h_2\rangle\simeq\Z_4\,.$$ The conjugacy classes of $h\in G$ with the logarithms of the eigenvalues are listed below. $$\def\arraystretch{1.5}\begin{array}{|c|c|c|c|}
\hline
h&|C(h)|&(\mu_1,\mu_2)&\text{logarithms of the eigenvalues of }g\in C(h):\;\;(\lambda_1,\lambda_2)\\
\hline\hline
id&24&(0,0)&
(0,0),\;(\tfrac12,\tfrac12),\;4\times(\tfrac16,\tfrac12),\;4\times(\tfrac13,0),\;4\times(\tfrac23,0),\;4\times(\tfrac56,\tfrac12),\;6\times(\tfrac14,\tfrac34)
\\
-id&24&(\tfrac12,\tfrac12)&
\\
\hline
h_1&6&(\tfrac16,\tfrac12)&\\
h_1^2&6&(\tfrac13,0)&(0,0),\;(\tfrac16,\tfrac12),\;(\tfrac13,0),\;(\tfrac12,\tfrac12),\;(\tfrac23,0),\;(\tfrac56,\tfrac12)
\\
h_1^4&6&(\tfrac23,0)&
\\
h_1^5&6&(\tfrac56,\tfrac12)&
\\\hline
h_2&4&(\tfrac14,\tfrac34)&(0,0),\;(\tfrac14,\tfrac34),\;(\tfrac12,\tfrac12),\;(\tfrac34,\tfrac14)
\\
\hline
\end{array}$$ Each quadruple $(\lambda_1,\lambda_2,\mu_1,\mu_2)$ contributes the summand $$\Psi(\lambda_1+\tau\mu_1)\cdot\Psi(\lambda_2+\tau\mu_2)$$ to $\Ell_{orb}^\T(\C^4,G,\emptyset)/eu(0,\C^4)$. It is counted with the weight $1/|C(h)|$. The formula for $\Psi$ is given in (\[Psi-def\]).
On the other hand, applying the computation of the tangent weights of the resolution presented in [@Grab Th. 4.5] we find that $$\frac{\Ell^\T(\C^4/G,\emptyset)}{eu([0],M)}=F_0(t_1,t_2)+F_0(t_2,t_1)+F(t_1,t_2)\,,$$ where $$F_0(t_1,t_2)=
\delta \left(\tfrac{t_1}{t_2^5},h\right)
\delta \left(\tfrac{t_1}{t_2^3},h\right)
\delta \left({t_2^4},h\right)
\delta \left({t_2^6},h\right)+
\delta \left(\tfrac{t_1^2}{t_2^4},h\right)
\delta \left(\tfrac{t_1}{t_2},h\right)
\delta \left(t_2^2,h\right)
\delta \left(\tfrac{t_2^5}{t_1},h\right)$$ and $F(t_1,t_2)$ is given by . The equality of elliptic genera implies an identity for theta functions. The explicit expanded form is too long to present it here.
Diagonal quotient
=================
The quotient $\C^m/\Z_n$
------------------------
Let $\Z_n$ act on $\C^m$ via scalar multiplication by $n$-th roots of unity. The quotient $X=\C^m/\Z_n$ has an isolated singularity and admits a desingularization via blow up at the origin. The resolution $Y$ is isomorphic to the total space of the bundle $\O(-n)$ over $\P^{m-1}$. The torus $\T=(\C^*)^m$ acts on $\C^m$ coordinatewise and the action commutes with $\Z_n$. Setting $D_X=0$ we find that $D_Y=\pi^*K_X-K_Y$ is supported by the exceptional divisor $\P^{m-1}$: $$D_Y\;=\;(1-\tfrac mn)\P^{m-1}\,.$$ Therefore the localized equivariant elliptic class is equal to $$\frac{\Ell(X)_{[0]}}{eu([0],M)}=\int_{\P^{m-1}} \delta(e^{c_1^\T(\O(-n))},\h^{m/n})\Ell(\P^{m-1})\,.$$ By the localization theorem for the full torus we obtain $$\label{LGformula}\frac{\Ell(X)_{[0]}}{eu([0],M)}=\frac1n\sum_{i=1}^m\left(\delta\left(t_i^n,\h^{m/n}\right) \prod_{j\neq i}\delta\left(\tfrac{t_j}{t_i},\h\right)\right)\,.$$ The orbifold elliptic class is equal to $$\label{MKformula}\frac{\Ell_{orb}(\C^m,\emptyset,\Z_n)_{0}}{eu(0,\C^m)}=\sum_{k,\ell=0}^{n-1}\left(\h^{-\tfrac{m\ell}n}\prod_{i=1}^m\delta\left(\operatorname{e}\left(\tfrac kn-\tfrac \ell n\tau\right) t_i,\h\right)\right)\,.$$ Again the equality (\[LGformula\])$\,=\,$(\[MKformula\]) is a nontrivial identity for theta functions. This equality for $m=1$ is mentioned in [@BoLi Cor. 8.2].
Mixing variable types {#LaGi}
---------------------
Assume that $m=n$. It was observed in [@LibLG] while analyzing the Landau-Ginzburg model, that if we restrict the action to the one dimensional torus and set the equivariant variable $t=\h^{-1/n}$ (note that $\C^*/\Z_n$ acts effectively on $X=\C^n/\Z_n$) then we obtain the formula for the Elliptic genus of the Calabi-Yau hypersurface in $H_{CY}\subset\P^{n-1}$. Let us transform this calculation to our notation. In K-theory $T\P^{n-1}=n\O(1)-\O$, hence $$\Ell(\P^{n-1})=x^n\,\delta(e^x,\h)^n\,,$$ where $x=c_1(\O(1))$. Let $$u=ch(\O(1))=e^{x}\in H^*(\P^{n-1})$$ be the exponent of the nonequivariant Chern class. Then $$\begin{gathered}
\frac{\Ell(X)_{[0]}}{eu([0],M)}=\int_{\P^{n-1}}\frac{\Ell^\T(Y)_{|\P^{n-1}}}{eu(\nu(\P^{n-1}))}=\\=\int_{\P^{m-1}} x^n\delta((t/u)^n,\h)\delta(u,h)^n=\int_{\P^{m-1}} x^n\tfrac{\vt((t/u)^n\h)\vt'(1)}{\vt((t/u)^n)\vt(\h)}
\delta(u,\h)^n=\\
\stackrel{t:=\h^{-1/n}}=\;\int_{\P^{m-1}} x^n
\tfrac {\vt(u^{-n})\vt'(1)}
{\vt(u^{-n}\h^{-1})\vt(\h)}
\delta(u,\h)^n=\int_{\P^{m-1}}x^n
\tfrac {\vt(u^{n})\vt'(1)}
{\vt(u^{n}\h)\vt(\h)}
\delta(u,\h)^n=\\
=\left(\tfrac {\vt'(1)}
{\vt(\h)}\right)^2\int_{\P^{m-1}} x^n
\delta(u^{n},\h)^{-1}
\delta(u,\h)^n=\left(\tfrac {\vt'(1)}
{\vt(\h)}\right)^2Ell(H_{CY})\,.\end{gathered}$$ When we consider unreduced elliptic genera then we get rid of the factor $\left(\tfrac {\vt'(1)}
{\vt(\h)}\right)^2$. It is interesting to observe that the integral can be expressed by a residue: $$\begin{gathered}
\int_{\P^{n-1}}\frac{\Ell^\T(Y)_{|\P^{n-1}}}{eu(\nu(\P^{n-1}))}=\int_{\P^{n-1}}x^n\delta(t/e^x,\h)\delta(e^x,\h)^n=\\
=\text{Coefficient of }x^{n-1}\text{ in } x^n \delta(t/e^x,\h)\delta(e^x,\h)^n={\rm Res}_{x=0}\big(\delta(t/e^x,\h)\delta(e^x,\h)^n\big)\\
={\rm Res}_{u=1}\big(\delta(t/u,\h)\delta(u,\h)^n/u\big)\,.\end{gathered}$$
Proof of Theorem \[A1main\] {#A1proof}
===========================
Let $\T=(\C^*)^2$ be the torus acting on $\C^2$ coordinatewise. Denote the coordinates on $\C^2$ by $z_1, z_2$. Let $\Z_n\subset SL_2(\C)\cap \T$ be the subgroup generated by ${\rm diag}(\operatorname{e}(\tfrac 1n),\operatorname{e}(\tfrac {-1}n))$. The action of $\T$ passes to the quotient $X=\C^2/\Z_n$. The minimal resolution of the singularity of $X$ is a toric variety having $n$ fixed points joined by the chain of one dimensional orbits of $\T$. The GKM graph of $Y$ is given in . The external edges have loose ends since $Y$ is not compact. They correspond to the strict transforms of the divisors coming from $X$, namely $D_1=\{z_1=0\}/\Z_n$ and $D_2=\{z_2=0\}/\Z_n$. Suppose $D_X= a_1 D_1+a_2 D_2$ is the divisor given by the function $f=z_1^{a_1}z_2^{a_2}$. The divisor $D_Y$ is supported by the strict transforms $\widetilde D_1$, $\widetilde D_2$ and the exceptional divisor. At the fixed point corresponding to the vertex $\boxed{k}$ the toric coordinate functions are $u_1=\frac{z_1^{n-k+1}}{z_2^{k-1}}$ and $u_2=\frac{z_2^k}{z_1^{n-k}}$. The monomial $f$ is equal to $$z_1^{a_1}z_2^{a_2}=u_1^{\tfrac{a_1}nk+\tfrac{a_2}n(n-k)}\,u_2^{\tfrac{a_1}n (k-1)+\tfrac{a_2}n(n-k+1)}\,.$$ Therefore the contribution of the fixed point $\boxed{k}$ to the elliptic class is equal to $$\delta\left(\tfrac{z_1^{n-k+1}}{z_2^{k-1}},\h^{1-\left(\tfrac{a_1}nk+\tfrac{a_2}n(n-k)\right)}\right)\,
\delta\left(\tfrac{z_2^k}{z_1^{n-k}},\h^{1-\left(\tfrac{a_1}n(k-1)+\tfrac{a_2}n(n-k+1)\right)}\right)\,.$$ Setting $h_i=\h^{\tfrac{1-a_i}n}$ we obtain the formula (\*). The second formula describes the orbifold elliptic class by application of . By Theorem \[sympmckay\] the expressions for the above elliptic classes are equal.
Self-duality of $A_{n-1}$ singularity
=====================================
As was shown in subsection \[LaGi\], mixing the equivariant variables $t_i$ with the variable $\h$ leads to interesting results. The variable $\h$, modified by parameters depending on the divisor multiplicities, sometimes plays a role similar to the equivariant variables. In [@RW; @RSVZ] the parameters depending on line bundles are called dynamical parameters, and in [@AO] the Kähler variables. It is shown in [@RSVZ] that exchanging the equivariant variables with dynamical parameters for elliptic classes of the Schubert varieties in the complete flag variety leads to a mirror self-symmetry. We will show that the $A_n$-singularities are self-symmetric in some sense.
The formula (\*\*) of Theorem \[A1main\] clearly does not look (anti)symmetric with respect to the exchange of the $h$- and $t$-variables. On the other hand:
The expression (\*) of Theorem \[A1main\] is antisymmetric with respect to the change of variables $$t_1\leftrightarrow h_1,\qquad t_2\leftrightarrow h_2^{-1}\,,$$ that is, $$A_n(h_1,h_2^{-1},t_1,t^{-1}_2)=-A_n(t_1,t_2,h_1,h_2)\,.$$
Each summand of (\*) after the substitution takes the form $$\begin{gathered}
\operatorname{\delta\!}\left( {h_1^{n-k+1}}{h_2^{k-1}},\tfrac{t_1^{k}}{t_2^{n-k}}\right)
\operatorname{\delta\!}\left(\tfrac1{h_1^{n-k}h_2^{k}},\tfrac{t_1^{k-1}}{t_2^{n-k+1}}\right)=
\\=-\operatorname{\delta\!}\left( {h_1^{n-k+1}}{h_2^{k-1}},\tfrac{t_1^{k}}{t_2^{n-k}}\right)
\operatorname{\delta\!}\left({h_1^{n-k}h_2^{k}},\tfrac{t_2^{n-k+1}}{t_1^{k-1}}\right)=
\\=-\operatorname{\delta\!}\left( \tfrac{t_1^{k}}{t_2^{n-k}},{h_1^{n-k+1}}{h_2^{k-1}}\right)
\operatorname{\delta\!}\left(\tfrac{t_2^{n-k+1}}{t_1^{k-1}},{h_1^{n-k}h_2^{k}}\right)\,.\end{gathered}$$ Setting $k'=n-k+1$ we obtain exactly the summand of the original formula with the minus sign.
We have $$\begin{gathered}
\sum_{k=0}^{n-1}
\sum_{\ell=0}^{n-1}\big(\tfrac{h_2}{h_1}\big)^\ell\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{k-\ell\tau}n\right)t_1,h_1^n\right)
\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{\tau\ell-k}n\right)t_2,h_2^n\right)
=\\=
-\sum_{k=0}^{n-1}
\sum_{\ell=0}^{n-1}(t_1t_2)^{-\ell}\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{k-\ell\tau}n\right)h_1,t_1^n\right)
\operatorname{\delta\!}\left(\operatorname{e}\left(\tfrac{\tau\ell-k}n\right)h_2^{-1},t_2^{-n}\right)\,.\end{gathered}$$
This effect does not appear in general. It is due to the special form of the action of the torus on the resolution of the $A_{n-1}$ singularity. For example, the symplectic subtorus acts via the same character along the exceptional divisors.
Hirzebruch class — the limit with $q\to 0$
==========================================
The limit $q\to 0$ corresponds to $\tau\to i\infty$. We set $y=\operatorname{e}(z)=\h^{-1}$. Then $$\lim_{\tau\to i\infty}\frac{\theta(\upsilon+\mu\tau-z)}{\theta(\upsilon+\mu\tau)}=\begin{cases}y^{-1/2}& \text{ if } -1<\mu<0\\
y^{-1/2}\,\frac{1-y\operatorname{e}(-\upsilon)}{1-\operatorname{e}(- \upsilon)}& \text{ if }\mu=0\\
y^{1/2}& \text{ if } 0<\mu<1\,,\end{cases}$$ or equivalently $$\lim_{q\to 0}
\frac{\vt(t\,\h\, q^\mu)}{\vt(t\, q^\mu)}
=\begin{cases}\h^{1/2}& \text{ if } -1<\mu<0\\
\h^{1/2}\,\frac{1-\h^{-1}t^{-1}}{1-t^{-1}}& \text{ if }\mu=0\\
\h^{-1/2}& \text{ if } 0<\mu<1\,,\end{cases}$$ see [@BoLi proof of Prop. 3.13] for the proof of the first two limits, and use (\[per2\]) to deduce the third one. Let $T_j=t_j^{-1}$ for $j=1,2$. The equality of unreduced equivariant elliptic classes of the $A_{n-1}$ singularity with $D_X=\emptyset$ specializes to the equality of rational functions $$(*)_\infty=y^{-1}\sum_{k=1}^{n}\frac{1-y\,t_2^{k-1}/t_1^{n-k+1}}{1-t_2^{k-1}/t_1^{n-k+1}}\cdot
\frac{1-y\,t_1^{n-k}/t_2^{k}}{1-t_1^{n-k}/t_2^{k}}\,,$$ $$(**)_\infty=y^{-1}\frac 1n\sum_{k=0}^{n-1}\frac{1-y\,\operatorname{e}(\tfrac kn) t_1^{-1}}{1-\,\operatorname{e}(\tfrac k n) t_1^{-1}}\cdot\frac{1-y\,\operatorname{e}(-\tfrac k n) t_2^{-1}}{1-\,\operatorname{e}(-\tfrac k n) t_2^{-1}}+(n-1)\,.$$
y(t_1t_2)^{-1} ) \frac{1 - (t_1 t_2)^{-n}}{(1 - (t_1 t_2)^{-1}) (1 - t_1^{-n}) (1 - t_2^{-n})} +n \,.$$ Setting $t_1=t_2=T^{-1}$ we obtain a trigonometric identity $$\frac 1n\sum_{k=0}^{n-1}\frac{1-2\cos(\frac{2k\pi }n) y\, T+y^2T^2}{1-2\cos(\frac{2k\pi }n) T+T^2}=(1-y)\left(1-y
T^2\right)\frac{1-T^{2 n}}{\left(1-T^2\right)
\left(1-T^{n}\right)^2}+y\,.$$ In particular for $y=0$ $$\frac 1n\sum_{k=0}^{n-1}\frac{1}{1-2\cos(\frac{2k\pi }n) T+T^2}=\frac{1-T^{2 n}}{\left(1-T^2\right)
\left(1-T^{n}\right)^2}\,.$$ The equality of expressions $(*)_\infty$, $(**)_\infty$ and $(***)_\infty$ was already noticed in [@DBW §5.5].
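As an elementary sanity check (an addition of ours, not part of the original computation), for $n=1$ both sides of the trigonometric identity reduce to the same rational function: $$\frac{1-2y\,T+y^2T^2}{1-2T+T^2}=\left(\frac{1-yT}{1-T}\right)^{2}=\frac{(1-y)(1-yT^2)+y(1-T)^2}{(1-T)^2}=(1-y)\left(1-yT^{2}\right)\frac{1-T^{2}}{\left(1-T^2\right)\left(1-T\right)^{2}}+y\,.$$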
[[Web]{}98]{}
Mina Aganagic and Andrei Okounkov. Elliptic stable envelopes, 2016. arXiv:1604.00423.
Lev Borisov and Anatoly Libgober. Elliptic genera of singular varieties. , 116(2):319–351, 2003.
Lev Borisov and Anatoly Libgober. McKay correspondence for elliptic genera. , 161(3):1521–1569, 2005.
Neil Chriss and Victor Ginzburg. . Birkhäuser Boston, Inc., Boston, MA, 1997.
K. Chandrasekharan. , volume 281 of [*Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\]*]{}. Springer-Verlag, Berlin, 1985.
Maria Donten-Bury and Andrzej Weber. Equivariant Hirzebruch classes and Molien series of quotient singularities. , 23(3):671–705, 2018.
John D. Fay. . Lecture Notes in Mathematics, Vol. 352. Springer-Verlag, Berlin-New York, 1973.
Anatoly Fomenko and Dmitry Fuchs. , volume 273 of [*Graduate Texts in Mathematics*]{}. Springer, \[Cham\], second edition, 2016.
Nora Ganter. The elliptic Weyl character formula. , 150(7):1196–1234, 2014.
Victor Ginzburg, Mikhail Kapranov, and Eric Vasserot. Elliptic algebras and equivariant elliptic cohomology, 1995. arXiv:q-alg/9505012.
Maksymilian Grab. Lehn-Sorger example via Cox ring and torus action, 2018. arXiv:1807.11438.
Sachin Gautam and Valerio Toledano-Laredo. Elliptic quantum groups and their finite-dimensional representations, 2017. arXiv:1707.06469v2.
Shrawan Kumar, Richárd Rimányi, and Andrzej Weber. Elliptic classes of Schubert varieties. In preparation.
Peter S. Landweber. Elliptic genera: an introductory overview. In [*Elliptic curves and modular forms in algebraic topology (Princeton, NJ, 1986)*]{}, volume 1326 of [*Lecture Notes in Math.*]{}, pages 1–10. Springer, Berlin, 1988.
Anatoly Libgober. Elliptic genus of phases of $N=2$ theories. , 340(3):939–958, 2015.
Manfred Lehn and Christoph Sorger. A symplectic resolution for the binary tetrahedral group. In [*Geometric methods in representation theory. II*]{}, volume 24 of [*Sémin. Congr.*]{}, pages 429–435. Soc. Math. France, Paris, 2012.
David Mumford. , volume 28 of [*Progress in Mathematics*]{}. Birkhäuser Boston, Inc., Boston, MA, 1983. With the assistance of C. Musili, M. Nori, E. Previato and M. Stillman.
R. Rimányi, A. Smirnov, A. Varchenko, and Z. Zhou. Three dimensional mirror self-symmetry of the cotangent bundle of the full flag variety, 2017. arXiv:1906.00134v2.
R. Rimányi, V. Tarasov, and A. Varchenko. Elliptic and $K$-theoretic stable envelopes and Newton polytopes. , 25(1):Art. 16, 43, 2019.
Richárd Rimányi and Andrzej Weber. Elliptic classes of Schubert cells via Bott-Samelson resolution, 2019. arXiv:1904.10852.
Graeme Segal. Elliptic cohomology (after Landweber-Stong, Ochanine, Witten, and others). , 161-162:Exp. No. 695, 4, 187–201 (1989), 1988. Séminaire Bourbaki, Vol. 1987/88.
Robert E. Stong. . Mathematical notes. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1968.
Burt Totaro. Chern numbers for singular varieties and elliptic homology. , 151(2):757–791, 2000.
Robert Waelder. Equivariant elliptic genera and local McKay correspondences. , 12(2):251–284, 2008.
H. Weber. , 1898.
Andrzej Weber. Equivariant Chern classes and localization theorem. , 5:153–176, 2012.
Andrzej Weber. Equivariant Hirzebruch class for singular varieties. , 22(3):1413–1454, 2016.
Gufang Zhao and Changlong Zhong. Elliptic affine Hecke algebras and their representations, 2015. arXiv:1507.01245.
|
---
abstract: 'In this paper we present the methods of our submission to the ISIC 2018 challenge for skin lesion diagnosis (Task 3). The dataset consists of 10000 images with seven image-level classes to be distinguished by an automated algorithm. We employ an ensemble of convolutional neural networks for this task. In particular, we fine-tune pretrained state-of-the-art deep learning models such as Densenet, SENet and ResNeXt. We identify heavy class imbalance as a key problem for this challenge and consider multiple balancing approaches such as loss weighting and balanced batch sampling. Another important feature of our pipeline is the use of a vast amount of unscaled crops for evaluation. Last, we consider meta learning approaches for the final predictions. Our team placed second at the challenge while being the best approach using only publicly available data.'
author:
- 'Nils Gessert$^{*ab}$, Thilo Sentker$^{ac}$, Frederic Madesta$^{ac}$, Rüdiger Schmitz$^{ad}$, Helge Kniep$^{ag}$, Ivo Baltruschat$^{aef}$, René Werner$^{ac}$ and Alexander Schlaefer$^{b}$[^1] [^2] [^3] [^4] [^5] [^6] [^7] [^8]'
bibliography:
- 'egbib.bib'
title: 'Skin Lesion Diagnosis using Ensembles, Unscaled Multi-Crop Evaluation and Loss Weighting'
---
[Gessert : Skin Lesion Diagnosis using Ensembles, Unscaled Multi-Crop Evaluation and Loss Weighting]{}
Skin Lesion Diagnosis, Ensembles, Multi-Crop, Loss Weighting.
Introduction
============
Deep learning and in particular convolutional neural networks (CNNs) have become the standard approach for automated diagnosis based on medical images [@litjens2017survey]. For the problem of skin lesion diagnosis, a new dataset has recently been made available to the public [@Tschandl2018_HAM10000]. The dataset consists of 10000 dermoscopic images showing skin lesions which have been diagnosed based on expert consensus, serial imaging or histopathology. Using this dataset, the challenge “ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection” has been proposed [@codella2018skin]. We participate in this challenge with an automated method that relies on multiple state-of-the-art CNNs, heavy data augmentation, loss weighting, extensive, unscaled cropping and meta learning. In the following, we describe the details of our approach. Our code is available to the public.
Methods {#sec:methods}
=======
Evaluation Metric
-----------------
The key metric of this challenge is a weighted accuracy (WACC) across the seven classes. This is equivalent to the average recall or sensitivity. Hence, the metric is defined as:
$$WACC = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_c}{TP_c+FN_c}$$
where $TP_c$ denotes the true positive cases and $FN_c$ the false negative cases of class $c$, and $C=7$ is the number of classes. These quantities can be derived from a standard confusion matrix. We perform all preliminary evaluation and hyperparameter tuning with this metric.
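For illustration, the metric can be computed from a confusion matrix as in the following sketch (scikit-learn; the function name is ours, not from our released code):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def weighted_accuracy(y_true, y_pred, num_classes=7):
    """Mean per-class recall (balanced accuracy), the WACC of this challenge."""
    cm = confusion_matrix(y_true, y_pred, labels=np.arange(num_classes))
    tp = np.diag(cm).astype(float)
    support = cm.sum(axis=1)  # TP_c + FN_c for each class c
    recall = np.divide(tp, support, out=np.zeros_like(tp), where=support > 0)
    return recall.mean()
```

This is equivalent to `sklearn.metrics.recall_score(y_true, y_pred, average="macro")`.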
Dataset {#sec:dataset}
-------
The baseline dataset is the HAM10000 dataset introduced by Tschandl et al. [@Tschandl2018_HAM10000]. In the following, we refer to this dataset as HAM. In addition, we used the public ISIC dataset which comprises roughly 13500 images. In the following, we refer to this dataset as ISIC. We checked all images for potential overlap between HAM and ISIC.
The first obvious problem of these datasets is heavy class imbalance. Table \[tab:datasets\] shows the class distribution of HAM and ISIC. Therefore, we consider countering class imbalance as a key challenge to be addressed. Also, we have to assume that ISIC will not be as useful since it is even more imbalanced than HAM. For this reason, we only consider ISIC for a few high-performing models in our final ensemble.
For internal validation, we split HAM into five sets with equal (imbalanced) class distribution for 5-fold cross-validation. All images were separated based on lesion affiliation, i.e., we made sure that images from the same lesion cannot occur both in a training and validation split. The information for this separation was provided by the organizers. We add the entire ISIC dataset to each training subset, if it is used.
| Dataset | MEL | NV | BCC | AKIEC | BKL | DF | VASC |
|---------|-----|----|-----|-------|-----|----|------|
| HAM | 1113 | 6705 | 514 | 327 | 1099 | 115 | 142 |
| ISIC | 1056 | 11861 | 72 | 2 | 477 | 7 | 0 |
Preprocessing {#sec:preproc}
-------------
For HAM, we kept the image size of $600\times 450$. Note that histogram equalization was applied to selected images by the dataset publishers [@Tschandl2018_HAM10000]. For ISIC, we resized all images to the size of HAM using bicubic interpolation.
During training, we applied online data augmentation to each image. First, we applied random cropping with a fixed size of $224\times224$. Note that we did not use any scaling or aspect-ratio changes, which are typically used [@Szegedy.2016b]. Then, we randomly flipped images along both dimensions with a probability of $p=0.5$. Furthermore, we distorted the images with random changes in brightness and saturation. Last, we subtracted the per-channel training set mean from the images.
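A sketch of this augmentation pipeline using torchvision (the jitter strengths and the per-channel mean below are placeholder values, not the ones we used):

```python
import torchvision.transforms as T

# per-channel training-set mean (placeholder values); no scaling or
# aspect-ratio changes are applied, only unscaled crops and flips
train_mean = [0.76, 0.55, 0.57]

train_transform = T.Compose([
    T.RandomCrop(224),                               # unscaled crop from the 600x450 image
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.ColorJitter(brightness=0.1, saturation=0.1),   # random brightness/saturation distortion
    T.ToTensor(),
    T.Normalize(mean=train_mean, std=[1.0, 1.0, 1.0]),  # mean subtraction only
])
```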
We tested random rotations, scaling, contrast, hue and aspect-ratio changes without an improvement in performance.
Models {#sec:models}
------
Overall, we rely on an ensemble for our final submission, an approach which has been successful in most challenges [@russakovsky2015imagenet]. In terms of model choice, we first assessed the value of using pretrained architectures. We found that fine-tuning a model trained on ImageNet performed significantly better than training from scratch. Therefore, we chose to build our ensemble from pretrained models available to the public. We use the popular frameworks Tensorflow [@Abadi.2016] and PyTorch [@paszke2017automatic]. In particular, we use the Tensorflow Slim model library [@tensorflowslim] and the PyTorch pretrained models library [@pytorchmodels].
The models to be included are selected based on 5-fold cross-validation performance. We tested the Inception [@Szegedy.2016b; @Szegedy.2017] and ResNet [@He.2016; @He.2016b] variants as a baseline first. These included InceptionV3, InceptionResNetV2, ResNet50-V1, ResNet50-V2 (post norm structure) and ResNet101-V2 (post norm structure). Next, we considered more recent architectures which included PolyNet [@zhang2017polynet], ResNeXt [@xie2017aggregated], Densenet [@huang2017densely], SENets [@hu2017squeeze] and DualPathNet [@chen2017dual]. Compared to the baseline models, the more recent architectures performed better which is why we quickly excluded the baseline.
Thus, the models that are included in the final ensemble are variants of Densenet, ResNeXt, PolyNet and SENets. For most of our hyperparameter searches we used Densenet121 and assumed that the choices translate well to the other architectures.
Training {#sec:training}
--------
We found the training strategy to be crucial as well. We identified the most relevant hyperparameters to be the starting learning rate, the learning rate schedule and the choice of potential early stopping.
Especially the latter is important in terms of model training for the final submission. Usually, the final submission model should be trained on all available data with a fixed learning rate schedule. During cross-validation, we performed early stopping which means that we saved a checkpoint of the model when the best WACC was achieved throughout the entire training process. We compared this to the last checkpoint being saved and noticed a significant difference in the WACC metric. In general, this implies that our chosen learning rate schedule is not optimal. However, we found it difficult to obtain the optimal learning rate schedule and the difference between the best and last model remained large. We did not observe this difference in such strength for normal accuracy or the area-under-curve (AUC) metric. We suspect that the WACC is very sensitive to changes of the classes with a small number of examples. Overall, this implies that our final model for submission could be trained using a validation set with early stopping instead of training a model on the entire available dataset. As this leads to suboptimal exploitation of the available data, we included both models from cross-validation with early stopping (5 models) and models trained on the entire dataset (1 model). The fully trained models are weighted $5$ times higher than individual CV models in order to achieve a reasonable balance.
The other hyperparameter choices are more straightforward. We tested Adam [@Kingma.2014], Nadam [@dozat2016incorporating] (Adam + Nesterov momentum) and RMSprop as optimizers and noticed only slight differences in performance, with Adam generally performing best. We chose Adam for all models. In terms of the learning rate schedule, we follow a stepwise approach. We chose a starting learning rate of $l_r = 0.0005$ and reduced it by a factor of $\lambda = 0.2$ after $50$ epochs. Then, we continued reducing it by $\lambda$ every $25$ epochs. We stopped optimization after $125$ epochs. In between, we saved the model with the best WACC on the validation set, based on evaluation every $5$ epochs. We used a batch size of $40$.
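A minimal PyTorch sketch of this schedule (assuming `model`, `criterion`, `train_loader` and a `validate` helper already exist; this is illustrative, not our exact training script):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
# reduce by a factor of 0.2 after epoch 50 and then every 25 epochs, stop at 125
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 75, 100], gamma=0.2)

for epoch in range(125):
    model.train()
    for images, labels in train_loader:   # batch size 40
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
    if (epoch + 1) % 5 == 0:
        validate(model)   # track WACC and keep a checkpoint of the best model
```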
For the loss function we used a standard cross-entropy loss as a basis:
$$\mathcal{L} = - \sum_{c=1}^{C} p_c \log{\hat{p}_c}$$
where $p$ is the ground-truth label, $\hat{p}$ is the softmax-normalized model output and $C$ the number of classes. As the seven classes in the dataset are highly imbalanced we considered different balancing approaches. In particular, we explored different ways of loss balancing and also sampling balanced batches. For loss balancing, we considered multiplying each classes’ loss by its inverse normalized frequency, i.e., the weighting term $w_i$ is defined as
$$w_i = \frac{N}{n_i}$$
where $N$ is the total number of samples and $n_i$ is the number of samples for class $i$. This method puts a very strong weight on the highly underrepresented classes DF and VASC. Moreover, we considered a less extreme approach with
$$w_i = \frac{N}{cn_i}$$
where $c$ denotes the total number of classes. Last, we considered sampling balanced batches by oversampling the underrepresented classes during training. While all approaches improved results over no balancing at all, the differences were minor. We used the first loss balancing approach for all cases as it performed best for most models. It should be noted that we adjusted this method for use with HAM and ISIC combined. As ISIC is even more imbalanced than HAM, this balancing strategy would lead to unreasonably high loss amplification for underrepresented classes. Therefore, we derived the weighting from HAM only, both for training with HAM only and with HAM and ISIC.
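For illustration, the inverse-frequency weighting can be set up as follows (class counts from HAM, Table \[tab:datasets\]; the snippet is a sketch, not our exact implementation):

```python
import torch
import torch.nn as nn

# HAM class counts in the order MEL, NV, BCC, AKIEC, BKL, DF, VASC
counts = torch.tensor([1113., 6705., 514., 327., 1099., 115., 142.])
weights = counts.sum() / counts            # w_i = N / n_i, derived from HAM only
criterion = nn.CrossEntropyLoss(weight=weights)

# the milder variant w_i = N / (c * n_i) simply rescales all weights by 1/7
weights_mild = weights / len(counts)
```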
Besides class balancing, we also tried to incorporate the meta information on the method of diagnosis. The organizers provided the information on whether each lesion was diagnosed by “single image expert consensus”, “serial imaging showed no change”, “confocal microscopy with consensus dermoscopy” or “histopathology”. Assuming that, within a given class, the means of diagnosis relates to the diagnostic difficulty, we tried to incorporate this meta information into the loss by increasing the loss for more difficult cases. Although this approach showed slightly increased performance for some models, it appeared to be inconsistent across models and we did not incorporate it into all models of our final ensemble. E.g., for our reference model Densenet121, performance even decreased.
All training is performed on NVIDIA GeForce GTX 1080 Ti graphics cards. As some of the larger models have large memory requirements due to their feature map sizes, the graphics cards’ memory was insufficient for our standard batch size. For these cases, we scaled down the learning rate and batch size by the same factor.
Evaluation
==========
Our evaluation strategy for the generation of the final predictions is shown in Figure \[fig:eval\]. We made use of extensive multi-crop evaluation. The crops are unscaled and of size $224\times224$ which is identical to the size chosen for training. We perform $36$ evaluations per model which results in $36$ predictions that need to be combined. For the models that do not have a validation set, we performed averaging across the $36$ predictions. For the CV models, we incorporated a meta learning step. We constructed a flattened feature vector out of the $36$ predictions and used the results from the validation set for training an SVM. Then, we predicted the final label of the test set based on the $36$ CNN predictions. For this last model, we considered both random forests and SVMs with different kernels. We found SVMs with an RBF kernel to work best. Note, that a similar meta learning strategy was also used by one of last year’s challenge winners [@menegola2017recod].
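A sketch of this meta learning step (scikit-learn; the array names and shapes below are illustrative, not taken from our code):

```python
import numpy as np
from sklearn.svm import SVC

# val_probs / test_probs: (n_images, 36, 7) softmax outputs of one CV model
X_val = val_probs.reshape(len(val_probs), -1)      # flatten the 36 x 7 crop predictions
X_test = test_probs.reshape(len(test_probs), -1)

meta = SVC(kernel="rbf", C=1.0, probability=True)  # an RBF kernel worked best for us
meta.fit(X_val, val_labels)                        # trained on the fold's validation set
test_pred = meta.predict_proba(X_test)             # per-class probabilities for averaging
```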
As a last step, we combined the predictions from the CV models and the fully trained models by averaging over all models. Instead of averaging we also considered voting, i.e., we counted how many models predicted a certain class. We found that averaging generally performs slightly better.
During training, we kept evaluating on the validation set using 16 crops in order to keep the computational effort at reasonable levels. For evaluation, we also tested increased numbers of crops, however, after 36 crops the improvement was negligible.
We performed model selection for our final ensemble based on the 5-Fold CV performance with $36$ crop evaluation of each architecture. For this selection, we simply averaged the predictions from all crops for evaluation. Then, we searched for an optimal combination of our architectures by averaging the predictions of a subset. In theory, an exhaustive search over all possible architecture combinations could be performed. However, this search would lead to millions of possible combinations which takes a significant amount of time. Instead, we first ranked all architectures by their 5-Fold CV performance. Then, we considered combinations where the best $15$ architectures are included only. We noticed that this approach usually leads to an optimal combination which includes roughly $6$ to $7$ out of the available architectures.
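As an illustration, the restricted subset search can be sketched as follows (the helper `wacc`, the validation labels `y_val` and the dictionary `cv_preds` of crop-averaged predictions per architecture are assumed to exist; this is not our exact selection script):

```python
from itertools import combinations
import numpy as np

# cv_preds: dict architecture name -> (n_val_images, 7) crop-averaged softmax predictions
# rank by individual 5-fold CV WACC and keep the best 15 architectures
top15 = sorted(cv_preds, key=lambda a: wacc(y_val, cv_preds[a].argmax(1)), reverse=True)[:15]

best_combo, best_score = None, -1.0
for r in range(1, len(top15) + 1):
    for combo in combinations(top15, r):            # ~32k subsets instead of millions
        mean_pred = np.mean([cv_preds[a] for a in combo], axis=0)
        score = wacc(y_val, mean_pred.argmax(1))
        if score > best_score:
            best_combo, best_score = combo, score
```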
Results {#sec:results}
=======
| Model | MACC | MAUC | WACC |
|-------|------|------|------|
| Densenet121 | 0.823 | 0.967 | 0.795 |
| Densenet121 **with SVM** | 0.827 | $-$ | **0.822** |
| Densenet121 with ISIC | 0.870 | 0.974 | 0.804 |
| Densenet121 no pretraining | 0.678 | 0.931 | 0.694 |
| Densenet121 16-crop eval. | 0.822 | 0.966 | 0.785 |
| Densenet121 4-crop eval. | 0.737 | 0.932 | 0.662 |
| Densenet121 no weighting | 0.883 | 0.976 | 0.757 |
| Densenet121 batch balancing | 0.797 | 0.965 | 0.789 |
| Densenet121 diagnosis weighting | 0.874 | 0.971 | 0.738 |
| Densenet161 | 0.861 | 0.976 | 0.809 |
| Densenet169 | 0.852 | 0.971 | 0.806 |
| ResNet50 | 0.862 | 0.971 | 0.779 |
| SE-ResNet50 | 0.829 | 0.966 | 0.790 |
| SE-ResNet101 | 0.838 | 0.969 | 0.810 |
| ResNeXt101 32x4d | 0.836 | 0.970 | 0.808 |
| SE-ResNeXt50 | 0.834 | 0.968 | 0.797 |
| SE-ResNeXt101 | 0.860 | 0.971 | 0.803 |
| DualPathNet92 | 0.862 | 0.972 | 0.804 |
| **SENet154** | 0.854 | 0.974 | **0.817** |
| PolyNet | 0.845 | 0.970 | 0.802 |
| **Ensemble** | $-$ | $-$ | **0.851** |
We report some preliminary results for the key parts of our approach. The results are derived from 5-Fold CV as the official validation set is not supposed to provide an indication of the performance on the test set. Since we did not use a held-out test set, we do not follow the procedure shown in Figure \[fig:eval\] for the results reported in this section. Instead, we simply average the predictions from the 36 crops and use them as the final prediction on each fold. We summarize the mean accuracy, mean AUC and WACC in Table \[tab:results\] for important architecture variations and an ensemble. In terms of models, we found that SENet performed best as a single model. Moreover, a large ensemble performs better than any single model approach. Our final ensemble contains 54 models with the following architectures: SENet154, ResNeXt101 32x4d, Densenet201, Densenet161, Densenet169, SE-Resnet101, PolyNet.
Discussion and Conclusion {#sec:discussion}
========================
In this paper we propose an approach for automatic skin lesion diagnosis for the “ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection” challenge. We use a large ensemble of state-of-the-art CNN models. One of our key choices was to use full-sized images with unscaled, smaller crops for training in combination with extensive unscaled multi-crop evaluation. We combine the crops both by simple averaging and a meta learning strategy. This allows us to capture detailed, high-resolution features while also taking global context into account. Moreover, the HAM dataset is very challenging as it is highly unbalanced in terms of classes and the evaluation metric treats all classes equally. As this imbalance represents the real-world case where most examined lesions are benign, this is an important issue to be addressed. Therefore, we considered several balancing approaches, of which simple loss weighting with the inverse normalized class frequency performed best. We also considered incorporating the meta information on how difficult it was to diagnose the lesion. We used this information by additionally weighting the loss with a factor for each type of diagnosis. However, we observed inconsistent results across models. This indicates that our way of using the knowledge is not optimal. Also, the assumption that more extensive evaluation equals cases that are harder to learn is likely oversimplified. Finally, we constructed a large ensemble whose models were selected based on 5-Fold CV performance for our final predictions. Regarding single model performance, it is notable that more recent architectures outperformed older standard architectures. Considering that many researchers still use plain ResNets or even VGG as a baseline, we suggest that it is reasonable to move to more recent architectures proposed for the natural image domain (ImageNet, etc.). With the overall goal of providing the best diagnosis, we see this as an important step. For future work, our method could be refined with a more extensive, less intuition-driven hyperparameter search. Moreover, the combination of local features and global context could be incorporated into a single end-to-end trainable architecture.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was partially funded by the Forschungszentrum Medizintechnik Hamburg (02fmthh2017). Also, we gratefully acknowledge the support of this research by the NVIDIA Corporation under the GPU Grant Program.
[^1]: $^*$Corresponding Author (e-mail: [email protected])
[^2]: $^a$DAISYlab, Forschungszentrum Medizintechnik Hamburg, Germany
[^3]: $^b$Institute of Medical Technology, Hamburg University of Technology, 21073 Hamburg, Germany
[^4]: $^c$Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
[^5]: $^d$Department of Interdisciplinary Endoscopy and Institute of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
[^6]: $^e$Section for Biomedical Imaging, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
[^7]: $^f$Institute for Biomedical Imaging, Hamburg University of Technology, 21073 Hamburg, Germany
[^8]: $^g$Department of Diagnostic and Interventional Neuroradiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
|
---
abstract: 'We study the evolution of the number of close companions of similar luminosities per galaxy ($N_c$) by choosing a volume-limited subset of the photometric redshift catalog from the Red-Sequence Cluster Survey (RCS-1). The sample contains over 157,000 objects with a moderate redshift range of $0.25 \leq z \leq 0.8$ and $M_{R_c} \leq -20$. This is the largest sample used for pair evolution analysis, providing data over 9 redshift bins with about 17,500 galaxies in each. After applying incompleteness and projection corrections, $N_c$ shows a clear evolution with redshift. The $N_c$ value for the whole sample grows with redshift as $(1+z)^m$, where $m = 2.83 \pm 0.33$ in good agreement with $N$-body simulations in a $\Lambda$CDM cosmology. We also separate the sample into two different absolute magnitude bins: $-25 \leq M_{R_c} \leq -21$ and $-21 < M_{R_c} \leq -20$, and find that the brighter the absolute magnitude, the smaller the $m$ value. Furthermore, we study the evolution of the pair fraction for different projected separation bins and different luminosities. We find that the $m$ value becomes smaller for larger separation, and the pair fraction for the fainter luminosity bin has stronger evolution. We derive the major merger remnant fraction $f_{rem} = 0.06$, which implies that about 6% of galaxies with $-25 \leq M_{R_c} \leq -20$ have undergone major mergers since $z = 0.8$.'
author:
- 'B. C. Hsieh, H. K. C. Yee, H. Lin, M. D. Gladders, D. G. Gilbank'
title: 'Pair Analysis of Field Galaxies from the Red-Sequence Cluster Survey'
---
Introduction
============
Galaxy interactions and mergers play a very important role in the evolution and properties of galaxies. Although mergers are rare in the local universe, a high merger rate in the past can change the morphology, luminosity, stellar population, and number density of galaxies dramatically. The evolution of the merger rate is directly connected to galaxy formation and structure formation in the Universe. Therefore, the measurement of merger rates for galaxies at different merging stages provides important information for the interpretation of various phenomena, such as galaxy and quasar evolution. There are many stages of mergers: close but well-separated galaxies (early-stage mergers), strongly interacting galaxies, and galaxies with double cores (late-stage mergers). Measuring the morphological distortion of galaxies [e.g., @lefevre2000; @reshetnikov2000; @conselice2003; @lavery2004; @lotz2006], as well as studying the spatially resolved dynamics of galaxies [e.g., @puech2007a; @puech2007b] are definitive methods for studying on-going mergers. However, they are challenging to observe at high redshift and difficult to quantify. Instead of directly studying on-going mergers, close pairs are much easier to observe and are still able to provide statistical information on the merger rate. A close pair is defined as two galaxies which have a projected separation smaller than a certain distance. Without redshift measurements, a close pair could be just an optical pair, which contains two unrelated galaxies with small separation in angular projection. With spectroscopic redshift measurements, a physical pair can be picked out by choosing two galaxies with similar redshifts and small projected separation. We note that only a fraction of physical pairs are real pairs, which have true physical separations between galaxies smaller than the chosen separation, and are going to merge in a relatively short timescale. Nevertheless, studying any kind of close pairs allows us to glean statistical information on the merger rate.
From $\Lambda$ cold dark matter ($\Lambda$CDM) $N$-body simulations [@governato1999; @gottlober2001], it is found that the merger rate of halos increases with redshift as $(1+z)^m$, where $2.5 \leq m \leq 3.5$. However, observational pair studies produce diverse results of $0 \leq m \leq 4$ [e.g., @zepf1989; @burkey1994; @carlberg1994; @woods1995; @YE1995; @patton1997; @neuschaefer1997; @lefevre2000; @carlberg2000; @patton2000; @patton2002; @bundy2004; @lin2004; @cassata2005; @bridge2007; @kartaltepe2007]. Often, there are only a few hundred objects in these samples because most studies use spectroscopic redshift data, so that the error bars on the number of pairs are very large. @kartaltepe2007 utilized a photometric redshift database and included 59,221 galaxies in their sample to perform the pair analysis. The large sample makes their result robust. However, a pair study with a 2 deg$^2$ field could still be affected by cosmic variance ($\sim 30$ Mpc $\times$ 30 Mpc at $z = 0.5$). Furthermore, some studies use morphological methods to identify pairs on optical images; some close pairs they find could just be late-type galaxies with starburst regions because they have double or triple cores and look asymmetrical. Therefore, the merger rates could be over-estimated. In contrast, other observations have yielded results with no strong evolution at high redshift [e.g., @bundy2004; @lin2004]. @bundy2004 also use a morphological method to identify pairs. However, they use near-IR images which reveal real stellar mass distributions and are insensitive to starburst regions. No matter what these previous studies conclude about pair fractions, their results are obtained over a small number of redshift bins and have large error bars due to an inadequate number of objects [with the exception of @kartaltepe2007]. Furthermore, none of these previous studies separate their sample into field and cluster environments. The evolution of the pair fraction can be very different in different environments. The dynamics of galaxies in clusters are also very different compared to those in the field; hence, these studies could choose pairs with different properties even using exactly the same pair criteria.
In this paper, a subset of the photometric redshift catalog from @hsieh2005 is used to investigate the evolution of close galaxy pair fraction. The catalog is created using photometry in $z', R_c, V$, and $B$ from the Red-Sequence Cluster Survey [RCS; @GY2005]. The sky coverage is approximately 33.6 deg$^2$. The large sample of photometric redshifts allows us to perform a pair analysis with good statistics. We use about 160,000 galaxies in our sample for the primary galaxies, with a moderate redshift range of $0.25 \leq z \leq 0.8$ and $M_{R_c} \leq -20$, allowing us to derive very robust results with relatively small error bars.
This paper is structured as follows. In §\[photoz\] we briefly describe the RCS survey and the photometric redshift catalog used in our analysis. In §\[sample\] we provide a description of the sampling criteria for this study. Section \[method\] presents the method of the pair analysis and the definition of a pair. We describe the selection effects and the methods dealing with the projection effects and incompleteness in §\[selection\_effect\]. The error estimation of the pair fraction is discussed in §\[err\_est\]. We then present the results in §\[result\] and discuss the implications of the pair statistics in our data in §\[discussion\]. In §\[conclusion\] we summarize our results and discuss future work. The cosmological parameters used in this study are $\Omega_\Lambda = 0.7$, $\Omega_M = 0.3$, $H_0 = 70$ km s$^{-1}$Mpc$^{-1}$, and $w = -1$.
The RCS Survey and Photometric Redshift Catalog {#photoz}
===============================================
The RCS [@GY2005] is an imaging survey covering $\sim 92$ deg$^2$ in the $z'$ and $R_c$ bands carried out using the CFH12K CCD camera on the 3.6m Canada-France-Hawaii Telescope (CFHT) for the northern sky, and the Mosaic II camera on the Cerro Tololo Inter-American Observatory (CTIO) 4m Blanco telescope for the southern sky, to search for galaxy clusters in the redshift range of $z < 1.4$. Follow-up observations in $V$ and $B$ were obtained using the CFH12K camera, covering 33.6 deg$^2$ ($\sim$ 75% complete for the original CFHT RCS fields).
The CFH12K camera is a 12k $\times$ 8k pixel$^2$ CCD mosaic camera, consisting of twelve 2k $\times$ 4k pixel$^2$ CCDs. It covers a 42 $\times$ 28 arcminute$^2$ area for the whole mosaic at prime focus (f/4.18), corresponding to 0".2059 pixel$^{-1}$. For the CFHT RCS runs, the typical seeing was 0.62 arcsec for $z'$ and 0.70 arcsec for $R_c$. The integration times were typically 1200s for $z'$ and 900s for $R_c$, with average $5\sigma$ limiting magnitudes of $z'_{AB}=23.9$ and $R_c=25.0$ (Vega) for point sources. The observations, data, and the photometric techniques, including object finding, photometric measurement, and star-galaxy classification, are described in detail in @GY2005. For the follow-up CFHT RCS observations, the typical seeing was 0.65 arcsec for $V$ and 0.95 arcsec for $B$. The average exposure times for $V$ and $B$ were 480s and 840s, respectively, and the median $5\sigma$ limiting magnitudes (Vega) for point sources are 24.5 and 25.0, respectively. The observations and data reduction techniques are described in detail in @hsieh2005.
With the four-color ($z', R_c, V,$ and $B$) data, a multi-band RCS photometry catalog covering 33.6 deg$^2$ is generated. Although the photometric calibration has been done for all the filters [@GY2005; @hsieh2005], we refine the calibration procedure to achieve a better photometric accuracy for this paper. First, we calibrate the patch-to-patch zeropoints for $R_c$ by comparing the $R_c$ photometry of stars to that in the SDSS DR5 database [^1]. We use the empirical transformation function between SDSS filter system and $R_c$ determined by Lupton (2005) [^2]. The average photometry difference in each patch is the offset we apply. However, patches 0351 and 2153 have no overlapping region with the SDSS DR5. For these two patches, we use the galaxy counts to match their zeropoints to the other patches. The patch-to-patch zeropoint calibration of $z'$ is performed using the same method.
We also utilize the SDSS DR5 database to calibrate the zeropoints of $z'$ and $R_c$ between pointings within each patch. However, since some patches do not entirely overlap with the SDSS DR5, we have to use an alternative way [@glazebrook1994] to calibrate the zeropoints for those pointings having no overlapping region with the SDSS DR5. There are fifteen pointings in each patch; the pointings overlap with each other over a small area. By calculating the average differences of photometry for the same objects in the overlapping areas for different pointings, we can derive the zeropoint offsets between pointings overlapping with the SDSS DR5 and those having no overlapping region with the SDSS DR5. For patches 0351 and 2153, the pointing-to-pointing zeropoint calibration is performed internally using the same method. We note that we do not perform chip-to-chip zeropoint calibration for $z'$ and $R_c$ since it has been dealt with in great detail in @GY2005.
For the $B$ and $V$ photometry, we find by comparing to SDSS that there are systematic chip-to-chip zeropoint offsets in $B$, which are not found in $V$. The transformation functions between SDSS $g, r, i$ and $V$, $B$ are from Lupton (2005). In order to calibrate the zeropoints, we calculate the mean offset of each chip by combining all the chips with the same chip number, and then we apply the mean offsets to all the data in the corresponding chip.
Once the chip-to-chip calibration has been done, the pointing-to-pointing and patch-to-patch zeropoint calibrations for $V$ and $B$ are performed using the same method as that for $z'$ and $R_c$. For patches 0351 and 2153, which do not have SDSS overlap, we utilize the stellar color distributions of $V-R_c$ and $B-R_c$ to obtain more consistent calibrations. Here, stars with $R_c < 22$ are selected to determine the magnitude offsets on a pointing-to-pointing and patch-to-patch basis. For the pointing-to-pointing calibration within a patch, all the stars with $R_c < 22$ in the patch are used as the comparison reference set. The magnitude offsets in $V$ and $B$ for a pointing are computed using cross-correlation between the reference data set and the data of the pointing of the stellar color distributions of $V$ - $R_c$ and $B$ - $R_c$, respectively. The final pointing-to-pointing magnitude offsets are the summations of the offsets from the three iterations. The patch-to-patch photometry calibrations are performed after the pointing-to-pointing calibrations are done. In this case, all stars with $R_c < 22$ in all patches are used as the comparison reference set. The same calibration procedure as the pointing-to-pointing calibration is applied to the patch-to-patch calibration.
We note that object detection is performed by the Picture Processing Package [PPP, @yee1991] and has been found to reliably separate close pairs over the separations of interest in this paper, and to not overly deblend low-redshift galaxies to create false pairs.
@hsieh2005 provides a photometric redshift catalog from the RCS for 1.3 million galaxies using an empirical second-order polynomial fitting technique with 4,924 spectroscopic redshifts in the training set. Since then there are more spectroscopic data available for the RCS fields and they can be added to the training set. The DEEP2 DR2 [^3] overlaps with the RCS fields and provides 4,297 matched spectroscopic redshifts at $0.7 < z < 1.5$. The new training set not only contains almost twice the number of objects comparing to the old one but also provides a much larger sample for $z > 0.7$, i.e., better photometric redshift solutions for high redshift objects. Besides using a larger training set, we also use an empirical third-order polynomial fitting technique with 16 $kd$-tree cells to perform the photometric redshift estimation to achieve a higher redshift accuracy, giving an rms scatter $\sigma(\Delta{z}) < 0.05$ within the redshift range $0.2 < z < 0.5$ and $\sigma(\Delta{z}) < 0.09$ over the whole redshift range of $0.0 < z < 1.2$. As in the original catalog, the new catalog also includes an accurately computed photometric redshift error for each individual galaxy determined by Monte-Carlo simulations and the bootstrap method, which provides better error estimates used for the subsequent science analyses. Detailed descriptions of the photometric redshift method are presented in @hsieh2005.
Sample Selection {#sample}
================
Estimating Absolute Magnitudes $M_{Rc}$
---------------------------------------
In choosing a sample for an evolution study, great care must be taken. If the sampling criteria pick up objects with different physical properties (e.g., mass) at different epochs, the inferred evolution could be biased. For our pair study, we choose a volume-limited sample to include objects within the same range of absolute magnitude $M_{R_c}$ over the redshift range. The following equation is used for apparent magnitude to absolute magnitude conversion: $$\begin{aligned}
\label{absolutemag}
M_{R_c} = R_c - 5\log{D_L} - 25 - k + Qz,\end{aligned}$$ where $M_{R_c}$ is the absolute magnitude in $R_c$; $R_c$, the apparent magnitude in the $R_c$ band; $D_L$, the luminosity distance in Mpc; $k$, the $k$-correction; $Q$, the luminosity evolution in magnitude; and $z$, the redshift.
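For illustration, this conversion can be written as follows (using `astropy` for the luminosity distance; the $k$-correction is assumed to come from the empirical fit described below, and the function name is ours):

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # Omega_Lambda = 0.7, w = -1

def absolute_mag_Rc(Rc, z, k_corr, Q=1.0):
    """M_Rc = Rc - 5 log10(D_L) - 25 - k + Q z, with D_L in Mpc."""
    d_L = cosmo.luminosity_distance(z).value       # Mpc
    return Rc - 5.0 * np.log10(d_L) - 25.0 - k_corr + Q * z
```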
The traditional way of deriving the $k$-correction value for each galaxy is to compute its photometry by convolving an SED from a stellar synthesis model [@bc2003] or an empirical template [@CWW1980] with the filter transformation functions. However, without spectroscopic redshift measurements, the accuracy of the $k$-correction value is affected by the less accurate photometric redshift. Without good $k$-correction estimations, the sample selection using absolute magnitude as one of the selecting criteria is problematic.
For our analysis, we use a new empirical method to determine the $k$-correction for the photometric redshift sample. The Canadian Network for Observational Cosmology (CNOC2) Field Galaxy Redshift Survey [@yee2000] is a survey with both spectroscopic and photometric measurements of galaxies, and the catalog provides $k$-correction values estimated by fitting the five broad-band photometry using the measured spectroscopic redshift with empirical models derived from @CWW1980. It allows us to generate a training set to determine the relation between $k$-correction and photometry, i.e., $k$-correction = $f$($z', R_c, V, B$). We perform a second-order polynomial fitting of the $k$-corrections for the training set galaxies and the result is shown in Figure \[kcorr\]. This plot shows that the $k$-correction can be estimated very well with rms scatter = 0.03 mag by this method; we use the second-order polynomial fitting to derive the $k$-correction value for each galaxy.
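A sketch of this empirical fit (scikit-learn; the CNOC2 training arrays and the RCS photometry matrix are assumed to be loaded, and their column layout is illustrative):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# X_train: (n_galaxies, 4) apparent magnitudes z', Rc, V, B from CNOC2
# k_train: k-corrections from the CNOC2 spectroscopic SED fits
kcorr_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
kcorr_model.fit(X_train, k_train)

k_rcs = kcorr_model.predict(X_rcs)                       # k-correction for each RCS galaxy
rms = np.std(kcorr_model.predict(X_train) - k_train)     # ~0.03 mag in our case
```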
We adopt $Q = 1.0$ for the luminosity evolution according to @lin1999.
Completeness {#completeness-correction}
------------
The RCS photometric redshift catalog provides the 68% ($\sim1\sigma$ if the error distribution is Gaussian) computed photometric redshift error for each object. The photometric redshift uncertainty depends on the errors of the photometry for $z', R_c, V, B,$ and colors. The computed error is estimated using a combination of the bootstrap and Monte-Carlo methods and it has been confirmed empirically to be very reliable. The details are described in @hsieh2005.
For the sample used in the pair analysis, a photometric redshift error cut has to be defined. A looser error cut will make the sample more complete, but the final result will be noisier because data with larger errors are included. A tighter error cut will make the final result cleaner, but the incompleteness and bias problems will be more severe. For example, the bluer objects tend to have larger photometric redshift error and will be rejected more easily than red objects. If the pair result is color-dependent, the conclusion could be biased. As a compromise, we choose $\sigma_z/(1+z) \leq 0.3$ to be the criterion for the photometric redshift error cut for the sample. This criterion can eliminate those objects with catastrophic errors, and at the same time, the incompleteness and bias problems will not be too severe. Furthermore, we deal with the incompleteness and bias problems with completeness correction (see § \[completeness\] for details) to minimize this selection effect.
The completeness correction factor is estimated using the ratio of the total number of galaxies within a 0.1 mag bin at the $R_c$ magnitude of the companion to the number of galaxies in that bin with photometric redshifts satisfying the redshift uncertainty criterion. The incompleteness problem is more severe with fainter magnitudes because of the larger photometry uncertainty. Figure \[0223A1.complete\] represents an example of $R_c$ galaxy count histograms for subsamples of various $\sigma_z$ criteria for pointing 0224A1. The curves from top to bottom indicate the results with selecting criteria: all objects, objects with photometric redshift solutions, and those with $\sigma_z/(1+z) \leq 0.4$, $\sigma_z/(1+z) \leq 0.3$, $\sigma_z/(1+z) \leq 0.2$, and $\sigma_z/(1+z) \leq 0.1$. It is clear that tighter criteria suffer from worse incompleteness at fainter magnitudes.
However, the completeness of the sample depends not only on the magnitudes of $z', R_c, V,$ and $B$ but also on colors (e.g., galaxy types). At lower redshift, red objects are more complete than blue objects because early-type (red) galaxies have a clearer 4000Å break which results in better photometric redshift fitting. At higher redshift, the incompleteness for red objects becomes worse (even worse than that of the blue objects) because the photometry of the blue filters is sufficiently deep for blue objects but not for red objects. We refine our completeness correction by computing the factor separately for red and blue galaxies. Figure \[color-specz\] represents the observed $B-R_c$ vs. spectroscopic redshift relation. Most objects in the upper locus are early-type galaxies, while most objects in the lower locus are late-type galaxies. For the redshift range of our sampling criterion ($0.25 \leq z \leq 0.8$), we can simply use $B - R_c = 1.8$ to roughly separate different types of galaxies.
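For illustration, the correction factor can be computed as in the following sketch (numpy; the per-object arrays `Rc`, `B`, `z`, and `sigma_z` are assumed to be loaded from the catalog):

```python
import numpy as np

def completeness_factor(Rc_all, Rc_good, bins=np.arange(17.0, 25.6, 0.1)):
    """Ratio of all galaxies to those passing the photo-z error cut, per 0.1 mag bin."""
    n_all, _ = np.histogram(Rc_all, bins=bins)
    n_good, _ = np.histogram(Rc_good, bins=bins)
    factor = np.ones(len(bins) - 1)
    ok = n_good > 0
    factor[ok] = n_all[ok] / n_good[ok]
    return bins, factor

good = sigma_z / (1.0 + z) <= 0.3       # photometric-redshift error cut
red = (B - Rc) > 1.8                    # rough early/late-type separation
bins, f_red = completeness_factor(Rc[red], Rc[red & good])
bins, f_blue = completeness_factor(Rc[~red], Rc[~red & good])
```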
Choosing the Volume Limit {#volume-limit}
-------------------------
There are two ways to select samples to perform a pair analysis. One is volume-limited selection, the other is flux-limited selection. The volume-limited selection is the proper way to select objects with the same intrinsic characteristics. However, most previous pair studies select flux-limited samples and then try to correct these samples to volume-limited samples using Monte-Carlo simulations [e.g., @patton2000; @patton2002; @lin2004], because of the small number of galaxies in their databases. The RCS photometric redshift catalog contains more than one million objects; we can simply select a volume-limited sample and the number of objects is still statistically adequate for pair analysis.
We choose our volume-limited sample by examining the relationship between the completeness correction factor and absolute magnitude as a function of redshift. Figure \[absR-photoz\] shows the average data completeness in the absolute $R_c$ magnitude vs. photometric redshift diagrams, with a $\sigma_z/(1+z) \leq 0.3$ cut. The upper panel is for patch 0926 and the lower panel is for patch 1327. The contours indicate the completeness correction factors of 1, 2, 3, and 4. According to Figure \[absR-photoz\], patch 0926 has much better completeness than patch 1327 in both the absolute $R_c$ magnitude axis and the photometric redshift axis.
Figure \[absR-photoz\] allows us to set reasonable ranges in both the absolute $R_c$ magnitude axis and photometric redshift axis for a volume-limited sample. We decide to include data with completeness correction factors less than 2 in our sample, i.e., all the data in our volume-limited sample are at least 50% complete (marked by the heavy hashed lines). While the sample is not 100% complete, the incompleteness can be dealt with using the completeness correction later. (See §\[completeness\] for details.) Hence, based on Figure \[absR-photoz\] and the diagrams for the other patches, we choose a volume-limited sample using the following criteria: $0.25 \leq z \leq 0.8$, $-25 \leq M_{R_c} \leq -20$ (hatched region), for which the completeness factor is less than 2. We note that at $z=0.8$ and $M_{R_c} = -20$, an early-type galaxy has $R_c \sim 24.7$.
Choosing Field Galaxies
-----------------------
The pair statistics could be very different in different environments. It is very interesting to study how the environment affects the pair result. However, for a cluster environment, the pair analysis is much more difficult and a significant amount of calibration/correction needs to be done because the pair signals could be embedded in a cluster environment and are difficult to delineate. We may need to develop a different technique for the pair analysis for the highest density regions (i.e., clusters). In this paper, we focus on the pair statistics in the field. The RCS cluster catalogs [@GY2005] are used to separate the field and the cluster regions. The cluster catalogs provide the red-sequence photometric redshift, astrometry, and $r_{200}$, estimated using the richness $B_{gc}$ statistics of @YE2003, in Mpc and in arcmin. An object inside $3 \times r_{200}$ of a cluster and having a photometric redshift within $z_{cluster} \pm 0.05$ is considered as a potential cluster member. Our rejection of clusters is fairly conservative; we discard rather large regions in order to avoid biasing the field estimate, at the expense of having a smaller field sample. Approximately 33% of the galaxies are rejected because they possibly lie in clusters. The remaining objects are considered to be in the field and included in our sample.
Pair Fraction Measurement {#method}
=========================
We use the quantity $N_c$, defined as the number of close companions per galaxy, which is directly related to the galaxy merger rate, for our pair study. The definition of a close companion is a galaxy within a certain projected separation and velocity/redshift difference ($\Delta{z}$) of a primary galaxy. After counting the number of close companions for each primary galaxy, we calculate the average number of close companions, which is $N_c$ (all the projected separations in this paper are in physical sizes, unless noted otherwise).
Previous pair studies with spectroscopic redshift data [e.g., @patton2000; @patton2002; @lin2004] use pair definitions of 5-20 kpc to 5-100 kpc for projected separation, and a velocity difference $<$ 500 km/s between the primary galaxy and its companions. However, because of the much larger redshift errors of the photometric redshift data, we cannot use the same criterion of $\Delta{v}$ as other spectroscopic pair studies for our analysis ($\sigma_z = 0.05$ at $z = 0.5$ is equivalent to $\Delta{v} \sim 10000$ km/s). Thus, to carry out the pair analysis using a photometric redshift catalog, we develop a new procedure which includes new pair criteria and several necessary corrections.
For the projected separation, we can use the same definition as the spectroscopic pair studies. For the redshift criterion, we utilize the photometric redshift error ($\sigma_z$) provided by the RCS photometric redshift catalog to develop a proper definition. We define the redshift criterion of a pair as $\Delta{z} \leq n\sigma_z$, where $\Delta{z}$ is the redshift difference between the primary galaxy and its companion, $n$ is a factor to be chosen, and $\sigma_z$ is the photometric redshift error of the primary galaxy. Because the behavior of the pair fraction for major mergers (pairs with similar masses) and minor mergers (pairs with very different masses) could differ, one has to impose a mass (luminosity) ratio limit between the primary galaxy and its companion to specify the kind of merger being studied, i.e., $\Delta{R_c} \leq x$ mag. We note that the absolute magnitude of $R_c$ should be used for the luminosity/mass difference criterion. However, since we use a photometric redshift catalog to perform the pair analysis, the redshifts of the primary and the secondary galaxies may differ due to the photometric redshift errors, which would make the difference of the $k$-corrected absolute magnitudes of the primary and the secondary galaxies larger than the luminosity difference criterion, even if they actually satisfy the criterion. Hence, we choose to use the apparent magnitude instead of the absolute magnitude.
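The sketch below applies these pair criteria to a single primary-companion candidate; the dictionary field names and the flat $\Lambda$CDM cosmology ($H_0=70$, $\Omega_m=0.3$) are assumptions for illustration, and the thresholds are the fiducial values adopted later in §\[selection\_effect\].

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def is_companion(primary, candidate, n=2.5, dmin=5.0, dmax=20.0, dmag=1.0):
    """Test the pair criteria for one candidate around one primary galaxy.
    Each galaxy is a dict with photometric redshift 'z', its error 'sigma_z',
    apparent magnitude 'Rc', and sky coordinates 'ra', 'dec' in degrees
    (hypothetical field names)."""
    # redshift criterion: |dz| <= n * sigma_z of the primary
    if abs(primary["z"] - candidate["z"]) > n * primary["sigma_z"]:
        return False
    # luminosity-ratio criterion in apparent magnitude
    if abs(primary["Rc"] - candidate["Rc"]) > dmag:
        return False
    # projected physical separation (kpc) evaluated at the primary's redshift
    dra = (candidate["ra"] - primary["ra"]) * np.cos(np.radians(primary["dec"]))
    ddec = candidate["dec"] - primary["dec"]
    theta = np.hypot(dra, ddec) * u.deg
    kpc_per_arcmin = cosmo.kpc_proper_per_arcmin(primary["z"])
    d_sep = (theta.to(u.arcmin) * kpc_per_arcmin).to(u.kpc).value
    return dmin <= d_sep <= dmax
```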
The quantity $N_c$ approximately equals the pair fraction when there are few triplets or higher order $N$-tuples in the sample. In this study, $N_c$ will sometimes simply be referred to as the pair fraction.
Accounting for Selection Effects {#selection_effect}
================================
In this section, we discuss the effects of incompleteness, boundary, seeing and projection, and the steps we take to account for them. In all our subsequent discussions and analyses, we will use samples chosen with the following fiducial common criteria, unless noted otherwise: $-25 \leq M_{R_c} \leq -20$ for the primary sample, $-26 \leq M_{R_c} \leq -19$ for the secondary sample, with galaxies selected satisfying $\sigma_z/(1+z) < 0.3$ and $0.25 \leq z \leq 0.8$, and with the pair selection criteria of $\Delta{z} \leq 2.5\sigma_z$, 5 kpc $\leq d_{sep} \leq$ 20 kpc, and $\Delta{R_c} \leq$ 1 mag.
Completeness of the Volume-Limited Sample {#completeness}
-----------------------------------------
For a pair analysis, the completeness of the sample is always an important issue. The pair fraction will drop due to the lower number density if the sample is not complete. Furthermore, the main point of this paper is to study the evolution of the pair fraction, and unfortunately, the incompleteness is not independent of redshift: it gets more serious at higher redshifts. This effect is expected to make the pair fraction at higher redshifts appear lower, and thus may bias conclusions regarding the redshift evolution of the pair fraction. Hence, to draw the correct conclusion, a completeness correction has to be applied.
Figure \[pair-zerr\] illustrates the pair fractions for different sampling criteria in $\sigma_z/(1+z)$ without completeness corrections (but with projection correction, see §\[backcor\]). Each redshift bin contains from 15,500 objects (for the $\sigma_z/(1+z) \leq 0.1$ criterion) to 17,500 objects (for the $\sigma_z/(1+z) \leq 0.4$ criterion). The plot shows that the larger the $\sigma_z$ criterion, the higher the pair fraction. This is due to the lower completeness for a tighter criterion; some real companions in pairs are missed because they have larger photometric redshift errors. The differences between the pair fractions with different $\sigma_z/(1+z)$ criteria also become larger with redshift because the incompleteness is more severe at higher redshift, especially for the $\sigma_z/(1+z) \leq 0.1$ criterion. However, if a proper completeness correction is adopted, the pair results should not be affected by the sampling criteria and these curves of pair fraction with different $\sigma_z/(1+z)$ cuts should be similar.
We discussed the derivation of the completeness correction in §\[completeness-correction\], and after applying the completeness correction, we find the pair fractions to be very similar for the different redshift uncertainty criteria, which shows that the completeness correction works well. The results after the completeness correction are shown in §\[result\].
Boundary Effects {#boundary_effect}
----------------
Objects near the boundaries of the selection criteria or close to the edges of the observed field could have fewer companions. This effect is referred to as a boundary effect. There are four different boundaries for our sample: $1)$ the boundaries of the selection criterion on the redshift axis; $2)$ the edge of the observing field; $3)$ the boundaries between the cluster region and the field; and $4)$ the boundary of the absolute magnitude $M_{R_c}$ cut. The methods to deal with these boundary effects are described in this subsection.
The redshift range for our pair analysis is $0.25 \leq z \leq 0.8$. Primary galaxies at redshifts close to 0.25 or 0.8 will have fewer companions because some of their companions are scattered just outside the redshift boundaries and thus are not counted. To deal with this effect, we conduct the pair analysis for the full redshift range of the photometric redshift catalog ($0.0 < z < 1.5$), and then pick the primary objects satisfying the redshift criteria for the final result.
The objects near the boundaries of the observing field or near the gaps between CCD chips will have smaller pair fractions because some companions of these objects lie just across the boundaries of the observing field. The projected size of 20 kpc (the outer radius of the pair searching circle) is about 5" (24 pixels) at redshift 0.25; the size gets smaller at higher redshift. We limit the primary sample to be at least 5" from the edges of the CCDs while the secondary sample still includes all the objects. This configuration allows us to avoid the boundary effect of the limited survey fields.
Because we only study the pair fraction in the field for this investigation and separate our data into field environment and cluster environment, the boundary effect at the edges of $3 \times r_{200}$ and $z_{cluster} \pm 0.05$ is of concern. We constrain the primary sample to be at least $3 \times r_{200}$ from the centers of clusters and $\Delta{z} > 0.1$ from $z_{cluster}$. For the secondary sample, we limit it to be at least $3 \times r_{200} - 10"$ from the center of clusters and $\Delta{z} > 0.05$ from $z_{cluster}$. These new limits effectively eliminate these boundary effects.
The objects with absolute magnitudes close to the $M_{R_c}$ cut will have a lower pair fraction as well. This can be solved easily by searching for companions down to $M_{R_c} \leq -19$ for the primary sample with $M_{R_c} \leq -20$. Figure \[absR-photoz\] shows that the completeness correction factors are still less than 2 even when we push the $M_{R_c}$ boundary to -19 at $z \leq 0.8$ for patch 0926 (the light shaded area). However, for patches 0351 and 1327, the completeness is significantly worse than in the other 8 patches, and they are substantially incomplete at $M_{R_c} = -19$. Hence, they are excluded from our pair analysis. Consequently, the magnitude limit for the secondary sample is extended to minimize this boundary effect.
Projection Correction {#backcor}
---------------------
Because of the projection effect due to the lower accuracy of the photometric redshifts compared with spectroscopic redshifts, a pair analysis using any criterion would find some false companions and get a higher $N_c$ than the real value. To eliminate these foreground and background objects from contaminating our results, a projection correction is applied to our data.
The magnitude of the projection effect on $N_c$ is illustrated in Figure \[noback\], which shows the results of using the fiducial sample but with different $z_{primary} \pm n\sigma_z$ criteria for inclusion of the companion, without any projection correction (but with completeness correction). Each data point contains about 17,500 objects. $N_c$ is higher with larger $n$ values because the pair criterion with larger $n$ includes more foreground/background galaxies and the results suffer from a more serious projection effect. At lower redshift, the projected search area is bigger than at higher redshift; hence more foreground/background galaxies are counted. However, the photometric redshift error $\sigma_z$ becomes larger at higher redshift; this effect also causes more foreground/background galaxies to be included. Therefore, from low redshift to high redshift, the projection effect affects the pair fraction about equally.
To correct for the projection effect, we calculate the mean surface density of all the objects in the same patch as the primary galaxy satisfying the pair criteria, $\Delta{z} \leq n\sigma_z$ and $\Delta{R_c} \leq 1$ mag, and then multiply it by the pair searching area (5-20 kpc) for each primary galaxy. This is the projection correction value for each primary galaxy. The completeness corrections are also applied in the counting of the foreground/background galaxies. The final number of companions for each galaxy is the proper companion number with the projection correction value subtracted. Because not all primary objects have companions, some objects have negative companion numbers after the projection correction.
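A schematic implementation of this statistical subtraction is sketched below; the catalog field names, the completeness weight column, and the patch-area argument are hypothetical.

```python
import numpy as np

def projection_correction(primary, patch_catalog, area_deg2, kpc_per_arcmin,
                          n=2.5, dmag=1.0):
    """Expected number of chance companions for one primary galaxy:
    the mean surface density of patch galaxies passing the dz and dRc
    criteria, multiplied by the 5-20 kpc annulus area around the primary.
    'patch_catalog' is a structured array with fields 'z', 'Rc' and a
    completeness weight 'w' (hypothetical names); 'kpc_per_arcmin' is the
    physical scale at the primary's redshift."""
    sel = (np.abs(patch_catalog["z"] - primary["z"]) <= n * primary["sigma_z"]) \
          & (np.abs(patch_catalog["Rc"] - primary["Rc"]) <= dmag)
    # completeness-weighted surface density per square arcminute
    density = patch_catalog["w"][sel].sum() / (area_deg2 * 3600.0)
    # annulus between 5 and 20 kpc converted to square arcminutes
    r_in = 5.0 / kpc_per_arcmin
    r_out = 20.0 / kpc_per_arcmin
    annulus_arcmin2 = np.pi * (r_out**2 - r_in**2)
    return density * annulus_arcmin2
```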
The Effect of Seeing {#quality_effect}
--------------------
The projected separation we use for the pair criteria is 5 kpc $\leq d_{sep} \leq$ 20 kpc. For the highest redshift cut ($z = 0.8$) used in our analysis, the projected size is about three pixels (0".7) for 5 kpc (the inner radius). However, some data are taken in less favorable weather conditions, and the 5 kpc inner radius could be too small for these data due to poor resolution. In the meantime, we do not want to enlarge the inner radius because the closest pairs are the most important for a pair study. We have to make sure that the 5 kpc inner searching radius does not cause problems with data obtained in poorer seeing conditions.
We only focus on the seeing conditions of the $R_c$ images because object finding was carried out using the $R_c$ images (see @hsieh2005 for details). The distribution of the seeing conditions of $R_c$ for the 8 patches is shown in Figure \[R-seeing\]. From this plot, most seeing values are between three and six pixels. To test if the inner radius (5 kpc) of the searching criterion is too small for data taken in poorer seeing conditions, we perform the pair analyses for three subsamples with different seeing conditions separately, separated at seeing $= 3.81$ pixels (0$"$.78) and $4.70$ pixels (0$"$.97). Each subsample contains a similar number of pointings. The results are shown in Figure \[seeing\]. The left panel shows the relation between the apparent size in pixels for different physical sizes (5, 10, 15, and 20 kpc) and redshift, as a reference, while the right panel shows the results with different seeing conditions. There are about 10,000 objects in each redshift bin. We find that the curve for the best seeing conditions is in fact about 20% lower than the curves for the two poorer seeing conditions, indicating that the range of seeing in our data does not cause a drop in $N_c$. We note that most pointings with the best seeing conditions are from patches 1416 and 1616; the 20% lower $N_c$ for the good seeing data could be just due to cosmic variance. Therefore, the 5 kpc inner searching radius for all the data does not appear to produce a significant selection effect on the result.
Error Estimation of $N_c$ {#err_est}
=========================
The error of $N_c$ is estimated assuming a Poisson distribution and derived using: $$\begin{aligned}
\label{error}
error = \frac{\sqrt{N_{companion}+N_{proj}}}{N_{primary}}\end{aligned}$$ where $N_{companion}$ is the total number of the companions, $N_{proj}$ is the sum of the projection correction values, and $N_{primary}$ is the number of the primary galaxies in each redshift bin. We note that the values of $N_{companion}$ and $N_{proj}$ are the ones without the completeness corrections in order to retain the correct Poisson statistics.
Results {#result}
=======
The results of the pair analysis are shown in Figures \[sigma\_z\] to \[sep-1\]. Because we focus only on major mergers, the difference in $R_c$ between the galaxies in close pairs is restricted to be less than one magnitude. If the mass-to-light ratio is assumed to be constant for different types of galaxies at the same redshift, the mass ratio between the primary galaxies and companions ranges from 1:1 to 3:1.
Figure \[sigma\_z\] shows the results with different $\sigma_z/(1+z)$ cuts, with the fiducial sampling criteria listed in §\[selection\_effect\], with completeness and projection corrections applied. The redshift bins contain from 15,500 objects (for the $\sigma_z/(1+z) \leq 0.1$ criterion) to 17,500 objects (for the $\sigma_z/(1+z) \leq 0.4$ criterion). Compared to Figure \[pair-zerr\], the curves of the pair fractions are very similar for the different redshift uncertainty criteria, which illustrates that the completeness correction works well.
For the redshift criterion of the pair definition, we count companions within $z_{primary} \pm n\sigma_z$, where $z_{primary}$ is the redshift of the primary galaxy and $\Delta{z} = n\sigma_z$ (see §\[method\]). If the photometric redshift error is assumed to be Gaussian, when $n = 1.0$, we count only about 68% of the companions; when $n = 2.0$, about 95%; and so on. Of course, the criteria with larger $n$ will include more foreground/background galaxies and make $N_c$ higher, but this can be dealt with by the projection correction (see § \[backcor\] for details).
Figure \[nX\] represents $N_c$ vs. photometric redshift curves using different $\Delta{z} = n\sigma_z$ criteria with the fiducial sampling criteria, with completeness and projection corrections applied. Pair fractions using the values $n = 1.0, 2.0, 2.5, 3.0, 4.0,$ and $100.0$ are plotted. We use $n = 100.0$ to approximate $n = \infty$ which is equivalent to the result using no redshift information for companions. Each data point contains about 17,500 objects. Curves with $n \geq 2.5$ are consistent with being the same. As discussed, the curve with $n = 1.0$ should be about 32% lower than the one with $n = 100.0$ if the distribution of the photometric redshift error is Gaussian. The curve with $n = 1.0$ is about 40% lower than the one with $n = 100.0$, and about 30% lower than the one with $n = 2.5$. While the results do not perfectly match what is expected from a Gaussian distribution of photometric redshift uncertainties, comparing Figure \[nX\] to Figure \[noback\], the application of the projection correction does reduce the $N_c$ derived by approximately the correct amount. The somewhat larger difference is not unexpected, since the true probability distribution of the uncertainty of the photometric redshift is likely non-Gaussian, but somewhat broader. We note that the error bars with larger $n$ are bigger because of larger projection correction errors (see §\[err\_est\] for details). We choose $n = 2.5$ for our final result; with this, about 99% of the companions are counted, which is a compromise between the completeness of the companion counting and the size of the error of the projection correction.
In Figure \[pair\_sep\], we show $N_c$ using different outer radii of the projected separations for the pair criterion. The completeness and projection corrections are applied. We use an inner radius of 5 kpc for all cases. Different line types and symbols indicate different outer radii from 20 kpc to 100 kpc. As expected, the larger the outer radius, the higher the $N_c$ because more galaxies satisfy the pair criteria. For better statistics, and because most previous pair studies use 5-20 kpc as their pair criteria (since pairs with separations within 20 kpc are almost certain to merge), we choose 5 kpc $\leq d_{sep} \leq$ 20 kpc to be the pair criterion for the projected separation for this study.
Using 5 kpc $\leq d_{sep} \leq$ 20 kpc and $\Delta{z} \leq 2.5\sigma_z$, as our fiducial criteria, we find that the pair fraction increases with redshift as $(1 + z)^m$ where $m = 2.83 \pm 0.33$. The best fit is plotted as a solid line on Figure 11. The error is estimated using the Jackknife technique. We note that the error bar of the $m$ value is affected not only by the error bar of each redshift bin but also by whether the function $(1 + z)^m$ is a good representation of the data.
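For reference, a least-squares fit of the $(1+z)^m$ form can be sketched as below; the input file and starting guesses are placeholders, and the formal covariance error shown is not the Jackknife error quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(z, N0, m):
    """N_c(z) = N0 * (1 + z)^m"""
    return N0 * (1.0 + z)**m

# hypothetical binned measurements: bin centres, N_c, and Poisson errors
z_bin, Nc, Nc_err = np.loadtxt("nc_vs_z.txt", unpack=True)   # hypothetical file

popt, pcov = curve_fit(power_law, z_bin, Nc, sigma=Nc_err,
                       absolute_sigma=True, p0=[0.01, 3.0])
N0_fit, m_fit = popt
m_err = np.sqrt(pcov[1, 1])   # formal fit error; the paper quotes a
                              # Jackknife error over patches instead
```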
We also study the pair fractions with different absolute magnitude $M_{R_c}$ cuts and show the results in Figure \[M\_Rc\]. The filled circles and the open squares indicate the absolute magnitude bins of $-21 \leq M_{R_c} \leq -20$ and $-25 \leq M_{R_c} < -21$, respectively. The numbers of objects per redshift bin for the faint and bright absolute magnitude cuts are about 15,700 and 10,500, respectively. We find that the pair fraction increases with redshift as $(1+z)^m$ where $m = 3.25 \pm 0.11$ (solid line) and $m = 1.79 \pm 0.53$ (dashed line) for the absolute magnitude bins of $-21 \leq M_{R_c} \leq -20$ and $-25 \leq M_{R_c} < -21$, respectively; i.e., the brighter the absolute magnitude, the smaller the $m$ value. However, the error bar of the $m$ value is significantly larger for the high-luminosity sample. This effect is not due to the slightly larger error bars in the bright sample, but rather to the fact that the $(1+z)^m$ power law does not appear to be a good representation of the evolution of $N_c$ for the higher luminosity galaxies. This result is further discussed in §\[discussion\].
Based on Figure \[pair\_sep\], it is apparent that the larger the outer radius, the lower the $m$ value. To look at this in more detail, we use rings of area for the pair counting, and show the results of the evolution of $N_c$ in Figure \[sep-1\]. We note that all $N_c$ values are normalized by area using the 5-20 kpc bin as the reference. The number of objects in each redshift bin is about 17,500. The evolution of the pair fraction follows $(1+z)^m$ where $m = 2.83 \pm 0.33, 1.53 \pm 0.36, -0.39 \pm 0.36$, and $-1.20 \pm 0.48$ for separations of 5-20 kpc, 20-50 kpc, 50-100 kpc, and 100-150 kpc, respectively. We note that the $m$ value decreases with increasing separation. This result is further discussed in §\[discussion\].
All these results show that the pair fractions do not change much with different $\Delta{z} = n\sigma_z$ criteria after applying the completeness and projection corrections, which indicates that our results are very robust; however, the results do change with different projected separation criteria and absolute magnitude cuts.
We note that the role of photometric redshifts in reducing the size of the projection effect is relatively limited, as indicated by the similar error bars for the pair fractions in Figure \[noback\] for different $n$. This is because the criterion of $\Delta m=\pm1$ effectively eliminates most of the foreground/background galaxies. However, photometric redshifts are essential in defining a volume-limited sample and deriving the redshift dependence of $N_c$.
Discussions {#discussion}
===========
Major Merger Fraction {#merger_fraction}
---------------------
Although we have applied the projection correction in our pair analysis, not all the close pairs we count will result in real mergers. Objects satisfying the pair criteria may be in the same structure (i.e., under each other’s gravitational influence) and within 20 kpc in projected distance, yet not within 20 kpc in real three-dimensional distance. @YE1995 estimates that the true merger fraction ($f_{mg}$) is about half of the pair fraction, evaluated with a triple integral over projected and velocity separation by placing the correlation function into redshift space. Hence, we divide $N_c$ by 2 to calculate the merger fraction.
### Merger Timescale {#merger_timescale}
Close pairs are considered as early-stage mergers. To relate the close pairs to the overall importance of merger, the merger timescale ($T_{mg}$) has to be estimated. Following previous studies [@binney1987; @patton2000], assuming circular orbits and a dark matter density profile given by $\rho(r) \propto r^{-2}$, the dynamical friction timescale ($T_{mg}$ in Gyr) is given by: $$\begin{aligned}
\label{timescale}
T_{mg} = \frac{2.64 \times 10^5r^2v_c}{M \ln \Lambda},\end{aligned}$$ where $r$ is the physical pair separation in kpc, $v_c$ is the circular velocity in km s$^{-1}$, $M$ is the mass in $M_\odot$, and ln $\Lambda$ is the Coulomb logarithm. We adopt the average projected separation ($\sim$ 15 kpc) as the physical pair separation since the major merger fraction has been corrected from projected separation to three-dimensional separation. We also assume $v_c \sim 250$ km s$^{-1}$. The mean absolute magnitude of companions is $M_{R_c} \sim -20.5$. If the mass-to-light ratio of $M/L \sim 5$ is assumed, we derive a mean mass of $M \sim 5\times10^{10} M_\odot$. @dubinski1999 estimates ln $\Lambda \sim 2$. With these values, we find $T_{mg} \sim 0.5$ Gyr. We note that this value is just a rough estimate over systems with a wide range of merger timescales, but it still represents the average merger timescale in our sample.
### Merger Rate {#merger_r}
The comoving merger rate is defined as the number of mergers per unit time per unit comoving volume. According to @lin2004, it can be estimated as: $$\begin{aligned}
\label{mg}
N_{mg} = 0.5n(z)N_c(z)C_{mg}T_{mg}^{-1},\end{aligned}$$ where $T_{mg}$ is the dynamical friction timescale, $C_{mg}$ indicates the fraction of galaxies in close pairs that will merge in $T_{mg}$, and $n(z)$ is the comoving number density of galaxies. The factor 0.5 converts the number of galaxies into the number of merger events. We adopt $T_{mg} = 0.5$ Gyr and $C_{mg} = 0.5$, as discussed above. We compute $n(z)$ using the primary sample, with the weight corrections included. We note that the computed $n(z)$ shows a slight decline at $z>0.5$, which may be an indication of the effect of using a simple luminosity evolution description of $Qz$ for $M^*$ when choosing the sample. The evolution of the merger rate is shown in Figure \[merger\_rate\]. Each data point includes 17,500 objects and the error bars do not include the uncertainties in $T_{mg}$ and $C_{mg}$. Comparing our merger rate to that of @lin2004, measured between $z\sim0.5$ and 1.2, our value is about 3 times lower. The primary reason is likely that we use $\Delta{R_c} \leq 1$ mag and search for major mergers with mass ratios from 1:1 to 3:1, but @lin2004 look for mergers with mass ratios from 1:1 to 6:1. Our results show an increase in the merger rate with redshift of the form $(1+z)^\beta$, with $\beta\sim0.8$.
### Major Merger Remnant Fraction {#merger_remnant}
With the derived major merger rates for several different epochs for $z < 0.8$ and the merger timescale, we can use these parameters to estimate the fraction of present day galaxies which have undergone major mergers in the past. These galaxies are merger remnants, and the fraction of the merger remnants is defined as the remnant fraction ($f_{rem}$). According to @patton2000, the remnant fraction is given by $$\begin{aligned}
\label{remnant_fraction}
f_{rem} = 1 - \prod^N_{j=1}\frac{1 - f_{mg}(z_j)}{1 - 0.5f_{mg}(z_j)}\end{aligned}$$ where $f_{mg}$ is the merger fraction, $z_j$ is the redshift corresponding to a lookback time of $t = jT_{mg}$, and $j$ is an integer index. Because the mass ratio of galaxies in a pair in this study is from 1:1 to 3:1, the merger remnant fraction applies to major mergers. After applying this equation to our pair result, the estimated remnant fraction is $f_{rem} = 0.06$, which implies that $\sim 6\%$ of galaxies with $-25 \leq M_{R_c} \leq -20$ have undergone a major merger since $z \sim 0.8$.
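A rough numerical sketch of the remnant-fraction product above is given below, stepping back in lookback time in increments of $T_{mg}$; the flat $\Lambda$CDM cosmology and the normalization of the illustrative merger-fraction history are assumptions, not the measured values.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def remnant_fraction(f_mg_of_z, z_max=0.8, T_mg=0.5):
    """Fraction of present-day galaxies that have undergone a major merger
    since z_max, stepping back in lookback time in increments of the merger
    timescale T_mg (in Gyr); f_mg_of_z is a callable giving f_mg at redshift z."""
    t_max = cosmo.lookback_time(z_max).to(u.Gyr).value
    product = 1.0
    t = T_mg
    while t <= t_max:
        z_j = float(z_at_value(cosmo.lookback_time, t * u.Gyr))
        f = f_mg_of_z(z_j)
        product *= (1.0 - f) / (1.0 - 0.5 * f)
        t += T_mg
    return 1.0 - product

# illustrative merger-fraction history: 0.5 * N_c with N_c ~ (1+z)^2.83
# (the normalization here is a placeholder, not the measured value)
f_rem = remnant_fraction(lambda z: 0.5 * 0.01 * (1.0 + z)**2.83)
```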
Evolution of the Pair Fraction {#interpretation}
------------------------------
The $m$ values determined in many previous observational studies over similar redshift ranges are very diverse ($0 \leq m \leq 4$). These results from the literature are listed in Table \[m-value\] in order of redshift, and are briefly discussed in §\[introduction\]. From $\Lambda$ CDM $N$-body simulations [@governato1999; @gottlober2001], the merger rates of halos increase with redshift as $(1+z)^m$, with $2.5 \leq m \leq 3.5$. Our result for $-25 \leq M_{R_c} \leq -20$ is broadly consistent with the $N$-body simulations as well as all the previous observational results, except those of @lin2004 and @bundy2004. Most of the previous works, except that of @kartaltepe2007, used spectroscopic redshifts to study pairs, especially for the lower redshift ranges. Because their samples are much smaller ($< 1/20$ of ours), these studies need to convert a flux-limited sample to a volume-limited one, and this conversion may affect the final results. We note that none of the previous studies excluded possible cluster environments, which may cause a problem when converting $N_c$ to a pair fraction: a galaxy has a much higher chance of having more than one companion in a cluster environment than in the field, so the average number of companions per galaxy is no longer a good representation of the pair fraction. We do not have this problem because potential cluster members in our sample are removed, and less than 2% of the pairs in our work actually consist of triples or higher multiples.
[lccc]{} Reference & $m$ & Redshift Range & Sample\
This Work & $2.83 \pm 0.33$ & 0.25 $\leq z \leq$ 0.80 & -25 $\leq M_{R_c} \leq$ -20, field galaxies\
@governato1999 [@gottlober2001] & $2.5 \sim 3.5$ & & $\Lambda$ CDM $N$-body simulations\
@zepf1989 & $4.0 \pm 2.5$ & $z < 0.25$ & $B \leq 22$\
@YE1995 & $4.0 \pm 1.5$ & $z < 0.38$ & $r \leq 21.5$\
@carlberg1994 & $3.4 \pm 1.0$ & $z < 0.4$ & $V \leq 22.5$\
@patton1997 & $2.8 \pm 0.9$ & $z \leq 0.45$ & $r \leq 22$\
@patton2002 & $2.3 \pm 0.7$ & 0.12 $\leq z \leq$ 0.55 & $R_c \leq 21.5$, Q=1\
@burkey1994 & $3.5 \pm 0.5$ & $z < 0.7$ & $I \leq 22.3$\
@lefevre2000 & $2.7 \pm 0.6$ & $z < 1$ & $I_{AB} \leq 22.5$ for primary, $I_{AB} \leq 24.5$ for secondary\
@lin2004 & $1.60 \pm 0.29$ & $0.45 < z < 1.2$ & $R_{AB} \leq 24.1$\
@lin2004 & $0.51 \pm 0.28$ & & Q=1\
@kartaltepe2007 & $3.1 \pm 0.1$ & $0 < z < 1.2$ & $M_V < -19.8$, photometric redshift\
@kartaltepe2007 & $2.2 \pm 0.1$ & & Q=1\
@bridge2007 & $2.12 \pm 0.93$ & $0.2 < z < 1.3$ & $ L_{IR} \leq 10^{11}L_{\odot}$\
@bundy2004 & no evolution & $0.5 < z < 1.5$ & $K \leq 22.5$\
@cassata2005 & $2.2 \pm 0.3$ & $z < 2$ & $K_s < 20$\
The only previous work using a similar technique (photometric redshifts) with a comparable sample size (59,221 galaxies, about 2.5 times smaller than ours) is @kartaltepe2007. They have a higher redshift limit ($z = 1.2$) compared with ours ($z = 0.8$), while we have a considerably larger survey field (33.6 deg$^{2}$ vs. 2 deg$^{2}$). They also have a similar luminosity cut ($M_V = -19.8$, equivalent to $M_{R_c} \sim -20.3$) to that of our primary sample ($M_{R_c} = -20.0$), although we have a one-magnitude deeper luminosity cut for the secondary sample ($M_{R_c} = -19.0$) which properly deals with the boundary problem (see §\[boundary\_effect\]). Since we apply luminosity evolution in our analysis, we compare our result to the $m$ value of @kartaltepe2007 derived with luminosity evolution. The $m$ value of our study is somewhat higher, $2.83 \pm 0.33$ vs. $2.2 \pm 0.1$, but statistically consistent.
We further separate our sample with different luminosities, as shown in Figure \[M\_Rc\]. In general, the pair fraction for luminous galaxies is lower than that of the fainter sample. The $m$ values are $3.25 \pm 0.11$ and $1.79 \pm 0.53$ for $-21 \leq M_{R_c} \leq -20$ and $-25 \leq M_{R_c} < -21$, respectively. From the result of the $m$ values, it appears that the pair fraction for the faint galaxy sample evolves more rapidly than the luminous galaxy sample. However, we note that the evolution of $N_c$ for the bright sample may not fit a single power law very well. The data suggest that $N_c$ may level out at $z \lesssim 0.6$, while at $z \gtrsim 0.6$, the $m$ value is similar to that for the faint sample. It is not clear what produces this possible leveling of the evolution for the bright sample. A larger sample of galaxies will be useful in examining the dependence of the pair fraction evolution as a function of luminosity or stellar mass.
We also study the evolution of pair fractions for different projected separations. As shown in Figure \[sep-1\], the $m$ value decreases with increasing separation; i.e., the evolution gets weaker at larger separations. This suggests that the timescale for a galaxy to go from 20 kpc to 0 kpc (i.e., to merge), relative to the timescale for galaxies to come in from 50 kpc, increases at higher redshift, so that there is a build-up of galaxies at 5-20 kpc. Alternatively, this difference in $m$ values may reflect a steepening of the galaxy-galaxy correlation function at very small scales at higher redshift. Such steepening, at somewhat larger radius, has been observed [@pollo2006; @coil2006]. The difference in timescales could be due to a higher concentration of the galaxy haloes at lower redshift. For a given dark halo mass, the density is higher in the inner region for a high-concentration halo, relative to a low-concentration halo; however, the density is lower in the outer region for a high-concentration halo. Therefore, for a high-concentration halo, the dynamical friction timescale would be longer in the outer region and shorter in the inner region. Hence, our results suggest that for a given mass, the galaxy dark halo size is smaller at lower redshift, which is consistent with the simulation results of @bullock2001. Bullock et al. studied the dark matter halo density profile, parameterized by an NFW [@nfw1997] form, in a high-resolution $N$-body simulation of a $\Lambda$CDM cosmology. Figure 10 in @bullock2001 shows that the concentration parameter increases with decreasing redshift for a given mass, which may be responsible for producing the dependence of the $m$ value on radial bin.
Conclusions {#conclusion}
===========
We study the evolution of the pair fraction for over 157,000 galaxies in the field with $0.25 \leq z \leq 0.8$, $\sigma_z/(1+z) \leq 0.3$, $-25 \leq M_{R_c} \leq -20$ for the primary sample, and $-26 \leq M_{R_c} \leq -19$ for the secondary sample, using the 5 kpc $\leq d_{sep} \leq$ 20 kpc, $\Delta{z} \leq 2.5\sigma_z$, and $\Delta{R_c} \leq 1$ mag criteria from the RCS photometric redshift catalog. Our result for all the objects in the sample shows that the pair fraction increases with redshift as $(1 + z)^m$ with $m = 2.83 \pm 0.33$, which is consistent with $N$-body simulations and many previous works. We also estimate the major merger remnant fraction, which is 0.06. This implies that only $\sim 6\%$ of galaxies with $-25 \leq M_{R_c} \leq -20$ have undergone major mergers since $z \sim 0.8$.
By looking at the results separated into different magnitude bins, we find that the brighter the luminosity, the weaker the evolution. We also study the evolution of pair fractions in different projected separation bins and find that the $m$ value decreases with increasing separation, which suggests that for a given mass, the galaxy dark halo size is smaller at lower redshift, consistent with the simulation results of @bullock2001. In a future paper we will examine the evolution of the pair fraction in other environments (e.g., cluster core, cluster outskirts) to study whether the merger rate is affected by the environment.
Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton Univ. Press)
Bridge, C. R. et al. 2007, , 659, 931
Bruzual, A. G., & Charlot, S. 2003, , 344, 1000
Bundy, K., Fukugita, M., Ellis, R. S., Kodama, T., & Conselice, C. J. 2004, , 601, L123
Bullock, J. S., Kolatt, T. S., Sigad, Y., Somerville, R. S., Kravtsov, A. V., Klypin, A. A., Primack, J. R., & Dekel, A. 2001, , 321, 559
Burkey, J. M., Keel, W. C., Windhorst, R. A., & Franklin, B. E. 1994, , 429, L13
Capak, P. et al. 2004, , 127, 180
Carlberg, R. G., Pritchet, C. J., & Infante, L. 1994, , 435, 540
Carlberg, R. G., et al. 2000, , 532, 1
Cassata, P., et al. 2005, , 357, 903
Coil, A. L., Newman, J. A., Cooper, M. C., Davis, M., Faber, S. M., Koo, D. C., & Willmer, C. N. A. 2006, , 644, 671
Coleman, G. D., Wu, C. -C., & Weedman, D. W. 1980, , 43, 393
Conselice, C. J., Bershady, M. A., Dickinson, M., & Papovich, C. 2003, , 126, 1183
Cowie, L. L., Songaila, A., & Hu, E. M. 1996, , 112, 3
Cowie, L. L., Barger, A. J., Hu, E. M., Capak, P., & Songaila, A. 2004, , 127, 3137
Dubinski, J., Mihos, J. C., & Hernquist, L. 1999, ApJ, 526, 607
Giavalisco, M. et al. 2004, , 600, L93
Gladders, M. D., & Yee, H. K. C. 2005 , 157, 1
Glazebrook, K., Peacock, J. A., Collins, C. A., & Miller, L. 1994, , 266, 65
Gottlöber, S., Klypin, A., & Kravtsov, A. V. 2001, , 546, 223
Governato, F., Gardner, J. P., Stadel, J., Quinn, T., & Lake, G. 1999, , 117, 1651
Heavens, A., Panter, B., Jimenez, R. & Dunlop, J. 2004, Nature, 428, 625
Hsieh, B. C., Yee, H. K. C., Lin, H., & Gladders, M. D. 2005, , 158, 161
Juneau, S., et al. 2005, , 619, L135
Kartaltepe, J. S., et al. 2007, , 172, 320
Kodama, T., et al. 2004, , 350, 1005
Lavery, R. J., Remijan, A., Charmandaris, V., Hayes, R. D., & Ring, A. A. 2004, , 612, 679
Le Fèvre, O., et al. 2000, , 311, 565
Lin, H., Yee, H. K. C., Carlberg, R. G., Morris, S. L., Sawicki, M., Patton, D. R., Wirth, G., & Shepherd, C. W. 1999, , 518, 533
Lin, L., et al. 2004, , 617, L9
Lotz, J. M. et al. 2006, astro-ph/0602088
McCarthy, P. J., et al. 2004, , 614, L9
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, , 490, 493
Neuschaefer, L. W., Im, M., Ratnatunga, K. U., Griffiths., R. E., & Casertano, S. 1997, , 480, 59
Patton, D. R., Pritchet, C. J., Yee, H. K. C., Ellingson, E., & Carlberg, R. G. 1997, , 475, 29
Patton, D. R., Carlberg, R. G., Marzke, R. O., Pritchet, C. J., da Costa, L. N., & Pellegrini, P. S. 2000, , 536, 153
Patton, D. R., et al. 2002, , 565, 208
Pollo, A., et al. 2006, A&A, 451, 409
Puech, M., Hammer, F., Lehnert, M. D., & Flores, H. 2007, A&A, 466, 83
Puech, M., Hammer, F., Flores, H, Neichel, B., Yang, Y., & Rodrigues, M. 2007, A&A, 476, 21
Reshetnikov, V. P. 2000, A&A, 353, 92
Treu, T., Ellis, R. S., Liao, T. X., & van Dokkum, P. G. 2005, , in press
Wirth, G. D. et al. 2004, , 127, 3121
Woods, D., Fahlman, G. G., & Richer, H. B. 1995, , 454, 32
Yee, H. K. C. 1991, , 103, 396
Yee, H. K. C., & Ellingson, E. 1995, , 445, 37
Yee, H. K. C. et al. 2000, , 129, 475
Yee, H. K. C., & Ellingson, E. 2003, , 585, 215
Zepf, S. E., & Koo, D. C. 1989, , 337, 34
[^1]: <http://www.sdss.org/dr5/>
[^2]: <http://www.sdss.org/dr4/algorithms/sdssUBVRITransform.html>
[^3]: <http://deep.berkeley.edu/DR2/>
---
abstract: 'The macroturbulent atmospheric circulation of Earth-like planets mediates their equator-to-pole heat transport. For fast-rotating terrestrial planets, baroclinic instabilities in the mid-latitudes lead to turbulent eddies that act to transport heat poleward. In this work, we derive a scaling theory for the equator-to-pole temperature contrast and bulk lapse rate of terrestrial exoplanet atmospheres. This theory is built on the work of [@Jansen:2013], and determines how unstable the atmosphere is to baroclinic instability (the baroclinic “criticality”) through a balance between the baroclinic eddy heat flux and radiative heating/cooling. We compare our scaling theory to General Circulation Model (GCM) simulations and find that the theoretical predictions for equator-to-pole temperature contrast and bulk lapse rate broadly agree with GCM experiments with varying rotation rate and surface pressure throughout the baroclinically unstable regime. Our theoretical results show that baroclinic instabilities are a strong control of heat transport in the atmospheres of Earth-like exoplanets, and our scalings can be used to estimate the equator-to-pole temperature contrast and bulk lapse rate of terrestrial exoplanets. These scalings can be tested by spectroscopic retrievals and full-phase light curves of terrestrial exoplanets with future space telescopes.'
author:
- 'Thaddeus D. Komacek$^{1}$, Malte F. Jansen$^{1}$, Eric T. Wolf$^{2}$, and Dorian S. Abbot$^{1}$'
bibliography:
- 'References\_terrestrial.bib'
title: Scaling Relations for Terrestrial Exoplanet Atmospheres from Baroclinic Criticality
---
Introduction {#sec:intro}
============
We are approaching an era in which the atmospheres of terrestrial exoplanets will be detectable with both space-based [@Kreidberg:2016aa; @Morley:2017aa] and ground-based [@Snellen:2015aa; @Baker:2019aa; @Molliere:2019aa] observatories. Observations with the future space telescopes LUVOIR/HabEx/OST will constrain the molecular abundances and atmospheric structure of terrestrial exoplanets orbiting Sun-like stars [@Feng:2018aa], and their reflectance as a function of orbital phase will be inverted to derive maps of the planetary surface [@Cowan2009; @Kawahara2010; @Cowan2013; @Lustig-Yaeger:2018aa]. These observations will determine if theories developed to understand the climate of Earth apply throughout the wide parameter space of exoplanets.\
The potential temperature, $\theta$, is a measure of entropy, and represents the temperature that a parcel would have if it were adiabatically displaced to a standard reference pressure. The slope of the potential temperature with height indicates whether the atmosphere is unstable or stable to dry convection. If the atmosphere is unstable to dry convection, the potential temperature decreases with height ($d\theta/dz <0$); if the atmosphere is stable, the potential temperature increases with height ($d\theta/dz > 0$); and if the atmosphere is neutral, the potential temperature is constant with height ($d\theta/dz = 0$). In an atmosphere unstable to dry convection, turbulent motions relax the troposphere toward constant potential temperature, causing the temperature profile to be that of a dry adiabat [@Pierrehumbert:2010]. Earth’s atmosphere is stable to dry convection, but near neutrality to moist convection in the tropics.\
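As a brief illustration, the potential temperature and the corresponding dry-stability classification can be computed from a temperature-pressure profile using the standard definition $\theta = T\left(p_0/p\right)^{R/c_p}$; the sketch below assumes a dry N$_2$-dominated atmosphere with $R/c_p \approx 2/7$ and a hypothetical profile layout.

```python
import numpy as np

R_CP = 2.0 / 7.0   # kappa = R/c_p for a dry, diatomic (N2-dominated) atmosphere

def potential_temperature(T, p, p0=1.0e5):
    """theta = T * (p0 / p)^(R/c_p), with p in Pa and T in K."""
    return T * (p0 / p)**R_CP

def dry_stability(T, p):
    """Classify a profile (arrays ordered from the surface upward, p decreasing):
    returns +1 (stable), 0 (neutral), -1 (unstable) from the sign of dtheta/dz.
    Since p decreases with height, dtheta/dz > 0 corresponds to dtheta/dp < 0."""
    theta = potential_temperature(np.asarray(T), np.asarray(p))
    dtheta_dp = np.gradient(theta, p)
    return int(np.sign(np.mean(-dtheta_dp)))   # minus: convert d/dp to d/dz sense
```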
On Earth, the equator-to-pole temperature contrast is approximately the same as the potential temperature contrast between the surface and the tropopause. This means that isentropes (contours of constant entropy or potential temperature) in Earth’s atmosphere that lie at the surface at the equator slope up to the tropopause at the pole. This slope is such that Earth’s atmosphere is marginally unstable to certain types of baroclinic instability, which lead to the storms that represent weather in Earth’s mid-latitudes [@Charney:1967aa; @Eady:1949aa; @Vallis:2006aa; @Showman_2013_terrestrial_review]. The criticality of the atmosphere to baroclinic instability can be characterized by a baroclinic criticality parameter, $\xi$, which is $\approx 1$ for marginally critical conditions, $> 1$ for baroclinically unstable circulation, and $<1$ for stable flow. Earth’s atmosphere happens to have $\xi \approx 1$ [@Stone:1978aa].\
Based on GCM simulations, [@Schneider:1961aa] and [@Schneider:2006aa] argued that Earth’s atmosphere is forced to this marginally baroclinically unstable state. However, in a series of papers, [@Jansen:2012a; @Jansen:2012; @Jansen:2013] showed theoretically and numerically that varying atmospheric properties can cause the atmosphere to adjust to different baroclinic criticality parameters. Additionally, @Jansen:2013 built upon previous quasi-geostrophic theory [@Held:1996aa; @Held:2007aa] to derive scaling relations for the baroclinic criticality parameter $\xi$. The baroclinic criticality parameter is directly related to isentropic slopes in the atmosphere, which in turn are controlled by the ratio of the equator-to-pole and surface-to-tropopause potential temperature contrasts. Using an additional constraint on the relationship between the horizontal and vertical heat transport, @Jansen:2013 further propose separate scalings for the equator-to-pole and surface-to-tropopause potential temperature contrasts.\
In this work, we use the theory of @Jansen:2013 to derive scalings for how the equator-to-pole temperature contrast and the bulk lapse rate of Earth-like exoplanets depend on planetary parameters. We define the bulk lapse rate as the minimum of the potential temperature contrast from the surface to the tropopause and the potential temperature contrast over one scale height. We compare our theory to results from a sophisticated exoplanet GCM to show the applicability of our scalings to terrestrial exoplanet atmospheres. Similar GCMs have been used previously to study how the circulation of terrestrial exoplanets orbiting Sun-like stars depends on a broad range of planetary parameters [@Yang:2014; @Kaspi:2014; @Popp:2016; @chemke:2017; @Way:2017aa; @Wolf:2017; @Jansen:2018aa; @Kang:2019aa; @Kang:2019ab]. We focus on planets the size of Earth to isolate the dependence of baroclinic criticality on planetary rotation rate and surface pressure. We find that the theory of @Jansen:2013 applies throughout the regime of baroclinically unstable and marginally critical atmospheres of Earth-sized exoplanets.\
This paper is organized as follows. In Section \[sec:theory\], we build upon the work of @Jansen:2013 to develop scalings for the baroclinic criticality parameter as a function of planetary parameters and use this to predict the scaling of the equator-to-pole temperature contrast and bulk lapse rate with planetary parameters. We outline our numerical setup in Section \[sec:methods\], and then compare our GCM results to our theoretical scalings in Section \[sec:comp\]. Lastly, we discuss the application of our results to interpret future exoplanet observations, describe the limitations of our theory, and state conclusions in Section \[sec:disc\].
Theoretical scalings {#sec:theory}
====================
Criticality parameter
---------------------
The baroclinic criticality parameter is related to the ratio of the equator-to-pole and surface-to-tropopause potential temperature contrasts, $\xi \sim \Delta_h \bar{\theta}/\Delta_v\bar{\theta}$, where the overbars represent a zonal average and $\Delta$ represents a difference taken across the troposphere. Given that the slope of atmospheric isentropes is $s = -\left(\partial \bar{\theta}/\partial y\right)/\left(\partial \bar{\theta}/\partial z\right)$, where $y$ is latitude and $z$ is height, we can relate the criticality parameter to the bulk slope of atmospheric isentropes as [@Jansen:2013] $$\label{eq:crit}
\xi = s\frac{a}{H}\mathrm{,}$$ where $a$ is the planetary radius and $H$ is the height of the tropopause. The derivation in [@Jansen:2013] assumes a Boussinesq approximation, which is appropriate only if the tropopause height is small compared to the atmospheric scale height. In the opposite limit, the relevant height scale $H$ in [Equation(\[eq:crit\])]{} is the atmospheric scale height (e.g., [@Chai:2014aa]). In this work, we take $H$ to be the minimum of the tropopause height and scale height.\
We define the length of an isentrope to be the smaller of the length over which it rises from the surface to the tropopause or one scale height. As a result, we can write the slope of isentropes as the ratio of their height to their length, that is $s = H/l$. As in [@Jansen:2013], we scale the length of isentropes with the distance that baroclinic eddies can diffusively mix the atmosphere over a radiative relaxation timescale $\tau_\mathrm{rad}$, $$l_\mathrm{diff} \sim \sqrt{\tau_\mathrm{rad}D_\mathrm{eddy}} \mathrm{,}$$ where $D_\mathrm{eddy}$ is a characteristic eddy diffusivity. Now, we rewrite the slope of isentropes as $$s \sim \frac{H}{\sqrt{\tau_\mathrm{rad}D_\mathrm{eddy}}} \mathrm{,}$$ and plugging this expression for $s$ into [Equation(\[eq:crit\])]{} we find that the criticality parameter scales as $$\label{eq:criteddy}
\xi \sim \frac{a}{\sqrt{\tau_\mathrm{rad}D_\mathrm{eddy}}}.$$ To relate the eddy diffusivity $D_\mathrm{eddy}$ to basic-state properties of the atmosphere, we assume that the eddy diffusivity scales as $D_\mathrm{eddy} \sim U L_\mathrm{Rh}$, where $U$ is a characteristic eddy velocity. $L_\mathrm{Rh} \approx \sqrt{U/\beta}$ is the Rhines scale, which is the scale at which the growth of macroturbulent eddies is arrested by differential rotation, where $\beta = df/dy = 2\Omega \mathrm{cos}\left(\phi\right)/a$ is the change in the Coriolis parameter $f$ with latitudinal distance, where $\Omega$ is the rotation rate. We then use scalings from [@Held:1996aa] that relate the Rhines scale to the length scale at which gravity waves are affected by rotation (the Rossby deformation length $L_\mathrm{d}$) as $L_\mathrm{Rh} \sim \xi L_\mathrm{d}$. The Rossby deformation length $L_\mathrm{d} \approx \left(\partial_z \sqrt{b}\right) H/f$, where $b \approx g \alpha {\left(\theta-\theta_0\right)}$ is buoyancy, $g$ is gravity, $\alpha$ is the thermal expansion coefficient of air, and $\theta_0$ is a reference potential temperature. Solving for the eddy diffusivity, we recover the scaling of [@Held:1996aa]: $$\label{eq:eddy}
D_\mathrm{eddy} \sim \beta \left(\xi L_\mathrm{d}\right)^3 \mathrm{.}$$\
Inserting [Equation(\[eq:eddy\])]{} into [Equation(\[eq:criteddy\])]{} and assuming that the radiative relaxation timescale scales as $\tau_\mathrm{rad} \propto p/(gT^3)$ [@showman_2002], where $p$ is pressure and $T$ is temperature, we arrive at a scaling for the criticality parameter as a function of planetary parameters: $$\label{eq:xiscaling}
\xi \sim \xi_\varoplus \left(\frac{\Omega}{\Omega_\varoplus}\right)^{2/5} \left(\frac{p}{p_\varoplus}\right)^{-1/5} \left(\frac{H}{H_\varoplus}\right)^{-3/5} \left(\frac{a}{a_\varoplus}\right)^{3/5} \left(\frac{g}{g_\varoplus}\right)^{-1/10} \mathrm{,}$$ where $\xi_\varoplus \sim 1$ is the criticality parameter on Earth and all parameters are normalized to their values on Earth. Note that [Equation(\[eq:xiscaling\])]{} assumes that changes in $\partial_z \theta$ are relatively small, which is consistent with the results discussed below. [Equation(\[eq:xiscaling\])]{} suggests that faster-rotating planets with less massive atmospheres will be more unstable to baroclinic instabilities.
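For convenience, [Equation(\[eq:xiscaling\])]{} can be transcribed directly into code; the sketch below is a minimal implementation in which all inputs are normalized to their Earth values, and the example numbers are purely illustrative.

```python
def criticality_scaling(omega=1.0, p=1.0, H=1.0, a=1.0, g=1.0, xi_earth=1.0):
    """Baroclinic criticality parameter relative to Earth, with all inputs
    (rotation rate, surface pressure, height scale, radius, gravity)
    normalized to their Earth values."""
    return (xi_earth * omega**(2.0 / 5.0) * p**(-1.0 / 5.0)
            * H**(-3.0 / 5.0) * a**(3.0 / 5.0) * g**(-1.0 / 10.0))

# e.g. an Earth-sized planet rotating four times faster with a 0.5 bar atmosphere
xi = criticality_scaling(omega=4.0, p=0.5)   # ~ 4**0.4 * 0.5**-0.2 ~ 2.0
```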
Equator-to-pole temperature contrast and bulk lapse rate
--------------------------------------------------------
[@Jansen:2013] showed that the baroclinic criticality parameter can be linked to the equator-to-pole temperature contrast and bulk lapse rate, the latter of which is related to the surface-to-tropopause temperature contrast. As in [@Jansen:2013], for the purposes of this scaling derivation we will approximate the radiative heating/cooling as a Newtonian relaxation of potential temperature to an equilibrium value over the radiative timescale, with $d\theta/dt = -\left(\theta-\theta_\mathrm{eq}\right)/\tau_\mathrm{rad}$. Then, the vertical eddy heat flux needed to balance diabatic heating/cooling scales as $$\label{eq:feddyv1}
F_\mathrm{eddy,v} \sim H \frac{\Delta_v \bar{\theta} - \Delta_v \bar{\theta}_\mathrm{eq}}{\tau_\mathrm{rad}} \mathrm{,}$$ where $\Delta_v$ denotes a vertical contrast, taken from the tropopause to the surface ($\Delta_v \theta = \theta_\mathrm{tropopause} - \theta_\mathrm{surface}$).\
Similarly, the horizontal eddy heat flux has to balance the diabatic heating and cooling at low and high latitudes. This allows us to relate the horizontal eddy heat flux $F_\mathrm{eddy,h}$ and the horizontal temperature contrast taken from equator to pole ($\Delta_h\theta = \theta_\mathrm{equator} - \theta_\mathrm{pole}$): $$\label{eq:deltah}
F_\mathrm{eddy,h} \sim -a \frac{(\Delta_h \bar{\theta} - \Delta_h \bar{\theta}_\mathrm{eq})}{\tau_\mathrm{rad}} \mathrm{.}$$ Following [@Jansen:2013], we assume that the eddy heat flux is directed along isentropes. Additionally, we relate the ratio of the horizontal to vertical potential temperature contrasts (which scales directly with the criticality parameter) to the isentropic slope as $\Delta_h\bar{\theta}/\Delta_v\bar{\theta} \sim sa/H$. This allows us to relate the vertical eddy heat flux to the horizontal eddy heat flux as $$\label{eq:deltav}
F_\mathrm{eddy,v} \sim s F_\mathrm{eddy,h} \sim \frac{H}{a} \frac{\Delta_h \bar{\theta}}{\Delta_v \bar{\theta}} F_\mathrm{eddy,h} \mathrm{.}$$ Combining Equations (\[eq:feddyv1\]), (\[eq:deltah\]) and (\[eq:deltav\]), we relate the vertical and horizontal potential temperature contrasts as: $$\label{eq:algebra}
\Delta_v\bar{\theta}\left(\Delta_v\bar{\theta}-\Delta_v\bar{\theta}_\mathrm{eq}\right) = -\Delta_h\bar{\theta}\left(\Delta_h\bar{\theta}-\Delta_h\bar{\theta}_\mathrm{eq}\right) \mathrm{.}$$ Substituting $\xi \sim \Delta_h\bar{\theta}/\Delta_v\bar{\theta}$ into [Equation(\[eq:algebra\])]{} and assuming a stable background atmosphere with $\Delta_v \theta_\mathrm{eq} \approx 0$, we relate the horizontal potential temperature contrast individually to the criticality parameter $$\label{eq:deltahixi}
\Delta_h\bar{\theta} \sim \frac{\Delta_h\theta_\mathrm{eq}}{1+\xi^{-2}} \mathrm{.}$$ Dividing through by $\xi$, we relate the bulk lapse rate to the criticality parameter as $$\label{eq:bulklapse}
\Delta_v\bar{\theta} \sim \frac{\Delta_h\theta_\mathrm{eq}}{\xi+\xi^{-1}} \mathrm{.}$$ [Equation(\[eq:bulklapse\])]{} has a maximum in bulk lapse rate for $\xi = 1$, which is the regime that Earth itself is in.\
Substituting our scaling for the baroclinic criticality parameter as a function of planetary parameters from [Equation(\[eq:xiscaling\])]{} into [Equation(\[eq:deltahixi\])]{}, we write a scaling for the equator-to-pole temperature contrast as a function of planetary parameters: $$\begin{aligned}
\label{eq:deltahscaling}
\Delta_h\bar{\theta} \sim \left(\Delta_h\theta_\mathrm{eq}\right) \bigg[1+ & \xi_\varoplus^{-2} \left(\frac{\Omega}{\Omega_\varoplus}\right)^{-4/5} \left(\frac{p}{p_\varoplus}\right)^{2/5} \\ & \left(\frac{H}{H_\varoplus}\right)^{6/5} \left(\frac{a}{a_\varoplus}\right)^{-6/5} \left(\frac{g}{g_\varoplus}\right)^{1/5}\bigg]^{-1} \mathrm{,}
\end{aligned}$$ where $\left(\Delta_h\theta_\mathrm{eq}\right)$ is the equilibrium equator-to-pole potential temperature contrast. Similarly, we substitute our scaling for the criticality parameter from [Equation(\[eq:xiscaling\])]{} into [Equation(\[eq:bulklapse\])]{} to find a scaling for the bulk lapse rate as a function of planetary parameters: $$\begin{aligned}
\label{eq:deltavscaling}
\Delta_v\theta \sim & \left(\Delta_h\theta_\mathrm{eq}\right) \bigg[\xi_\varoplus \left(\frac{\Omega}{\Omega_\varoplus}\right)^{2/5} \left(\frac{p}{p_\varoplus}\right)^{-1/5} \left(\frac{H}{H_\varoplus}\right)^{-3/5} \\ & \left(\frac{a}{a_\varoplus}\right)^{3/5} \left(\frac{g}{g_\varoplus}\right)^{-1/10} + \xi_\varoplus^{-1} \left(\frac{\Omega}{\Omega_\varoplus}\right)^{-2/5} \left(\frac{p}{p_\varoplus}\right)^{1/5} \\ & \left(\frac{H}{H_\varoplus}\right)^{3/5} \left(\frac{a}{a_\varoplus}\right)^{-3/5} \left(\frac{g}{g_\varoplus}\right)^{1/10} \bigg]^{-1} \ \mathrm{.}
\end{aligned}$$\
We expect that the equator-to-pole temperature contrast increases for faster rotation rates and smaller surface pressures, until the limit where $\xi \gg 1$ when the temperature gradient approaches its radiative equilibrium value. We expect that the bulk lapse rate increases with faster rotation rates and smaller surface pressures for $\xi < 1$, increases with slower rotation rates and larger surface pressures when $\xi >1$, and has a maximum near $\xi = 1$. Next, we will introduce our numerical simulations, which will then be compared to our analytic scaling predictions for the criticality parameter, equator-to-pole temperature contrast, and bulk lapse rate in [Section \[sec:comp\]]{}.
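A minimal numerical sketch of [Equation(\[eq:deltahixi\])]{} and [Equation(\[eq:bulklapse\])]{} is given below; the equilibrium equator-to-pole contrast used in the example is an arbitrary illustrative value, not a model result.

```python
def temperature_contrasts(xi, delta_h_eq):
    """Equator-to-pole potential temperature contrast and bulk lapse rate,
    given the criticality parameter xi and the equilibrium equator-to-pole
    potential temperature contrast (in K)."""
    delta_h = delta_h_eq / (1.0 + xi**-2)    # equator-to-pole contrast
    delta_v = delta_h_eq / (xi + 1.0 / xi)   # bulk lapse rate
    return delta_h, delta_v

# illustrative numbers: xi = 1 (Earth-like) and an assumed 60 K equilibrium contrast
dh, dv = temperature_contrasts(1.0, 60.0)    # dh = 30 K, dv = 30 K
```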
Numerical Methods {#sec:methods}
=================
{width="100.00000%"}
To compare with our scaling theory, we perform simulations with the [[ExoCAM]{}]{} GCM. [[ExoCAM]{}]{} is a version of the Community Atmosphere Model version 4 with upgraded radiative transfer and water vapor absorption coefficients for application to exoplanets, and has been used for a wide range of exoplanet studies [@Kopp:2016; @kopparapu2017; @Wolf:2017; @Wolf:2017aa; @Haqq2018; @Komacek:2019aa; @Yang:2019aa]. In this work, we use the same basic model setup as [@kopparapu2017; @Haqq2018; @Komacek:2019aa], and [@Yang:2019aa]: we consider aquaplanets without continents that have immobile slab oceans with a depth of $50~\mathrm{m}$ and an atmosphere composed only of N$_2$ and H$_2$O. We consider planets with zero obliquity orbiting a Sun-like star, with rotation rates varying from $0.0625$ to $8~\Omega_\varoplus$, where $\Omega_\varoplus$ is the rotation rate of Earth, and surface pressures varying from $0.25$ to $4~\mathrm{bars}$. When varying rotation rate, we keep the surface pressure fixed at 1 bar. Similarly, when we vary surface pressure, we keep the rotation rate fixed at $1~\Omega_\varoplus$.\
These simulations are the same as those in [@Komacek:2019aa], except that we extend the suite of simulations to faster rotation rates. All simulations assume an incident stellar flux of $1360.8 \ \mathrm{W} \ \mathrm{m}^{-2}$, equal to that of Earth. The majority of these simulations use a horizontal resolution of $4^\circ \times 5^\circ$ with 40 vertical levels and a timestep of $30~\mathrm{minutes}$. However, we use a horizontal resolution of $0.47^\circ \times 0.63^\circ$ and a timestep of $7.5~\mathrm{minutes}$ for the fastest rotating case, which has a rotation rate 8 times greater than that of Earth.\
[Figure \[fig:T\]]{} shows maps of near-surface potential temperature, zonal wind, and eddy component of the zonal wind from a subset of our GCM simulations with varying rotation rates of $1-8 \Omega_\varoplus$. The eddy component of the zonal wind is $u' = u - \bar{u}$, where $u$ is the zonal wind and $\bar{u}$ is the zonal-mean of the zonal wind. We find that the width of tropical regions decreases and the number of zonal jets increases with increasing rotation rate, as expected from previous work [@Rhines:1975aa; @Williams:1982aa; @Kaspi:2014; @Wang:2018]. We also find that the scale of mid-latitude eddies decreases with increasing rotation rate due to the decreasing deformation radius, as expected from previous work [@Schneider:2006aa; @Kaspi:2011aa; @Kaspi:2013aa; @Wang:2018]. This basic understanding qualitatively agrees with that from our scaling theory, which found that the baroclinic criticality parameter should increase with increasing rotation rate.
GCM-Theory comparison {#sec:comp}
=====================
Criticality parameter
---------------------
{width="100.00000%"}
{width="100.00000%"}
{width="100.00000%"}
We calculate the baroclinic criticality parameter from our GCM results similarly to [@Jansen:2013]: $$\label{eq:criticalityfromgcm}
\xi = \frac{a\left<\partial \bar{\theta}/\partial y\right>}{H\left<\partial \bar{\theta}/\partial z\right>} \mathrm{,}$$ where $a = 6.37\times10^6~\mathrm{m}$ is the planetary radius and $H$ is the smaller of the height of the tropopause and scale height averaged between $30^\circ-80^{\circ}$ latitude. The ratio of partial derivatives of potential temperature with respect to latitude and height gives the slope of isentropes from our simulations. We average these partial derivatives from a pressure of $0.5 p_s$ (where $p_s$ is the surface pressure) to the surface and between $30^\circ-80^{\circ}$ latitude. [Equation(\[eq:criticalityfromgcm\])]{} calculates the criticality parameter as an average of local isentropic slopes, which is not equivalent to the ratio of the equator-to-pole temperature contrast and bulk lapse rate. We describe the calculation of the equator-to-pole temperature contrast and bulk lapse rate from our GCM experiments in [Section \[sec:tempcontrasts\]]{}.\
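A minimal sketch of this diagnostic is given below. It assumes the potential temperature field has already been zonally and temporally averaged, restricted to levels with $p > 0.5\,p_s$, and placed on regular latitude and height grids; the variable names and the exact averaging conventions are ours and may differ in detail from the analysis applied to the GCM output.

```python
import numpy as np

def criticality_from_gcm(theta, lat_deg, z, a=6.37e6, H=8.0e3,
                         lat_band=(30.0, 80.0)):
    """Equation (criticalityfromgcm): xi = a <d(theta)/dy> / (H <d(theta)/dz>).
    theta[lat, z] in K, lat_deg in degrees, z in metres."""
    y = a * np.deg2rad(lat_deg)                  # meridional distance coordinate
    dth_dy = np.gradient(theta, y, axis=0)
    dth_dz = np.gradient(theta, z, axis=1)

    band = (np.abs(lat_deg) >= lat_band[0]) & (np.abs(lat_deg) <= lat_band[1])
    return a * np.abs(dth_dy[band]).mean() / (H * dth_dz[band].mean())
```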
[Figure \[fig:crit\]]{} shows a comparison of the baroclinic criticality parameter from our GCM results (solid lines) with the scaling theory from [Equation(\[eq:xiscaling\])]{} (dashed lines) for varying rotation rate and surface pressure. In this comparison, we set $\xi = \xi_\varoplus$ for an Earth-like rotation rate and surface pressure. We find that the scaling of criticality parameter as $\Omega^{2/5}$ from [Equation(\[eq:xiscaling\])]{} is in good agreement with simulations that have rotation rates $\Omega \ge 0.5 \Omega_\varoplus$ (left panel of [Figure \[fig:crit\]]{}). The under-prediction of the criticality parameter at slower rotation rates is not surprising, as planets with rotation rates of $\Omega \lesssim 1/3 \Omega_\varoplus$ have weak baroclinic effects because the deformation radius is larger than the planetary circumference.\
We also find good agreement between our theoretical scaling of the criticality parameter with surface pressure and GCM experiments (right hand panel of [Figure \[fig:crit\]]{}). Note that for the scaling with surface pressure, we include the minimum of the tropopause height and scale height calculated from our GCM experiments in the predicted criticality, because the tropopause height increases with increasing surface pressure. As a result, the decrease of baroclinic criticality with increasing surface pressure is due to the combined effects of increasing surface pressure and increasing tropopause height.
Equator-to-pole temperature contrast and bulk lapse rate {#sec:tempcontrasts}
--------------------------------------------------------
[Figure \[fig:delta\]]{} shows a comparison of the equator-to-pole temperature contrast in the GCM experiments (solid lines) with our theory from Equation (\[eq:deltahscaling\]) (dashed lines) for varying rotation rate and surface pressure. In the GCM results, we average the equator-to-pole temperature contrast from $0.5 p_s$ to the surface. We keep $\Delta_h \theta_\mathrm{eq}$ fixed at $242 \ \mathrm{K}$, which is the radiative equilibrium emission temperature contrast between the equator and the pole for a zero obliquity planet with Earth’s incident stellar flux. Our estimate of $\Delta_h \theta_\mathrm{eq}$ assumes an albedo of $0.55$, which is the albedo of our simulation with an Earth-like value of rotation rate and surface pressure. In keeping $\Delta_h \theta_\mathrm{eq}$ fixed, we ignore changes in the albedo due to changes in the sea ice cover. Notice that, with this choice, there is no tunable parameter in our equations for the equator-to-pole temperature contrast and bulk lapse rate.\
We find that the theoretical scalings for the equator-to-pole temperature contrast broadly match those from our GCM experiments, although not as well as for the criticality parameter. The equator-to-pole temperature contrast increases with increasing rotation rate and decreasing surface pressure in our GCM experiments, due to the increased criticality parameter leading to reduced eddy length scales. Note that the equator-to-pole temperature contrast is greatly reduced for simulations with surface pressures of 2 and 4 bars because they are in an ice-covered state due to the lack of CO$_2$. The change from partial to total ice cover causes a change in $\Delta_h\theta_\mathrm{eq}$, which is not accounted for in our prediction.\
[Figure \[fig:bulk\]]{} shows the bulk lapse rate calculated from our suite of GCM experiments, averaged between $30-80^\circ$ latitude, along with predictions from our theoretical scaling (Equation \[eq:deltavscaling\]). We find that the bulk lapse rate does not show as strong of a dependence on planetary parameters as the equator-to-pole temperature contrast because our simulations are all near the $\xi \approx 1$ regime. Our theoretical prediction for the bulk lapse rate increases with increasing rotation rate throughout the parameter regime studied, with a maximum at $\approx 8 \Omega_\varoplus$. This is because $\xi < 1$ in our simulations for rotation rates $\Omega < 8 \Omega_\varoplus$ (see [Figure \[fig:crit\]]{}), and when $\xi < 1$ the bulk lapse rate scales with the criticality parameter (see Equation \[eq:bulklapse\]). Our prediction for the bulk lapse rate is not strongly dependent on pressure, but does have a maximum at a pressure around $0.5 \ \mathrm{bars}$. This maximum at $0.5 \ \mathrm{bars}$ occurs because at this surface pressure $\xi \approx 1$, while at lower pressures the criticality parameter $\xi > 1$ and at higher pressures $\xi < 1$. Our simulations also show a maximum in bulk lapse rate at an intermediate surface pressure, though the maximum occurs at a slightly larger pressure than expected from the theory.
Discussion & Conclusions {#sec:disc}
========================
Our predictions for the equator-to-pole temperature contrast and bulk lapse rate can be tested by future observations of terrestrial exoplanet atmospheres with LUVOIR/HabEx/OST. We predict that the equator-to-pole temperature contrast should increase with increasing rotation rate and decreasing surface pressure. Our theoretical predictions can be tested by observationally constraining the equator-to-pole temperature contrast. It may be possible to constrain the equator-to-pole temperature contrast for planets with non-zero obliquity, as both their polar and equatorial regions are visible throughout one planetary rotation [@Cowan:2018aa; @Olson:2018aa]. Our scalings show that, when varying only rotation rate and considering fixed incident stellar flux, surface pressure, and atmospheric composition, planets with faster rotation rates will have colder poles. As a result, planets in a cold climate regime like those simulated in this work should have wider ice coverage with increasing rotation rate, which could be detectable through albedo variations over planetary orbital phase.\
We predict that the bulk lapse rate is only weakly dependent on planetary parameters relative to the equator-to-pole temperature contrast. We also found that the bulk lapse rate has a maximum for planets with baroclinic criticality parameters $\xi \approx 1$. For Earth-like planets with rotation rates $\Omega \lesssim 8 \Omega_\varoplus$, we find in our GCM experiments that the bulk lapse rate increases with rotation rate, while for faster rotating planets the bulk lapse rate should decrease with increasing rotation rate. We also find that the bulk lapse rate is very weakly dependent on surface pressure, with less than a factor of two variation when increasing the surface pressure from 0.25 to 4 bars. The bulk lapse rate is related to the vertical potential temperature contrast. Planets with greater vertical potential temperature contrasts have more stably stratified atmospheres with smaller in-situ temperature lapse rates. As a result, a large bulk (potential temperature) lapse rate implies a small in-situ temperature lapse rate. In this way, our predictions for the bulk lapse rate can be tested with inverse (retrieval) methods that constrain the atmospheric lapse rate from an observed spectrum, which have been applied widely to interpret previous exoplanet observations [@Madhusudhan:2009; @Benneke:2012; @Line:2013; @Waldmann:2015aa; @Feng:2016aa]. Retrieval methods can determine the height of the top of the tropospheric cloud layer [@Feng:2018aa], which occurs approximately at the tropopause. Given the height of the tropopause, we can relate the bulk lapse rate directly to the temperature lapse rate and compare it to our theoretical prediction.\
We tested our analytic theory using simulations of terrestrial exoplanets with varying rotation rate and surface pressure. However, this suite of simulations was limited in extent, as we kept the atmospheric composition and host stellar spectrum fixed. We only considered an N$_2$-H$_2$O atmosphere without additional greenhouse gases, and as a result our simulations are in a relatively cold climate regime with large sea-ice coverage, relatively little water vapor, and weak latent heat transport. If our simulations were instead in a warm climate regime, for example by including an Earth-like complement of greenhouse gases, the equator-to-pole temperature contrast would be smaller [@Kaspi:2014; @Jansen:2018aa]. Additionally, we used the Solar incident spectrum for our suite of simulations. Near-IR absorption by water vapor could change the bulk lapse rate from that predicted by our theory for non-tidally locked planets that orbit later spectral type host stars than the Sun [@Cronin:2016].\
Our theoretical model, while applicable throughout a wide range of parameter space, can only be applied to predict the atmospheric circulation of planets that have baroclinically unstable atmospheres. Baroclinic effects are significantly weaker on slowly rotating planets that have deformation radii which are larger than the planetary circumference. As a result, our theory does not apply to Earth-sized planets with rotation periods $\gtrsim 3~\mathrm{days}$. Note that Earth-like planets could have a wide range of spin rates, with rotation periods ranging from $1-10^4 \ \mathrm{hr}$ due to the stochastic nature of the oligarchic stage of planetary growth [@Miguel:2010aa]. Additionally, Earth itself rotated faster in the past, as its rotation is slowed due to tidal interactions with the Moon. The theory presented here cannot be applied to slowly-rotating terrestrial planets orbiting Sun-like stars or tidally locked planets orbiting M dwarf stars. However, [@Wordsworth:2014] and [@Koll:2016] have developed theories for the temperature structure and wind speeds of tidally locked terrestrial planets. Additionally, theories developed to understand circulation in the tropical regions of Earth [@Held:1980; @held:2000; @Sobel:2001; @chemke:2017] can be used to understand the atmospheric heat transport of slowly rotating planets orbiting Sun-like stars.\
Our scaling theory could also help with the interpretation of observations of warm Jupiter and warm Neptune exoplanets with the James Webb Space Telescope (JWST). Baroclinic instabilities play a key role in driving the zonal jets in Jupiter’s atmosphere [@Williams:1978aa; @Gierasch:1979aa; @Williams:1979aa; @Kaspi:2006aa; @Lian:2008aa; @Young:2019aa], and likely affect the circulation of fast-rotating gas giant exoplanets. Previous work has studied how the atmospheric circulation of warm Jupiters depends on rotation rate, incident stellar flux, and obliquity [@Showman:2014; @Rauscher:2017aa]. [@Showman:2014] found that the equator-to-pole temperature contrast of warm Jupiters increases with increasing rotation rate, as expected from our scaling theory. Emission spectra with JWST will constrain the temperature-pressure profiles of warm Jupiters and warm Neptunes [@Greene:2015], which may also be affected by baroclinic instabilities.\
In this work, we extended Earth-based theoretical scalings for baroclinic instability to estimate basic quantities of the circulation of terrestrial exoplanets. Our scalings can be used to predict the baroclinic criticality parameter throughout the baroclinically unstable regime of terrestrial exoplanets. As the criticality parameter is directly linked to the slope of isentropes, we apply these scalings to estimate the equator-to-pole temperature contrast and bulk lapse rate. These scalings predict that the equator-to-pole temperature contrast increases with increasing rotation rate and decreasing surface pressure, while the bulk lapse rate depends relatively weakly on variations in planetary parameters around Earth-like values and has a maximum when the criticality parameter is near one. We find reasonable agreement between the equator-to-pole temperature contrast and bulk lapse rate predicted by our scaling theory and simulated using a detailed GCM. This agreement extends throughout the parameter regime of Earth-sized exoplanets with rotation rates $\gtrsim 1/3 \Omega_\varoplus$, but for more slowly rotating planets baroclinic effects are small and our theory no longer applies.

We thank the referee for helpful comments that improved the manuscript. This work was supported by the NASA Astrobiology Program Grant Number 80NSSC18K0829 and benefited from participation in the NASA Nexus for Exoplanet Systems Science research coordination network. T.D.K. acknowledges funding from the 51 Pegasi b Fellowship in Planetary Astronomy sponsored by the Heising-Simons Foundation. Our work was completed with resources provided by the University of Chicago Research Computing Center.
---
abstract: 'We present a series of SIR-network models, extended with a game-theoretic treatment of imitation dynamics which result from regular population mobility across residential and work areas and the ensuing interactions. Each considered SIR-network model captures a class of vaccination behaviours influenced by epidemic characteristics, interaction topology, and imitation dynamics. Our focus is the eventual vaccination coverage, produced under voluntary vaccination schemes, in response to these varying factors. Using the next generation matrix method, we analytically derive and compare expressions for the basic reproduction number $R_0$ for the proposed SIR-network models. Furthermore, we simulate the epidemic dynamics over time for the considered models, and show that if individuals are sufficiently responsive towards the changes in the disease prevalence, then the more expansive travelling patterns encourage convergence to the endemic, mixed equilibria. On the contrary, if individuals are insensitive to changes in the disease prevalence, we find that they tend to remain unvaccinated in all the studied models. Our results concur with earlier studies in showing that residents from highly connected residential areas are more likely to get vaccinated. We also show that the existence of the individuals committed to receiving vaccination reduces $R_0$ and delays the disease prevalence, and thus is essential to containing epidemics.'
author:
- |
Sheryl L. Chang ^a^ [^1]\
` [email protected]`\
Mahendra Piraveenan ^a,b^\
`[email protected]`\
Mikhail Prokopenko ^a,c^\
`[email protected]`\
^a^Complex Systems Research Group, Faculty of Engineering and IT\
The University of Sydney, NSW, 2006, Australia\
^b^Charles Perkins Centre, The University of Sydney\
John Hopkins Drive, Camperdown, NSW, 2006, Australia\
^c^Marie Bashir Institute for Infectious Diseases and Biosecurity\
The University of Sydney, Westmead, NSW, 2145, Australia\
title: 'The effects of imitation dynamics on vaccination behaviours in SIR-network model'
---
Introduction {#intro}
============
Vaccination has long been established as a powerful tool in managing and controlling infectious diseases by providing protection to susceptible individuals [@WHO]. With a sufficiently high vaccination coverage, the probability of the remaining unvaccinated individuals getting infected reduces significantly. However, such systematic programs by necessity may limit the freedom of choice of individuals. When vaccination programs are made voluntary, the vaccination uptake declines as a result of individuals choosing not to vaccinate, as seen in Britain in 2003 when the vaccination program for Measles-Mumps-Rubella (MMR) was made voluntary [@measle_outbreak]. Parents feared possible complications from vaccination [@measle_outbreak; @bauch_2005_imi] and hoped to exploit the ‘*herd immunity*’ by assuming other parents would choose to vaccinate their children. Such hopes did not materialize precisely because other parents also thought similarly.
Under a voluntary vaccination policy an individual’s decision depends on several factors: the social influence from one’s social network, the risk perception of vaccination, and the risk perception of infection, in terms of both likelihood and impact [@vac_review]. This decision-making is often modelled using game theory [@bauch_2005_imi; @imi_bauch_bha; @sto_network_2010; @sto_network_2012; @sto_network_2012_memory; @sto_network_2013; @net_imi_nei; @comp_imi_nei], by allowing individuals to compare the cost of vaccination and the potential cost of non-vaccination (in terms of the likelihood and impact of infection), and adopting imitation dynamics in modelling the influence of social interactions. However, many of the earlier studies [@static_bauch; @attractor_bauch; @static_group_ind] were based on the key assumption of a well-mixed homogeneous population where each individual is assumed to have an equal chance of making contact with any other individual in the population. This population assumption is rather unrealistic as large populations are often diverse with varying levels of interactions. To address this, more recent studies [@sto_network_2012; @sto_network_2012_memory; @sto_network_2013; @net_imi_nei; @comp_imi_nei; @network_game_bauch] model populations as complex networks where each individual, represented by a node, has a finite set of contacts, represented by links [@vac_net_review].
It has been shown that there is a critical threshold in the vaccination cost above which the likelihood of vaccination drops steeply [@sto_network_2010]. In addition, highly connected individuals were shown to be more likely to choose to be vaccinated as they perceive themselves to be at a greater risk of being infected due to high exposure within the community. In modelling large-scale epidemics, the population size (i.e., number of nodes) can easily reach millions of individuals, interacting in a complex way. The challenge, therefore, is to extend epidemic modelling not only with individual vaccination decision-making, but also to capture the diverse interaction patterns encoded within a network. Such an integration has not yet been formalized, motivating our study. Furthermore, once an integrated model is developed, a specific challenge is to consider how the vaccination imitation dynamics developing across a network affects the basic reproduction number, $R_0$. This question forms our second main objective.
One relevant approach partially addressing this objective is offered by the multi-suburb (or multi-city) SIR-network model [@ArinoR0; @multi-city; @patch] where each node represents a neighbourhood (or city) with a certain number of residents. The daily commute of individuals between two neighbourhoods is modelled along the network link connecting the two nodes, which allows the disease spread between individuals from different neighbourhoods to be quantified. Ultimately, this model captures meta-population dynamics in a multi-suburb setting affected by an epidemic spread at a greater scale. To date, these models have not yet considered intervention (e.g., vaccination) options, and the corresponding social interaction across the populations.
To incorporate the imitation dynamics, modelled game-theoretically, within a multi-suburb model representing mobility, we propose a series of integrated vaccination-focused SIR-network models. This allows us to systematically analyze how travelling patterns affect the voluntary vaccination uptake due to adoption of different imitation choices, in a large distributed population. The developed models use an increasingly complex set of vaccination strategies. Thus, our specific contribution is the study of the vaccination uptake, driven by imitation dynamics under a voluntary vaccination scheme, using an SIR model on a complex network representing a multi-suburb environment, within which the individuals commute between residential and work areas.
In section \[method\], we present the models and methods associated with this study: in particular, we present analytical derivations of the basic reproduction number $R_0$ for the proposed models using the Next Generation Operator Approach [@NGM1990], and carry out a comparative analysis of $R_0$ across these models. In section \[results\], we simulate the epidemic and vaccination dynamics over time using the proposed models in different network settings, including a pilot case of a 3-node network, and a 3000-node Erdös-Rényi random network. The comparison of the produced results across different models and settings is carried out for the larger network, with the focus on the emergent attractor dynamics, in terms of the proportion of vaccinated individuals. Particularly, we analyze the sensitivity of the individual strategies (whether to vaccinate or not) to the levels of disease prevalence produced by the different considered models. Section \[conclusion\] concludes the study with a brief discussion of the importance of these results.\
Technical background
====================
Basic reproduction number $R_0$
-------------------------------
The basic reproduction number $R_0$ is defined as the number of secondary infections produced by an infected individual in an otherwise completely susceptible population [@R0_ori]. It is well-known that $R_0$ is an epidemic threshold, with the disease dying out as $R_0<1$, or becoming endemic as $R_0>1$ [@ArinoR0; @harding2018]. This finding strictly holds only in deterministic models with infinite population [@Hartfield2013]. The topology of the underlying contact network is known to affect the epidemic threshold [@Pastor-Satorras]. Many disease transmission models have shown important correlations between $R_0$ and the key epidemic characteristics (e.g., disease prevalence, attack rates, etc.) [@R0_book]. In addition, $R_0$ has been considered as a critical threshold for phase transitions studied with methods of statistical physics or information theory [@erten2017].
Vaccination model with imitation dynamics {#vaxx-SIR}
-----------------------------------------
Imitation dynamics, a process by which individuals copy the strategy of other individuals, is widely used to model vaccinating behaviours incorporated with SIR models. The model proposed in [@bauch_2005_imi] applied game theory to represent parents’ decision-making about whether to get their newborns vaccinated against childhood disease (e.g., measles, mumps, rubella, pertussis). In this model, individuals are in a homogeneously mixing population, and susceptible individuals have two ‘pure strategies’ regarding vaccination: to vaccinate or not to vaccinate. The non-vaccination decision can change to the vaccination decision at a particular sampling rate, however the vaccination decision cannot be changed to non-vaccination. Individuals adopt one of these strategies by weighing up their perceived payoffs, measured by the probability of morbidity from vaccination, and the risk of infection respectively. The payoff for vaccination ($f_v$) is given as, $$f_v=-r_v
\label{vaccinator}$$ and the payoff for non-vaccination ($f_{nv}$), measured as the risk of infection, is given as $$f_{nv}=-r_{nv}mI(t)
\label{infection}$$ where $r_v$ is the perceived risk of morbidity from vaccination, $r_{nv}$ is the perceived risk of morbidity from non-vaccination (i.e., infection), $I(t)$ is the current disease prevalence in population fraction at time $t$, and $m$ is the sensitivity to disease prevalence [@bauch_2005_imi].
From Equation \[vaccinator\] and \[infection\], it can be seen that the payoff for vaccinated individuals is a simple constant, and the payoff for unvaccinated individuals is proportional to the extent of epidemic prevalence (i.e., the severity of the disease).
It is assumed that life-long immunity is granted with effective vaccination. If one individual decides to vaccinate, s/he cannot revert back to the unvaccinated status. To make an individual switch to a vaccinating strategy, the payoff gain, $\Delta E=f_v-f_{nv}$ must be positive ($\Delta E>0$). Let $x$ denote the relative proportion of vaccinated individuals, and assume that an unvaccinated individual can sample a strategy from a vaccinated individual at certain rate $\sigma$, and can switch to the ‘vaccinate strategy’ with probability $\rho \Delta E$. Then the time evolution of $x$ can be defined as: $$\begin{aligned}
\dot{x} & =(1-x)\sigma x \rho \Delta E \\
& =\delta x(1-x)[-r_v+r_{nv}mI] \\
\end{aligned}
\label{x_bauch}$$ where $\delta=\sigma \rho$ can be interpreted as the combined imitation rate that individuals use to sample and imitate strategies of other individuals. Equation \[x\_bauch\] can be rewritten so that $x$ only depends on two parameters: $$\dot{x} =\kappa x(1-x)(-1+\omega I)\\
\label{x_bauch_final}$$ where $\kappa=\delta r_v$ and $\omega=mr_{nv}/r_v$, assuming that the risk of morbidity from vaccination, $r_v$, and the risk of morbidity from non-vaccination (i.e., morbidity from infection), $r_{nv}$, are constant during the course of epidemics. Hence, $\kappa$ is the adjusted imitation rate, and $\omega$ measures individuals' responsiveness to changes in disease prevalence.
The disease prevalence $I$ is determined from a simple SIR model [@R0_ori] with birth and death that divides the population into three health compartments: $S$ (susceptible), $I$ (infected) and $R$ (recovered). If a susceptible individual encounters an infected individual, s/he contracts the infection with transmission rate $\beta$, and thus progresses to the infected compartment, while an infected individual recovers at rate $\gamma$. It is assumed that the birth and death rate are equal, denoted by $\mu$ [@math_epi]. Individuals in all compartments die at an equal rate, and all newborns are added to the susceptible compartment unless and until they are vaccinated. Vaccinated newborns are moved to the recovered class directly, with a rate $\mu x$. Therefore, the dynamics are modelled as follows [@bauch_2005_imi]: $$\begin{aligned}
\dot{S} &=\mu(1-x)- \beta S I-\mu S \\
\dot{I} &=\beta S I- \gamma I -\mu I \\
\dot{R} &=\mu x+\gamma I-\mu R \\
\dot{x} &=\kappa x(1-x)(-1+\omega I)\\
\end{aligned}
\label{bauch_ori}$$
where $S$, $I$, and $R$ represent the proportion of susceptible, infected, and recovered individuals respectively (that is, $S+I+R=1$), and $x$ represents the proportion of vaccinated individuals within the susceptible class at a given time. The variables $S$, $I$, and $R$, as well as $x$, are time-dependent, but for simplicity, we omit subscript $t$ for these and other time-dependent variables.
Model \[bauch\_ori\] predicts oscillations in vaccine uptake in response to changes in disease prevalence. It is found that oscillations are more likely to occur when individuals imitate each other more quickly (i.e., higher $\kappa$). The oscillations are observed to be more volatile when people alter their vaccinating behaviours promptly in response to changes in disease prevalence (i.e., higher $\omega$). Overall, higher $\kappa$ or $\omega$ produce stable limit cycles at greater amplitude. Conversely, when individuals are insensitive to changes in disease prevalence (i.e., low $\omega$), or imitate at a slower rate (i.e., low $\kappa$), the resultant vaccinating dynamics converge to equilibrium [@bauch_2005_imi].
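As a reference point for the network models introduced later, the following minimal sketch shows how model (\[bauch\_ori\]) can be integrated numerically with SciPy. The parameter values are consistent with the baseline values adopted later in Table \[parameter\] ($\beta$ is set from the baseline $R_0$), while the time span and the choice $\omega=2500$ are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, beta, gamma, mu, kappa, omega):
    """Right-hand side of model (bauch_ori); R = 1 - S - I is not integrated."""
    S, I, x = y
    dS = mu * (1 - x) - beta * S * I - mu * S
    dI = beta * S * I - gamma * I - mu * I
    dx = kappa * x * (1 - x) * (-1 + omega * I)
    return [dS, dI, dx]

gamma, mu, kappa, omega = 0.1, 0.000055, 0.001, 2500.0
beta = 15.0 * (gamma + mu)          # beta chosen so that beta/(gamma+mu) = R_0 = 15

sol = solve_ivp(rhs, (0, 10000), [0.05, 0.001, 0.95],
                args=(beta, gamma, mu, kappa, omega), rtol=1e-8, atol=1e-10)
S, I, x = sol.y
print(f"vaccine uptake at t = {sol.t[-1]:.0f} days: x = {x[-1]:.3f}")
```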
Multi-city epidemic model {#multicity}
-------------------------
Several multi-city epidemic models [@ArinoR0; @multi-city; @patch] consider a network of suburbs as $M$ nodes, in which each node $i \in V$ represents a suburb, where $i=1,2,\cdots, M$. If two nodes $i,j \in V$ are linked, a fraction of the population living in node $i$ can travel to node $j$ and back (commute from $i$ to $j$, for example for work) on a daily basis. The connectivity of suburbs and the fraction of people commuting between them are represented by the population flux matrix $\phi$ whose entries represent the fraction of the population commuting daily from $i$ to $j$, $\phi_{ij} \in [0,1]$ (Equation \[phi\]). Note that in general $\phi_{ij} \neq \phi_{ji}$, and thus $\phi$ is not a symmetric matrix. Figure \[influx\] shows an example of such travel dynamics.\
![Schematic of daily population travel dynamics across different suburbs (nodes): a 4-node example. Solid line: network connectivity. Dashed line: volume of population flux (influx and outflux). Non-connected nodes have zero population flux (e.g., $\phi_{34}=\phi_{43}=0$). For each node, the daily outflux proportions (including travel to the considered node itself) sum up to unity, however, the daily influx proportions do not.[]{data-label="influx"}](Fig1.pdf){width="0.5\columnwidth"}
$$\phi_{M \times M} =
\begin{bmatrix}
\phi_{11} & \phi_{12} & \cdots & \phi_{1M} \\
\phi_{21} & \phi_{22} & \cdots & \phi_{2M} \\
\vdots & \vdots & \ddots & \vdots \\
\phi_{M1} & \phi_{M2} & \cdots & \phi_{MM}
\end{bmatrix}
\label{phi}$$
Trivially, each row in $\phi$, representing proportions of the population of a suburb commuting to various destinations, has to sum up to unity: $$\sum_{j=1}^{M}\phi_{ij}=1 \ , \ \ \forall i \in V$$ However, the column sum measures the population influx to a suburb during a day, and therefore depends on the node’s connectivity. The column sums of $\phi$ do not necessarily equal unity.
It is also necessary to differentiate between *present population* $N_j^p$ and *native population* $N_j$ in node $j$ on a particular day. $N^p_j$ is used as a normalizing factor to account for the differences in population flux for different nodes: $$N^p_j =\sum_{k=1}^{M} \phi_{kj}N_k
\label{N_p}$$
Assuming the disease transmission parameters are identical in all cities, the standard incidence can be expressed as [@ArinoR0]: $$\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \lambda_j \beta_j \frac{I_k}{N^p_j} S_i
\label{patch_I}$$ where $I_l$ and $S_l$ denote, respectively, the number of infective and susceptible individuals in city $l$, and $\lambda_l$ is the average number of contacts in city $l$ per unit time. Incorporating the travelling pattern defined by $\phi$ yields [@patch]: $$\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{N^p_j} S_i
\label{patch_I_phi}$$
The double summation term in this expression captures the infection at suburb $j$ due to the encounters between the residents from suburb $i$ and the residents from suburb $k$ ($k$ could be any suburb including $i$ or $j$) occurring at suburb $j$, provided that suburbs $i,j,k$ are connected with non-zero population flux entries in $\phi$.
$R_0$ can be derived using the Next Generation Approach (see Appendix \[app1\] for more details). For example, for a multi-city SEIRS model with 4 compartments (susceptible, exposed, infective, recovered), introduced in [@ArinoR0], and a special case where the contact rate $\lambda_j$ is set to 1, while $\beta$ is identical across all cities, $R_0$ has the following analytical solution: $$R_0=\frac{\beta \epsilon}{(\gamma+d)(\epsilon+d)}
\label{Adino_R0}$$ where $1/d,1/\epsilon,$ and $1/\gamma$ denote the average lifetime, exposed period and infective period. It can be seen that this solution concurs with a classical SEIRS model with no mobility. When $\epsilon \to \infty$, Equation \[Adino\_R0\] reduces to $\beta / (d+\gamma)$, being a solution for a canonical SIR model. Furthermore, if the population dynamics is not considered (i.e., $d=0$), Equation \[Adino\_R0\] can be further reduced to $\beta /\gamma$, agreeing with [@patch].
Methods {#method}
=======
Integrated model
----------------
Expanding on models described in sections \[vaxx-SIR\] and \[multicity\], we bring together population mobility and vaccinating behaviours in a network setting, and propose three extensions within an integrated vaccination-focused SIR-network model:
- vaccination is available to newborns only (model \[system2.1\_old\] below)
- vaccination is available to the entire susceptible class (model \[system2.2\] below)
- committed vaccine recipients (model \[system\_reform\] below)
In this study, we only consider vaccinations that confer lifelong immunity, mostly related to childhood diseases such as measles, mumps, rubella and pertussis. Vaccinations against such diseases are often administered for individuals at a young age, implying that newborns (more precisely, their parents) often face the vaccination decision. These are captured by the first extension above. However, during an outbreak, adults may face the vaccination decision themselves if they were not previously vaccinated. For example, diseases which are not included in formal childhood vaccination programs, such as smallpox [@smallpox], may present vaccination decisions to the entire susceptible population. These are captured by the second extension. The third extension introduces a small fraction of susceptible population as committed vaccine recipients, i.e., those who would always choose to vaccinate regardless. The purpose of these committed vaccine recipients is to demonstrate how successful immunization education campaigns could affect vaccination dynamics.
Our models divide the population into many homogeneous groups [@R0_beta], based on their residential suburbs. Within each suburb (i.e., node), residents are treated as a homogeneous population. It is assumed that the total population within each node is conserved over time. The dynamics of epidemic and vaccination at each node, are referred to as ‘local dynamics’. The aggregate epidemic dynamics of the entire network can be obtained by summing over all nodes, producing ‘global dynamics’.
We model the ‘imitation dynamics’ based on the individual’s travelling pattern and the connectivity of their node defined by Equation \[phi\]. For any node $i \in V$, let $x_i$ denote the fraction of vaccinated individuals in the susceptible class in suburb $i$. On a particular day, unvaccinated susceptible individuals $(1-x_i)$ commute to suburb $j$ and encounter vaccinated individuals from node $k$ (where $k$ may be $i$ itself, $j$ or any other nodes) and imitate their strategy. However, this ‘imitation’ is only applicable in the case of a non-vaccinated individual imitating the strategy of a vaccinated individual (that is, deciding to vaccinate), since the opposite ‘imitation’ cannot occur. Therefore, in our model, every time a non-vaccinated person from $i$ comes in contact with a vaccinated person from $k$, they imitate the ‘vaccinate’ strategy if the perceived payoff outweighs the non-vaccination strategy.
Following Equation \[x\_bauch\_final\] proposed in [@bauch_2005_imi], the rate of change of the proportion of vaccinated individuals in $i$, that is, $x_i$, over time can be expressed by: $$\begin{aligned}
\dot{x_i} &=\sigma (1-x_i) \sum_{j=1}^{M} \sum_{k=1}^{M} \phi_{ij} \rho \Delta E \phi_{kj} x_k \\
&=\delta (1-x_i) \sum_{j=1}^{M} \sum_{k=1}^{M} \phi_{ij} (-r_v+r_{nv} m I_j) \phi_{kj} x_k \\
&=\kappa (1-x_i)\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj}x_k \\
\end{aligned}
\label{imitation}$$ The players of the vaccination game are the parents, deciding whether or not to vaccinate their children using the information of the disease prevalence collected from their daily commute. For example, a susceptible individual residing at node $i$ and working at node $j$ uses the local disease prevalence at node $j$ to decide whether to vaccinate or not. If such a susceptible individual is infected, that individual will be counted towards the local epidemic prevalence at node $i$.
We measure each health compartment as a proportion of the population. Hence, we define $\epsilon_l^p$ as the ratio between the present population $N_l^p$ and the ‘native’ population $N_l$ in node $l$ on a particular day: $$\epsilon_l^p =\frac{N_l^p}{N_l} =\frac{\sum_{q=1}^{M} \phi_{ql}N_q}{N_l}
\label{normalisation}$$
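For concreteness, this ratio is a one-line computation given the flux matrix and the native population vector; a minimal sketch follows (the variable names are ours).

```python
import numpy as np

def present_to_native_ratio(phi, N):
    """epsilon^p_l = (sum_q phi_ql N_q) / N_l, Equation (normalisation).
    phi[i, j] is the fraction of node i's residents commuting to node j."""
    return (phi.T @ N) / N
```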
Our main focus is to investigate the effects of vaccinating behaviours on the global epidemic dynamics. To do so, we vary three parameters:
- Individual’s responsiveness to changes in disease prevalence, $\omega$
- Adjusted imitation rate, $\kappa$
- Vaccination failure rate, $\zeta$
While $\kappa$ and $\omega$ have been considered in [@bauch_2005_imi], we introduce a new parameter $\zeta$ as the vaccination failure rate, $\zeta \in [0,1]$, to consider the cases where the vaccination may not be fully effective. The imitation component (Equation \[imitation\]) is common in all model extensions. While the epidemic compartments vary depending on the specific extension, Equation \[imitation\] is used consistently to model the relative rate of change in vaccination behaviours.
Vaccination available to newborns only {#newborn}
--------------------------------------
Model (\[system2.1\_old\]) captures the scenario when vaccination opportunities are provided to newborns only: $$\begin{aligned}
\dot{S_i} &=\mu[\zeta x_i+(1-x_i)]-\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon ^p_j} S_i-\mu S_i \\
\dot{I_i} &=\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon ^p_j}S_i-\gamma I_i -\mu I_i \\
\dot{R_i} &=\mu(1-\zeta) x_i+\gamma I_i-\mu R_i \\
\dot{x_i} &=\kappa (1-x_i)\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj}x_k \\
\end{aligned}
\label{system2.1_old}$$ Unvaccinated newborns and newborns with unsuccessful vaccination $\mu[\zeta x_i+(1-x_i)]$ stay in the susceptible class. Successfully vaccinated newborns, on the other hand, move to the recovered class $\mu(1-\zeta) x_i$. Other population dynamics across health compartments follow the model (\[bauch\_ori\]).
Since $S_i+I_i+R_i=1$ (so that $\dot{S_i}+\dot{I_i}+\dot{R_i}=0$), model (\[system2.1\_old\]) can be reduced to: $$\begin{aligned}
\dot{S_i} &=\mu[\zeta x_i+(1-x_i)]-\sum\limits_{j=1}^{M} \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon ^p_j} S_i-\mu S_i \\
\dot{I_i} &=\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon ^p_j}S_i-\gamma I_i -\mu I_i \\
\dot{x_i} &=\kappa (1-x_i)\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj}x_k \\
\end{aligned}
\label{system2.1}$$
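A minimal vectorized sketch of the right-hand side of model (\[system2.1\]), coupled with the imitation dynamics of Equation (\[imitation\]), is given below. For simplicity it assumes equal native populations (so that $\epsilon^p_j$ reduces to the $j$-th column sum of $\phi$) and a uniform transmission rate $\beta_j=\beta$.

```python
import numpy as np

def network_rhs(t, y, phi, beta, gamma, mu, kappa, omega, zeta):
    """Right-hand side of model (system2.1) with imitation (Equation imitation).
    y concatenates (S_1..S_M, I_1..I_M, x_1..x_M)."""
    M = phi.shape[0]
    S, I, x = y[:M], y[M:2*M], y[2*M:]

    eps_p = phi.sum(axis=0)                      # epsilon^p_j for equal populations
    force = beta * phi @ ((phi.T @ I) / eps_p)   # sum_j sum_k beta phi_ij phi_kj I_k / eps^p_j

    dS = mu * (zeta * x + (1 - x)) - force * S - mu * S
    dI = force * S - gamma * I - mu * I
    dx = kappa * (1 - x) * (phi @ ((-1 + omega * I) * (phi.T @ x)))
    return np.concatenate([dS, dI, dx])
```

This function can be passed directly to `scipy.integrate.solve_ivp`, in the same way as the single-population sketch above.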
Model (\[system2.1\]) has a disease-free equilibrium (disease-free initial condition) $(S_i^0, I_i^0)$ for which $$\begin{aligned}
S_i^0 & =\zeta x_i^0+(1-x_i^0) \\
I_i^0 & =0 \\
\end{aligned}
\label{eq_SIR2.2}$$ **Proposition 1.** At the disease-free equilibrium, $x^0$ has two solutions ($x^0=0$ or $x^0=1$) if $x^0$ and $S^0$ are uniform across all nodes.
*Proof:* At the disease-free equilibrium, we assume $S^0_i$ and $x_i^0$ are uniform across all nodes, so that: $$S^0_i = S^0, \ x_i^0 = x^0 \quad \forall i
\label{assump1}$$ Hence, under disease-free condition where $I^0=0$ and $\dot{x_i}=0$, Equation \[system2.1\] becomes: $$\begin{aligned}
0 & =(1-x_i^0)x_i^0\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj} \\
\end{aligned}
\label{prep1}$$ Equation (\[prep1\]) leads to two solutions: $x^0=0$ or $x^0= 1$.
We can now obtain $R_0$ for the global dynamics in this model by using the Next Generation Approach, where $R_0$ is given by the most dominant eigenvalue (or ‘spatial radius’ $\mathbf{\rho}$) of $\mathbf{F}\mathbf{V}^{-1}$, where $\mathbf{F}$ and $\mathbf{V}$ are $M \times M$ matrices, representing the ‘new infections’ and ‘cases removed or transferred from the infected class’, respectively in the disease free condition [@NGM1990; @NGM2010]. As a result, $R_0$ is determined as follows (see Appendix \[app1\] for detailed derivation): $$R_0=\mathbf{\rho}(\mathbf{FV}^{-1})
\label{eq_SIR}$$ where $$\mathbf{F}=
\begin{bmatrix}\frac{\partial \mathcal{F}_1}{\partial I_1} & \frac{\partial \mathcal{F}_1}{\partial I_2} & \cdots & \frac{\partial \mathcal{F}_1}{\partial I_M} \\
\frac{\partial \mathcal{F}_2}{\partial I_1} & \frac{\partial \mathcal{F}_2}{\partial I_2} & \cdots & \frac{\partial \mathcal{F}_2}{\partial I_M} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \mathcal{F}_M}{\partial I_1} & \frac{\partial \mathcal{F}_M}{\partial I_2} & \cdots & \frac{\partial \mathcal{F}_M}{\partial I_M} \\
\end{bmatrix}$$
$$\mathbf{V}=
\begin{bmatrix}\frac{\partial \mathcal{V}_1}{\partial I_1} & \frac{\partial \mathcal{V}_1}{\partial I_2} & \cdots & \frac{\partial \mathcal{V}_1}{\partial I_M} \\
\frac{\partial \mathcal{V}_2}{\partial I_1} & \frac{\partial \mathcal{V}_2}{\partial I_2} & \cdots & \frac{\partial \mathcal{V}_2}{\partial I_M} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \mathcal{V}_M}{\partial I_1} & \frac{\partial \mathcal{V}_M}{\partial I_2} & \cdots & \frac{\partial \mathcal{V}_M}{\partial I_M} \\
\end{bmatrix}$$
while $$\begin{aligned}
\mathcal{F}_i &=S_i^0\sum_{j=1}^{M} \sum_{k=1}^{M}\beta_j\phi_{ij} \frac{\phi_{kj}I_k^0}{\epsilon^p_j} \\
\mathcal{V}_i &=\gamma I_i^0 +\mu I_i^0 \\
\end{aligned}$$ Assuming $\beta_i=\beta$ for all nodes, we can derive $\mathbf{F}$ and $\mathbf{V}$ as follows: $$\begin{aligned}
\mathbf{F} &= \beta
\begin{bsmallmatrix}
\sum \limits_{j=1}^M S_1^0\frac{\phi_{1j} \phi_{1j}}{\epsilon^p_j} & \sum \limits_{j=1}^M S_1^0\frac{\phi_{1j} \phi_{2j}}{\epsilon^p_j} & \cdots &\sum \limits_{j=1}^M S_1^0\frac{\phi_{1j} \phi_{Mj}}{\epsilon^p_j}\\
\sum \limits_{j=1}^M S_2^0\frac{\phi_{2j} \phi_{1j}}{\epsilon^p_j} & \sum \limits_{j=1}^M S_2^0\frac{\phi_{2j} \phi_{2j}}{\epsilon^p_j} & \cdots &\sum \limits_{j=1}^M S_2^0\frac{\phi_{2j} \phi_{Mj}}{\epsilon^p_j} \\
\vdots & \vdots & \ddots & \vdots \\
\sum \limits_{j=1}^M S_M^0\frac{\phi_{Mj} \phi_{1j} }{\epsilon^p_j} & \sum \limits_{j=1}^M S_M^0\frac{\phi_{Mj} \phi_{2j}}{\epsilon^p_j} & \cdots &\sum \limits_{j=1}^M S_M^0\frac{\phi_{Mj} \phi_{Mj}}{\epsilon^p_j} \\
\end{bsmallmatrix} \\
\\
\mathbf{V} &=(\gamma+\mu) \mathbb{I} \\
\end{aligned}$$ where $\mathbb{I}$ is a $M \times M$ identity matrix.\
Using **Proposition 1**, $\mathbf{F}$ can be simplified as: $$\mathbf{F}= \beta S^0
\begin{bsmallmatrix}
\sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{1j}}{\epsilon^{p}_j} & \sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{2j}}{\epsilon^{p}_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{Mj}}{\epsilon^{p}_j}\\
\sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{1j}}{\epsilon^{p}_j} & \sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{2j}}{\epsilon^{p}_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{Mj}}{\epsilon^{p}_j} \\
\vdots & \vdots & \ddots & \vdots \\
\sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{1j} }{\epsilon^{p}_j} & \sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{2j}}{\epsilon^{p}_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{Mj}}{\epsilon^{p}_j} \\
\end{bsmallmatrix}$$
We can now prove the following simple but useful proposition.
**Proposition 2.** Matrix $\mathbf{G}$, defined as follows: $$\mathbf{G}= \begin{bsmallmatrix}
\sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{1j}}{\epsilon^{p}_j} & \sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{2j}}{\epsilon^{p}_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{Mj}}{\epsilon^{p}_j}\\
\sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{1j}}{\epsilon^{p}_j} & \sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{2j}}{\epsilon^{p}_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{Mj}}{\epsilon^{p}_j} \\
\vdots & \vdots & \ddots & \vdots \\
\sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{1j} }{\epsilon^{p}_j} & \sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{2j}}{\epsilon^{p}_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{Mj}}{\epsilon^{p}_j} \\
\end{bsmallmatrix}
\label{gmarkov}$$ is a Markov matrix.\
*Proof:* We assume all nodes have the same population $N$. Then Equation (\[normalisation\]) can be reduced to: $$\epsilon ^{p^*}_i=\sum^{M}_{k=1}\phi_{ki}
\label{samepop}$$ Without loss of generality, substituting Equation (\[samepop\]) into Equation (\[gmarkov\]) and expanding entries in the first column yields the first column $\mathbf{g_1}$ of matrix $\mathbf{G}$:
$$\mathbf{g_1}=\begin{bmatrix}
\frac{\phi_{11}\phi_{11}}{\sum \limits_{k=1}^M \phi_{k1}}+\frac{\phi_{12}\phi_{12}}{\sum \limits_{k=1}^M \phi_{k2}}+\cdots+\frac{\phi_{1M}\phi_{1M}}{\sum \limits_{k=1}^M \phi_{kM}} & \cdots \\
\frac{\phi_{21}\phi_{11}}{\sum \limits_{k=1}^M \phi_{k1}}+\frac{\phi_{22}\phi_{12}}{\sum \limits_{k=1}^M \phi_{k2}}+\cdots+\frac{\phi_{2M}\phi_{1M}}{\sum \limits_{k=1}^M \phi_{kM}} & \cdots \\
\vdots & \vdots \\
\frac{\phi_{M1}\phi_{11}}{\sum \limits_{k=1}^M \phi_{k1}}+\frac{\phi_{M2}\phi_{12}}{\sum \limits_{k=1}^M \phi_{k2}}+\cdots+\frac{\phi_{MM}\phi_{1M}}{\sum \limits_{k=1}^M \phi_{kM}} & \cdots \\
\end{bmatrix}$$
Continuing with column 1, the column sum is: $$\begin{aligned}
& =\phi_{11}\frac{\sum \limits_{k=1}^M\phi_{k1}}{\sum \limits_{k=1}^M \phi_{k1}}+\phi_{12}\frac{\sum \limits_{k=1}^M\phi_{k2}}{\sum \limits_{k=1}^M \phi_{k2}}+\cdots+\phi_{1M}\frac{\sum \limits_{k=1}^M\phi_{kM}}{\sum \limits_{k=1}^M \phi_{kM}} \\
& =\sum \limits_{k=1}^M\phi_{1k} =1 \\
\end{aligned}$$ Similarly, the sums of each column are equal to 1, with all entries being non-negative population fractions. Hence, $\mathbf{G}$ is a Markov matrix.
As a Markov matrix, $\mathbf{G}$ always has the most dominant eigenvalue of unity. All other eigenvalues are smaller than unity in absolute value [@markov_matrix].
Now the next generation matrix $\mathbf{K}$ can be obtained as follows: $$\begin{aligned}
\mathbf{K} & = \frac{\beta S^0}{\gamma+\mu} \mathbf{G} \\
& = \frac{\beta[(1-x^0)+\zeta x^0]}{\gamma+\mu} \mathbf{G} \\
\label{K_SIR_newborn}
\end{aligned}$$
From **Proposition 2** and Equation (\[eq\_SIR\]), $R_0$ can be obtained as: $$\label{r0_exp}
\begin{aligned}
R_0 &=\rho(\mathbf{K}) \\
&=\frac{\beta[(1-x^0)+\zeta x^0]}{\gamma+\mu} \rho(\mathbf{G}) \\
&=\frac{\beta[(1-x^0)+\zeta x^0]}{\gamma+\mu} \\
\end{aligned}$$
Noting Proposition 1, there are two cases: $x^0=0$ and $x^0=1$. When nobody vaccinates ($x^0=0, S^0=1$), the entire population remains susceptible, which reduces model (\[system2.1\]) to a canonical SIR model without vaccination intervention and $R_0$ returns to $\frac{\beta}{\gamma+\mu}$. On the other hand, if the whole population is vaccinated ($x^0=1$), the fraction of the susceptible population only depends on the vaccine failure rate $\zeta$, resulting in: $$R_0=\frac{\beta \zeta}{\gamma+\mu}
\label{R0_2.1}$$ Clearly, fully effective vaccination ($\zeta=0$) would prohibit disease spread ($R_0=0$). Partially effective vaccination could potentially suppress disease transmission, or even eradicate disease spread if $R_0<1$. If all vaccinations fail ($\zeta=1$), model \[system2.1\] concurs with a canonical SIR model without vaccination, which also corresponds to a special case in Equation \[Adino\_R0\] where $\epsilon \to \infty$.
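The next generation matrix calculation above is easy to check numerically; a minimal sketch follows, again assuming equal native populations and uniform $\beta$ and $x^0$ (the parameter values in the example are illustrative only).

```python
import numpy as np

def ngm_R0(phi, beta, gamma, mu, zeta, x0):
    """R_0 = rho(K) with K = beta*S^0/(gamma+mu) * G, Equations (K_SIR_newborn)-(r0_exp)."""
    eps_p = phi.sum(axis=0)            # epsilon^p_j for equal populations
    G = (phi / eps_p) @ phi.T          # G_ik = sum_j phi_ij phi_kj / eps^p_j
    K = beta * ((1 - x0) + zeta * x0) / (gamma + mu) * G
    return np.abs(np.linalg.eigvals(K)).max(), np.abs(np.linalg.eigvals(G)).max()

# Random 5-node flux matrix with rows summing to one.
rng = np.random.default_rng(1)
phi = rng.random((5, 5))
phi /= phi.sum(axis=1, keepdims=True)

R0, rho_G = ngm_R0(phi, beta=1.5, gamma=0.1, mu=0.000055, zeta=0.04, x0=1.0)
print(rho_G)   # ~1: G is a Markov matrix (Proposition 2)
print(R0)      # ~ beta*zeta/(gamma+mu), as in Equation (R0_2.1)
```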
Vaccination available to the entire susceptible class
-----------------------------------------------------
If the vaccination opportunity is expanded to the entire susceptible class, including newborns and adults, the following model is proposed based on model (\[system2.1\]): $$\begin{aligned}
\dot{S_i} &=\mu-\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon^p_j} S_i-\mu S_i- x_i S_i+x_i \zeta S_i \\
\dot{I_i} &=\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon^p_j}S_i -\gamma I_i -\mu I_i \\
\dot{R_i} &= S_i x_i (1-\zeta)+\gamma I_i- \mu R_i \\
\dot{x_i} &=\kappa (1-x_i)\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj}x_k \\
\end{aligned}
\label{system2.2}$$
Model (\[system2.2\]) has a disease-free equilibrium $(S^0_i, I^0_i, x^0_i)$:
$$\begin{aligned}
S_i^0 & =\frac{\mu}{\mu+x_i^0(1-\zeta)}\\
I_i^0 & =0 \\
\end{aligned}
\label{eq_sus_equ}$$
while $$\begin{aligned}
\mathcal{F}_i &=S_i^0\sum_{j=1}^{M} \sum_{k=1}^{M}\beta_j\phi_{ij} \frac{\phi_{kj}I_k^0}{\epsilon^p_j} \\
\mathcal{V}_i &=\gamma I_i^0 +\mu I_i^0 \\
\end{aligned}$$ Substituting $S^0_i$ using Equation (\[eq\_sus\_equ\]) yields $$\begin{aligned}
\mathbf{F} &= \beta \frac{\mu}{\mu+x^0(1-\zeta)} \mathbf{G} \\
\mathbf{V} &=(\gamma+\mu) \mathbb{I} \\
\end{aligned}$$ where $\mathbb{I}$ is a $M \times M$ identity matrix.
The next generation matrix $\mathbf{K}$ can then be obtained as: $$\mathbf{K} = \frac{\beta \mu}{(\gamma+\mu)[\mu+x^0(1-\zeta)]} \mathbf{G} \\
\label{K_SIR_s}$$ Using **Proposition 2**, $R_0$ can be obtained as: $$R_0= \frac{\beta \mu}{(\gamma+\mu)[\mu+x^0(1-\zeta)]}
\label{R0_2.2}$$
When nobody vaccinates ($x^0=0$), model (\[system2.2\]) concurs with a canonical SIR model with $R_0$ reducing to $\frac{\beta}{\gamma+\mu}$. When the vaccination attains the full coverage in the population ($x^0=1$), the magnitude of $R_0$ depends on the vaccine failure rate $\zeta$: $$R_0=\frac{\beta \mu}{(\gamma+\mu)(\mu+1-\zeta)}
\label{R0_2.2_x1}$$
Equation \[R0\_2.2\_x1\] reduces to $\frac{\beta}{\gamma+\mu}$ if all vaccines fail ($\zeta=1$). Conversely, if all vaccines are effective ($\zeta=0$), Equation \[R0\_2.2\_x1\] yields $$R_0 =\frac{\beta \mu}{(\gamma+\mu)(\mu+1)}$$ Given that $\mu \ll 1$, $R_0$ is well below the critical threshold: $$R_0\approx\frac{\beta \mu}{\gamma} \ll 1
\label{R0_2.2_x1z0}$$
If $0<\zeta<1$, by comparing Equations (\[R0\_2.1\]) and (\[R0\_2.2\_x1\]), we note that $R_0$ becomes smaller, provided $ \zeta > \mu$, that is: $$\begin{aligned}
R_{0}^{newborn}-R_0^{susceptible} &=\frac{\beta \zeta}{\gamma+\mu}-\frac{\beta \mu}{(\gamma+\mu)(\mu+1-\zeta)} \\
&=\frac{\beta (1-\zeta)(\zeta-\mu)}{(\gamma+\mu)(\mu+1-\zeta)} >0 \\
\end{aligned}$$
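This inequality is easily checked numerically; in the following sketch the values of $\beta$, $\gamma$ and $\mu$ are illustrative.

```python
import numpy as np

beta, gamma, mu = 1.5, 0.1, 0.000055
zeta = np.linspace(0.0, 1.0, 201)

R0_newborn = beta * zeta / (gamma + mu)                         # Equation (R0_2.1)
R0_susceptible = beta * mu / ((gamma + mu) * (mu + 1 - zeta))   # Equation (R0_2.2_x1)

interior = (zeta > mu) & (zeta < 1)
print(np.all(R0_newborn[interior] > R0_susceptible[interior]))  # True
```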
Vaccination available to the entire susceptible class with committed vaccine recipients
---------------------------------------------------------------------------------------
We now consider the existence of committed vaccine recipients, $x^c$, as a fraction of individuals who would choose to vaccinate regardless of payoff assessment [@Bai2015; @SEIR2013] ($0 < x^c \ll 1$). We assume that committed vaccine recipients are also exposed to vaccination failure rate $\zeta$ and are distributed uniformly across all nodes. It is also important to point out that the fraction of committed vaccine recipients is constant over time. However, they can still affect vaccination decision for those who are not vaccinated, and consequently, contribute to the rate of change of the vaccinated fraction $x$. Model (\[system2.2\]) can be further extended to reflect these considerations, as follows: $$\begin{aligned}
\dot{S_i} &=\mu-\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon ^p_j} S_i-\mu S_i+(\zeta-1) [(x_i+x^c) S_i] \\
\dot{I_i} &=\sum\limits_{j=1}^M \sum\limits_{k=1}^{M} \beta_j \phi_{ij} \frac{\phi_{kj}I_k}{\epsilon ^p_j}S_i -\gamma I_i -\mu I_i \\
\dot{x_i} &=\kappa (1-x_i-x^c)\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj}(x_k+x^c) \\
\end{aligned}
\label{system_reform}$$
Model (\[system\_reform\]) has a disease-free equilibrium: $$\begin{aligned}
S_i^0 & =\frac{\mu}{\mu+(x_i^0+x^c)(1-\zeta)}\\
I_i^0 & =0 \\
\end{aligned}
\label{eq_SIR3}$$ while $$\begin{aligned}
\mathcal{F}_i &=S_i^0 \sum_{M}^{j=1} \sum_{M}^{k=1}\beta_j\phi_{ij} \frac{\phi_{kj}I_k^0}{\epsilon ^p_j} \\
\mathcal{V}_i &=\gamma I_i^0 +\mu I_i^0 \\
\end{aligned}$$ Substituting $S^0_i$, using Equation \[eq\_SIR3\], yields: $$\begin{aligned}
\mathbf{F} &= \frac{\beta \mu}{\mu+(x^0+x^c)(1-\zeta)}
\begin{bsmallmatrix}
\sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{1j}}{\epsilon ^p_j} & \sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{2j}}{\epsilon ^p_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{1j} \phi_{Mj}}{\epsilon ^p_j}\\
\sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{1j}}{\epsilon ^p_j} & \sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{2j}}{\epsilon ^p_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{2j} \phi_{Mj}}{\epsilon ^p_j} \\
\vdots & \vdots & \ddots & \vdots \\
\sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{1j} }{\epsilon ^p_j} & \sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{2j}}{\epsilon ^p_j} & \cdots &\sum \limits_{j=1}^M \frac{\phi_{Mj} \phi_{Mj}}{\epsilon ^p_j} \\
\end{bsmallmatrix} \\
\mathbf{V} &=(\gamma+\mu) \mathbb{I} \\
\end{aligned}$$ where $\mathbb{I}$ is a $M \times M$ identity matrix.\
The next generation matrix $\mathbf{K}$ can now be obtained as: $$\mathbf{K} = \frac{\beta \mu}{[\mu+(x^0+x^c)(1-\zeta)](\gamma+\mu)} \mathbf{G} \\
\label{K_SIR_com}$$
**Proposition 3.** At the disease-free equilibrium, $x^0$ has two solutions ($x^0=x^c=0$ or $x^0+x^c=1$) if $x^0$, $x^c$ and $S^0$ are uniform across all nodes.
*Proof:* In analogy with **Proposition 1**, with committed vaccine recipients, at the disease-free equilibrium, $S_i^0$ and $x_i^0$ are uniform across all nodes, so that: $$S^0_i = S^0, \ x_i^0 = x^0 \quad \forall i$$ Hence, under the disease-free condition where $I^0=0$ and $\dot{x_i}=0$, Equation \[system\_reform\] becomes: $$\begin{aligned}
0 & =(1-x_i^0-x^c)(x_i^0+x^c)\sum\limits_{j=1}^M \phi_{ij}(-1+\omega I_j)\sum\limits_{k=1}^M\phi_{kj} \\
\end{aligned}
\label{prop3}$$ Equation \[prop3\] leads to two solutions: $x^0+x^c=0$ or $x^0+x^c=1$.
$R_0$ can be obtained for this case as: $$R_0=\frac{\beta \mu}{[\mu+(x^0+x^c)(1-\zeta)](\gamma+\mu)}
\label{R0_3.2}$$ If nobody chooses to vaccinate ($x^0=x^c=0$), $R_0$ can be reduced to $\frac{\beta}{\gamma+\mu}$. Conversely, if the entire population is vaccinated $(x^0+x^c=1)$, Equation (\[R0\_3.2\]) reduces to Equation (\[R0\_2.2\_x1\]), and $R_0$ is purely dependent on the vaccine failure rate $\zeta$.
Model parametrization
---------------------
The proposed models aim to simulate a scenario of a generic childhood disease (e.g., measles) where life-long full immunity is acquired after effective vaccination. The vaccination failure rate, $\zeta$, is set as the probability of ineffective vaccination, to showcase the influence of unsuccessful vaccination on the global epidemic dynamics. In reality, the Measles-Mumps-Rubella (MMR) vaccination is highly effective: in Australia, for example, an estimated 96% of vaccinations are successful in conferring immunity [@vacc_failure].
The population flux matrix $\phi$ is derived from the network topology, in which each entry represents the connectivity between two nodes: if two nodes are not connected, $\phi_{ij}=0$; if two nodes are connected, $\phi_{ij}$ is randomly assigned in the range of $(0,1]$. The population influx into a node $i$ within a day is represented by the column sum $\sum_{j=1}^M \phi_{ji}$ in flux matrix $\phi$.
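A possible construction of $\phi$ from a given adjacency matrix is sketched below; whether the diagonal (residents remaining in their home node) is included, and how rows are normalized, follows the conventions of the main text and is left out of this sketch.

```python
import numpy as np

def flux_matrix(adj, rng=None):
    """Assign a random weight to each connected node pair, zero otherwise."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.where(adj > 0, rng.uniform(0.0, 1.0, adj.shape), 0.0)
    influx = phi.sum(axis=0)  # column sums: daily population influx into each node
    return phi, influx
```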
The parameters used for all simulations are summarized in Table \[parameter\]. The same initial conditions are applied to all nodes. Parameters $\omega$ and $\kappa$ are calibrated based on the values used in [@bauch_2005_imi].\
[llll]{} Parameter & Interpretation & Baseline value & References\
$1/\gamma$ & Average length of recovery period (days) & 10 & [@measle_R0]\
$R_0$ & Basic reproduction number & 15 & [@measle_R0]\
$\mu$ & Mean birth and death rate ($days^{-1}$) & 0.000055 & [@bauch_2005_imi]\
$\zeta$ & Vaccination failure rate & \[0,1\] & Assumed\
$\kappa$ & Imitation rate & 0.001 & [@bauch_2005_imi]\
$\omega$ & Responsiveness to changes in disease prevalence & \[1000,3500\] & [@bauch_2005_imi]\
$\phi_{ij}$& Fraction of residents from node $i$ travelling to $j$ & \[0,1\] & Network connectivity\
$I$ & Initial fraction of infected individuals & 0.001 & [@bauch_2005_imi]\
$S$ & Initial fraction of susceptible individuals & 0.05 & [@bauch_2005_imi]\
$x$ & Initial vaccination coverage & 0.95 & [@bauch_2005_imi]\
Our simulations were carried out on two networks:
- a pilot case of a network with 3 nodes (suburbs), and
- an Erdös-Rényi random network [@random_network] with 3000 nodes (suburbs).
Each of these networks was used in conjunction with the following three models of vaccination behaviours:
- vaccinating newborns only: using model (\[system2.1\])
- vaccinating the susceptible class: using model (\[system2.2\]),
- vaccinating the susceptible class with committed vaccine recipients: using model (\[system\_reform\]).
In the pilot case of the 3-node network, two network topologies are studied: an isolated 3-node network in which all residents remain in their residential nodes without travelling to other nodes (equivalent to the models proposed in [@bauch_2005_imi]), and a fully connected network in which the residents of each node commute to the other two nodes, with symmetric and uniformly distributed commuting population fractions (Figure \[3-nodeSche\]).
We then consider a suburb network modelled as an Erdös-Rényi random graph (with $M=3000$ nodes) with average degree $\langle k \rangle =4$, in order to study how an expansive travelling pattern affects the global epidemic dynamics. Other topologies, such as lattice, scale-free and small-world networks, can easily be substituted here, though in this study our focus is on Erdös-Rényi random graphs.
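Such a suburb network can be generated, for instance, with networkx; the random seed and the conversion to an adjacency matrix below are illustrative choices only.

```python
import networkx as nx

M, k_mean = 3000, 4
# G(M, p) random graph with expected average degree <k> = p (M - 1)
G = nx.erdos_renyi_graph(n=M, p=k_mean / (M - 1), seed=42)
adj = nx.to_numpy_array(G)             # 0/1 adjacency matrix, input to the flux matrix sketch above
print(2 * G.number_of_edges() / M)     # realized average degree, close to 4
```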
Results
=======
3-node network
--------------
### Vaccinating newborns only
When there is no population mobility, we observe three distinctive equilibria (Figure \[3-node\]; dotted lines): a pure non-vaccinating equilibrium (where $x^{f}=0$, representing the final condition, and $\omega=1000$), a mixed equilibrium (where $x^{f} \neq 0$, $\omega=2500$), and stable limit cycles (where $x^{f} \neq 0$, $\omega=3500$). This observation is in qualitative agreement with the vaccinating dynamics reported by [@bauch_2005_imi].
When commuting is allowed, individuals travel to different nodes, and their vaccination decision no longer relies on a single source of information (i.e., the disease prevalence in their residential node) but also depends on the disease prevalence at their destinations. As a consequence, the three distinctive equilibria are affected in different ways (Figure \[3-node\]; solid lines). The amplitude of the stable limit cycles at $\omega = 3500$ is reduced as a result of the reduced disease prevalence. It takes comparatively longer (compare the dotted and solid lines in Figure \[3-node\]) to converge to the pure non-vaccinating equilibrium at $\omega=1000$, and to the endemic, mixed equilibrium at $\omega=2500$, with a high amplitude of oscillation at the start of the epidemic spread. Since $\omega$ can be interpreted as the responsiveness of vaccinating behaviour to the disease prevalence [@bauch_2005_imi], if individuals are sufficiently responsive (i.e., $\omega$ is high), population mobility (and the ensuing imitation) suppresses the overall epidemic more strongly, as evidenced by smaller prevalence peaks. Conversely, if individuals are insensitive to the prevalence change (i.e., $\omega$ is low), the epidemic dynamics under equal population mobility may appear more volatile at the start, but the converged levels of both the prevalence peak and the vaccine uptake remain unchanged compared to the case without population mobility.
We further investigated how the vaccination failure rate $\zeta$ affects the global epidemic dynamics. If, for example, only half of the vaccines administered are effective ($\zeta=0.5$), as shown in Figure \[3node\_zeta\] (b-d), the infection peaks arrive sooner for all values of $\omega$. Because of the earlier infection peaks, individuals respond to the breakout and choose to vaccinate sooner, causing the vaccine uptake to rise. This seemingly counter-intuitive behaviour has also been reported in [@Alessio2]. When $\zeta=0$, the prevalence peaks take longer to develop, and the extended period gives individuals the illusion that there may not be an epidemic breakout, consequently encouraging ‘free-riding’ behaviour. When half of the vaccinations fail, the epidemic breaks out significantly earlier and with lower peaks, which helps encourage responsive individuals to vaccinate. Although some (in this case half) of the vaccines fail, a sufficiently high vaccine uptake still curbs the prevalence peaks and shortens the breakout period. In terms of the final vaccination uptake, the vaccine failure rate predominantly affects the behaviour of responsive individuals (high $\omega$, such as $3500$): instead of the stable limit cycles observed when $\zeta=0$, an endemic, mixed equilibrium is reached when the vaccination failure rate is significant. The final vaccination uptake is barely affected by the vaccination failure rate when individuals are insensitive to changes in disease prevalence (lower $\omega$, such as $2500$ or $1000$). These observations are illustrated in Figure \[3node\_zeta\].
### Vaccinating the entire susceptible class
If (voluntary) vaccination is offered to the susceptible class regardless of age, the initial condition of $x=0.95$ represents a scenario in which the vast majority of the population is immunized to begin with, and the epidemic does not break out until the false sense of security provided by the temporary ‘herd immunity’ settles in. This feature is observed in Figure \[3node\_sus\_zeta\] (a)-(b): the vaccination coverage continues to drop at the start of the epidemic breakout, indicating that individuals, regardless of their responsiveness towards the prevalence change, exploit the temporary herd immunity until an infection peak emerges. Since vaccination is available to the entire susceptible class, a small increase in $\textit{x}$ can help suppress disease prevalence. However, when vaccination is partially effective (i.e., $\zeta=0.5$), the peaks in vaccine uptake, $\textit{x}$, are no longer an accurate reflection of the actual vaccination coverage, and such peaks may therefore not be sufficient to adequately suppress the infection peaks. It can also be observed that vaccinating the susceptible class encourages non-vaccinating behaviours due to the perception of herd immunity, and therefore pushes the epidemic towards an endemic equilibrium, particularly when the sensitivity to prevalence is relatively high (mid and high $\omega$), as responsive individuals react promptly to the level of disease prevalence and alter their vaccination decisions. However, no substantial impact is observed on individuals who are insensitive to changes in disease prevalence (low $\omega$).
Erdös-Rényi random network of 3000 nodes
----------------------------------------
We now present the simulation results on a much larger Erdös-Rényi random network of $M=3000$ nodes, which more realistically reflects the size of a modern city and its commuting patterns. Previous studies [@immunization] observed that such a larger system requires a higher vaccination coverage to achieve herd immunity, and thus curbs ‘free-riding’ behaviours more effectively.
### Vaccinating newborns only
In this case, three distinct equilibria are observed (Figure \[newborn\_zeta0\] (a)) for three values of $\omega$: a pure non-vaccinating equilibrium at $\omega=1000$ and two endemic mixed equilibria at $\omega=2500$ and $\omega=3500$, respectively, replacing the stable limit cycles at high $\omega$ previously observed in the 3-node case. As expected, when individuals are more responsive to disease prevalence (higher $\omega$), they are more likely to get vaccinated. The expansive travelling pattern also somewhat elevates the global vaccination coverage level (compared to the 3-node case), particularly when individuals show a moderate level of responsiveness to prevalence ($\omega = 2500$), and shortens the convergence time to the equilibrium. However, for those who are insensitive to changes in disease prevalence ($\omega = 1000$), the more expansive commuting in the larger network affects neither the level of voluntary vaccination nor the convergence time of the global dynamics: these individuals remain unvaccinated, as in the smaller network. We also find that the vaccination coverage is very sensitive to changes in disease prevalence in the larger network, since the disease prevalence peaks are notably lower than their 3-node counterparts. If half of the vaccines administered are unsuccessful ($\zeta=0.5$), the impact of the early peaks on the global dynamics is magnified in a larger network (Figure \[newborn\_zeta0\] (b-d)), leading to a shorter convergence time, although the final equilibria are hardly affected compared to the 3-node case shown in Figure \[3node\_zeta\].
We also compared the epidemic dynamics in terms of the adjusted imitation rate, represented by the parameter $\kappa$, as shown in Figure \[kappa\_XI\]. Recall that the value of the imitation rate used in our simulations, unless otherwise stated, is $\kappa = 0.001$, as reported in Table 1. Here we present results comparing this value with a much smaller imitation rate, $\kappa = 0.00025$. In populations where individuals imitate more quickly (i.e., higher $\kappa$), the oscillations in the prevalence and vaccination dynamics arrive sooner and with larger amplitudes, although the converged vaccine uptake and disease prevalence are similar for higher and lower $\kappa$. For a higher $\kappa$, the convergence to equilibria is quicker. These observations are consistent with results reported by earlier studies [@bauch_2005_imi]. A smaller value of $\kappa$ ($\kappa=0.00025$) also alters the behaviour of individuals who are insensitive to prevalence changes. Instead of converging to the pure non-vaccination equilibrium, the dynamics converge to a mixed endemic state, meaning that when the imitation rate is very low, some individuals still choose to vaccinate even when they are not very sensitive to disease prevalence.
For such a large network, the vaccinating behaviour of individuals also depends on the weighted degree (number and weight of connections) of the node (suburb) in which they live. This is illustrated in Figure \[Vac\_Nei\]. For individuals living in highly connected nodes, there are many commuting destinations, allowing access to a broad spectrum of information on the local disease prevalence. It is therefore not surprising that individuals living in ‘hubs’ are more likely to get vaccinated, particularly when the population is sensitive to prevalence ($\omega$ is high), as shown in Figure \[Vac\_Nei\] (a). This observation is in accordance with previous studies [@sto_network_2010]. Note that in Figure \[Vac\_Nei\] (a), nodes with degree $k \geq 10$ are grouped into one bin to represent hubs, as the frequency count of these nodes is extremely low. Note also that there is a positive correlation between the node degree and the volume of population influx, measured as the sum of proportions from the source nodes (Figure \[Vac\_Nei\] (b)), indicating that a highly connected suburb has a greater population influx on a daily basis.
### Vaccinating the entire susceptible class
If (voluntary) vaccination is offered to the susceptible class regardless of age, we found that the global epidemic dynamics converge more quickly than in the corresponding 3-node scenario. Only one predominant infection peak is observed, corresponding to the high infection prevalence around year 25, as shown in Figure \[3000\_sus\_XI\]. Oscillations of small magnitude are observed in both the vaccine uptake and the disease prevalence at later times for middle or high values of $\omega$ (insets of Figure \[3000\_sus\_XI\]). These findings largely hold if half of the vaccines administered are ineffective (i.e., $\zeta=0.5$), although the predominant prevalence peak and the corresponding vaccination peak around year 25 are both higher than their counterparts observed for $\zeta=0$.
We also found that employing a small fraction of committed vaccine recipients prevents a major epidemic by curbing disease prevalence. This finding holds for all $\omega$ (i.e., regardless of the population’s responsiveness towards disease prevalence), as the magnitude of the prevalence is too small to make a non-vaccinated individual switch to the vaccination strategy (Figure \[committed0p05\_XI\]).
Discussion and conclusion {#conclusion}
=========================
We presented a series of SIR-network models with imitation dynamics, aiming to model scenarios where individuals commute between their residence and work across a network (e.g., where each node represents a suburb). These models are able to capture diverse travelling patterns (i.e., reflecting local connectivity of suburbs), and different vaccinating behaviours affecting the global vaccination uptake and epidemic dynamics. We also analytically derived expressions for the reproductive number $R_0$ for the considered SIR-network models, and demonstrated how epidemics may evolve over time in these models.
We showed that stable oscillations in the vaccinating dynamics are only likely to occur either when there is no population mobility across nodes, or when commuting destinations are limited. We observed that, compared to the case where vaccination is provided only to newborns, offering vaccination to the entire susceptible class results in higher disease prevalence and more volatile oscillations in vaccination uptake (particularly in populations that are relatively responsive to changes in disease prevalence). A more expansive travelling pattern, simulated in a larger network, encourages the attractor dynamics and the convergence to the endemic, mixed equilibria, again provided that individuals are sufficiently responsive towards changes in disease prevalence. If individuals are insensitive to the prevalence, they are hardly affected by the different vaccinating models and remain unvaccinated, although the existence of committed vaccine recipients noticeably delays the convergence to the non-vaccinating equilibrium. The presented models highlight the important role of committed vaccine recipients in actively reducing $R_0$ and disease prevalence, strongly contributing to eradicating an epidemic. Similar conclusions have been reached previously [@Bai2015], and our results extend these to imitation dynamics in SIR-network models.
Previous studies drew an important conclusion that highly connected hubs play a key role in containing infections as they are more likely to get vaccinated due to the higher risk of infection in social networks [@immunization; @sto_network_2010]. Our results complement this finding by showing that a higher fraction of individuals who reside in highly connected suburbs choose to vaccinate compared to those living in relatively less connected suburbs. These hubs, often recognized as business districts, also have significantly higher population influx as the destination for many commuters from other suburbs. Therefore, it is important for policy makers to leverage these job hubs in promoting vaccination campaigns and public health programs.
Overall, our results demonstrate that, in order to encourage vaccination behaviour and shorten the course of epidemic, policy makers of vaccination campaigns need to carefully orchestrate the following three factors: ensuring a number of committed vaccine recipients in each suburb, utilizing the vaccinating tendency of well-connected suburbs, and increasing individual awareness towards the prevalence change.
There are several avenues to extend this work further. This work assumes that individuals from different nodes (suburbs) differ only in their travelling patterns, using the same epidemic and behaviour parameters for individuals from all nodes. In particular, the same $\omega$ and $\kappa$ are used for all nodes, assuming that individuals living in all nodes are equally responsive towards disease prevalence and imitation. Realistically, greater heterogeneity could be implemented by establishing context-specific $R_0$ and imitation parameters for factors such as the local population density, community size [@measle_R0], and the suburbs’ level of connectivity. For example, residents living in highly connected suburbs may be more alert to changes in disease prevalence and adopt imitation behaviours more quickly. Different network topologies can also be used, particularly scale-free networks [@scale_free], in which a small number of nodes have a large number of links each. These highly connected nodes are a better representation of suburbs with extremely high population influx (e.g., central business districts and job hubs). It may also be instructive to translate the risk perception of vaccination and infection into tangible measures that demonstrate the aggregate social cost of an epidemic breakout, helping policy makers to visualize the cost effectiveness of different vaccinating strategies and to estimate the financial burden on public health care. This study could also be extended by calibrating the network to scale-free networks [@Alessio1] while incorporating information theory [@shannon]. Networks built from real-life connectivity and demographics are also of strong interest to mimic an outbreak in a targeted region [@AceMod; @Zachresoneaau5294], so that a model of a vaccination campaign can demonstrate a reduction in the severity of the epidemic. These considerations could advance this study to more accurately reflect contagion dynamics in urban environments, and provide further insights for public health planning.
Appendix
========
Next Generation Operator Approach {#app1}
---------------------------------
The proposed model (\[system2.1\]) can be seen as a finely categorized SIR deterministic model, so that each health compartment (Susceptible, Infected and Recovered) has $M$ sub-classes. Let us denote $\mathbf{I}=\{I_1, I_2,...,I_M\}$ to represent $M$ infected host compartments, and $\mathbf{y}$ to represent $2M$ other host compartments consisting of susceptible compartments $\mathbf{y_s}=\{y_{s1}, y_{s2},...,y_{sM}\}$ and recovered compartments $\mathbf{y_r}=\{y_{r1}, y_{r2},...,y_{rM}\}$. $$\begin{aligned}
\frac{dI_i}{dt} & =\mathcal{F}_i(I,y_s)-\mathcal{V}_i(I,y_r) \\
\frac{dy_{sj}}{dt} & =\mathcal{G}_{sj}(I,y_s) \\
\frac{dy_{rj}}{dt} & =\mathcal{G}_{rj}(I,y_r) \\
\end{aligned}$$ where $i, j \in \{1, \dots, M\}$.
$\mathcal{F}_i$ is the rate at which new infections enter the infected compartments, and $\mathcal{V}_i$ is the net rate of transfer of individuals out of (or into) the infected compartments.
Close to the disease-free equilibrium, where $S=S^*=1$, the model can be linearized to: $$\frac{dI}{dt}=(\mathbf{F}-\mathbf{V})I$$ where $\mathbf{F}_{ij}=\frac{\partial \mathcal{F}_i}{\partial I_j} (0,S^*)$ and $\mathbf{V}_{ij}=\frac{\partial \mathcal{V}_i}{\partial I_j} (0,S^*)$.\
The next generation matrix, $\mathbf{K}$, is then given by: $$\mathbf{K}=\mathbf{FV}^{-1}$$ where each entry $K_{ij}$ represents the expected number of secondary cases which an infected individual imposes on the rest of the compartments. $\mathbf{F}$ and $\mathbf{V}$ are given as follows: $$\mathbf{F}=
\begin{bmatrix}\frac{\partial \mathcal{F}_1}{\partial I_1} & \frac{\partial \mathcal{F}_1}{\partial I_2} & \cdots & \frac{\partial \mathcal{F}_1}{\partial I_M} \\
\frac{\partial \mathcal{F}_2}{\partial I_1} & \frac{\partial \mathcal{F}_2}{\partial I_2} & \cdots & \frac{\partial \mathcal{F}_2}{\partial I_M} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \mathcal{F}_M}{\partial I_1} & \frac{\partial \mathcal{F}_M}{\partial I_2} & \cdots & \frac{\partial \mathcal{F}_M}{\partial I_M} \\
\end{bmatrix}$$
$$\mathbf{V}=
\begin{bmatrix}\frac{\partial \mathcal{V}_1}{\partial I_1} & \frac{\partial \mathcal{V}_1}{\partial I_2} & \cdots & \frac{\partial \mathcal{V}_1}{\partial I_M} \\
\frac{\partial \mathcal{V}_2}{\partial I_1} & \frac{\partial \mathcal{V}_2}{\partial I_2} & \cdots & \frac{\partial \mathcal{V}_2}{\partial I_M} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial \mathcal{V}_M}{\partial I_1} & \frac{\partial \mathcal{V}_M}{\partial I_2} & \cdots & \frac{\partial \mathcal{V}_M}{\partial I_M} \\
\end{bmatrix}$$
The basic reproduction number $R_0$ is given by the dominant eigenvalue (the spectral radius $\rho$) of $\mathbf{K}$ [@NGM1990; @NGM2010; @R0_beta], and therefore: $$R_0=\rho(\mathbf{K})$$
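A minimal numerical counterpart of this construction, assuming $\mathbf{F}$ and $\mathbf{V}$ have already been assembled as dense arrays, is sketched below.

```python
import numpy as np

def basic_reproduction_number(F, V):
    """R0 as the spectral radius of the next generation matrix K = F V^{-1}."""
    K = F @ np.linalg.inv(V)
    return np.max(np.abs(np.linalg.eigvals(K)))
```

For the models above, $\mathbf{V}=(\gamma+\mu)\mathbb{I}$, so this reduces to $\rho(\mathbf{F})/(\gamma+\mu)$.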
Declaration of interest {#declaration-of-interest .unnumbered}
-----------------------
The authors declare that they have no competing interests.
. ; World Health Organization, 2006; pp. 125–127.
Jansen, V.A.A.; Stollenwerk, N.; Jensen, H.; Ramsay, M.; Edmunds, W.; Rhodes, C. Measles outbreaks in a population with declining vaccine uptake. , [*301*]{}, 804.
Bauch, C.T. Imitation dynamics predict vaccinating behaviour. , [ *272*]{}, 1669–1675.
Wang, Z.; Bauch, C.; Bhattacharyya, S.; d’Onofrio, A.; Manfredi, P.; Perc, M.; Perra, N.; Salathè, M.; Zhao, D. Statistical physics of vaccination. , [*664*]{}, 1–113.
Bauch, C.T.; Bhattacharyya, S. Evolutionary game theory and social learning can determine how vaccine scares unfold. , [*8*]{}.
Fu, F.; Rosenbloom, D.; Wang, L.; Nowak, M. Imitation dynamics of vaccination behaviour on social networks. , [ *278*]{}, 42–49.
Liu, X.; Wu, Z.; Zhang, L. Impact of committed individuals on vaccination behaviour. , [*86*]{}.
Zhang, H.; Fu, F.; Zhang, W.; Wang, B. Rational behaviour is a ‘double-edged sword’ when considering voluntary vaccination. , [*391*]{}, 4807–4815.
Zhang, Y. The impact of other-regarding tendencies on the spatial vaccination network. , [*2013*]{}, 209–215.
Li, Q.; Li, M.; Lv, L.; Guo, C.; Lu, K. A new prediction model of infectious diseases with vaccination strategies based on evolutionary game theory. , [*104*]{}, 51–60.
Feng, X.; Bin, W.; Long, W. Voluntary vaccination dilemma with evolving psychological perceptions. , [*439*]{}, 65–75.
Bauch, C.; Earn, D.J.D. Vaccination and the theory of games. , [*101*]{}.
Bauch, C.; Earn, D.J.D. Transients and attractors in epidemics. , [ *270*]{}, 1573–1578.
Bauch, C.; Galvani, A.P.; Earn, D. Group interest versus self-interest in smallpox vaccination policy. , [*100*]{}.
Perisic, A.; Bauch, C. A simulation analysis to characterize the dynamics of vaccinating behaviour on contact networks. , [*9*]{}, 77.
Wang, Z.; Andrews, M.; Wu, Z.; Wang, L.; Bauch, C. Coupled disease–behavior dynamics on complex networks: A review. , [*15*]{}.
Arino, J.; van den Driessche, P. The Basic Reproduction Number in a Multi-city Compartmental Epidemic Model. LNCIS; Springer, Berlin, Heidelberg, 2003, Vol. 294, pp. 135–142.
Arino, J.; van den Driessche, P. A multi-city epidemic model. , [*10*]{}, 175–193.
Stolerman, L.; Coombs, D.; Boatto, S. SIR-network model and its application to dengue fever. , [ *75*]{}, 2581–2609.
Diekmann, O.; Heesterbeek, J.A.P.; Metz, J.A.J. On the definition and the computation of the basic reproduction ratio [$R_0$]{} in models for infectious diseases in heterogeneous populations. , [*28*]{}, 365–382.
Anderson, R.M.; May, R.M. ; Oxford University Press: Oxford, 1991.
Harding, N.; Nigmatullin, R.; Prokopenko, M. Thermodynamic efficiency of contagions: a statistical mechanical analysis of the SIS epidemic model. , [*8*]{}, 20180036.
Hartfield, M.; Alizon, S. Introducing the Outbreak Threshold in Epidemiology. , [*9(6)*]{}, e1003277.
Pastor-Satorras, R.; Castellano, C.; Mieghem, P.; Vespignani, A. Epidemic processes in complex networks. , [*87*]{}.
van den Driessche, P.; Watmough, J. 6. Further notes on the basic reproduction number. In [ *Mathematical Epidemiology*]{}; Wu, J., Ed.; Springer, 2008.
Erten, E.; Lizier, J.; Piraveenan, M.; Prokopenko, M. Criticality and information dynamics in epidemiological models. , [*19*]{}, 194.
Brauer, F. 2. Compartmental Models in Epidemiology. In [*Mathematical Epidemiology*]{}; Wu, J., Ed.; Springer, 2008.
. National Immunisation Program Schedule. Online, 2018.
van den Driessche, P.; Watmough, J. Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. , [*180*]{}, 29–48.
Diekmann, O.; Heesterbeek, J.; Roberts, M.G. The construction of next-generation matrices for compartmental epidemic models. , [ *7*]{}, 873–885.
Seneta, E. ; Springer: New York, 1981; pp. 61–67.
Bai, Z. Global dynamics of a [SEIR]{} model with information dependent vaccination and periodically varying transmission rate. , [ *38*]{}, 2403–2410.
Buonomo, B.; d’Onofrio, A.; Lacitignola, D. Modelling of pseudo-rational exemption to vaccination for [SEIR]{} diseases. , [*404*]{}, 385–398.
Sheppeard, V.; Forssman, B.; Ferson, M.; Moreira, C.; Campbell-Lloyd, S.; Dwyer, D.; [McAnulty]{}, J. Vaccine failures and vaccine effectiveness in children during measles outbreaks in [New South Wales, March–May]{} 2006. , [ *33*]{}.
Guerra, F.; Bolotin, S.; Lim, G.; Heffernan, J.; Deeks, S.; Li, Y.; Crowcroft, N. The basic reproduction number ([$R_0$]{}) of measles: a systematic review. , [*17*]{}, e420–e428.
Erdös, P.; Rényi, A. On random graphs. , [ *6*]{}, 290–297.
Steinegger, B.; Cardillo, A.; Rios, P.; Gómez-Gardeñes, J.; Arenas, A. Interplay between cost and benefits triggers nontrivial vaccination uptake. , [*97*]{}, 032308.
Pastor-Satorras, R.; Vespignani, A. Immunization of complex networks. , [*65*]{}, 036104.
Barabási, A., Network Science; Cambridge University Press, 2016; chapter 4. Scale-free properties, pp. 3–9.
Cardillo, A.; Reyes-Suárez, C.; Naranjo, F.; Gómez-Gardeñes, J. Evolutionary vaccination dilemma in complex networks. , [*88*]{}, 032803.
Piraveenan, M.; Prokopenko, M.; Zomaya, A. Assortativeness and information in scale-free networks. , [*67*]{}, 291–300.
Cliff, O.M.; Harding, N.; Piraveenan, M.; Erten, Y.; Gambhir, M.; Prokopenko, M. Investigating spatiotemporal dynamics and synchrony of influenza epidemics in [A]{}ustralia: An agent-based modelling approach. , [ *87*]{}, 412–431.
Zachreson, C.; Fair, K.M.; Cliff, O.M.; Harding, N.; Piraveenan, M.; Prokopenko, M. Urbanization affects peak timing, prevalence, and bimodality of influenza pandemics in [Australia]{}: Results of a census-calibrated model. , [*4*]{}.
[^1]: Corresponding author
---
author:
- 'Kamen O. Todorov'
- 'Jean-Michel Désert'
- 'Catherine M. Huitson'
- 'Jacob L. Bean'
- Vatsal Panwar
- Filipe de Matos
- 'Kevin B. Stevenson'
- 'Jonathan J. Fortney'
- Marcel Bergmann
bibliography:
- 'hat-1\_pre\_v1.bib'
date: 'Received ; accepted 17 Sep 2019'
title: 'Ground-based optical transmission spectrum of the hot Jupiter HAT-P-1'
---
Introduction
============
The atmospheres of transiting hot exoplanets have been probed through time-series transit spectroscopy, for instance, using data from the Hubble Space Telescope [HST, e.g., @cha02; @vid03; @dem13; @sin16; @arc18]. From the ground, transit studies have used multi-object spectrographs [MOS; e.g., @bea10; @bea11; @gib13b; @ste14; @nik16; @hui17; @rac17; @bix19; @esp19] and long-slit spectrographs in low-resolution mode [e.g., @sin12; @mur14; @nor16; @mur19]. Studies using ground-based high-resolution spectrographs [e.g., @red08; @sne08; @nor18] have also yielded information on the planetary atmospheres.
We here focus on the MOS technique, which relies on time-series spectrophotometry of the target star-planet system during transit. The objective, as with all time-series transit spectroscopy, is to measure the depth of a transit as a function of wavelength to extract a transit spectrum, which carries information about the contents of the planetary atmosphere. A reference star is typically observed in another MOS slit to help mitigate any spectrophotometric systematic effects. Long-slit spectrographs have also been used.
This work, along with the study by @hui17, is part of a transit spectroscopy survey using the Gemini/GMOS instrument that observed nine transiting exoplanets during a total of about 40 transits with high spectrophotometric precision in visible wavelengths. The results from @hui17 have been used by @bou19 along with other observations, including TESS transits, to detect transit timing variations of the hot Jupiter WASP-4b. The goal of the GMOS survey is to robustly estimate systematic effects and extract high-fidelity transit spectra for the target planets, ultimately constraining Rayleigh scattering and cloud deck prevalence in irradiated atmospheres. In this study, we focus on GMOS time-series transit spectroscopy observations of a well-studied hot Jupiter.
HAT-P-1b is one of the first hot Jupiters discovered with the transit technique [@bak07 Table \[tab:prop\]]. The host star, HAT-P-1, is reasonably bright, old [3.6Gyr, @bak07], and inactive [e.g., @knu10; @nik14]. The equilibrium temperature of the planet’s day side is approximately 1300K [assuming full irradiation redistribution to the night side; @nik14]. The Rossiter-McLaughlin effect for the system was observed by @joh08, who showed that the planet’s orbital axis is well aligned with the stellar spin axis, indicating a post-formation orbital evolution that preserves alignment. Another aspect of the system is the presence of a nearly equal-mass stellar companion at an angular distance of 11.26$\arcsec$ (V=9.75, F8V).
----------------------------- ------------------------ ----------------
M$_\star$ (M$_{\odot}$) $1.15\pm0.05$ @nik14
$\rm V_\star$ (mag) $9.87\pm0.01$ SIMBAD; @zac13
T$_{eff,\star}$ (K) $5980\pm50$ @nik14
SpT$_\star$ G0V @bak07
L$_\star$ (L$_{\odot}$) $1.59\pm0.10$ @nik14
$\rm \log(g_\star$) $4.36\pm0.01$ @nik14
$\rm [Fe/H]_\star$            $0.13\pm0.01$          @nik14
Distance (pc) $160\pm1$ [*GAIA*]{} DR2
P (day)                       $4.46529976(55)$       @nik14
T$\rm _C$ (BJD$\rm _{TDB}$)   $2453979.93202(24)$    @nik14
a (au) $0.0556\pm0.0008$ @nik14
M$_p$ (M$\rm _J$) $0.525\pm0.02$ @nik14
R$_p$ (R$\rm _J$) $1.319\pm0.02$ @nik14
T$_{eq,p}$ (K) $1320\pm20$ @nik14
inclination ($^\circ$) $85.63\pm0.06$ @nik14
----------------------------- ------------------------ ----------------
\[tab:prop\]
The atmosphere of HAT-P-1b has been studied extensively in the past. @tod10 observed the planet’s secondary eclipses using broadband photometry at 3.6, 4.5, 5.8, and 8.0$\mu$m with the IRAC instrument on [*Spitzer*]{} and concluded that the mid-infrared spectral energy distribution of the planet’s day side is consistent with a black body. The secondary eclipse of the planet was also detected in the K$\rm _s$ band using the Long-slit Intermediate Resolution Infrared Spectrograph (LIRIS) on the William Herschel Telescope (WHT) [@moo11]. In the following years, investigators focused on transit spectroscopy studies using the [*Hubble Space Telescope*]{} (HST) with the Wide Field Camera 3 (WFC3) and the Space Telescope Imaging Spectrograph (STIS) instruments. @wak13 used the G141 grism on the WFC3 to create a low-resolution ($\rm R \sim 70$) transit spectrum in the wavelength range between 1.09 and 1.68$\mu$m. These authors detected the 1.4$\mu$m water absorption band at more than 5$\sigma$. @nik14 used HST/STIS to extract a low-resolution transit spectrum between 0.29 and 1.03$\mu$m. These observations yielded a detection of sodium at 0.589$\mu$m at 3.3$\sigma$. @sin16 summarized the HST observations and placed them in the context of Spitzer transit photometry observations at 3.6 and 4.5$\mu$m, and of other HST transit spectroscopy studies of hot Jupiters. @mon15 presented a very low-resolution transit spectrum in the visible range obtained with the Device Optimized for the Low Resolution (DOLORES) on the 3.6m ground-based Telescopio Nazionale Galileo (TNG). Potassium has been detected in the atmosphere of HAT-P-1b in transit using the Optical System for Imaging and low Resolution Integrated Spectroscopy (OSIRIS) on the Gran Telescopio Canarias (GTC) in tunable filter imaging mode near 766.5nm [@wil15].
In this work, we measure the optical transit spectra we have obtained between 0.3 and 0.9$\mu$m, using the GMOS-N instrument on the Gemini North telescope, and discuss them in the context of previous HST/STIS and TNG/DOLORES results in the visible range. In Section \[sec:obs\] we present our observational strategy and discuss the specifics of our data. Section \[sec:red\] details our data reduction approach. Section \[sec:fitting\] focuses on our treatment of the time series and the transit spectrum extraction. Section \[sec:mod\] discusses our data modeling and the implications of our results.
Observations {#sec:obs}
============
GMOS transit spectroscopy
-------------------------
We observed two primary transits of HAT-P-1b using the Gemini Multi-Object Spectrograph (GMOS-N) instrument on the Gemini North telescope (Maunakea, Hawaii) on 2012 November 12 and on 2015 November 02 (Table \[tab:data\]). Our observing strategy is similar to that of @hui17 and to those of other earlier studies using this instrument [e.g., @gib13a; @gib13b; @ste14; @ves17].
[lllllll]{} Observation ID & Date & Band (nm) & Grating & No. of exposures & Duration & Airmass range\
GN-2012B-Q-67 & 2012-Nov-12 & $520-950$ & R150 + G5306 & 1018 & 6h 13m & $1.06-1.98$\
GN-2015B-LP-3 & 2015-Nov-02 & $320-800$ & B600 + G5307 & 378 & 5h 44m & $1.06-2.74$\
\[tab:data\]
The goal of the observations was to perform accurate spectrophotometry of the target star (HAT-P-1, or BD+37 4734B). Therefore, we configured GMOS in its multi-object spectrograph mode (MOS) to observe the target and a suitable reference star (BD+37 4734A, the binary companion to the target, 11.26$\arcsec$ separation). The target and reference stars have comparable spectral types and magnitudes [G0V with $\rm V=9.87$, and F8V with $\rm V=9.75$, respectively; @hog00; @zac13].
Both stars were observed through the same 30$\arcsec$ long slit because they are close enough that a single slit provides sufficient sky coverage to assess the background. To mitigate any time-variable slit losses that could complicate our time-series analysis, we chose to use a wide slit (10$\arcsec$). We ensured that the wavelength coverage was consistent between stars by selecting the position angle of the MOS slit mask to be equal to the PA between the two stars ($74.3^{\circ}$ E of N).
The transit observation in 2012 was performed with the R150 + G5306 grating combination (from now on, “the R150 transit”). The OG515\_G0306 filter was used to block light at wavelengths shorter than $\sim$520nm, as well as to block out contamination from other orders. This yielded a wavelength coverage of 520 to 950nm. The 2015 transit observation used the B600 + G5307 grating (the “B600 transit”), with a wavelength coverage of 320 – 800nm. No blocking filter was used.
The ideal resolving power of the R150 observation is $\rm R=631$ at the blaze wavelength (717nm), and for the B600 observation, it is $\rm R=1688$ at a blaze wavelength of 461nm. These values assume a 0.5$\arcsec$ wide slit [@hoo04 GMOS Online Manual], but our observations use a 10$\arcsec$ wide slit and are seeing limited. The wavelength resolution we obtain is $2-6\times$ lower than the ideal values.
Our data were obtained while GMOS was still operating with its e2v DD detectors, which were decommissioned from GMOS-N in February 2017. We reduced the read-out noise by windowing a region of interest (ROI) on the detector, rather than reading out the whole detector; a single ROI was used to cover both spectral traces. In addition, we binned the detector output ($1\times2$) in the cross-dispersion direction. The detector was read out with six amplifiers, with gains of approximately 2 e$^{-}$/ADU in each amplifier (although the R150 spectral traces only cover four amplifiers). Exposure times were chosen to keep count levels between $\sim$10,000 and 40,000 peak ADU, well within the linear regime of the CCDs ($<50\%$ of the full well of the detector). The total duration of the transit observations was 6h 13m (R150) and 5h 44m (B600). The transit duration of HAT-P-1b is 2h 40m, allowing sufficient out-of-transit sampling of the light curves.
Long-term photometric monitoring of HAT-P-1 {#sec:lcodata}
-------------------------------------------
We obtained photometric observations of HAT-P-1 in the Johnson-Cousins/Bessell B-band using the Las Cumbres Observatory (LCO) global network of robotic telescopes [@bro13]. The network consists of 42 telescopes with mirror diameters of 40cm, 1m, and 2m. The telescopes are spread across Earth in latitude and longitude, providing full sky coverage. Our 674 photometric observations cover eight months between 2016 April 25 and 2016 December 12 (one measurement every 8 hours on average, weather permitting, with flexible scheduling). We used only the 40cm and 1m telescopes with the SBIG 4k$\times$4k imagers.
Data reduction {#sec:red}
==============
We based our data reduction on the custom GMOS transit spectroscopy pipeline discussed in detail in @hui17. We used this code to correct the raw images for gain and bias levels, to perform cosmic ray identification, to remove bad columns, to flat-field the data, and to correct for slit tilt (as needed), and finally, to extract the 1D spectra. We then applied corrections for additional time- and wavelength-dependent dispersion shifts between target and reference stars on the detector due to weather and airmass, for example. In this section we outline the main points of the pipeline and the additional corrections required by our data.
Extracting the time series
--------------------------
Cosmic rays were detected with 5$\sigma$ clipping using a moving boxcar of 20 frames in time, where the value of a given pixel was compared to the values of the same pixel in the images obtained immediately before and after. The value of the flagged pixel was replaced by the median value of that pixel within the boxcar. The cosmic-ray removal flagged several percent of pixels per image frame.
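A simplified version of this temporal clipping is sketched below; the use of a MAD-based robust scatter (rather than a plain standard deviation) and the window handling at the edges of the time series are our own illustrative choices, not necessarily those of the pipeline.

```python
import numpy as np

def clean_cosmic_rays(cube, width=20, nsigma=5.0):
    """Temporal sigma-clipping of a (n_frames, ny, nx) stack of spectral images.

    Each pixel is compared with its own time series inside a moving window of
    `width` frames; values deviating by more than `nsigma` robust sigmas are
    replaced by the window median for that pixel.
    """
    cleaned = cube.copy()
    n = cube.shape[0]
    for t in range(n):
        lo, hi = max(0, t - width // 2), min(n, t + width // 2)
        window = cube[lo:hi]
        med = np.median(window, axis=0)
        sig = 1.4826 * np.median(np.abs(window - med), axis=0)  # robust sigma (MAD)
        bad = np.abs(cube[t] - med) > nsigma * sig
        cleaned[t][bad] = med[bad]
    return cleaned
```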
The GMOS-N e2v detectors suffered from bad pixel columns, which we identified and excluded from our analysis. The pipeline flagged 1.9-2.6% of columns as bad by comparing the out-of-slit counts within a given column with those of neighboring columns. The majority ($\sim90\%$) of flagged columns are consistent between transits and typically occur in the transition regions between detector amplifiers. We saw no columns of shifted charge like those in the GMOS detector on the Gemini South telescope prior to July 2014, as reported by @hui17.
Flat-fielding should ideally not be necessary because our measurement is relative and we followed the counts in the same set of pixels as a function of time. However, as the pointing changed throughout the night, the gravity vector on the instrument evolved, causing flexing. As a result, the spectral trace drifted over different detector pixels during the observation. Therefore, we tested our extraction with and without flat-fielding. We find that flat-fielding does not significantly affect the scatter of the resulting white-light curves, increasing it (B600) or decreasing it (R150) by several percent. For consistency, we chose to omit flat-fielding in both observations.
The HAT-P-1 transit spectrum observations do not suffer from spectral tilt: the sky lines on the images are parallel to the pixel columns. We therefore did not apply a tilt correction.
We applied the optimal extraction algorithm [@hor86] to obtain the 1D spectra. We tested a range of aperture sizes between 10 and 60 pixels in the cross-dispersion direction. We find that the lowest scatter in the out-of-transit light curves is produced by an aperture of 20 pixels for both observations.
### Slit losses
Slit losses are caused by a combination of differential atmospheric dispersion and differential refraction [e.g., @san14]. For the airmass ranges in our study (Table \[tab:data\]), a separation of 11.26$\arcsec$, and using Equation 3 in @san14, we estimate that the apparent shift in position due to differential refraction is between 0.002$\arcsec$ and 0.009$\arcsec$ (R150), and 0.002$\arcsec$ and 0.016$\arcsec$ (B600). This is much smaller than our 10$\arcsec$ slit, and so it is unlikely that differential refraction plays a large role in our observations. Because the reference and target stars form an almost equal-mass binary, they have very similar spectral types, colors, and apparent brightness. Thus, differential atmospheric dispersion is also unlikely to play a large role for our observations. Any variations are likely to be gradual with time and would be accounted for by our instrumental effects corrections (e.g., Section \[sec:lcsys\], Equation \[eqn:ramp\]).
Another aspect is that any time jitter in the position angle (PA) of the slit could introduce differential slit losses. No such variability is recorded in the FITS file headers, but low-level pointing changes cannot be completely ruled out. To estimate the magnitude of this effect, we measured the exposure-to-exposure pointing jitter in the cross-dispersion direction and found that it is typically less than 0.04$\arcsec$ in both transit observations, compared to the 10$\arcsec$ slit. The change in PA of the slit is likely of similar or smaller magnitude. Any slit losses that do occur would contribute to the red noise we observe in our light curves, but should at least in part be accounted for by our common mode correction (Section \[sec:spec\]).
### Diluted light
The reference star we used in our study is also the stellar companion of HAT-P-1 (11.2$\arcsec$ away) that shares a similar magnitude and spectral type (planet host: G0, $\rm V=9.87$, companion: F8, V=9.75). Because the two objects have a small separation on the sky, the reference star point spread function (PSF) dilutes the transit light-curve measurement. In order to estimate the magnitude of this effect, we measured the flux of the reference star at 11.2$\arcsec$ in the cross-dispersion direction opposite from the target. This flux is a combination of sky background and scattered reference star light. We assumed that the sky background is spatially uniform within our field of view, and we estimated its value by measuring the flux value farthest from the reference star within the ROI in Figure \[fig:diluted\_psf\]. The combined background and diluted-light fluxes correspond to $\sim1\%$ of the target stellar flux. To ensure that using the PSF shape on the opposite side of the target is reasonable, we tested the symmetry of the GMOS spectral PSF for a single star using the observations of WASP-4 [@hui17]. We found that at 11.2$\arcsec$, the spectroscopic PSF is symmetric in the cross-dispersion direction within 0.2-0.4% of the overall flux level at this location.
We caution that this needs to be accounted for when the robustness of the background measurement is tested. One way to test the background estimation is to multiply the estimated background level in a cross-dispersion slice by a constant factor, $f$, and test the effect of deliberately over- or undersubtracting the background on the final results. This test is not appropriate, however, if diluted light from a nearby companion is present because multiplying the background also increases the slope of the diluting PSF that rises toward the reference star and does not take this into account in the optimal extraction stage. This increases the wavelength-dependent dilution by a factor $f$, but this increase is later not normalized by dividing by a reference star flux that is also $f$ times higher. This can cause spurious changes in the transit spectrum that would incorrectly suggest that the background subtraction is not reliable.
In our case, the reference and target stars are clearly separated enough to ensure that the spatial slope of the diluting flux is approximately linear. We estimated the dilution strength by comparing the wavelength-dependent estimate of the dilution to the flux of the target star. The contamination level in the R150 data set of the target PSF caused by the reference PSF is 0.1% (near $\sim 500$nm) and increases to $\sim1.5$% ($\sim 800$nm). In the B600 data, the dilution is $\sim1$% and is stable in time and wavelength, except near the edges of the observed wavelength range. These dilution levels reduce the transit depth by $\sim10-200$ parts per million (ppm), which corresponds to $\delta R_p/R_\star \lessapprox 0.015$, and this can be wavelength dependent.
The dilution contamination increases to several percent near the edges of the observed wavelength ranges for both observations. This is particularly important in the bluest region of B600 ($\lessapprox 400$nm), but we discarded these data from our final analysis due to additional strong systematic effects toward the end of the observation caused by high airmass. The reddest part of the R150 observation ($\gtrapprox 850$nm) is also affected by strong dilution of the target star (3-4% level). We do not use these data in Section \[sec:mod\] either.
In order to test the effect of the dilution correction on the extracted planetary transit spectrum, as an experiment, we performed our analysis with and without correcting the stellar spectra for dilution. We find that in the B600 data, the effect of dilution on the measured transit depth is small, typically much smaller than 1$\sigma$ at a given wavelength. For R150, the uncorrected time-series spectra result in a slope in the transit spectrum with shallower transits at longer wavelengths, where the dilution increases with wavelength, as a fraction of the total flux at R150, while at B600, the dilution ratio remains fairly constant with wavelength. In our final analysis, we therefore applied dilution correction to the R150 data but not to the B600 data.

### Ghost spectra
During both transit observations (B600 and R150), each spectral trace image was accompanied by a fainter trace-like signal. The amplitude of the signal is small, $\sim50$ppm, compared to the amplitude of the observed stellar spectrum. For both the reference and the target stars, it was confined to between 22 and 30 pixels from the peak of the main trace, and followed the same spatial behavior as the main trace at all wavelengths. Despite its low amplitude, this spurious signal is clearly detected due to the extremely high signal-to-noise ratio of the median-combined spectral image of the HAT-P-1 stellar system over the entire night. However, attempting to extract its spectrum is difficult because it is dominated by the wings of the PSF of the main spectral trace. Thus, we cannot explore its spectral signature in detail, but its persistent appearance in the same location in all images and above both the target and reference stars suggests that it is likely caused by internal reflections within the instrument. Because it is highly localized, we masked the strip of pixels that showed this spurious signal and did not use them in our analysis.
Effect of terrestrial wind on the time series {#sec:wind}
---------------------------------------------
The B600 white-light curve exhibits several unusual artifacts that could not be corrected or correlated with any of the methods that are frequently discussed in the literature. We explored the headers of the raw FITS files and discovered that one of these artifacts (near phase 0.987) coincides with a sudden change in the transverse wind at the observatory site: the projected wind component, perpendicular to the line of sight of the telescope, switches direction from the right to the left (Figure \[fig:winds\]). We suggest that the abrupt change in the apparent baseline slope of the B600 white-light curve near phase 0.987 could be due to structural vibrations caused by the change in wind direction with respect to the line of sight, which affect the spectral PSF width. The timing of this defect is unfortunate because it coincides with the ingress of the transit. We removed these data from our analysis.
The leading one hour of the R150 white-light curve ends with a kink near phase 0.9805. At about this time, the direction of the head-on component of the wind changes: it comes from the front instead of from the back, as before. In this data set, the transverse component of the wind always comes from the left of the observer and is approximately constant. This change in the wind does not produce a corresponding spike in the spectral PSF full width at half-maximum (FWHM), as in the B600 observation, and we cannot unambiguously attribute the kink to it.

Wavelength calibration
----------------------
After spectral extraction, we obtained the wavelength solution using CuAr lamp spectra taken on the same day as each science observation. To observe the CuAr spectra at high resolution, we used a MOS slit mask separate from the one used for science, with the same slit position and slit length as the science mask, but with a slit width of only 1$\arcsec$ (compared to 10$\arcsec$). The arc frames were obtained with the same grating and filter setup as the corresponding science observation.
We used the [*identify*]{} task in the Gemini IRAF package to identify the spectral features in the CuAr spectra. A wavelength solution was then constructed by fitting a straight line to the pixel locations of the spectral features and their known wavelengths. This solution was then refined by cross-correlating all spectra with the pixel locations of several known stellar and telluric features.
The final uncertainties in the wavelength solution are approximately 1nm for all observations, which is $\sim3\%$ and $\sim6-7$% of the bin widths used in the final transmission spectrum for the R150 and B600 transits, respectively. This level of uncertainty in the wavelength solution has been shown to be sufficient to avoid systematic effects caused by wavelength-dependent differences in the limb-darkening models we used to fit the transits in Sect. \[sec:fitting\] [@hui17], and in addition, it is also smaller than our resolution element ($\sim4$nm in R150 and $\sim$2nm in B600).
Dispersion-direction shifts of the stellar spectra
--------------------------------------------------
Because GMOS has no atmospheric dispersion compensator, we expect the wavelength solution to shift with time. @hui17 reported that throughout the night, individual spectra are shifted and stretched in the dispersion direction. The shift and stretch are both time and wavelength dependent. Left uncorrected, this effect could introduce spurious signals into the observed transit spectrum of the planet. To avoid systematic effects in the final light curves related to the wavelength solution shifting with time, we applied a correction for this effect.
A priori differential refraction calculations alone do not explain the observed shifts for our observations well. Therefore we applied a cross-correlation to four spectral segments as a function of time to measure the spectral shift and stretch empirically, similarly to @hui17.
Finally, we corrected for the differential offset in wavelength between the reference and target star spectra on the detector. This offset is caused by the fact that the PA of the detector slit is slightly different from the PA of the reference star. We used a cross-correlation to measure the offset ($1.57 - 1.63$ pixels for all observations). We interpolated the reference star’s spectrum onto the target star’s wavelength solution. Bad columns (which are the same columns on the detector but are at different wavelengths for each star) were omitted.
We tested the effect of wavelength shifts with time by running our analysis with and without the correction. We find that for the low-resolution broad bins that we adopted, the correction results in differences typically smaller than 0.1$\sigma$ in both B600 and R150. However, the shift correction has a small effect on the narrow bins we used to investigate the Na absorption region. Therefore we chose to keep this correction in our analysis.
Transit light-curve analysis {#sec:fitting}
============================
We describe our transit depth measurements for the B600 and R150 observations below. First, we describe the model we used to correct for the systematic effects in Section \[sec:lcsys\]. In Section \[sec:wlc\] we focus on the wavelength-integrated white-light curves for each of the observations. Section \[sec:spec\] focuses on the transit light curves in individual wavelength bins, while Section \[sec:res\_sodium\] discusses the time series near the 589nm Na I resonant doublet expected in the planet’s atmosphere. Section \[sec:starvar\] addresses any potential variability in the host star based on the LCO data.
Light-curve model of systematic effects {#sec:lcsys}
---------------------------------------
To describe the systematic effects (telluric absorption and instrumental effects) in our light curves, we adopted the parameterization of @ste14, who used the same instrument, GMOS, to study the transmission spectrum of the hot Jupiter WASP-12b in the red-optical. This model addresses effects caused by the Cassegrain Rotator Position Angle (CRPA), the airmass, and the time-varying PSF of the instrument. Our light-curve model also accounts for any approximately linear or quadratic trends as a function of time that are commonly seen in transit studies. We applied this model to both the R150 and B600 data and to both the white-light curves and the wavelength-dependent light curves.
We parameterized the systematic effect due to the CRPA by approximating the photometric dependence of the flux on the CRPA as a cosine function: $$S(A_{CRPA}, \theta_{CRPA}) = 1 + A_{CRPA}\cos(\theta(t) + \theta_{CRPA}).$$ Here, $A_{CRPA}$ and $\theta_{CRPA}$ are the free parameters, while $\rm \theta$(t) is the known CRPA as a function of time. Like @ste14, we considered a systematic trend, or a “ramp”, as a function of time in the light curve in the form, $$R(t) = c_{ramp} + b_{ramp}(t-t_0) + a_{ramp}(t-t_0)^2,
\label{eqn:ramp}$$ where $a_{ramp}$, $b_{ramp}$, and $c_{ramp}$ are free parameters, while $t$ is time and $t_0$ is the time of observation of the first frame in the time series. However, we find that with our data, $a_{ramp}$ is degenerate with the transit depth, and based on the resulting Bayesian information criterion (BIC) values, the data are better described by a pure linear ramp. We included an airmass correction: $$A(\alpha) = C_{\alpha}\alpha(t),$$ where $\alpha(t)$ is the known run of airmasses at the time of observation, and $C_{\alpha}$ is a free parameter.
These parameters account well for the long-timescale (hours) variations of the light curves, but do not correct short-timescale ($\lesssim10$min) effects. To address these, we experimented with several time-varying quantities: the ratio of the widths of the target and reference PSFs, the cross-dispersion position of the target PSF on the detector, and the dispersion direction shifts of the target spectrum. We find that most of these are correlated with the light curves at some level, but the data are best described by the width of the target PSF as a function of time: $$P(W) = C_{PSF\,width}W(t),$$ where $C_{PSF\,width}$ is a free parameter, and W(t) is the run of the median PSF widths throughout the observations. This correction term is likely related to the state of the atmosphere above the telescope during the observations.
Thus, the final model to fit to the light curve (normalized to the out-of-transit flux) can be represented as $$F(t) = T(t) S(A_{CRPA}, \theta_{CRPA}) R(t) A(\alpha) P(W).$$ $T(t)$ is the astrophysical transit light-curve model, normalized to 1 out-of-transit.
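As an illustration only, the sketch below shows how such a multiplicative model could be assembled in Python; the function and variable names are ours rather than those of our pipeline, and the transit model $T(t)$ is assumed to be computed separately.

``` python
import numpy as np

def systematics_model(t, crpa, airmass, psf_width,
                      A_crpa, phi_crpa, c_ramp, b_ramp, C_alpha, C_psf, t0):
    """Multiplicative systematics: CRPA cosine x linear ramp x airmass x PSF width."""
    S = 1.0 + A_crpa * np.cos(crpa + phi_crpa)   # CRPA term
    R = c_ramp + b_ramp * (t - t0)               # linear ramp (quadratic term dropped)
    A = C_alpha * airmass                        # airmass term
    P = C_psf * psf_width                        # PSF-width term
    return S * R * A * P

def full_light_curve(t, transit_flux, sys_args):
    """F(t) = T(t) x S x R x A x P, with T(t) supplied externally."""
    return transit_flux * systematics_model(t, *sys_args)
```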
We used several open-source packages to model and fit the transit light curves. $T(t)$ was calculated with the transit code [*batman*]{} [@kre15], and for the fit we used [*emcee*]{}, a pure-Python implementation of the affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler [@goo10; @for13]. The stellar limb darkening was pre-computed for each wavelength bin using PyLDTk [@par15], which uses the spectral library in @hus13, based on the atmospheric code PHOENIX. We used the nonlinear parameterization for limb-darkening coefficients [@cla00; @sin10].
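As a schematic illustration of this setup, the sketch below combines [*batman*]{} and [*emcee*]{} for a simple two-parameter fit; all numerical values, the simulated data, and the simple Gaussian likelihood are placeholders, not the values or noise model used in our analysis.

``` python
import numpy as np
import batman
import emcee

# Transit model with illustrative (not fitted) parameters
params = batman.TransitParams()
params.t0, params.per, params.rp = 0.0, 4.4653, 0.12
params.a, params.inc, params.ecc, params.w = 9.85, 85.6, 0.0, 90.0
params.limb_dark = "nonlinear"                 # 4-coefficient law
params.u = [0.5, -0.1, 0.4, -0.2]              # placeholder coefficients

t = np.linspace(-0.1, 0.1, 500)                # days from mid-transit
model = batman.TransitModel(params, t)

# Simulated data purely for demonstration
flux = model.light_curve(params) + np.random.normal(0.0, 5e-4, t.size)
err = np.full_like(t, 5e-4)

def log_prob(theta):
    """Gaussian log-likelihood with only Rp/R* and T0 free."""
    params.rp, params.t0 = theta
    resid = flux - model.light_curve(params)
    return -0.5 * np.sum((resid / err) ** 2)

nwalkers, ndim = 32, 2
p0 = np.array([0.12, 0.0]) + 1e-4 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
rp_samples = sampler.get_chain(discard=500, flat=True)[:, 0]
```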
White-light curve {#sec:wlc}
-----------------
We constructed transit white-light curves for each transit (B600 and R150) by integrating each observed target and reference spectrum over wavelength and dividing one by the other. This approach eliminates many of the light-curve artifacts caused by variations in Earth’s atmosphere. Because some transit parameters are not wavelength dependent (e.g., the ratio of semimajor axis to stellar radius, $a/R_\star$, the central time of transit, $T_0$, and the orbital period, $P$), we can use the precise white-light curves, which have a high signal-to-noise ratio, to measure them, rather than the noisier light curves at specific wavelengths.
The data cover wide wavelength ranges from the blue optical to the near-infrared. Our B600 light curve includes intensity measurements between 330 and 770nm. Similarly, we integrated the R150 spectral time series between 550 and 950nm (Figure \[fig:wlc\_fit\_both\]). We estimated the uncertainties of each photometric point in two ways: from photon noise (assuming the only source of uncertainty of a given point is due to Poisson noise of the incoming photons), and based on the local scatter of the points, after subtracting a white-light curve smoothed by convolving it with a Gaussian. The latter approach is an empirical estimate of the uncertainty of the points, and we used it to fit the white-light curve.
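The scatter-based estimate can be sketched as follows; this is our own illustrative implementation, and the smoothing width is an arbitrary placeholder.

``` python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def empirical_errors(flux, smooth_sigma=5.0):
    """Per-point uncertainty from the local scatter of the light curve
    after removing a Gaussian-smoothed version of itself (width in points)."""
    smoothed = gaussian_filter1d(flux, smooth_sigma)   # low-pass version of the curve
    residuals = flux - smoothed                        # high-frequency scatter
    return np.full_like(flux, np.std(residuals))
```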
As we discuss in Section \[sec:wind\], there is a sharp change in the slope of the ramp in the R150 data (Figure \[fig:wlc\_fit\_both\]), possibly related to a change in the head-on component of the wind. However, phase 0.9805 is also close to the phase where the target crossed the meridian (airmass of 1.057), and telescope field rotation may also have contributed. Regardless, the residuals of the first hour of observations (Figure \[fig:wlc\_fit\_both\], bottom panel) show a different slope than the rest of the observation. Because this happens well before egress, we simplified our analysis and dropped these data. The final 45min of R150 data were also excluded because they were taken at high airmass and their residuals are higher than those of the preceding white-light curve points.
We also removed the data near phase 0.987 in the B600 data set, where the light curve is strongly distorted. We removed all phases covered by the unusual spike in the PSF FWHM, which coincides with a change in the direction from which the transverse wind component impacts the dome (Figure \[fig:winds\]). We also dropped the last several minutes of this light curve, taken at high airmass where the white-light curve appears to become unstable (Figure \[fig:wlc\_fit\_both\]).
We fit the white-light curve with the model described in Section \[sec:lcsys\]. In the statistical fits, we allowed the semimajor axis ($a/R_{\star}$), the transit depth ($R_p/R_{\star}$), and the central time of transit ($T_0$), which are part of the $T(t)$ calculation, to vary freely. A large uncertainty in the orbital inclination could result in a degeneracy with the slope of the transit spectrum [@ale18]. However, the orbital parameters of HAT-P-1b are well known [e.g., @nik14]; we therefore fixed the limb-darkening coefficients and the inclination of the planetary orbit in our analysis (Table \[tab:fitting\]).
Unlike the R150 white-light curve, the B600 white-light curve is poorly described by a single linear ramp. The white-light curve appears to increase slightly until the defect near orbital phase 0.987, after which the ramp slope appears to decrease with a shallow slope during ingress and then sharply drop off near phase 0.994 (Figure \[fig:winds\]). We experimented with a segmented version of the linear ramp and found that the fit quality and the BIC are optimal for a ramp with three segments (before phase 0.987, between 0.987 and 0.994, and after 0.994), where each segment is linear with a free slope. We explored a number of different configurations for the break points and correction parameters for the white-light curve. We tried to fit the B600 white-light curve without any break points, and using the airmass function to correct for the changing slope. This led to a reduced $\chi^2$ of the fit of 1.9 and $BIC=650$. Relying on a single break point improved these values, but the best fits were achieved with two break points: reduced $\chi^2 = 0.94$, and $BIC = 360$, showing a clear preference for the models with two break points. The location of the break near 0.994 is clearly visible by eye, while for the break near 0.987, we ran the white-light curve fit on a grid of locations and selected the location that yielded the best BIC. We experimented by letting the break-point locations vary as a free parameter, but the MCMC fits did not converge, even with strict priors. We therefore fixed the break-point locations. At these points the adjacent segments of the ramp were required to have the same value to avoid discrete jumps in the ramp.
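A minimal sketch of such a segmented ramp, with fixed break points and continuity enforced at the breaks, is shown below; the function is illustrative only, with the break-point phases set to the values quoted above.

``` python
import numpy as np

def segmented_ramp(phase, c0, s1, s2, s3, b1=0.987, b2=0.994):
    """Three-segment linear ramp with slopes s1, s2, s3, joined at the fixed
    break points b1 and b2 so that the ramp has no discrete jumps."""
    r = np.empty_like(phase)
    m1 = phase < b1
    m2 = (phase >= b1) & (phase < b2)
    m3 = phase >= b2
    r[m1] = c0 + s1 * (phase[m1] - b1)
    r[m2] = c0 + s2 * (phase[m2] - b1)                   # same value c0 at b1
    r[m3] = c0 + s2 * (b2 - b1) + s3 * (phase[m3] - b2)  # continuous at b2
    return r
```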
Additional detrending options can be explored [e.g., @ste14], but regardless of that choice, the ramp slope of this observation is variable and changes sharply during ingress and the transit itself. Because of this, any decorrelation function that could be applied to the B600 data would be degenerate with the transit depth, leading to larger transit spectrum uncertainties (Section \[sec:spec\]).
We present the R150 white-light curve corrected for systematic effects in Figure \[fig:wlc\_hst\] and compare it to an HST/STIS white-light curve in the optical [@nik14]. In our fit, we kept the inclination of the system fixed to the value found by @nik14, based on multiple HST transits with the STIS and WFC3 instruments. The $a/R_{\star}$ parameter we find for the R150 light curve is well within 1$\sigma$ of the HST value.
[lll]{} Parameter & B600 white-light curve & R150 white-light curve\
\
Orbital parameters\
\
T$_0$ ($\rm BJD_{TDB}$) & $2457328.907659(34)$ & $2456243.839084(76)$\
$R_p/R_{\star}$ & $0.1198 \pm 0.0010$ & $0.11899 \pm 0.00035$\
$a/R_{\star}$ & $9.8775 \pm 0.0147$ & $9.8387 \pm 0.0056$\
$i$ (degrees) & 85.634$^{\circ}$ (fixed) & 85.634$^{\circ}$ (fixed)\
$e$ (eccentricity) & 0 (fixed) & 0 (fixed)\
\[tab:fitting\]
Transit spectrum {#sec:spec}
----------------
We divided the time-series spectra into wavelength bins. For consistency, we used 200pixel bins in both R150 and B600, which correspond to bins of 36nm and 16nm, respectively. To measure the transit spectrum of HAT-P-1b, we applied the model discussed in the previous section to the transit light curves corresponding to individual wavelength bins ($\rm \lambda$LC) with several changes. To correct for wavelength-independent systematic effects that were not accounted for by our white-light curve model, we divided each $\lambda$LC by the white-light curve residuals (data minus best-fit model). We smoothed the white-light curve residuals by convolving them with a Gaussian to reduce the effect of high-frequency noise when the light curve was divided. When we skipped this step, the point-to-point scatter in the R150 and B600 $\rm \lambda$LCs typically increased by $\sim20$% and $\lesssim 5$%, respectively. This common-mode correction resulted in lower $\chi^2$ fits, but did not constitute an additional free parameter because we always divided the light curves by the same white-light curve residuals.
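A sketch of this common-mode step is given below, assuming normalized light curves so that the white-light residuals can be written as $1 + (\mathrm{data} - \mathrm{model})$; the function name and smoothing width are ours.

``` python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def common_mode_correct(lam_lc, wlc_data, wlc_model, smooth_sigma=3.0):
    """Divide a spectroscopic light curve by the smoothed white-light residuals,
    here expressed as 1 + (data - model) for a normalized light curve."""
    residual = 1.0 + (wlc_data - wlc_model)
    smooth_resid = gaussian_filter1d(residual, smooth_sigma)
    return lam_lc / smooth_resid
```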
We fixed the ratio of the semimajor axis to stellar radius, $a/R_\star$, the central time of transit, $T_0$ to their best-fit values from the white-light curve fits because they are wavelength independent. The transit depth, $A_{CRPA}$, $\theta_{CRPA}$, $C_{\alpha}$, and $C_{PSF\,width}$ remained free, as were the linear ramp parameters. As with the white-light curve, we used a segmented ramp for the B600 $\lambda$LCs.
We then fit the light curve in each bin, using the PyLDTk code to compute the limb-darkening coefficients and the *emcee* package to retrieve the posterior distributions of the wavelength-dependent free parameters. We plot the transit fit results in Figures \[fig:r150\_colors\] and \[fig:b600\_colors\]. While the typical R150 light curve appears to be well corrected for systematic effects, the fits near 750nm and $\gtrsim850$nm are potentially unreliable due to additional correlated noise from strong and variable telluric absorption by O$_2$ and H$_2$O, respectively. The B600 data at wavelengths shorter than $\sim400$nm are heavily affected by high airmass toward the end of the observation. We tabulate both the R150 and B600 transit spectra in Table \[tab:tr\_spec\].
The uncertainties of photometric points in each light curve were estimated as described in Section \[sec:wlc\]. We estimated how well our data corrections work by comparing uncertainties in $R_p/R_{\star}$ for the fits where we assumed pure photon noise for the points in the light curves to the fits using the empirical photometric scatter (Table \[tab:tr\_spec\]).
--------------- ----------------------- ------------------------------- -------------
Wavelength R$_p$/R$\rm _\star$ $\sigma/\sigma_{photon}$ $^1$ Data
(nm) quality$^2$
551.6 - 587.4 0.11915 $\pm$ 0.00085 1.5 1
587.4 - 623.2 0.12088 $\pm$ 0.00064 1.4 1
623.2 - 659.1 0.12045 $\pm$ 0.00069 1.5 1
659.1 - 694.9 0.11919 $\pm$ 0.00056 1.5 1
694.9 - 730.7 0.11989 $\pm$ 0.00054 1.4 1
730.7 - 766.5 0.11876 $\pm$ 0.00059 1.5 0
766.5 - 802.4 0.12003 $\pm$ 0.00062 1.6 1
802.4 - 838.2 0.11865 $\pm$ 0.00110 2.7 1
838.2 - 874.0 0.11898 $\pm$ 0.00061 1.4 1
874.0 - 909.9 0.11793 $\pm$ 0.00087 1.7 0
909.9 - 945.7 0.11859 $\pm$ 0.00081 1.3 0
327.9 - 343.6 0.09987 $\pm$ 0.01503 4.1 0
343.6 - 359.3 0.10876 $\pm$ 0.01066 2.4 0
359.3 - 375.0 0.10976 $\pm$ 0.00900 4.3 0
375.0 - 390.7 0.11828 $\pm$ 0.00707 3.6 0
390.7 - 406.4 0.11393 $\pm$ 0.00517 2.3 1
406.4 - 422.2 0.11779 $\pm$ 0.00450 2.7 1
422.2 - 437.9 0.11522 $\pm$ 0.00374 2.2 1
437.9 - 453.6 0.11983 $\pm$ 0.00331 1.2 1
453.6 - 469.3 0.12232 $\pm$ 0.00577 4.0 1
469.3 - 485.0 0.11977 $\pm$ 0.00231 1.7 1
485.0 - 500.8 0.11791 $\pm$ 0.00223 1.7 1
500.8 - 516.4 0.12090 $\pm$ 0.00174 1.6 1
516.4 - 532.2 0.12286 $\pm$ 0.00276 2.7 1
532.2 - 547.9 0.12011 $\pm$ 0.00177 1.0 1
547.9 - 563.6 0.11905 $\pm$ 0.00177 1.7 1
563.6 - 579.4 0.12020 $\pm$ 0.00160 1.7 1
579.4 - 595.1 0.12133 $\pm$ 0.00152 1.9 1
595.1 - 610.8 0.12042 $\pm$ 0.00169 1.8 1
610.8 - 626.5 0.12039 $\pm$ 0.00163 2.0 1
626.5 - 642.2 0.12073 $\pm$ 0.00277 3.3 1
642.2 - 657.9 0.12086 $\pm$ 0.00163 2.0 1
657.9 - 673.7 0.12136 $\pm$ 0.00157 1.6 1
673.7 - 689.4 0.12079 $\pm$ 0.00162 1.5 1
689.4 - 705.1 0.12060 $\pm$ 0.00177 2.2 1
705.1 - 720.8 0.12027 $\pm$ 0.00178 2.4 1
720.8 - 736.6 0.11938 $\pm$ 0.00145 1.8 1
736.6 - 752.2 0.11890 $\pm$ 0.00179 2.4 1
752.2 - 768.0 0.12157 $\pm$ 0.00162 2.3 1
--------------- ----------------------- ------------------------------- -------------
: HAT-P-1b transit spectrum
The ratio between the uncertainty of $R_p/R_{\star}$ at a given wavelength, assuming a realistic estimate of the noise per spectrophotometric point, compared to an ideal photon noise case.
We present all transit depths we measured, but flag some of them with poor data quality (value of 0). These measurements were excluded from further analysis. Specifically, the bluest light curves in B600 are affected by high airmass at the end of the observation. The R150 data near 750nm and $>850$nm might be affected by O$_2$ and H$_2$O absorption bands, and we considered them unreliable. Robust transit depth measurements are marked with a data quality value of 1.
\[tab:tr\_spec\]
Core of the Na I absorption line {#sec:res_sodium}
--------------------------------
Both the R150 and the B600 spectra cover the Na I doublet at 589nm. Sodium has been detected in the atmosphere of HAT-P-1b by @nik14 and @sin16 in the narrow core of the line (3nm wide bin centered on 589.3nm at $3.3\sigma$). However, these authors did not detect the pressure-broadened wings of the Na feature.
We explored this wavelength region by comparing the narrow-band transit depth in the core of the sodium line in both the R150 and B600 observations with the broadband transit depth in that region. We divided the region around the Na I line into narrow-band bins of 2nm each. The bins overlap each other by 1nm in order to ensure that any increase in measured $R_p/R_{\star}$ is not due to a single anomalous pixel or pixel column. The overlapping-bin light curves were fit as described in Section \[sec:spec\].
If the feature is real, it should ideally result in a smooth rise and drop of transit depths as a function of wavelength, essentially tracing the dispersion function of the instrument. This approach does not rule out wavelength-dependent systematic effects, but prevents our results from being influenced by a spurious feature due to a pixel defect that might produce a sharp and discontinuous change in transit depth. We tested this on wavelengths beyond the immediate vicinity of the Na I feature where we expect the spectrum to be flat. Seeing features in our spectrum in this region might indicate that narrow-band transit depth variations are difficult to measure reliably with Gemini/GMOS. In addition, we experimented with the “divide white” approach described by @ste14, for example, where we compared the transit depth in the narrow-wavelength region around the Na I line to a transit light curve covering a wider wavelength region around the feature. For both transit observations, these two approaches result in narrow-band spectra that are consistent with flat lines, and we did not detect narrow-band Na absorption. Scaling our narrow-band uncertainties to the bin size of @nik14, we expect that their 3.3$\sigma$ Na I narrow-band absorption detection would translate into a 2.1$\sigma$ detection in our R150 observation, which is marginal, and therefore we did not consider our nondetection to be in disagreement with the HST study.
Another aspect of our observations is that they are seeing limited ($\sim$1.5 arcsec), corresponding to a resolution element (line-spread function width) of $\sim3.5$nm near a wavelength of 600nm. This is similar to the bin size used by @nik14 and by @nik16 [3nm bin centered on 589nm]; the latter observed the core of the Na line in the warm Saturn-mass exoplanet WASP-39b using ground-based transit spectroscopy with the FOcal Reducer/low dispersion Spectrograph 2 (FORS2) instrument on the Very Large Telescope (VLT).
Both observations in principle also cover the K I resonant doublet at 770nm. This wavelength is strongly affected by telluric O$_2$ absorption, however, and our systematic corrections are not reliable. We therefore excluded these data from consideration.
Stellar variability {#sec:starvar}
-------------------
Star spots and surface inhomogeneities have been known to affect the apparent transit depth at a given wavelength, potentially affecting the transit spectrum of a transiting exoplanet. Star spots occulted by the planet during transit have been investigated in detail [e.g., @des11a; @des11b; @nut11; @san11]. We see no evidence of star spot occultations during transit in either of our white-light curves. More recently, @mcc14 and @rac17, for example, have shown that unocculted star spots can mimic a Rayleigh-like slope in the transit spectra of exoplanets due to what the latter authors called the transit light-source effect (TLSE). Faculae have been demonstrated to potentially affect the transit depths near strong stellar absorption lines such as H$\alpha$, Ca II K, and Na I D [e.g., @cau18; @rac19]. @esp19 analyzed six transit spectroscopy observations of the hot Jupiter WASP-19b and characterized the surface of the host star. They detected two spot-crossing events, including a planetary occultation of a bright spot.
However, HAT-P-1 and its companion (which is our reference star) have not been reported to show signs of activity so far. @bak07 reported low atmospheric variability for both stars. Measurements of the Ca II H&K activity index of HAT-P-1, $\log(R^{\prime}_{HK}),$ have yielded a value of $-4.984$, which indicates relatively low activity [@noy1984; @knu10]. @nik14 monitored HAT-P-1 for 223 days, between 2012 May 4 and 2012 December 13, using the RISE camera on the Liverpool Telescope. This campaign covers our R150 observation on November 12, and @nik14 placed an upper limit on the variability of the star of 0.5%.
We also examined the potential for variability and activity of the host star. We used our LCO photometric monitoring in the Johnson-Cousins/Bessell B-band, which should be particularly sensitive to magnetic star spots. Even though our photometric monitoring covers eight months in 2016 (Section \[sec:lcodata\]) and does not cover our transit observations, any detection of variability in the B band would indicate that star spots and the TLSE effect need to be taken into account. We removed the photometric points taken during transit and secondary eclipse (20 and 37 points, respectively). We created a Lomb-Scargle periodogram using the remaining 617 data points in order to identify the stellar rotation period and the amplitude of variability, thus estimating the fraction of the stellar surface covered by spots (Figure \[fig:lomb\_scargle\]). However, we find no peak in the periodogram near the expected stellar rotation period [$\lesssim15$days, based on rotational line broadening, @joh08]. This suggests that the star had few if any stellar spots during the period it was monitored. We find no evidence for stellar activity in our photometric and spectroscopic data, nor in the literature. We therefore do not consider it necessary to correct our results for the transit light-source effect.
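For illustration, a minimal version of such a periodogram can be computed with astropy; the arrays below are placeholders standing in for the out-of-transit LCO B-band photometry.

``` python
import numpy as np
from astropy.timeseries import LombScargle

# Placeholder time series: t [days], B magnitudes, and their uncertainties
t = np.sort(np.random.uniform(0.0, 240.0, 617))
mag = np.random.normal(10.0, 0.005, t.size)
mag_err = np.full_like(t, 0.005)

freq, power = LombScargle(t, mag, mag_err).autopower(maximum_frequency=1.0)
best_period = 1.0 / freq[np.argmax(power)]   # candidate periodicity in days
```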
Our periodogram in Figure \[fig:lomb\_scargle\] suggests a tentative low-amplitude periodic variation [0.5% in the B band, consistent with @nik14] that coincides with the planetary orbital period. It is unlikely that this signal is an alias of the stellar rotation period because we do not detect its primary order. We speculate that this variability might be an indication of a magnetic planet-star interaction, although in this case, the variability period would be expected to be the synodic period of the star in the frame of the planet [$\lesssim6.3$days, see, e.g., @fis19].

Discussion {#sec:mod}
==========
We compare our observed transit spectrum to models computed using the Exo-Transmit code [@kem16] and to previous studies. The uncertainties of the spectrophotometry in the B600 observations exceed the photon noise by a factor of up to four. This is likely an effect of the complex systematic effects that influence this data set, compounded by the high airmass toward the end of the transit observation. Consequently, the resulting B600 transit spectrum is consistent with both cloudy and clear models, and does not have the necessary precision to constrain the transit spectrum of HAT-P-1b. Because the effect of the airmass on the data quality is especially strong at wavelengths $\lesssim500$nm, it is difficult to place limits on the Rayleigh slope in the atmosphere of HAT-P-1b based on these data.
The R150 observation is less affected by systematic effects and typical uncertainties are only $\sim50$% above photon noise. The transit depths measured in R150 are comparable in terms of precision to the measurements made with HST/STIS, and the two observations are consistent with each other (Figure \[fig:compare\_all\]).
We compared the R150 reduced-$\chi^2$ values for several Exo-Transmit models (Table \[tab:models\]). All models we considered have isothermal atmospheres at 1500K, which is close to the equilibrium temperature of the planet. Our spectrum excludes strong TiO and VO features and shows a preference for models with cloud decks deep in the atmosphere, or completely clear atmospheres. This is consistent with the HST/STIS results in @nik14 and @sin16, and also with ground-based broadband transit measurements with TNG/DOLORES [@mon15]. Atmospheric spectra without clear TiO and VO absorption features are common in hot Jupiters with equilibrium temperatures similar to that of HAT-P-1b [e.g., @des08; @sin16].
[ll]{} Model parameters & $\chi^2_{reduced}$\
\
\
TiO, clear, no condensation & 2.9\
no TiO/VO, clear & 0.94\
no TiO/VO, cloud at 0.01mbar & 1.4\
no TiO/VO, cloud at 0.1mbar & 1.4\
no TiO/VO, cloud at 1mbar & 1.4\
no TiO/VO, cloud at 10mbar & 1.2\
no TiO/VO, cloud at 100mbar & 0.95\
\[tab:models\]

We tested whether our observations can be used to probe narrow-band features like the Na I resonance doublet at 589nm. Unlike @nik14, we did not detect the core of the Na absorption line, but our narrow-band transit depth uncertainties are $\sim60\%$ larger than those derived from the HST observations, which might explain this difference. We did not observe the broad line wings either, but this is consistent with the results of @nik14, who also did not detect them.
Conclusion
==========
We presented ground-based observations of the visible low-resolution transit spectrum of the hot Jupiter HAT-P-1b. We showed that transit spectroscopy observations from the ground can be comparable to HST/STIS in precision and in information content, as is the case of our R150 observation. Based on our R150 observations between 550 and 850nm, we are able to reject models with a high abundance of TiO and VO. This is consistent with the findings of previous space-based studies with HST/STIS. The B600 transit data are affected by strong systematic effects and result in a highly uncertain transit spectrum.
Our long-term photometric B-band monitoring of the host star, HAT-P-1, suggests that its activity is very low and is unlikely to affect our spectrophotometry. However, our Lomb-Scargle diagram results in an unexpected peak at the planetary orbital period. Because of this, we speculate that planet-star interactions might play a role in this system.
We thank the referee for a careful and balanced review. We thank Jacob Arcangeli, Claire Baxter, and Gabriela Muro-Arena for useful discussions, and Nikolay Nikolov for graciously sharing with us the HST/STIS white light curves. This work is based on observations obtained at the Gemini Observatory (acquired through the Gemini Observatory Archive and Gemini Science Archive), which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA), under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil). Based in part on Gemini observations obtained from the National Optical Astronomy Observatory (NOAO) Prop. ID: 2012B-0398; PI: J-.M Désert. This work makes use of observations from the LCOGT network. J.M.D acknowledges support by the Amsterdam Academic Alliance (AAA) Program. The research leading to these results has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 679633; Exo-Atmos). This material is based upon work supported by the National Science Foundation (NSF) under Grant No. AST-1413663, and supported by the NWO TOP Grant Module 2 (Project Number 614.001.601). This research has made use of NASA’s Astrophysics Data System. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2018). Other software used: diff\_atm\_refr.pro (http://www.eso.org/gen-fac/pubs/astclim/lasilla/diffrefr.html); ATLAS [@kur93 available at http://kurucz.harvard.edu]; MPFIT [@mar09]; Time Utilities [@eas10].
---
abstract: |
We use photometric and spectroscopic observations of the detached eclipsing binaries V40 and V41 in the globular cluster NGC 6362 to derive masses, radii, and luminosities of the component stars. The orbital periods of these systems are 5.30 and 17.89 d, respectively. The measured masses of the primary and secondary components ($M_p$, $M_s$) are (0.8337$\pm$0.0063, 0.7947$\pm$0.0048) M$_\odot$ for V40 and (0.8215$\pm$0.0058, 0.7280$\pm$0.0047) M$_\odot$ for V41. The measured radii ($R_p$, $R_s$) are (1.3253$\pm$0.0075, 0.997$\pm$0.013) R$_\odot$ for V40 and (1.0739$\pm$0.0048, 0.7307$\pm$0.0046) R$_\odot$ for V41. Based on the derived luminosities, we find that the distance modulus of the cluster is 14.74$\pm$0.04 mag – in good agreement with 14.72 mag obtained from CMD fitting. We compare the absolute parameters of component stars with theoretical isochrones in mass-radius and mass-luminosity diagrams. For assumed abundances \[Fe/H\] = -1.07, \[$\alpha$/Fe\] = 0.4, and Y = 0.25 we find the most probable age of V40 to be 11.7$\pm$0.2 Gyr, compatible with the age of the cluster derived from CMD fitting (12.5$\pm$0.5 Gyr). V41 seems to be markedly younger than V40. If independently confirmed, this result will suggest that V41 belongs to the younger of the two stellar populations recently discovered in NGC 6362. The orbits of both systems are eccentric. Given the orbital period and age of V40, its orbit should have been tidally circularized some $\sim$7 Gyr ago. The observed eccentricity is most likely the result of a relatively recent close stellar encounter.
Key words: binaries: eclipsing – binaries: spectroscopic – globular clusters: individual (NGC 6362) – stars: individual (V40 NGC 6362, V41 NGC 6362)
author:
- 'J. Kaluzny$^1$, I. B. Thompson$^2$, A. Dotter$^3$, M. Rozyczka$^1$, A. Schwarzenberg-Czerny$^1$'
title: 'The cluster ages experiment (CASE). VII. Analysis of two eclipsing binaries in the globular cluster NGC 6362[$^\ast$]{} '
---
Introduction {#sect:intro}
============
The Cluster AgeS Experiment (CASE) is a long term project conducted mainly at Las Campanas Observatory with a principal aim of the detection and detailed study of detached eclipsing binaries (DEBs) in nearby globular clusters (GCs). The main goal is to measure masses, luminosities and radii of DEB components with a precision of better than one per cent in order to determine GC ages and distances, and to test stellar evolution models [@jka05]. The methods and assumptions we employ follow the ideas of @pac97 and @tho01. Thus far, we have presented results for seven DEBs from four clusters: $\omega$ Cen, 47 Tuc, M4 and M55. These were the first and, to the best of our knowledge, the only direct measurements of the fundamental parameters of main-sequence and subgiant stars in GCs. More details on the project and references to earlier papers can be found in @jka14a.
The present paper is devoted to an analysis of two DEBs, V40 and V41, belonging to the globular cluster NGC 6362. Radial velocity and photometric observations are used to derive masses, luminosities, and radii of the component stars. In Section \[sect:photobs\] we describe the photometric observations and determine orbital ephemerides. Section \[sect:spectobs\] presents spectroscopic observations and radial-velocity measurements. The combined photometric and spectroscopic solutions for the orbital elements and the resulting component parameters are obtained in Section \[sect:lca\_sp\]. In Section \[sect:age\], we compare the derived parameters to Dartmouth stellar evolution models, with an emphasis on estimating the age of the binaries. Finally, in Section \[sect:discus\] we summarize our findings.
Photometric observations {#sect:photobs}
========================
Photometric observations analyzed in this paper were obtained at Las Campanas Observatory between 1995 and 2011 using the 2.5-m du Pont telescope equipped with the TEK5 and SITE2K CCD cameras, and the 1.0-m Swope telescope equipped with the SITE3 CCD camera. The scale was 0.26 arcsec/pixel and 0.44 arcsec/pixel for the du Pont and Swope telescopes, respectively. For each telescope the same set of $BV$ filters was used for the entire duration of the project. The first likely eclipse events of V40 and V41 were observed in June 1991. For the next eight years observations were conducted in a survey mode with the aim of detecting more eclipses and to establish ephemerides for both variables. The periodic variability of V40 and V41 was firmly established by @bat99. Starting with the 2001 season, we attempted to cover predicted eclipses of V40 or V41. Several eclipses were observed with both telescopes. The light curve analysis in Section \[sect:lca\_sp\] is based exclusively on the du Pont data. The less accurate Swope photometry was used solely to improve the ephemerides (the difference in accuracy between the two telescopes resulted from the difference in the light-gathering power and the poorer sampling of the point spread function of the Swope telescope).
The core and half-light radii of NGC 6362 are $r_{c}=70$ arcsec and $r_{h}=123$ arcsec, respectively [@har96]. V40 is located in the core region at a distance of $d=59$ arcsec from the cluster center. V41 resides at the half-light radius, at a distance of $d=121$ arcsec. The du Pont images show no evidence of unresolved visual companions to either binary. In the case of V40 this is confirmed by the examination of HST/ACS images obtained within the program GO-10775 (PI Ata Sarajedini). Both systems are proper-motion members of NGC 6362 [@zlo12; @jka14b]. Coordinates and finding charts for V40 and V41 can be found in @bat99.
The light curves presented here were measured with profile fitting photometry extracted with the Daophot/Allstar codes [@ste87; @ste90], and transformed to the standard Johnson system based on observations of @land92 standards. A more detailed description of the data and the reduction procedures can be found in a paper presenting the photometry of 69 variables from the field of NGC 6362 [@jka14b]. We note that, in general, more accurate photometry of crowded fields can be obtained with an image subtraction technique. However, profile fitting photometry is less vulnerable to systematic errors, and as such it is a better option whenever the star is free from blending.
Ephemerides
------------
Several eclipse events of V40 were covered densely enough to determine individual times of minima. They were found using the KWALEX code which is based on an improved version of the algorithm presented by @kwee (see Appendix). The linear ephemerides: $$\begin{aligned}
HJD^\mathrm{Min I}& = &2453915.50966(25) + 5.2961725(14)\times E\nonumber\\
HJD^\mathrm{Min II}& = &2453891.82853(14) + 5.29617486(99)\times E\label{eq:ephe40}\end{aligned}$$ proved to be an adequate fit to the moments of minima listed in Table \[V40\_min\]. The orbit of V40 is eccentric. Since the secondary (shallower) minimum was observed more times than the primary one, we chose it as the basis of the phase count. Thus, we set $\phi^\mathrm{Min II}=0.5$, which necessarily results in $\phi^\mathrm{Min I}\ne0$.
Altogether, 13 eclipses of V41 were observed between August 2007 and September 2011. However, ingress and egress were only covered for a few of them. This prevented us from performing a classical period study based on the determination of individual moments of minima and the O-C technique. To find the ephemeris, a combined $V$ light curve including data from all seasons and both telescopes was used. This was fitted with the help of the JKTEBOP v34 code[^1] [@south13 and references therein], allowing only for variations of $P$ and of the moment of the primary eclipse $T_{0}$. Mode 8 of the code was used, which enabled a robust determination of the errors of the two parameters with the help of Monte Carlo simulations. The following linear ephemeris $$HJD^\mathrm{Min I} = 2454263.65242(25) + 17.8888441(40)\times E
\label{eq:ephe41}$$ was obtained. The light curves phased with ephemerides given by equations (\[eq:ephe40\]) and (\[eq:ephe41\]), and used for the subsequent analysis, are shown in Fig. \[fig:lc\].
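For reference, orbital phases follow from a linear ephemeris in the usual way; a short sketch using the V41 ephemeris of equation (\[eq:ephe41\]) is given below (the input date is arbitrary).

``` python
import numpy as np

def orbital_phase(hjd, t0, period):
    """Phase in [0, 1) relative to the reference epoch of primary minimum."""
    return np.mod((hjd - t0) / period, 1.0)

# example with the V41 ephemeris
phase = orbital_phase(np.array([2455000.123]), 2454263.65242, 17.8888441)
```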
Spectroscopic observations {#sect:spectobs}
==========================
The spectra were taken with the MIKE echelle spectrograph [@bern03] on the Magellan Baade and Clay 6.5-m telescopes, using a 0.7 arcsec slit which provided a resolution of R $\approx$40,000. A typical observation consisted of two 1800-second exposures of the target, flanking an exposure of a thorium-argon hollow-cathode lamp. The raw spectra were reduced with the pipeline software written by Dan Kelson, following the approach outlined in @kel03. For further processing, the IRAF package ECHELLE was used. The velocities were measured using software based on the TODCOR algorithm [@zuc94], kindly made available by Guillermo Torres. Synthetic echelle-resolution spectra from the library of @coelho06 were used as velocity templates. These templates were interpolated to the values of $\log g$ and $T_{eff}$ derived from the photometric solution (see Section \[sect:lca\_sp\]) and Gaussian-smoothed to match the resolution of the observed spectra. For the interpolation in chemical composition we assumed ${\rm [Fe/H]} = -1.07$ and $[\alpha/{\rm Fe}]=0.4$, i.e. values adopted for the cluster as a whole [@car09; @dotter10]. These values are not well determined (see the discussion in Sect. \[sect:lca\_sp\]), however the results of the velocity measurements are insensitive to minor changes in \[Fe/H\] or $[\alpha/{\rm Fe}]$.
Radial velocities were measured on the wavelength intervals 4120-4320 Å and 4350-4600 Å, covering the region of the best signal-to-noise ratio while avoiding the H$\gamma$ line. The final velocities, obtained by taking the mean of these two measurements, are given in Tables \[V40vel\] and \[V41vel\], which list the heliocentric Julian dates at mid-exposure, velocities of the primary and secondary components, and the orbital phases of the observations calculated according to the ephemerides given by Equations (1) and (2). An estimate of the error in the velocity measurements is the [*rms*]{} of half of the differences between velocities obtained from the two wavelength intervals. The corresponding values are 0.63, 0.59, 0.23 and 0.52 km/s for the primary and secondary of V40 and the primary and secondary of V41, respectively. They are consistent with the [*rms*]{} values of the residuals from orbital fits given in Table \[param\].
Analysis of light and velocity curves {#sect:lca_sp}
=====================================
We assume that reddening is uniform for NGC 6362. @har96 lists $E(B-V)=0.09$ as measured by West and Zinn (1984) based on the Q39 index. The map of total galactic extinction by Schlegel et al. (1998) predicts $E(B-V)=0.075$ mag for the position of NGC 6362. This value agrees with an estimate resulting from isochrone fitting applied to HST/ACS photometry of the cluster by @dotter10. They obtained $E(6-8)=0.070$, equivalent to $E(B-V)=0.071$. This value is also compatible with values obtained by @kov (${E(B-V)}=0.073\pm 0.007$) and @ol01 (${E(B-V)}=0.08\pm 0.01$) from an analysis of RR Lyr stars in NGC 6362. We adopt $E(B-V)=0.075$ and neglect the differential reddening, which according to [@bon13] is 0.025 mag on the average. [@bon13] comment that this differential reddening may result from uncorrected zero-point variations in the original photometry.
We employed a quadratic formula for limb darkening, and calculated coefficients iteratively together with effective temperatures of the components of both systems. The temperatures were derived from dereddened $B-V$ colors using the empirical calibration of @cas10. The coefficients were then interpolated from the tables of @claret00 with the help of the JKTLD code.[^2] The updated colors in turn were computed from total $V$ and $B$ magnitudes at maximum light and light ratios derived from the light curve solution. We note that a slightly more recent calibration by Sousa et al. (2011) results in temperatures higher by 8 K on average, however the agreement between calibrations is excellent over the relevant range of colors.
We checked that using a square root or logarithmic formula for the limb darkening instead of a quadratic formula had a negligible impact on the photometric solution. In particular, relative radii derived from the three two-parameter formulas differed by less than $\sim$$0.2\sigma$, whereas using a linear limb darkening formula resulted in a slightly poorer fit, and caused the relative radii to differ by $\sim$$1\sigma$ from those obtained with the quadratic formula. We conclude that, for the stars analyzed here, two-parameter formulae provide a better description of the limb darkening than the linear law.
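For reference, the quadratic law adopted here and the linear law it was tested against can be written as $$\frac{I(\mu)}{I(1)} = 1 - a\,(1-\mu) - b\,(1-\mu)^2, \qquad \frac{I(\mu)}{I(1)} = 1 - u\,(1-\mu),$$ where $\mu$ is the cosine of the angle between the line of sight and the surface normal, and $a$, $b$, and $u$ are the coefficients interpolated from the tables of @claret00.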
Light and radial velocity curves of V40 and V41 were analyzed with the JKTEBOP v34 code, which allows for a simultaneous solution of one light curve and two radial velocity curves (one for the primary and one for the secondary component). The $V$-curve was solved for the following parameters: inclination $i$, sum of fractional radii $r_p$ and $r_s$, ratio of the radii $r_{s}/r_{p}$, surface brightness ratio $S$ (secondary to primary), luminosity ratio $L_{s}/L_{p}$, $e\cos(\omega)$ and $e\sin(\omega)$ where $e$ is the orbital eccentricity and $\omega$ the longitude of the periastron, amplitudes of radial velocity curves $K_{p}$ and $K_{s}$, and systemic velocity $\gamma$. Errors of stellar parameters were derived using a Monte Carlo approach implemented in mode 8 of the JKTEBOP code. We performed 10000 Monte Carlo simulations for each of V40 and V41.
Since the $B$-data were of poorer quality than those collected in the $V$-band, we decided to use them solely for a measurement of the luminosity ratio $(L_s/L_p)_B$. This was accomplished by fixing all parameters but the surface brightness ratio $S_B$. However, the formal error of $S_B$ calculated by JKTEBOP would have been seriously underestimated if uncertainties of other parameters had not been taken into account. To avoid this problem we used the PHOEBE package v0.31a [@prsa05], which allows for a simultaneous solution of several light curves but can be unstable when attempting to solve light and velocity curves simultaneously. We initialized the package with the photometric solution found by JKTEBOP, loaded $BV$ curves, and, allowing all geometrical parameters to vary, found the standard error of $S_B$ using another Monte Carlo procedure written in PHOEBE-scripter. The procedure replaces the observed light curves $B_0$ and $V_0$ with the fitted values $B_f$ and $V_f$, generates Gaussian perturbations $\delta B_f$ and $\delta V_f$ such that the standard deviation of the perturbation is equal to the standard deviation of the corresponding residuals shown in Figure \[fig:lcres\], and performs PHOEBE iterations on $B_f + \delta B_f$ and $V_f + \delta V_f$. As before, 10000 Monte Carlo simulations were performed for each binary. In our opinion such a hybrid approach based on both the JKTEBOP and PHOEBE codes is optimal when analyzing data sets similar to those presented in this paper.
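Schematically, the perturbation procedure can be summarized by the following sketch; the function `refit_surface_brightness` is a hypothetical stand-in for the PHOEBE-scripter refit, and the whole block is an outline of the logic rather than the actual script.

``` python
import numpy as np

def monte_carlo_sb_error(B_fit, V_fit, sigma_B, sigma_V,
                         refit_surface_brightness, n_sim=10000):
    """Repeat the fit on synthetic light curves built from the best-fit model plus
    Gaussian noise matched to the residual scatter; return the spread of S_B."""
    sb_values = []
    for _ in range(n_sim):
        B_synth = B_fit + np.random.normal(0.0, sigma_B, B_fit.size)
        V_synth = V_fit + np.random.normal(0.0, sigma_V, V_fit.size)
        sb_values.append(refit_surface_brightness(B_synth, V_synth))
    return np.std(sb_values)
```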
The quality of the final fits is illustrated in Figures \[fig:lcres\] and \[fig:vc\]. Orbital and photometric parameters derived for both systems are presented in Table \[param\], and absolute parameters of the components are listed in Table \[param2\]. The errors of the effective temperatures include uncertainties of observed colors, reddening, and photometric zero points (0.015 mag for $V$ and $B-V$). The bolometric luminosities are calculated from radii and temperatures adopting $L_\odot^{bol}=3.827\times10^{33}$ erg s$^{-1}$.[^3] To derive absolute visual magnitudes, we used bolometric corrections based on atmosphere models of @cas04, which ranged from $-0.07$ mag for the primary of V40 to $-0.10$ mag for the secondary of V41. The absolute visual magnitudes along with the $V$-magnitudes derived from light curve solutions allowed for a determination of the apparent distance moduli listed in Table \[param2\]. The location of the components of V40 and V41 on the CMD of NGC 6362 is shown in Figure \[fig:cmd\]. All four stars are placed on the main sequence: three of them at the turnoff region, and the fourth one (the secondary of V41) well below the turnoff.
Age and distance analysis {#sect:age}
=========================
The fundamental parameters obtained from the analysis of V40 and V41 allow us to derive the ages of the components of these systems, using theoretical isochrones. However, before a model comparison is made it is necessary to review the information available on the chemical composition of NGC 6362 since the model-based ages are sensitive to the adopted values of helium abundance, \[Fe/H\] and \[$\alpha$/Fe\].
Unfortunately, the metallicity of NGC 6362 is not well known. A literature search reveals that determinations of \[Fe/H\] from high resolution spectra are limited to the study by @car97, who found ${\rm [Fe/H]}=-0.96$ and ${\rm [Fe/H]}=-0.97$ for two red giants. @car09 used this determination along with some older estimates based on the integrated photometry of NGC 6362 to derive ${\rm [Fe/H]}=-1.07\pm 0.05$ on their newly proposed metallicity scale. For the sake of completeness, we note that @har96 listed ${\rm [Fe/H]}=-0.99$ – a value obtained by combining that of @car09 with ${\rm [Fe/H]}=-0.74\pm 0.05$ derived by Geisler et al. (1997) based on Washington photometry of 10 cluster giants. @ol01 obtained a helium content Y=0.292 from the analysis of RRc stars. Finally, @dal14 presented convincing evidence of multiple populations in NGC 6362. We adopt a fiducial composition of \[Fe/H\]=$-1.07$, \[$\alpha$/Fe\]=+0.4 and Y=0.25, but will investigate the dependence on these parameters when deriving the ages of the component stars.
In a study of three binary systems in M4 [@jka13] we used two sets of theoretical isochrones: Dartmouth [@dotter08 henceforth DSED] and Victoria-Regina [@vandenberg12]. Since the differences in the derived ages turned out to be smaller than the uncertainties imposed by the observational errors, the present analysis is based on DSED isochrones only. Figure \[fig:rmlm\] shows DSED isochrones for ages ranging from 9 to 13 Gyr plotted on mass-radius and mass-luminosity diagrams, together with values for V40 and V41. Both diagrams indicate that V41 may be younger than V40, and the mass-radius diagram suggests that the secondary of V41 is younger than the primary (or, if both components are of the same age, the secondary is too small compared to the primary).
A detailed comparison of the ages of the individual components, which requires a quantitative age estimate from the data and the isochrones, was undertaken following the approach developed by @jka13 to study the binary systems in M4. Briefly, the $(M-R)$ or $(M-L)$ plane is densely sampled in both dimensions for each component separately, and the age of an isochrone matching the properties of the sampled point is found. The age ensembles generated in this way are visualized in the form of mass-age diagrams (MADs). The MAD of a given component is composed of two ellipses derived from $(M-R)$ and $(M-L)$ relations. Each ellipse defines a range of ages achieved by stars whose parameters differ from the corresponding entries $(M,R)$ or $(M,L)$ in Table \[param2\] by at most $\varepsilon_R$ or $\varepsilon_L$, where $\varepsilon_R^2 = \sigma(M/M_\odot)^2 +\sigma(R/R_\odot)^2$ for the $(M-R)$ plane and likewise for the $(M-L)$ plane; $\sigma(M/M_\odot)$, $\sigma(R/R_\odot)$ and $\sigma(L/L_\odot)$ being the standard errors of $M$, $R$ and $L$ from Table \[param2\]. The MADs of the components of V40 and V41 are shown in Figure \[fig:ellipses\].
The age of each component was calculated as the weighted mean of the ages derived from the two corresponding MADs. For V40 we obtained 11.7$\pm$0.3 Gyr for the primary and 11.5$\pm$0.4 Gyr for the secondary. The difference is 0.2$\pm$0.5 Gyr. For V41 the ages are 10.7$\pm$0.3 Gyr and 9.2$\pm$0.6 Gyr, respectively, for the primary and the secondary. The difference is 1.5$\pm$0.7 Gyr. Possible origins of this two-sigma effect are discussed in Section \[sect:discus\]. We estimated the age of each system from the weighted mean of the ages derived from each pair of corresponding MADs. For V40 we obtained 11.7$\pm$0.2 Gyr, in agreement with 12.5$\pm$0.5 Gyr derived by @dotter10 from CMD fitting. For V41 we obtained 10.4$\pm$0.3 Gyr, which suggests that this system is significantly younger than V40, with a formal difference of 1.3$\pm$0.4 Gyr.
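The weighted means quoted above can be computed with the standard inverse-variance prescription, which we assume here for definiteness: $$\bar{t} = \frac{\sum_i t_i/\sigma_i^2}{\sum_i 1/\sigma_i^2}, \qquad \sigma_{\bar{t}} = \left(\sum_i 1/\sigma_i^2\right)^{-1/2},$$ applied to the individual ages $t_i\pm\sigma_i$ derived from the corresponding MADs.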
To independently estimate the age of the cluster as a whole, we fitted the CMD of NGC 6362 with DSED isochrones (see Figure \[fig:cmd\_iso\]). Since uncertainties of stellar models grow larger with evolutionary time, when fitting we assigned the largest weight to main sequence and turnoff stars. This results in an age estimate of 12.5$\pm$0.75 Gyr - again in agreement with the result of @dotter10, which is based on an independent set of photometric data. The CMD fit yielded a distance modulus of 14.72 mag - larger than 14.55 mag derived by @dotter10, but in good agreement with 14.68 mag listed by @har96. The reddening derived from the CMD-fit ($E(B-V)=0.061$ mag) is slightly lower than the value of 0.075 mag adopted in Sect. \[sect:lca\_sp\], but decreasing \[Fe/H\] from the fiducial -1.07 to -1.15 brings it back to the adopted value (we checked that such a small change in metallicity does not influence the parameters listed in Tables \[param\] and \[param2\] in any significant way). The distance moduli of all four stars forming V40 and V41 agree with each other (see Table \[param2\]), and the mean weighted modulus of 14.74$\pm$0.04 mag is consistent with that obtained from CMD fitting.
To sum up, the age derived from the MADs of V40 components is compatible with that derived from CMD fitting, however the age of the primary of V41 is $1.0\pm0.4$ Gyr younger than the age of V40, with the secondary of V41 being another $1.5\pm0.7$ Gyr younger. The problem looks much milder in the $(M-L)$ diagram, in which all the four components are marginally coeval within 1-$\sigma$ limits. We stress, however, that the $(M-L)$ diagram and associated MADs must be given a smaller weight because the luminosities in Table \[param2\] are sensitive to adopted chemical composition [via $T_{eff}$, see @dotter10] and possible differential reddening, whereas the radii are entirely reddening and composition-independent.
Discussion and summary {#sect:discus}
======================
We derived absolute parameters of the components of V40 and V41, two detached eclipsing binaries located near the main sequence turnoff of NGC 6362. The accuracy of the mass and radius measurements is generally better than 1 per cent, with the only exception being the radius of the secondary in V40, which is determined with an accuracy of 1.3 per cent. The luminosities of the components were found using effective temperatures estimated from $(B-V)$ - $T_\mathrm{eff}$ calibrations compiled by @cas10. While their errors are rather large (between 7 and 10 per cent), all four distance moduli have nearly the same value, and the formally calculated mean modulus of 14.74$\pm$0.04 mag agrees very well with 14.72 mag derived from the comparison of DSED isochrones with the CMD of the cluster.
Based on the parameters of V40 and DSED isochrones for Y=0.25, \[Fe/H\]=$-1.07$ and \[$\alpha$/Fe\] =$+0.4$, we derived an age of 11.7$\pm$0.2 Gyr. The good agreement of this result with the age of NGC 6362 derived from isochrone comparison with the CMD of the cluster is encouraging. This is particularly the case given the age of the cluster as obtained by @dotter10 based on an independent photometry (12.5$\pm$0.5 Gyr). However, one should keep in mind that the uncertainty of 0.2 Gyr is a purely statistical one, and, as pointed out by @jka13, it should be increased by $\sim$0.85 Gyr to account for the sensitivity of stellar age to uncertainties in the chemical composition.
The situation with V41 is less clear. Isochrone fits to $(M-R)$ and $(M-L)$ diagrams of this system suggest that it is significantly (by $1.3\pm0.4$ Gyr) younger than V40. However, given the recent discovery of two stellar populations in NGC 6362, such a possibility does not seem to be entirely implausible. Upon examining CMDs obtained from HST photometry, @dal14 found that the red giant branch of the cluster splits into two sub-branches separated by about 0.1 mag in $(U-V)$, while no such effect is observed in $(V-I)$. Their Figure 1 explains the split by a difference in light-element content. While they use isochrones of the same age, which is reasonable given the relatively weak dependence of RGB location on time, they clearly identify the Na-poor population with the first generation of stars, and the Na-rich population with the second one “formed during the first few $\sim$100 Myr of the cluster life from an intra-cluster medium polluted by the first generation”. We observed similar effects (i.e. a large split in $(U-V)$ and a much smaller one in $(V-I)$) using same-age DSED isochrones differing by $\sim$0.1 dex in \[Fe/H\], or $\sim$0.3 dex in \[$\alpha$/Fe\], or $\sim$0.1 in Y. The RGB also splits when the populations have the same composition but differ in age by 4-5 Gyr; however, an age difference this large causes the subgiant branch to split even more widely than the RGB, while the opposite is observed in NGC 6362. Thus, we corroborate the results of @dal14 and their scenario involving two star formation episodes, with the second-generation stars slightly enriched in metals (and possibly in helium).
The discrepancy of the ages of V41 components ($1.5\pm0.7$ Gyr) may be regarded as statistically insignificant. However we decided to study the possible origin of the difference by varying chemical composition parameters Y, \[Fe/H\] and \[$\alpha$/Fe\] (only one of them was varied at a time), and looking for isochrones joining locations of the primary and the secondary in the $(M-R)$ and $(M-L)$ diagrams. We found acceptable (albeit only marginally) fits with Y = 0.26 at an age of 9.5 Gyr; \[$\alpha$/Fe\] = 0.2 at an age of 9.7 Gyr, and \[Fe/H\] = -0.90 at an age of 11.3 Gyr. This suggests that V41 may have formed from material enriched in products of stellar nucleosynthesis, and thus it may indeed be younger than V40. We note parenthetically that the results of our fitting experiments are in line with the findings of @jka13, who, while discussing $(M-R)$ diagrams for binaries in M4, wrote “increasing Y *decreases* the age derived from the $(M-R)$ plane while increasing \[Fe/H\] or \[$\alpha$/Fe\] *increases* the age.”
V40 is also somewhat peculiar because of the eccentricity of its orbit. At an age of $\sim$12 Gyr and a period of 5.3 d the orbit should long ago have been fully circularized by tidal friction [@maz08; @mat04]. There is no evidence for the presence of a third body in this system, which means that it must have undergone a close encounter during the last few Gyr. According to @dal14 NGC 6362 may have had a complicated dynamical history, losing perhaps as much as 80% of its original mass. Since close encounters account for a few per cent of the total mass loss (M. Giersz, private communication), the very existence of the nonzero eccentricity of V40 lends some support to this conjecture.
While V40 and V41 seem to belong to distinct populations, we feel it is too early to claim a direct detection of non-coevality among stars in globular clusters. It is clear, however, that DEBs offer the potential to test theories describing the origin of multiple populations in GCs, and to verify estimates of age differences between populations based on CMD-fitting. For the latter, both the theoretical predictions and fitting procedures yield values ranging from $\sim10^7$ yr to several hundred Myr, and any independent constraints are highly desirable. The most obvious test for the reality of the age difference between our two systems would be to determine the chemical composition of their components using disentangling software [see e.g. @had09]. The existing spectra do not have adequate quality to measure abundances, but, given the brightnesses of the components and the orbital periods, it should be possible to improve the S/N ratio.
There is also room for improving the accuracy of parameter measurements. As noted by @jka13, the errors in the masses of the components originate almost entirely from the orbital solution, which may only be improved by taking additional spectra (preferably with the same instrument). Admittedly, this would require a substantial observational effort, as doubling of the present set of radial velocity data would lead to an improvement of only 33% in the mass estimates [@tho10]. The errors in the radii of the components are dominated in turn by the photometric solution, whose quality depends on the accuracy of the photometry. We estimate that an accuracy of 0.002 – 0.003 mag in $V$ would reduce the errors of the radii by 50%. This goal could be easily achieved on a 6-8 m class telescope equipped with a camera capable of good PSF sampling. Finally, IR photometry should be obtained to reduce errors in luminosity with the help of existing accurate calibrations linking $(V-K)$ color to $V$-band surface brightness.
This series of papers is dedicated to the memory of Bohdan Paczyński. JK and MR were supported by the grant DEC-2012/05/B/ST9/03931 from the Polish National Science Center. AD received support from the Australian Research Council under grant FL110100012.
Appendix: Kwalex: A method for timing eclipses {#appendix-kwalex-a-method-for-timing-eclipses .unnumbered}
==============================================
@kwee proposed an analysis of the timing of eclipses by fitting them with symmetric curves, so that any asymmetries would be reflected in observed-minus-calculated $(O-C)$ deviations. Their classical method does have its drawbacks. As far as the algorithm is concerned: (i) the interpolation of a light curve into a somewhat arbitrary evenly spaced time mesh yields a slight dependence of the final estimate of the central time on its initial approximation and (ii) the resulting estimate of the eclipse centre depends on the initial selection of data points and on the initial guess of the eclipse centre. There are also statistical drawbacks: (iii) interpolation effectively assigns uneven weights to points and (iv) the effective model, a polyline, suffers from an excess number of parameters describing all nodes. Such a waste of degrees of freedom is known in statistics to decrease accuracy.
Let us adopt the zero point of the time scale as the current estimate of the eclipse centre $T_c^{(n)}$. Following @kwee we fit both the input light curve and its reflection with respect to $T_c^{(n)}$. The novelty here is that we employ the orthogonal polynomial approximation of the combined light curve (e.g. @ral65), rather than a poly-line interpolation. In doing so we optimize the sensitivity of our method by keeping the number of model parameters at a statistically justifiable minimum (Occam's razor). By virtue of the Fisher lemma (e.g. @bra75), parameters of the orthogonal model are uncorrelated with noise, hence they yield the minimum variance model estimate.
Next, again following @kwee, we freeze the model curve, shift $T_c^{(n)}$ by $\pm\delta$, and calculate $\chi^2$ of $O-C$ for the time shifts $-\delta,\;0,+\delta$. These three values of $\chi^2$ are fitted with a parabola $$\chi^2(t)= at^2+bt+c\label{e1.0}$$ and the location of its minimum, $T_c^{(n+1)}=-b/2a$, may be adopted as the next approximation of the eclipse centre. However, to assure convergence, we trim the time steps $dT_c=T_c^{(n+1)}-T_c^{(n)}$ and carefully select $\delta$ to avoid going too far away from the solution, while keeping it, if possible, bracketed within $\pm\delta$. Such orthogonal polynomials retain only even powers and are optimal both in terms of the number of degrees of freedom and of the numerical procedure, which relies on orthogonal projection. By incorporating weight coefficients in the scalar product, our algorithm is able to account for uneven errors of the data.
Complications, both conceptual and algorithmic, arise due to our ultimate aim of minimizing the dependence of the final result on the somewhat accidental selection of the initial data points. Discrete selection/modification of observations may result in numerical instability and/or non-converging limit cycles. To minimize such danger we resort to smooth down-weighting of points near the edge of the eclipse by a symmetric Fermi-Dirac distribution function (e.g. @kit96): $$f(t) = \frac{1}{e^{\frac{|t|-\Delta}{\delta}}+1}\label{e1.1}$$ The final weight of data measured at time $t_i$ with an error $\sigma_i$ is $w_i=f(t_i)/\sigma_i^2$. As the time zero-point moves during iterations towards the final solution, the weights vary between iterations. We roughly follow the principles of minimization by simulated annealing [@ott89]. At the start of iterations we adopt a rather crude model curve for $m=2$ and smooth weighting $\delta=\Delta/10$. Note that in statistical physics $\delta$ would be proportional to the temperature. Only after successful progress of our algorithm do we tighten our data sample by decreasing $\delta$ (i.e. ’lower temperature’) and refine our model by increasing $m$. Also, we trim the time step to $dT_c\leq\Delta/10$.
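As an illustration of this down-weighting step, a short sketch of our own (variable names follow the text):

``` python
import numpy as np

def fermi_dirac_weights(t, sigma, Delta, delta):
    """Smooth down-weighting of points near the eclipse edge:
    w_i = f(t_i) / sigma_i^2, with f(t) the symmetric Fermi-Dirac roll-off."""
    f = 1.0 / (np.exp((np.abs(t) - Delta) / delta) + 1.0)
    return f / sigma ** 2
```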
After each iteration we calculate two $T_c$ error estimates: $\Sigma$ and $\sigma$. The error $\Sigma$ corresponds to such a model shift $\delta=\Sigma$ that $\chi^2$ grows over its minimum value $\chi^2_{min}$ by its standard deviation $\sqrt{2d}$, where $d=n-m/2-1$ is the number of degrees of freedom. The error $\sigma$ is defined similarly, for growth by $1$. For this purpose $\chi^2$ has to be renormalized by a factor $C$ such that $E\{C\chi^2\}=d$. For an approximate normalization we substitute the minimum value of the parabola in Eq. (\[e1.0\]) for its expectation, so that $C=4ad/(4ac-b^2)$. $\Sigma$ and $\sigma$ are calculated by solving Eq. (\[e1.0\]). In this way we obtain $a\Sigma^2= \sqrt{2d}\chi^2_{min}/d$ and similarly for $\sigma^2$: $$\begin{aligned}
\Sigma^2&=&\sqrt{2}\frac{4ac-b^2}{4a^2\sqrt{d}}\label{e1.2}\\
\sigma^2&=&\frac{4ac-b^2}{4a^2d}\label{e1.3}\end{aligned}$$ To force a positive result we apply Eq. (\[e1.0\]) to $\log{\chi^2}$ rather than to $\chi^2$. In statistical terms, $\Sigma$ corresponds to the maximum reasonable uncertainty of $T_c$, were the other parameters strongly correlated with it, while $\sigma$ is its likely error, assuming weak correlation.
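A sketch of the corresponding error evaluation, assuming the parabola coefficients $a$, $b$, $c$ of Eq. (\[e1.0\]) are already known (the log-transformation mentioned above is omitted for brevity), could be:

```python
import numpy as np

def centre_errors(a, b, c, n, m):
    """Error estimates of T_c from the chi^2 parabola chi^2(t) = a t^2 + b t + c,
    following Eqs. (e1.2) and (e1.3); n is the number of data points and m the
    polynomial order, leaving d = n - m/2 - 1 degrees of freedom."""
    d = n - m / 2.0 - 1.0
    disc = 4.0 * a * c - b ** 2          # 4ac - b^2, positive for a real minimum
    Sigma = np.sqrt(np.sqrt(2.0) * disc / (4.0 * a ** 2 * np.sqrt(d)))
    sigma = np.sqrt(disc / (4.0 * a ** 2 * d))
    return Sigma, sigma
```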
During iterations we keep $\delta$ as large as $\delta=2\Sigma$. Thus $\delta$ decreases at a moderate rate as the model fit improves. The polynomial order is raised in steps of $2$, up to its maximum allowed value, only after the corrections to $T_c$ have decreased to $dT_c\equiv T_c^{(n+1)}-T_c^{(n)} < 0.01\Sigma$. We terminate the iterations when $m$ reaches its selected maximum value and the time steps become small, $dT_c < \epsilon\Delta$, where $\epsilon$ is our tolerance parameter. Our result is the final eclipse centre $T_c$ and its likely error $\sigma$. Note that for correlated observations the errors need to be increased [@asc91].
Bernstein, R., Shectman, S. A., Gunnels, S. M., Mochnacki, S. & Athey, A. E. 2003, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes. Edited by Iye, M. & Moorwood, A. F. M. Proceedings of the SPIE, 4841, 1694 Bonatto, C., Campos, F. & Kepler, S. O., , 435, 263 Brandt, S. 1975, Statistical and Computational Methods in Data Analysis, Amsterdam, North Holland. Casagrande, L., Ramírez , I., Meléndez, J., Bessell, M. & Asplund, M. 2010, , 512, 54 Claret, A. 2000, , 363, 1081 Carretta, E., Bragaglia, A., Gratton, R., D’Orazi, V. & Lucatello, S. 2009, , 508, 695 Carretta, E. & Gratton, R. G. 1997, A&AS, 121, 95 Castelli, F. & Kurucz, R. L. 2004, arXiv:astro-ph/0405087 Coelho, P., Barbuy, B., Meléndez, J., Schiavon, R. P. & Castilho, B. V. 2005, , 443, 735 Dalessandro, E., Massari, D., Bellazzini, M. et al. 2014, , 791, L4 Dotter, A., Chaboyer, B. Jevremović, D. et al. 2008, , 178, 89 Dotter, A., Sarajedini, A., Anderson, J. et al. 2010, , 708, 698 Hadrava, P. 2009, A&A, 494, 399 Harris, W. E. 1996, , 112, 1487 Kaluzny, J., Rozyczka, M., Pych, W. & Thompson, I. B. 2014, , 64, 309 Kaluzny, J., Thompson, I. B., Dotter, A., Rozyczka, M., Pych, W. et al. 2014, , 64, 11 Kaluzny, J., Thompson, I. B., Rozyczka, M., Dotter, A., Pych, W. et al. 2013, , 145, 43 Kaluzny, J., Thompson, I. B., Krzeminski, W. et al. 2005, AIP Conf. Proc., Vol. 752, Stellar Astrophysics with the World’s Largest Telescopes, ed. J. Mikolajewska and A. Olech, p. 70 Kelson D. D. 2003, , 115, 688 Kovács, G. & Walker, A. R. 2010, , 371, 579 Kwee, K. K. & van Woerden, H. 1956, Bull. Astron. Inst. Netherlands, 12, 327 Kittel, Ch., 1996, Introduction to Solid State physics, NY, Wiley. Landolt, A. U. 1992, , 104, 372 Mathieu, R. D., Meibom, S. & Dolan, C. J. 2004 , 602, L121 Mazeh, T. 2008, EAS Publ. Series, 29, 1 Nardiello, D., Piotto, G., Milone, A. P. et al. 2015, , 451, 4831 Mazur, B., Kaluzny, J. & Krzeminski, W. 1999, , 306, 727 Otten, R.H.J.M. & van Ginneken, L.P.P.P. 1989, The Annealing Algorithm (Boston: Kluver) Olech, A., Kaluzny, J., Thompson, I. B. et al. 2001, , 321, 421 Paczyński, B. 1997, in Space Telescope Science Institute Series, The Extragalactic Distance Scale, ed. M. Livio (Cambridge: Cambridge Univ. Press), p. 273 Prša, A. & Zwitter, T. 2005, , 628, 426 Ralston, A. 1965, (NY: McGraw-Hill) Schwarzenberg-Czerny, A. 1991, , 198 Southworth, J. 2013, , 434, 1300 Stetson, P. B. 1987, , 99, 191 Stetson, P. B. 1990, , 102, 932 Thompson, I. B., Kaluzny, J., Pych, W. et al. 2001, , 121, 3089 Thompson, I. B., Kaluzny, J., Rucinski, S. M., et al. 2010, , 139, 329 VandenBerg, D. A., Bergbusch, P. A., Dotter, A. et al. 2012, , 755, 15 Zloczewski, K., Kaluzny, J., Rozyczka, M., Krzeminski, W., Mazur, B., & Thompson, I. B. 2012, , 62, 357 Zucker, S. & Mazeh, T. 1994, , 420, 806
[lccc]{}
-480.5 & 1349.66400 & 0.00129 & 0.00060\
-274.5 & 2440.67681 & 0.00083 & -0.00019\
0.5 & 3891.82809 & 0.00027 & 0.00045\
3.5 & 3907.71733 & 0.00024 & -0.00027\
16.5 & 3976.56802 & 0.00139 & -0.00069\
77.5 & 4299.63424 & 0.00026 & -0.00024\
138.5 & 4622.70052 & 0.00025 & 0.00015\
276.5 & 5353.57276 & 0.00031 & 0.00003\
0. & 3915.50987 & 0.00028 & -0.00022\
1. & 3920.80387 & 0.00096 & 0.00196\
139.0 & 4651.67750 & 0.00035 & 0.00014\
335.0 & 5689.72753 & 0.00039 & -0.00007\
[lccc]{} 2870.5433 & 52.73 & -78.06 & 0.665\
3183.6775 & 57.18 & -85.40 & 0.790\
3201.5720 & -74.19 & 52.16 & 0.169\
3206.5845 & -65.72 & 41.55 & 0.115\
3210.5848 & 35.22 & -64.64 & 0.871\
3517.7456 & 36.05 & -63.77 & 0.867\
3521.7538 & 40.02 & -66.70 & 0.624\
3580.5437 & 60.61 & -87.52 & 0.725\
3585.6468 & 56.59 & -84.07 & 0.688\
3586.6453 & 33.28 & -59.01 & 0.877\
3815.8955 & -74.88 & 52.41 & 0.163\
3816.8897 & -62.77 & 41.06 & 0.350\
3877.8517 & 38.86 & -66.45 & 0.861\
3890.7002 & -74.76 & 55.72 & 0.287\
3935.6890 & 57.56 & -86.79 & 0.782\
3937.6503 & -72.90 & 50.94 & 0.152\
3938.5635 & -68.71 & 47.47 & 0.324\
3991.5267 & -68.84 & 47.33 & 0.325\
4258.7016 & 59.64 & -88.13 & 0.771\
4258.7465 & 57.69 & -86.47 & 0.780\
4258.7904 & 58.16 & -85.71 & 0.788\
4329.5454 & -72.97 & 49.67 & 0.148\
4648.6241 & -49.91 & 27.55 & 0.395\
4656.6819 & 15.95 & -41.39 & 0.916\
[lccc]{} 2871.5571 & -32.73 & 8.93 & 0.181\
2872.5519 & -38.27 & 15.99 & 0.236\
3179.7609 & -47.66 & 27.81 & 0.410\
3206.5336 & 20.39 & -52.18 & 0.906\
3517.7940 & -43.64 & 23.06 & 0.306\
3520.7452 & -46.33 & 26.13 & 0.471\
3521.7074 & -40.89 & 20.15 & 0.525\
3580.6885 & 42.56 & -73.79 & 0.822\
3581.6731 & 30.56 & -59.28 & 0.877\
3582.5946 & 16.30 & -44.91 & 0.928\
3891.6858 & -34.69 & 14.52 & 0.207\
3937.6050 & 44.58 & -76.40 & 0.774\
3938.6590 & 40.59 & -73.04 & 0.833\
3990.5131 & 35.50 & -66.03 & 0.731\
4329.6373 & 15.02 & -42.95 & 0.689\
4648.7184 & -41.08 & 19.93 & 0.525\
4649.6742 & -30.79 & 8.04 & 0.579\
4649.7192 & -30.27 & 7.02 & 0.581\
4652.7191 & 40.41 & -73.29 & 0.749\
4655.7155 & 19.30 & -48.17 & 0.917\
4967.7970 & -47.13 & 25.65 & 0.362\
5012.6510 & 30.80 & -62.41 & 0.870\
5038.5921 & -44.61 & 23.99 & 0.320\
6842.6401 & -30.67 & 7.75 & 0.167\
6844.6397 & -41.48 & 20.30 & 0.279\
6845.6556 & -46.30 & 24.06 & 0.336\
[lcc]{} $\gamma$ (km s$^{-1}$) & -12.34(11) & -12.55(8)\
$K_p$ (km s$^{-1}$) & 70.18(17) & 46.59(14)\
$K_s$ (km s$^{-1}$) & 73.63(25) & 52.57(17)\
$\sigma_p$ (km s$^{-1}$) & 0.72 & 0.53\
$\sigma_s$ (km s$^{-1}$) & 1.07 & 0.65\
$A$ (R$_\odot$) & 15.037(32) & 33.294(71)\
$e$ & 0.05054(73) & 0.3125(13)\
$\omega$ (deg) & 206.6(17) & 321.23(33)\
$i$ (deg) & 88.22(7) & 89.547(20)\
$r_p$ & 0.08817(45) & 0.03225(13)\
$r_s$ & 0.06627(87) & 0.02195(12)\
$S_V$ & 0.9815(35) & 0.7333(63)\
$S_B$ & 0.9686(18) & 0.669(11)\
$(L_p/L_s)_V$ & 1.803(54) & 2.964(27)\
$(L_p/L_s)_B$ & 1.8272(57) & 3.264(44)\
$\sigma_\mathrm{rms}(V)$ (mmag) & 11 & 12\
$\sigma_\mathrm{rms}(B)$ (mmag) & 13 & 16\
$V_p$ (mag) & 18.698(13)(20) & 19.089(7)(17)\
$V_s$ (mag) & 19.338(22)(27) & 20.274(10)(18)\
$(B-V)_p$ (mag) & 0.542(18)(23) & 0.550(10)(18)\
$(B-V)_s$ (mag) & 0.556(31)(34) & 0.650(16)(22)\
[lcc]{} $M_p$ (M$_\odot$) & 0.8337(63) & 0.8215(58)\
$M_s$ (M$_\odot$) & 0.7947(48) & 0.7280(47)\
$R_p$ (R$_\odot$) & 1.3253(77) & 1.0739(48)\
$R_s$ (R$_\odot$) & 0.997(13) & 0.7307(46)\
$T_p$ (K) & 6156(123) & 6124(121)\
$T_s$ (K) & 6100(158) & 5747(122)\
$L^\mathrm{bol}_p$ (L$_\odot$) &2.27(18) &1.46(11)\
$L^\mathrm{bol}_s$ (L$_\odot$) &1.24(13) &0.524(45)\
$M_{V_p}$ (mag) &3.92(8) & 4.40(8)\
$M_{V_s}$ (mag) &4.59(11) & 5.54(9)\
$(m-M)_{V_p}$ (mag) & 14.78(8)& 14.69(8)\
$(m-M)_{V_s}$ (mag) & 14.75(9) & 14.73(8)\
$Age_p$ (Gyr) & 11.7(3) &10.7(3)\
$Age_s$ (Gyr) & 11.5(4) &9.2(6)\
$Age_b$ (Gyr) &11.7(2) &10.4(3)\
[^1]: described in detail at and available from http://www.astro.keele.ac.uk/jkt/codes/jktebop.html
[^2]: Written by John Southworth and available at www.astro.keele.ac.uk/jkt/codes/jktld.html
[^3]: https://sites.google.com/site/mamajeksstarnotes/basic-astronomical-data-for-the-sun
|
---
author:
- 'Shivali Malhotra[^1]'
- 'Md. Naimuddin[^2]'
- Thomas Hebbeker
- Arnd Meyer
- Holger Pieta
- Paul Papacz
- Stefan Antonius Schmitz
- Mark Olschewski
title: Model Unspecific Search in CMS
---
Introduction {#intro}
============
The start-up of the LHC brings forth a new era of high energy particle physics. What we will find is yet unknown; however, there is a large number of theories predicting possible outcomes. Many of those theories are tested by dedicated analyses at CMS and the other LHC experiments. However, new physics could as well manifest itself in ways no-one has yet thought of. Thus we have implemented a Model Unspecific Search in CMS: **MUSiC**. Model unspecific searches aim to analyse a large fraction of the data, systematically scanning them for deviations from the Standard Model (SM). Therefore the selection cuts are not optimized for any expected new physics signal; however, the quality of the measurement is ensured by selecting well-measured and well-understood physics objects such as isolated high-$p_{T}$ leptons. Any deviation seen might be a detector effect, a lack of understanding of the event generation and simulation, or truly a new physics signal. Similar strategies have already been applied successfully at other accelerator experiments, such as H1 at HERA and D0 and CDF at the Tevatron.
Search Algorithm {#sec:1}
================
New Physics usually shows up in distinctive final states and so the first task is to sort the events. We consider the following physics objects: muons ($ \mu $), electrons (e), photons ($ \gamma $), hadronic jets (jet), and missing transverse energy (MET). Events are sorted into event classes, which represent a single exclusive final state, depending on their content of reconstructed objects. To avoid using the same energy entry more than once, objects are removed if they are too close to each other: jets are removed if there are photons or electrons nearby, and photons are removed if an electron is close. An event class can include up to four leptons, two photons and eight jets, with at least one lepton in each event class (a minimal sketch of this classification step is given after the list below). After assigning events to event classes, three kinematic distributions, which are promising for spotting new physics, are examined:
- Scalar Sum of the Transverse Momentum of all participating objects: $ \Sigma p_{T}$.
- Combined Invariant Mass M of those objects. For classes containing MET, the transverse invariant mass $M_{T}$ is used.
- Missing Transverse Energy in classes containing MET above our predefined threshold.
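As an illustration of the classification step described above, a minimal sketch could be the following (the data layout and the function name are our own simplifications, not the actual CMS software interface):

```python
from collections import Counter

def event_class(objects):
    """Build an exclusive event-class label from the reconstructed objects of one
    event, given here as a list of strings such as ["mu", "mu", "jet", "met"].
    Returns None if the event has no lepton or exceeds the allowed multiplicities."""
    counts = Counter(objects)
    n_lep = counts["mu"] + counts["e"]
    if n_lep == 0 or n_lep > 4 or counts["gamma"] > 2 or counts["jet"] > 8:
        return None
    return "_".join(f"{counts[obj]}{obj}" for obj in sorted(counts) if counts[obj] > 0)
```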
Out of these distributions $ \Sigma p_{T}$ is the most general observable, sensitive to many new physics models involving heavy new particles or modified high-energy behaviour. The invariant mass allows the discovery of new resonances. New physics with heavy or highly boosted invisible particles will show up in the MET distribution.
The algorithm systematically scans for deviations, comparing the simulated SM prediction with the measured data. This is done by calculating a p-value for each connected bin region. Since all distributions are analysed as binned histograms, the bin width is adjusted to the resolution of the considered variable: it is selected to be the smallest multiple of 10 GeV which exceeds the expected resolution. The p-value is the probability of a discrepancy between data and Monte Carlo at least as extreme as the one observed, under the hypothesis that the Monte Carlo describes the data within its uncertainties. The region with the smallest p-value contains the most significant discrepancy and we call it the Region of Interest (*RoI*).
While p denotes the probability of each RoI seen individually, it cannot be used as a statistical estimator of the global significance of such a deviation occurring in any region. A penalty factor needs to be included to account for the number of investigated regions. Using pseudo-experiments, we can compute a new probability $ \tilde{p} $ of measuring at least one deviation in any region of this distribution with a lower p-value than the one seen in the RoI of the data. The value $ \tilde{p} $ for a given distribution is then simply the number of pseudo-experiments with $p_{pseudo}$ $<$ $p_{data}$, divided by the total number of pseudo-experiments.
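A minimal sketch of this scan is given below; the simple Poisson treatment neglects the systematic uncertainties on the SM expectation and is only meant to illustrate the logic (here `data` and `sm` are numpy arrays of observed and expected bin contents):

```python
import numpy as np
from scipy import stats

def p_value(n_obs, n_sm):
    """Poisson probability of a deviation at least as extreme as n_obs,
    given the SM expectation n_sm (systematic uncertainties neglected)."""
    if n_obs >= n_sm:
        return stats.poisson.sf(n_obs - 1, n_sm)    # excess: P(N >= n_obs)
    return stats.poisson.cdf(n_obs, n_sm)           # deficit: P(N <= n_obs)

def region_of_interest(data, sm):
    """Scan all connected bin regions and return the smallest p-value (the RoI)."""
    best_p, best_region = 1.0, None
    for i in range(len(data)):
        for j in range(i + 1, len(data) + 1):
            p = p_value(data[i:j].sum(), sm[i:j].sum())
            if p < best_p:
                best_p, best_region = p, (i, j)
    return best_p, best_region

def p_tilde(data, sm, n_pseudo=1000, rng=np.random.default_rng(1)):
    """Fraction of pseudo-experiments (Poisson-fluctuated SM) whose most
    significant region has a smaller p-value than the RoI seen in data."""
    p_data, _ = region_of_interest(data, sm)
    p_pseudo = [region_of_interest(rng.poisson(sm), sm)[0] for _ in range(n_pseudo)]
    return np.mean(np.array(p_pseudo) < p_data)
```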
Both probabilities p and $ \tilde{p} $ can be translated into standard deviations of a normal distribution, which is a common way to quote significances. As MUSiC is sensitive to both an excess and a deficit of events, a two-sided normal distribution is used.
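For illustration, this conversion can be written as follows (a sketch; the exact convention used in the analysis may differ in detail):

```python
from scipy import stats

def significance(p):
    """Translate a p-value into standard deviations of a two-sided normal
    distribution, so both excesses and deficits map onto positive significances."""
    return stats.norm.isf(p / 2.0)
```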
The reconstruction efficiencies and the misidentification probability are determined from simulation, and uncertainties are applied to cover possible differences with respect to data. The uncertainty on the reconstruction efficiency is estimated at 4% for muons, 3% for electrons and photons, and 1% for jets. For the uncertainty on the misidentification probability, we use 30% for photons, 50% for muons, and 100% for electrons.
Sensitivity Tests {#sec:2}
=================
To claim the absence of any obvious deviations, the search needs to have sufficient sensitivity to new physics. A number of test scenarios has been studied to show that MUSiC would be able to successfully point out discrepancies. One test was to remove the top-quark pair production process from the SM background; eight classes with a significance of 3$\sigma$ or higher were observed. Another test was to assume a 500 GeV Z' (a neutral heavy gauge boson) as a potential signal for new physics. Here, the di-muon and di-electron invariant mass distributions exceeded the 4$\sigma$ level. Finally, assuming the SUSY benchmark point LM0 as a potential signal, deviations in a large number of classes were clearly visible.
Results {#sec:3}
=======
The analysis has been applied to the data taken by CMS in 2010 and no significant deviations have been observed. Overall, 287 distributions in 118 classes have been analysed. The distributions of $\tilde{p}$ for all analysed event classes, separated by kinematic variable, i.e. the scalar sum of the transverse momentum (Fig. 1), the invariant mass (Fig. 2) and the missing transverse energy (Fig. 3), are shown here. The shaded area is the SM simulation, the crosses represent the data, and no unexpected deviations are found. The complete results for all the classes can be found at *https://cms-project-music.web.cern.ch/cms-project-music*.
Summary {#sec:4}
=======
The implementation and results of a model independent analysis of the 2010 CMS data (corresponding to an integrated luminosity of 36.1/pb) have been presented. Studies have been performed to evaluate the sensitivity of the approach to certain kinds of new physics scenarios. MUSiC shows good agreement between data and the SM expectation within uncertainties.
[^1]:
[^2]:
|
---
abstract: 'The STAR Collaboration reported measurements of di-hadron azimuthal correlation in medium-central Au+Au collisions at 200 A$\,$GeV, where the data are presented as a function of the trigger particle’s azimuthal angle relative to the event plane $\phi_s\,$. In particular, it is observed that the away-side correlation evolves from a single- to a double-peak structure with increasing $\phi_s$. In this work, we present the calculated correlations as functions of both $\phi_s$ and particle transverse momentum $p_T\,$, using the hydrodynamic code NeXSPheRIO. The results are found to be in reasonable agreement with the STAR data. We further argue that the above $\phi_s$ dependence of the correlation structure can be understood in terms of the one-tube model, as due to an interplay between the background elliptic flow caused by the initial state global geometry and the flow produced by fluctuations.'
author:
- 'Wei-Liang Qian$^1$, Rone Andrade$^2$, Fernando Gardim$^2$, Frédérique Grassi$^2$, Yogiro Hama$^2$'
bibliography:
- 'references\_qian.bib'
date: 'August 2, 2012'
title: 'On the origin of trigger-angle dependence of di-hadron correlations'
---
I. Introduction
===============
Di-hadron correlations in ultrarelativistic heavy-ion collisions provide valuable information on the properties of the created medium. The correlated hadron yields at intermediate and low $p_T$, when expressed in terms of the pseudo-rapidity difference $\Delta\eta$ and azimuthal angular spacing $\Delta\phi$, are strongly enhanced [@star-ridge1; @star-ridge2; @star-ridge3; @phenix-ridge4; @phenix-ridge5; @phobos-ridge6; @phobos-ridge7; @alice-vn4; @cms-ridge5; @atlas-vn1] compared to those at high $p_T$ [@star-jet-1; @star-jet-2]. The structure on the near side of the trigger particle is usually referred to as the “ridge”. It has a narrow $\Delta\phi$ extent located around zero and a long extension in $\Delta\eta$, and therefore it is tied to long-range correlations in pseudo-rapidity. The away-side correlation broadens from peripheral to central collisions, and may exhibit a double peak in $\Delta\phi$ for certain centralities and particle $p_T$ ranges. The latter is usually called “shoulders”. These structures, for the most part, can be successfully interpreted as a consequence of collective flow [@sph-corr-1; @sph-corr-3; @hydro-v3-1; @ph-vn-3; @ph-corr-1; @sph-corr-4] due to the hydrodynamical evolution of the system.
Recently, efforts have been made to investigate the trigger-angle dependence of di-hadron correlations. The STAR Collaboration reported measurements [@star-plane1] of di-hadron azimuthal correlations as a function of the trigger particle’s azimuthal angle relative to the event plane, $\phi_s=|\phi_{tr}-\Psi_{EP}|$, at different trigger and associated transverse momenta $p_T$. The data are for 20-60% mid-central Au+Au collisions at 200 A$\,$GeV. In a more recent study [@star-plane2], the correlation was further separated into “jet” and “ridge”, where the ridge yields are obtained by considering hadron pairs with large $|\Delta\eta|$. In this procedure, one assumes that the ridge is uniform in $\Delta\eta$ while the jet yields are not. Such an assumption is quite reasonable when one takes into account the measured correlation at low $p_T$ without a trigger particle [@star-ridge5]. In their work, all correlated particles at $|\Delta\eta| > 0.7$ are considered to be part of the ridge. It was observed that the correlations vary with $\phi_s$ on both the near and away sides of the trigger particle. The “ridge” drops when the trigger particle goes from in-plane to out-of-plane and, moreover, the correlations on the away side evolve from single- to double-peaked with increasing $\phi_s$.
Owing to the $|\Delta\eta|$ cut, it is quite probable that these data mainly reflect the properties of the medium. If this is the case, the main features of the observed trigger-angle dependence of di-hadron correlations should be reproduced by hydrodynamic simulations, as hydrodynamic models have been shown capable of reproducing many important characteristics of collective flow [@hydro-review-1; @hydro-v2-heinz-1; @hydro-v2-heinz-2; @hydro-v2-heinz-3; @hydro-v2-shuryak-1; @hydro-v2-hirano-1; @hydro-v2-hirano-2; @sph-v2-1; @sph-v2-2; @hydro-v3-1; @hydro-v3-3; @hydro-eve-3; @hydro-eve-4; @epos-2]. The NeXSPheRIO code provides a good description of the observed hadron spectra [@sph-cfo-1], collective flow [@sph-v2-1; @sph-v2-2; @sph-v1-1; @sph-vn-1], elliptic flow fluctuations [@sph-v2-3], and two-pion interferometry [@sph-hbt-1]. In addition, it is known to reproduce the structures in di-hadron long-range correlations [@sph-corr-1]. In our previous studies on the ridge [@sph-corr-3; @sph-corr-2; @sph-corr-5], we obtained some of the experimentally known properties, such as the centrality dependence and the $p_T$ dependence. It is therefore interesting to see whether the model is further able to reproduce the observed trigger-angle dependence of the data. Moreover, it is intriguing to identify the underlying physical origin behind the phenomenon and the numerical simulations. We therefore propose an intuitive explanation of the mechanism of the trigger-angle dependence of the ridge structures based on hydrodynamics. This is the main purpose of the present study.
The paper is organized as follows. In the next section, we carry out a hydrodynamic study of the trigger-angle dependence of di-hadron correlations by using the NeXSPheRIO code. The calculations are done both with and without the pseudo-rapidity cut $|\Delta\eta| > 0.7\,$. Numerical results are presented for different angles of the trigger particles and at different associated-particle transverse momenta, and they are compared with the STAR data [@star-plane1; @star-plane2]. We try to understand the origin of the observed features in section III, by making use of an analytic parametrization of the one-tube model [@sph-corr-3]. It is shown that the main features of the experimentally observed $\phi_s$ dependence can be obtained by the model, where one takes into account the interplay of the flow harmonics caused by a peripheral energetic tube and those from the bulk background.[^1] In our approach, the background modulation is evaluated both with the cumulant and with the ZYAM method, and very similar results are obtained. Section IV is devoted to discussions and conclusions.
II. Numerical results of NeXSPheRIO
===================================
Here, we present the numerical results on di-hadron azimuthal correlations, obtained using the hydrodynamic code NeXSPheRIO. This code uses initial conditions (IC) provided by the event generator NeXuS [@nexus-1; @nexus-rept] and solves the relativistic ideal hydrodynamic equations with the SPheRIO code [@sph-review-1]. By generating many NeXuS events, and solving independently the equations of hydrodynamics for each of them, one takes into account the fluctuations of the IC on an event-by-event basis. At the end of the hydrodynamic evolution of each event, a Monte-Carlo generator is employed to achieve hadron emission, in the Cooper-Frye prescription, and then hadron decays are considered.
To evaluate di-hadron correlations, we generate 1200 NeXuS events in the 20-60% centrality window for 200 A$\,$GeV Au-Au collisions. At the end of each event, the Monte-Carlo generator is invoked for decoupling, from 300 times for more central to 500 times for more peripheral collisions. Here we emphasize that there is no free parameter in the present simulation, since the few existing ones have been fixed in earlier studies of $\eta$ and $p_T$ distributions [@sph-v2-3]. To subtract the combinatorial background, we evaluate the two-particle cumulant. In order to make different events similar in character, the whole centrality window is further divided equally into four smaller centrality classes, from 20-30% to 50-60%. Then one picks a trigger particle from one event and an associated particle from a different event to form a hadron pair. Averaging over all the pairs within the same sub-centrality class, one obtains the two-particle cumulant. The background modulation is evaluated and subtracted within each sub-centrality class, and the results are then summed up at the end. We calculated both cases: with and without the pseudo-rapidity cut. In the first case, all hadron pairs are included. In the latter case, only hadron pairs with $|\Delta\eta| > 0.7$ are considered, as done in the STAR analysis. Note that in our approach, the IC constructed by “thermalizing” the NeXus output do not explicitly involve jets, but jets are not totally forgotten in the IC, as they manifest themselves as high transverse fluid velocity in some localized regions (see for instance Fig.2 of Ref.[@sph-corr-2]). These regions with high transverse velocity, if not smeared out during the hydrodynamic evolution, show up as a part of the near-side “jet” peak in the resulting two-particle correlation. Owing to its different physical origin, this contribution is not correlated to the “ridge” and “shoulder” due to initial geometrical irregularities. The implementation of the pseudo-rapidity cut in our calculation should reduce the correlation due to such an effect.
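A minimal sketch of this cumulant construction, with a simplified, hypothetical data layout (one numpy array of azimuthal angles per event), could be:

```python
import numpy as np

def fold(dphi):
    """Fold the azimuthal difference into the conventional range [-pi/2, 3pi/2)."""
    return (dphi + np.pi / 2) % (2.0 * np.pi) - np.pi / 2

def cumulant_correlation(trig, assoc, trig_mix, assoc_mix, nbins=36):
    """Two-particle cumulant: per-trigger pair yield from same-event pairs minus
    the one built from mixed events of the same sub-centrality class.
    'trig' and 'assoc' are lists of azimuthal-angle arrays, one entry per event;
    the '_mix' lists pair particles taken from different events."""
    bins = np.linspace(-np.pi / 2, 3 * np.pi / 2, nbins + 1)

    def per_trigger_yield(trigs, assocs):
        h, ntrig = np.zeros(nbins), 0
        for pt, pa in zip(trigs, assocs):
            d = fold(pa[None, :] - pt[:, None]).ravel()
            h += np.histogram(d, bins=bins)[0]
            ntrig += len(pt)
        return h / max(ntrig, 1)

    return per_trigger_yield(trig, assoc) - per_trigger_yield(trig_mix, assoc_mix)
```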
The numerical results are shown in Fig.1 and Fig.2 as solid lines. They are, respectively, compared with the STAR data [@star-plane1; @star-plane2] shown as filled circles, with the flow systematic uncertainties shown as histograms. From Fig.1 and Fig.2, one sees that the data are reasonably reproduced by the NeXSPheRIO code. The correlations decrease both on the near side and on the away side when $\phi_s$ increases. For high-$p_T$ triggers, this was thought to be related to the path length that the parton traverses [@star-plane2]. Here, we see that such a feature is also present for intermediate- and low-energy particles, and is reproduced well in a hydrodynamic approach. The magnitude of the correlations in Fig.2 is globally smaller than that in Fig.1. This is because, owing to the $\Delta\eta$ cut in the former case, the total yield of associated particles is reduced. If one only takes into consideration the overall shape of the correlations, one may notice that the hydrodynamic results in both plots are quite similar: Fig.2 can be approximately obtained if one scales the plots in Fig.1 by a factor of 0.6. The above results can be understood as a consequence of approximate Bjorken scaling in our hydrodynamic model. On one hand, the $\Delta\eta$ cut effectively removes a portion of the associated particles with selected pseudo-rapidity difference to the trigger particle. On the other hand, since the correlation is divided by the total trigger-particle number, this reduction does not affect the normalization. As expected, the NeXSPheRIO results fit better at low momentum and the deviations increase at higher momentum. The calculated ridge structure in $\Delta\phi$ varies, especially on the away side, with the trigger direction. For in-plane triggers, the simulation results exhibit a one-peak structure, which is broader in comparison with the near-side peak. The away-side structure changes continuously from one peak in the in-plane direction ($\phi_s=0$) to a double peak in the out-of-plane direction ($\phi_s=\pi/2$). This characteristic is manifested particularly for larger transverse momentum of the associated particle. All these features are in good agreement with the STAR measurements.
III. The origin of trigger-angle dependence of ridge structure
==============================================================
As pointed out, the motivation to introduce the $|\Delta\eta|$ cut was to separate the “ridge” from the “jet”; therefore the measured “ridge” correlations are expected to reflect the properties of the medium. In the previous section, we showed that hydrodynamic simulations are able to reproduce the main features observed experimentally. Now, how is this effect produced? In a previous study [@sph-corr-ev-1], we tried to clarify the origin of the effect by using the one-tube model [@sph-corr-3] adapted to non-central collisions. Full details of the model can be found in [@sph-corr-3; @sph-corr-2; @sph-corr-4; @sph-corr-5; @sph-corr-6]. Here we only outline the main ideas. In the one-tube model, or more generally the peripheral-tube model (allowing more than one tube), one treats the initial energy profile on a transverse plane as a superposition of peripheral high-energy tubes on top of the background, which can be thought of as the average distribution. The problem is further simplified by studying the transverse hydrodynamic expansion of a system consisting of only one peripheral tube, with the assumption of longitudinal invariance. As the high-energy tube deflects the collective flow generated by the background, the resulting single-particle azimuthal distribution naturally possesses two peaks. They eventually give rise to the desired “ridge” and “shoulder” structures. In ref.[@sph-corr-ev-1], we took as background the average energy-density distribution obtained with NeXSPheRIO, which has an elliptical shape for non-central collisions. A peripheral tube sits on top of the background and its azimuthal position varies from event to event. As shown there, the di-hadron correlation obtained from such a simple IC configuration does reproduce the main features of the data.
Though the IC of the problem was greatly simplified through such an approach, the underlying physical mechanism of the obtained features was still not clear. In order to identify it more transparently, we simplify the problem further using an approximate analytical model. The derivation of the results relies on the following three hypotheses:
- The collective flow consists of contributions from the background and those induced by a peripheral tube.
- A small portion of the flow is generated due to the interaction between background and the peripheral tube and, therefore, the flow produced in this process is correlated to the tube.
- Event by event multiplicity fluctuations are further considered as a correction that sits atop of the above collective flow of the system.
Let us comment briefly on these hypotheses. Based on the idea of the one-tube model, a small portion of the background flow is deflected by the peripheral tube; extra Fourier components of the flow are generated by this process. The event planes of these extra flow harmonics are consequently correlated to the location of the tube, as stated in the second hypothesis. Since the contribution from the tube is small, we will treat it perturbatively, considering the resultant flow to be a superposition of the background flow and the one produced in the tube-background interaction as described above. We believe that at least the qualitative behavior of the results will remain valid also in a more realistic case. Here, we are considering just one tube as a fluctuation. However, in the limit of small perturbations, it is quite straightforward to generalize our results to the case of $N$ tubes. There are many possible sources of fluctuations, such as flow fluctuations, multiplicity fluctuations, etc.; we will only consider multiplicity fluctuations in this simple model, as assumed in the third hypothesis. As will be shown below, this turns out to be enough to derive the observed feature in di-hadron correlations.
Using the hypotheses stated above, we write down the one-particle distribution as a sum of two terms: the distribution of the background and that of the tube. $$\begin{aligned}
\frac{dN}{d\phi}(\phi,\phi_t) =\frac{dN_{bgd}}{d\phi}(\phi) +\frac{dN_{tube}}{d\phi}(\phi,\phi_t),
\label{eq1} \end{aligned}$$ where $$\begin{aligned}
\frac{dN_{bgd}}{d\phi}(\phi)&=&\frac{N_b}{2\pi}(1+2v_2^b\cos(2\phi)), \label{eq2}\\
\frac{dN_{tube}}{d\phi}(\phi,\phi_t)&=&\frac{N_t}{2\pi}\sum_{n=2,3}2v_n^t\cos(n[\phi-\phi_t]) \label{eq3} \end{aligned}$$ Since the background is dominated by the elliptic flow in non-central collisions, as observed experimentally, in Eq.(\[eq2\]) we consider the simplest case, parametrizing it with the elliptic flow parameter $v_2^b$ and the overall multiplicity, denoted by $N_b$. As for the contributions from the tube, for simplicity, we assume they are independent of its angular position $\phi_t$ and take into account the minimal number of Fourier components needed to reproduce the two-particle correlation due to the sole existence of a peripheral tube in an isotropic background (see the plots of Fig.2 of ref.[@sph-corr-4], for instance). Therefore, only two essential components $v_2^t$ and $v_3^t$ are retained in Eq.(\[eq3\]). We note here that the overall triangular flow in our approach is generated only by the tube, [*i.e.*]{}, $v_3=v_3^t$, and so its symmetry axis is correlated to the tube location $\phi_t$. The azimuthal angle $\phi$ of the emitted hadron and the position of the tube $\phi_t$ are measured with respect to the event plane $\Psi_2$ of the system. Since the flow components from the background are much bigger than those generated by the tube, as discussed below, $\Psi_2$ is essentially determined by the elliptic flow of the background $v_2^b$.
The di-hadron correlation is given by $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}(\phi_s)\right> =\left<\frac{dN_{pair}}{d\Delta\phi}(\phi_s)\right>^{proper} -\left<\frac{dN_{pair}}{d\Delta\phi}(\phi_s)\right>^{mixed} ,
\label{eq4} \end{aligned}$$ where $\phi_s$ is the trigger angle ($\phi_s=0$ for in-plane and $\phi_s=\pi/2$ for out-of-plane trigger). In one-tube model, $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>^{proper}
=\int\frac{d\phi_t}{2\pi}f(\phi_t)
\frac{dN}{d\phi}(\phi_s,\phi_t)\frac{dN}{d\phi}
(\phi_s+\Delta\phi,\phi_t), \end{aligned}$$ where $f(\phi_t)$ is the distribution function of the tube. We will take $f(\phi_t)=1$, for simplicity.
The combinatorial background $\left<{dN_{pair}}/{d\Delta\phi}\right>^{mixed}$ can be calculated by using either the cumulant or the ZYAM method [@zyam-1]. As shown below, both methods lead to very similar conclusions in our model. Here, we first carry out the calculation using the cumulant, which gives $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>^{mixed(cmlt)}
=\int\frac{d\phi_t}{2\pi}f(\phi_t)
\int\frac{d\phi_t'}{2\pi}f(\phi_t')\frac{dN}{d\phi}
(\phi_s,\phi_t)\frac{dN}{d\phi}(\phi_s+\Delta\phi,\phi_t'). \end{aligned}$$ Notice that, in the averaging procedure above, integrations over both $\phi_t$ and $\phi_t'$ are required in the mixed events, whereas only one integration over $\phi_t$ is enough for the proper events. This will make an important difference between the two terms in the subtraction of Eq.(\[eq4\]).
Using our simplified parametrization, Eqs.(\[eq1\]-\[eq3\]) and, by averaging over events, one obtains $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>^{proper}
&=&\frac{<N_b^2>}{(2\pi)^2}(1+2v_2^b\cos(2\phi_s))
(1+2v_2^b\cos(2(\Delta\phi+\phi_s))) \nonumber\\
&+&(\frac{N_t}{2\pi})^2
\sum_{n=2,3}2(v_n^t)^2\cos(n\Delta\phi)
\label{proper}\end{aligned}$$ and $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>^{mixed(cmlt)}
=\frac{<N_b>^2}{(2\pi)^2}(1+2v_2^b\cos(2\phi_s))
(1+2v_2^b\cos(2(\Delta\phi+\phi_s))).
\label{mixedc} \end{aligned}$$ Because of random distribution, contributions from peripheral tube are cancelled out upon averaging in the mixed-event correlation. Observe the difference between the factors multiplying the background terms of the proper- and mixed-event correlation. By subtracting Eq.(\[mixedc\]) from Eq.(\[proper\]), the resultant correlation is $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}(\phi_s)\right>^{(cmlt)}
&=&\frac{<N_b^2>-<N_b>^2}{(2\pi)^2}(1+2v_2^b\cos(2\phi_s))
(1+2v_2^b\cos(2(\Delta\phi+\phi_s))) \nonumber\\
&+&(\frac{N_t}{2\pi})^2\sum_{n=2,3}2({v_n^t})^2
\cos(n\Delta\phi).
\label{ccumulant}\end{aligned}$$ So, one sees that, [*as the multiplicity fluctuates, the background elliptic flow does contribute to the correlation.*]{} Now, by taking $\phi_s=0$ in Eq.(\[ccumulant\]), the correlation for the in-plane trigger is finally given as $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi} \right>_{in-plane} ^{(cmlt)}
&=&\frac{<N_b^2>-<N_b>^2}{(2\pi)^2}
(1+2v_2^b)(1+2v_2^b\cos(2\Delta\phi))\nonumber\\
&+&(\frac{N_t}{2\pi})^2
\sum_{n=2,3}2(v_n^t)^2\cos(n\Delta\phi).
\label{eq9} \end{aligned}$$ We note that, due to the summation of the two terms concerning $v_3^t$ and $v_2^b$, the away-side peak becomes broader than the near-side one, as shown below in Fig.\[fig3\].
Similarly, the out-of-plane correlation is obtained by putting $\phi_s=\pi/2$ as $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>_{out-of-plane}^{(cmlt)}
&=&\frac{<N_b^2>-<N_b>^2}{(2\pi)^2}
(1-2v_2^b)
(1-2v_2^b\cos(2\Delta\phi))\nonumber\\
&+&(\frac{N_t}{2\pi})^2
\sum_{n=2,3}2(v_n^t)^2\cos(n\Delta\phi).
\label{eq10} \end{aligned}$$ One sees that, because of the shift in the trigger angle $\phi_s$ ($0\rightarrow\pi/2$), the cosine dependence of the background contribution gives an opposite sign, as compared to the in-plane correlation. This negative sign leads to the following consequences. Firstly, there is a reduction in the amplitude of the out-of-plane correlation both on the near side and on the away side. More importantly, it naturally gives rise to the observed double-peak structure on the away side. Therefore, despite its simplicity, the above analytic model reproduces the main characteristics of the observed data. The overall correlation is found to decrease, while the away-side correlation evolves from a broad single peak to a double peak as $\phi_s$ increases. The correlations in Fig.3 are plotted with the following parameters $$\begin{aligned}
<N_t^2>=0.45 \nonumber \\
v_2^t=v_3^t=0.1 \nonumber \\
<N_b^2>-<N_b>^2=0.022 \nonumber \\
v_2^b=0.25 \label{parameterset}\end{aligned}$$ We note that both the correlated yields and the flow harmonics actually depend on the specific choice of the $p_T$ interval of the observed hadrons, as shown in Fig.1 and Fig.2. Since the transverse momentum dependence has not been explicitly taken into account in this simple model, the multiplicities in the above parameter set are only determined up to an overall normalization factor, and the flow coefficients are chosen to reproduce the qualitative behavior of the trigger-angle dependence of the di-hadron correlation shown by the data.
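For illustration, the subtracted correlation of Eq. (\[ccumulant\]) can be evaluated directly with this parameter set; the following sketch reproduces the qualitative in-plane/out-of-plane behaviour discussed above:

```python
import numpy as np

# Parameter set of Eq. (parameterset); multiplicities are fixed only up to an
# overall normalization.
Nt2, v2t, v3t = 0.45, 0.1, 0.1      # <N_t^2> and tube harmonics
varNb, v2b = 0.022, 0.25            # <N_b^2> - <N_b>^2 and background v_2

def cumulant_correlation(dphi, phi_s):
    """Subtracted correlation of Eq. (ccumulant); phi_s = 0 reproduces Eq. (eq9)
    (in-plane) and phi_s = pi/2 reproduces Eq. (eq10) (out-of-plane)."""
    bgd = varNb / (2 * np.pi) ** 2 * (1 + 2 * v2b * np.cos(2 * phi_s)) \
          * (1 + 2 * v2b * np.cos(2 * (dphi + phi_s)))
    tube = Nt2 / (2 * np.pi) ** 2 * (2 * v2t ** 2 * np.cos(2 * dphi)
                                     + 2 * v3t ** 2 * np.cos(3 * dphi))
    return bgd + tube

dphi = np.linspace(-np.pi / 2, 3 * np.pi / 2, 361)
in_plane = cumulant_correlation(dphi, 0.0)             # broad single away-side peak
out_of_plane = cumulant_correlation(dphi, np.pi / 2)   # away-side double peak
```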
Now we will show that very similar results are obtained if one evaluates the combinatorial mixed-event contribution using the ZYAM method. The spirit of the ZYAM method is to first estimate the form of the correlation due solely to the average background collective flow; the evaluated correlation is then rescaled by a factor $B$, which is determined by assuming zero signal at the minimum of the subtracted correlation. The di-hadron correlation for the background flow is given by $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>^{mixed(ZYAM)}
=B(\phi_s)\int\frac{d\phi}{2\pi}\delta(\phi-\phi_s)
\frac{dN_{bgd}}{d\phi}(\phi)\frac{dN_{bgd}}{d\phi}(\phi+\Delta\phi).
\label{mixedz1} \end{aligned}$$ In the STAR analyses, both $v_2$ and $v_4$ have been taken into account for the background. In our simplified model, however, the average background flow contains, besides the radial one, only the elliptic flow $v_2^b$. Therefore, a straightforward calculation gives $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}\right>^{mixed(ZYAM)}
=B(\phi_s)\frac{<N_b>^2}{(2\pi)^2}(1+2v_2^b\cos(2\phi_s))(1+2v_2^b\cos(2(\Delta\phi+\phi_s))).
\label{mixedz2} \end{aligned}$$ We remark that, since in the ZYAM method fluctuations are not explicitly considered, the multiplicity of the background distribution, as given by Eq.(\[eq2\]), is evidently the average multiplicity $<N_b>$. By combining this term with the proper correlation, given by Eq.(\[proper\]), the resultant correlation reads $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}(\phi_s)\right>^{(ZYAM)}
&=&\frac{<N_b^2>-B(\phi_s)<N_b>^2}{(2\pi)^2}(1+2v_2^b\cos(2\phi_s))(1+2v_2^b\cos(2(\Delta\phi+\phi_s))) \nonumber\\
&+&(\frac{N_t}{2\pi})^2\sum_{n=2,3}2(v_n^t)^2\cos(n\Delta\phi) \label{czyam} \end{aligned}$$ The consistency of this expression with what is used in the STAR analyses will be discussed below, in the next section. Note that the normalization factor $B(\phi_s)$ is a function of the trigger angle. It is fixed to give zero yield at the minimum of the subtracted correlation, namely $\left<{dN_{pair}}/{d\Delta\phi}(\phi_s)\right>^{(ZYAM)}=0$ at the minimum. Because the correlation in Eq.(\[czyam\]) is non-negative by construction, and the second term can be positive or negative, the coefficient of the first term $<N_b^2>-B(\phi_s)<N_b>^2$ must be positive. One sees clearly that the above expression is almost identical to the cumulant result, Eq.(\[ccumulant\]), and so leads to a similar trigger-angle dependence. This can also be seen from the plots of the correlations obtained by adopting the same parameters as in Eq.(\[parameterset\]) and additionally $$\begin{aligned}
<N_b^2>=100 , \end{aligned}$$ where the value of $<N_b^2>$ is chosen to be large compared to its fluctuation. The scale factor $B(\phi_s)$ is subsequently fixed by the minimum condition as $$\begin{aligned}
B(\phi_s=0) = 1.000053 \nonumber \\
B(\phi_s=\pi/2)= 1\end{aligned}$$ The in-plane correlation plot is shown in Fig.4, which is very close to the one in Fig.3. The one-tube contribution and out-of-plane correlation are not plotted, since they are exactly the same as those shown in Fig.3.
IV. Discussions and Conclusions
===============================
Here, we first show that the expression for the di-hadron correlation of the background flow in Eq.(\[mixedz2\]) is in agreement with that obtained in Ref.[@ph-corr-ev-1], which is employed in the STAR analyses [@star-plane1; @star-plane2]. The only difference is that we have taken into account only the second-order harmonic; the reason for not including any higher-order flow components in our calculation is simply that we wanted to show transparently the mechanism of the in-plane/out-of-plane effect with a model as simple as possible. In Ref.[@ph-corr-ev-1], it was shown that $$\frac{dN_{pair}}{d\Delta\phi}=B^{(R)}\left[1+2\sum_{n=1}^{\infty}v_n^{(a)}v_n^{(t,R)}\cos(n\Delta\phi)\right],\label{eq:bkgd}$$ where $v_n^{(a)}$ is the associated particle’s $n$-th harmonic, $v_n^{(t,R)}$ is the average $n$-th harmonic of the trigger particles, and $B^{(R)}$, the background normalization, denotes the integrated inclusive pair yield $$\begin{aligned}
B^{(R)}&=&B\left(1+\sum_{k=2,4,6,...}2v_k^{(t)}\cos(k\phi_s)\frac{\sin(kc)}{kc}\right), \nonumber \\
v_n^{(t,R)}&=&\frac{v_n^{(t)}+\delta_{n,{\rm even}}T_n+\sum_{k=2,4,6,...}\left(v_{k+n}^{\rm (t)}+v_{|k-n|}^{\rm (t)}\right)T_k}
{1+\sum_{k=2,4,6,...}2v_{k}^{(t)}T_k}\ , \label{eq:vR} \\
T_k&=&\cos(k\phi_s)\frac{\sin(kc)}{kc}\left< \cos(k\Delta\Psi)\right>\,, \nonumber \end{aligned}$$ where $2c$ is the angular width of the slice in which a trigger is located. In our approach, the size of the slice is taken to be infinitely small ($c\to 0$), and perfect event plane resolution ($\left< \cos(k\Delta\Psi)\right>=1$) is assumed. Taking, for instance, the in-plane correlation by substituting $\phi_s =0$ and considering terms up to the second order, one obtains $$\begin{aligned}
B^{(R)}&=&B\left(1+2v_2^{(t)}\right) \nonumber \\
v_2^{(t,R)}&=&1 \nonumber \end{aligned}$$ which is readily shown to be consistent with Eq.(\[mixedz2\]). In fact, this is intuitive to understand, since the one-particle distribution of the trigger in this case is a $\delta$ function peaked at $\phi_s=0$.
In our approach, the trigger-angle dependence of the di-hadron correlation is understood as due to the interplay between the elliptic flow caused by the initial almond deformation of the whole system and the flow produced by fluctuations. The contributions due to fluctuations are expressed in terms of a high-energy-density tube and the flow deflected by it. The generic correlation due to the tube is preserved even after the background subtraction [@sph-corr-4]; with this simple model we show explicitly that the result does not depend on the choice of either the cumulant or the ZYAM method. This is because the form of the combinatorial background is determined by the average flow harmonics, as shown in Eqs.(\[ccumulant\]) and (\[czyam\]). Though in this approach only the elliptic flow is considered for simplicity, it is straightforward to extend the present result to a more general case. Due to multiplicity fluctuations, or due to the procedure in ZYAM, the background flow may also contribute to the subtracted correlation. Since the background modulation is shifted, changing phase, when the trigger particle moves from in-plane to out-of-plane, as seen in Eqs.(\[mixedc\]) and (\[mixedz2\]), the sum of the contributions of the background and of the tube gives rise to the observed trigger-angle dependence.
In the one-tube model, a part of the flow is caused by the peripheral energetic tube. Since the tube deflects the global flow of the background, the event planes of such flow components are correlated to the location of the tube, as expressed in Eq.(\[eq3\]). In particular, the tube contribution also contains a second harmonic $v_2^t$. Though it is present in the proper two-particle correlation (Eq.(\[proper\])), it is not considered in Eq.(\[mixedz2\]) when evaluating the combinatorial background correlation. The reason is two-fold. Firstly, to calculate the average $v_2$ of a given event, one must use the multiplicity as a weight; in our model, the background multiplicity $N_b$ is assumed to be much bigger than that of the tube, $N_t$. (The parameter $<N_b^2>=100$ can be freely changed to a much bigger number.) Moreover, since $\phi_t$ varies from event to event, the contribution to $v_2$ from $v_2^t$ is positive at $\phi_t=0$ and negative at $\phi_t=\pi/2$. When averaging over different events, most contributions at different $\phi_t$ values cancel each other. As it happens, $v_2^t$ contributes to the subtracted correlation while it does not manifest itself in the average background flow. This is an important feature of the present approach.
It is interesting to note that the two particle correlation has also been studied using Fourier expansion in[@ph-corr-1; @ph-corr-ev-2] $$\begin{aligned}
\left<\frac{dN_{pair}}{d\Delta\phi}(\phi_s)\right>^{proper}
&=&\frac{N^2}{(2\pi)^2}(1+2V_{2\Delta}\cos(2\Delta\phi)+2V_{3\Delta}\cos(3\Delta\phi))+\cdots\end{aligned}$$ For comparison, we rewrite Eq.(\[proper\]) in terms of $V_{n\Delta}$ as follows $$\begin{aligned}
V_{2\Delta}&=&\frac{N_t^2}{\left<N_b^2\right>\left(1+ 2v_2^b\cos(2\phi_s)\right)}\left(v_2^t\right)^2+ \cos(2\phi_s)v_2^b \label{V2d} \\
V_{3\Delta}&=&\frac{N_t^2}{\left<N_b^2\right>\left(1+ 2v_2^b\cos(2\phi_s)\right)}\left(v_3^t\right)^2 \label{V3d}\, .\end{aligned}$$ One sees that the background elliptic flow $v_2^b$ dominates $V_{2\Delta}$ for both in-plane and out-of-plane directions, while $V_{3\Delta}$ is determined by the triangular flow $v_3^t$ produced by the tube. Due to the factor $\cos(2\phi_s)$, the second term of Eq.(\[V2d\]) changes sign when the trigger angle goes from $\phi_s=0$ to $\phi_s=\pi/2$. Dominated by this term, $V_{2\Delta}$ decreases with $\phi_s$, and it intersects $V_{2\Delta}=0$ at around $\phi_s=\pi/4$. Since the first term in Eq.(\[V2d\]) is positive definite, the integral of $V_{2\Delta}$ with respect to $\phi_s$ is positive. These features are in good agreement with the data analysis (see Fig.1 of ref.[@ph-corr-1]). On the other hand, the axis of triangularity is determined by the position of the tube $\Psi_3=\phi_t$. Since we have assumed uniform distribution $f(\phi_t)=1$ in our calculation, the event plane of triangularity is totally uncorrelated with the event plane $\Psi_2$, as generally understood [@hydro-v3-1; @glauber-en-3], and consequently, the contribution from triangular flow should not depend much on the event-plane angle. This is indeed shown in the above expression Eq(\[V3d\]). $V_{3\Delta}$ barely depends on $\phi_s$, if anything, it slightly increases with increasing $\phi_s$. This characteristic is also found in the data[@ph-corr-1].
In conclusion, the NeXSPheRIO code gives the correct qualitative behavior of the in-plane/out-of-plane effect. Physically, we understand that this effect appears because, besides the contribution coming from the peripheral tube, an additional contribution from the background flow has to be considered. The latter is back-to-back (peaks at $\Delta\phi= 0 , \pi$) in the case of in-plane triggers ($\phi_s=0$) and rotated by $\pi/2$ (peaks at $\Delta\phi=-\pi/2 , \pi/2$) in the case of out-of-plane triggers ($\phi_s = \pi/2$). A simplified analytical model is proposed and shown to successfully reproduce the observed features.
V. Acknowledgments
==================
We thank Fuqiang Wang, Paul Sorensen, Lanny Ray, Roy Lacey, Jiangyong Jia and Wei Li for valuable discussions. We acknowledge funding from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
![ The subtracted di-hadron correlations as a function of $\Delta\phi$ for different $\phi_s=\phi_{trig}-\phi_{EP}$ and $p_T^{assoc}$ with $3 < p_T^{trig} < 4$ GeV in 20 - 60% Au+Au collisions. The $\phi_s$ range increases from 0-15$^{\circ}$ (left column) to 75-90$^{\circ}$ (right column); the $p_T^{assoc}$ range increases from 0.15-0.5 GeV (top row) to 1.5-2 GeV (bottom row). NeXSPheRIO results (solid curves) are compared with the STAR data (filled circles) [@star-plane1]. The histograms indicate the systematic uncertainties from the flow subtraction. []{data-label="fig1"}](fig1.eps){width="16.cm"}
![ The subtracted di-hadron correlations as a function of $\Delta\phi$ for different $\phi_s=\phi_{trig}-\phi_{EP}$ and $p_T^{assoc}$ with $3 < p_T^{trig} < 4$ GeV and $|\Delta\eta| > 0.7$ in 20 - 60% Au+Au collisions. The $\phi_s$ range increases from 0-15$^{\circ}$ (left column) to 75-90$^{\circ}$ (right column); the $p_T^{assoc}$ range increases from 0.15-0.5 GeV (top row) to 2-3 GeV (bottom row). NeXSPheRIO results (solid curves) are compared with the STAR data (filled circles) [@star-plane2]. The grey histograms indicate the systematic uncertainties from the flow subtraction. []{data-label="fig2"}](fig2.eps){width="16.cm"}
![Plots of di-hadron correlations calculated by the cumulant method. From left to right: (i) the peripheral-tube contribution; (ii) the contribution from the background (dashed line) and the resultant correlation (solid line) for in-plane triggers, as given by Eq.(\[eq9\]); and (iii) the corresponding ones for out-of-plane triggers, Eq.(\[eq10\]).[]{data-label="fig3"}](fig3tube.eps){width="220pt"} ![](fig3inplane.eps){width="220pt"} ![](fig3outofplane.eps){width="220pt"}
![Plot of in-plane di-hadron correlation by using ZYAM as given by Eq.(\[czyam\]). The one-tube contribution and out-of-plane correlation are exactly the same as the ones shown in Fig.3.[]{data-label="fig4"}](fig4.eps){width="220pt"}
[^1]: A preliminary report of this discussion has been presented by Y.H. at the ISMD2011 meeting. [@sph-corr-ev-2]
|
---
abstract: 'We developed the theory of finite volume form factors in the presence of integrable defects. These finite volume form factors are expressed in terms of the infinite volume form factors and the finite volume density of states and incorporate all polynomial corrections in the inverse of the volume. We tested our results, in the defect Lee-Yang model, against numerical data obtained by the truncated conformal space approach (TCSA), which we improved by renormalization group methods adapted to the defect case. To perform these checks we determined the infinite volume defect form factors in the Lee-Yang model exactly, including their vacuum expectation values. We used these data to calculate the two point functions, which we compared, at short distance, to defect CFT. We also derived explicit expressions for the exact finite volume one point functions, which we checked numerically. In all of these comparisons excellent agreement was found.'
author:
- 'Z. Bajnok$^{1}$, F. Buccheri$^{2}$, L. Hollo$^{1}$, J. Konczer$^{1}$ and G. Takacs$^{2,3}$'
title: Finite volume form factors in the presence of integrable defects
---
$^{1}$MTA Lendület Holographic QFT Group, Wigner Research Centre for Physics\
H-1525 Budapest 114, P.O.B. 49, Hungary\
\
$^{2}$MTA-BME “Momentum” Statistical Field Theory Research Group\
1111 Budapest, Budafoki út 8, Hungary\
\
$^{3}$Department of Theoretical Physics,\
Budapest University of Technology and Economics\
1111 Budapest, Budafoki út 8, Hungary\
Introduction
============
The complete solution of a quantum field theory means the determination of its spectrum and correlation functions. This ultimate goal is almost impossible to achieve in general dimensions and for generic interacting theories. However, in two dimensional integrable models even such an ambitious plan can be fulfilled.
For many quantities, such as operator matrix elements, a numerical determination is only possible in finite volumes [@Maiani:1990ca]. Such a determination can be performed using the tools of lattice field theory or, in the case of two-dimensional quantum field theories, a more specialized method such as the truncated conformal space approach (TCSA) [@Yurov:1989yu]. Finite volume quantities are also interesting in their own right in statistical field theory, as well as in particle physics. The general strategy to compute them analytically is to solve the theory in infinite volume first and then to take into account the finite size corrections systematically. The infinite volume solution is carried out in the bootstrap framework, and it consists of the scattering matrix bootstrap and the form factor bootstrap parts. All finite size corrections can be expressed purely in terms of these infinite volume characteristics of the theory in a framework that was pioneered by Lüscher [@Luscher:1985dn; @Luscher:1986pf; @Luscher:1990ux].
The infinite volume solution of an integrable QFT starts with the S-matrix bootstrap. The scattering matrix satisfies unitarity and crossing symmetry, and all of its poles are located on the imaginary rapidity axis and correspond to bound states or some Coleman-Thun type diagrams. Assuming a single particle in the spectrum with a self-fusing pole, the bootstrap leads to the S-matrix of the scaling Lee-Yang model. Introducing integrable boundaries or integrable defects requires, in addition, performing this bootstrap program for the reflection and transmission matrices. The structure of the scattering, reflection and transmission matrices contains all information about the infinite volume spectrum of bulk, boundary or defect excitations. Once this first bootstrap step is completed, the resulting scattering, reflection and transmission matrices can be used to formulate consistency requirements for the matrix elements of local operators between asymptotic states (form factors). Solutions to these requirements compatible with the analytical structure demanded by physics lead to the determination of the form factors of all local bulk, boundary and defect operators. These form factors can then be used to build up all correlation functions in infinite volume.
Once all infinite volume characteristics are determined, they can be used to decrease the volume gradually and continue the quantities to finite volumes. The leading volume dependence is polynomial in the inverse of the volume, while the sub-leading ones are exponentially small. The polynomial finite size corrections for the spectrum can be formulated in terms of the scattering, reflection or transmission matrices. At this order the dispersion relation is not changed, but the energy levels are quantized. Momentum is quantized in a finite volume because, if we move a particle around the 1D 'world', we collect not only the translational phase, but also the phases of all scatterings and/or reflections and transmissions: therefore, imposing periodic boundary conditions restricts the allowed particle momenta. The resulting equations are called Bethe-Yang equations. Exponentially small corrections are due to vacuum polarization effects and can be taken into account by the Thermodynamic Bethe Ansatz method.
Finite volume form factors are the matrix elements of local operators between the finite volume eigenstates of the Hamiltonian. Just as for the spectrum, the leading finite size corrections are polynomial in the inverse of the volume, while the sub-leading corrections are exponentially small. The polynomial corrections are related to the normalization of the Bethe-Yang eigenvectors and were systematically analyzed in [@Pozsgay:2007kn; @Pozsgay:2007gx] for periodic boundary conditions and in [@Kormos:2007qx] for general integrable boundary conditions. The aim of the present paper is to generalize this analysis to the defect case. We mention for completeness that the exponential finite size corrections have not been described yet, except for some recent results accounting for the composite structure of bound states [@Pozsgay:2008bf; @Takacs:2011nb], and for diagonal form factors in the case of periodic boundary conditions [@Pozsgay:2013jua].
Integrable interacting defects are either purely reflective (i.e. boundaries) or purely transmissive [@Delfino:1994nr]. Recently there has been considerable progress in identifying and solving purely transmitting defect theories [@Bowcock:2004my; @Bowcock:2005vs; @Corrigan:2010ph; @Avan:2012rb; @Doikou:2012fc]. In this paper we focus on the simplest integrable defect theory, namely the defect Lee-Yang model. The T-matrix bootstrap of this model was performed in [@Bajnok:2007jg], while the form factor bootstrap program was initiated in [@Bajnok:2009hp].
Our paper is organized as follows: In section 2, we recall the infinite volume form factor solution of the defect Lee-Yang model. We start by listing the defect form factor axioms. We then introduce the defect Lee-Yang model both from the scattering (IR) and from the perturbed CFT (UV) side. In presenting the defect form factors we go beyond the results available in the literature, as we determine all form factor solutions of primary fields. Technical details are relegated to Appendix A. In order to have properly normalized form factors, we calculate the vacuum expectation values of all primary fields. The form factors and normalizations are checked against the defect CFT two point functions. In section 3 we develop the leading finite size corrections of defect form factors. We analyze the non-diagonal and diagonal cases separately, as in the latter disconnected terms appear which have to be determined carefully. Section 4 contains the checks of our results against the numerically “measured” finite volume matrix elements calculated by the defect TCSA (DTCSA) method. In section 5 we derive the exact finite volume vacuum expectation values of local fields, which we also check numerically. Finally we draw our conclusions in section 6.
Form factors in infinite volume
===============================
In this section we recall the theory of form factors in the presence of integrable defects following [@Bajnok:2009hp]. We focus on a relativistic integrable theory which contains one particle type only. Energy and momentum of the particles are parametrized by the rapidity variable $\theta$: $$E(\theta)=m\cosh\theta\quad,\quad p(\theta)=m\sinh\theta$$ and Lorentz transformation simply shifts the rapidity: $\theta\to\theta+\Lambda$. Integrability forces the multi-particle scattering matrix to factorize into pairwise two particle scatterings $S(\theta_{1}-\theta_{2})$, which depend on the rapidity differences and satisfy unitarity and crossing symmetry $$S(\theta)=S(-\theta)^{-1}\qquad;\qquad S(i\pi-\theta)=S(\theta)\,.$$ If the $S$-matrix has a pole at $\theta=i\frac{2\pi}{3}$ (like in the scaling Lee-Yang model) then it has to satisfy the fusion equation $$S(\theta)\vert_{\theta\approx i\frac{2\pi}{3}}=-i\frac{\Gamma^{2}}{\theta-i\frac{2\pi}{3}}+\mathrm{reg.\ terms}\quad\longrightarrow\quad S(\theta)=S\left(\theta-i\frac{\pi}{3}\right)S\left(\theta+i\frac{\pi}{3}\right)\,,\label{eq:fusionpole}$$ which shows that the particle is a bound-state of itself.
Introducing an integrable defect means that we cut the space-time into two halves and associate an amplitude of crossing through the defect. Particles coming from the left cross with $T_{-}(\theta)$, while those coming from the right cross with $T_{+}(-\theta)$. The $T_{+}$ transmission factor is parametrized such that for its physical domain of rapidities $(\theta<0)$ its argument is always positive. Unitarity and crossing symmetry relates these two amplitudes as [@Bajnok:2004jd]: $$T_{-}(\theta)=T_{+}(-\theta)^{-1}\qquad;\qquad T_{-}(\theta)=T_{+}(i\pi-\theta)$$ A prototype of a parity symmetric defect is a standing particle with $T_{-}(\theta)=S(\theta)=T_{+}(\theta).$ A non parity symmetric defect can be realized as an imaginary rapidity “bound” particle: $T_{\pm}(\theta)=S(\theta\pm iu).$ In the case of a fusion pole, (\[eq:fusionpole\]), the defect satisfies the fusion equation as well: $$T_{-}(\theta)=T_{-}\left(\theta-i\frac{\pi}{3}\right)T_{-}\left(\theta+i\frac{\pi}{3}\right)$$ and likewise for $T_{+}$. It might happen that the transmission factor exhibits a pole $$T_{-}(\theta)\vert_{\theta\approx i\nu}=-i\frac{g^{2}}{\theta-i\nu}+\mathrm{reg.\ terms}$$ signaling a defect bound-state.
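To make these relations concrete, the following minimal Python sketch (ours, not part of the original analysis) numerically checks bulk unitarity, crossing and the fusion equation for the scaling Lee-Yang S-matrix used later in this section, together with defect unitarity and crossing for a transmission factor realized as $T_{\pm}(\theta)=S(\theta\pm iu)$; the value of $u$ and the test rapidity are arbitrary illustrative choices.

```python
# Minimal numerical sanity check (not from the paper): bulk and defect
# bootstrap relations for the scaling Lee-Yang amplitudes, with the defect
# realized as an imaginary-rapidity "bound" particle, T_pm(theta) = S(theta +/- i*u).
import numpy as np

a = 1j * np.sin(np.pi / 3)

def S(t):                        # scaling Lee-Yang two-particle S-matrix
    return (np.sinh(t) + a) / (np.sinh(t) - a)

u = 0.37                         # illustrative defect parameter
T_plus  = lambda t: S(t + 1j * u)
T_minus = lambda t: S(t - 1j * u)

th = 0.83 + 0.21j                # generic complex test point
checks = {
    "unitarity   S(t) S(-t) = 1":                 S(th) * S(-th) - 1,
    "crossing    S(i*pi - t) = S(t)":             S(1j * np.pi - th) - S(th),
    "fusion      S(t-i*pi/3) S(t+i*pi/3) = S(t)": S(th - 1j * np.pi / 3) * S(th + 1j * np.pi / 3) - S(th),
    "defect unitarity  T_-(t) = T_+(-t)^-1":      T_minus(th) - 1 / T_plus(-th),
    "defect crossing   T_-(t) = T_+(i*pi - t)":   T_minus(th) - T_plus(1j * np.pi - th),
}
for name, diff in checks.items():
    print(f"{name}: |deviation| = {abs(diff):.2e}")   # all ~ 1e-15
```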
Integrable defects are topological in the sense that the location of the defect can be changed without altering the amplitude of any multi-particle transmission process, unless it crosses the insertion point of a field. As a consequence, multi-particle transmissions factorize into the product of pairwise scatterings and individual transmissions.
In the following we recall the form factor axioms in the presence of these integrable defects [@Bajnok:2009hp].
Summary of the defect form factor bootstrap \[sub:Summary-of-the\]
------------------------------------------------------------------
Form factors are the matrix elements of local operators between asymptotic states: $$\langle\theta_{m'+n'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'}\vert\mathcal{O}(x,t)\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m}\rangle\label{eq:genff}$$ where in the presence of defects we have to distinguish if particles arrive from the left or from the right. In an initial state the rapidities are ordered as $$\theta_{1}>\dots>\theta_{n}>0>\theta_{n+1}>\dots>\theta_{n+m}$$ while in the final state oppositely $\theta_{m'+n'}^{'}>\dots>\theta_{n'+1}^{'}>0>\theta_{n'}^{'}>\dots>\theta_{1}^{'}$. The form factor is originally defined for initial and final states and then analytically continued for any orderings of its arguments. Interestingly, the presence of an integrable defect breaks the translation invariance by having non-zero momentum (like a bound particle) and not by destroying the existence of the momentum itself. As a consequence the space-time dependence of the form factor is $$\begin{aligned}
\langle\theta_{m'+n'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'}\vert\mathcal{O}(x,t)\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m}\rangle=\qquad\qquad\qquad\qquad\\
\qquad e^{it\Delta E-ix\Delta P}F_{(n',m')(n,m)}^{\mathcal{O}_{\pm}}(\theta_{n'+m'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'}\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m})\nonumber \end{aligned}$$ with $\Delta E=m(\sum_{j}\cosh\theta_{j}-\sum_{j'}\cosh\theta_{j'}^{'})$ and $\Delta P=m(\sum_{j}\sinh\theta_{j}-\sum_{j'}\sinh\theta_{j'}^{'})$, and we distinguished whether the operator is localized on the left, $\mathcal{O}_{-}$, or on the right, $\mathcal{O}_{+}$, of the defect, as they might not be continuous there. The same applies to operators localized at the defect ($x=0$). A crossing transformation of any of the form factors $$\begin{aligned}
F_{(n',m')(n,m)}^{\mathcal{O}}(\theta_{n'+m'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'}\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m})=\qquad\qquad\qquad\qquad\\
\qquad F_{(n',m'+1)(n,m-1)}^{\mathcal{O}}(\theta_{n+m}+i\pi,\theta_{n'+m'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'}\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m-1})\nonumber \end{aligned}$$ $$\begin{aligned}
F_{(n',m')(n,m)}^{\mathcal{O}}(\theta_{n'+m'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'}\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m})=\qquad\qquad\qquad\qquad\\
\qquad F_{(n'+1,m')(n-1,m)}^{\mathcal{O}}(\theta_{n'+m'}^{'},\dots,\theta_{n'+1}^{'};\theta_{n'}^{'},\dots,\theta_{1}^{'},\theta_{1}-i\pi\vert\theta_{2},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m})\nonumber \end{aligned}$$ can be used to bring all particles to one side: $$F_{(n,m)}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m}):=F_{(0,0)(n,m)}^{\mathcal{O}}(;\vert\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m})$$ We can also use the transmission relation $$F_{(n,m)}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n};\theta_{n+1},\dots,\theta_{n+m})=T_{-}(\theta_{n})F_{(n-1,m+1)}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n-1};\theta_{n},\theta_{n+1},\dots,\theta_{n+m})\label{eq:axtrans}$$ to define the *elementary* form factors $$F_{n}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n})=F_{(n,0)}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n};)\label{eq:ffdef}$$ which satisfy the axioms $$F_{n}^{\mathcal{O}}(\theta_{1},\dots,\theta_{i},\theta_{i+1},\dots,\theta_{n})=S(\theta_{i}-\theta_{i+1})F_{n}^{\mathcal{O}}(\theta_{1},\dots,\theta_{i+1},\theta_{i},\dots,\theta_{n})\label{eq:axperm}$$ $$F_{n}^{\mathcal{O}}(\theta_{1},\theta_{2},\dots,\theta_{n})=F_{n}^{\mathcal{O}}(\theta_{2},\dots,\theta_{n},\theta_{1}-2i\pi)\label{eq:axper}$$ $$-i\mbox{Res}_{\theta=\theta^{\prime}}F_{n+2}^{\mathcal{O}}(\theta+i\pi,\theta^{\prime},\theta_{1},...,\theta_{n})=\bigl(1-\prod_{j=1}^{n}S(\theta-\theta_{j})\bigr)F_{n}^{\mathcal{O}}(\theta_{1},...,\theta_{n})\label{eq:axkin}$$
$$-i\mbox{Res}_{\theta=\theta^{\prime}}F_{n+2}^{\mathcal{O}}(\theta+\frac{i\pi}{3},\theta^{\prime}-\frac{i\pi}{3},\theta_{1},\ldots,\theta_{n})=\Gamma F_{n+1}^{\mathcal{O}}(\theta,\theta_{1},\ldots,\theta_{n})\label{eq:axdyn}$$
$$-i\mbox{Res}_{\theta=iu}F_{n+1}^{\mathcal{O}}(\theta_{1},\ldots,\theta_{n},\theta)=ig\tilde{F}_{n}^{\mathcal{O}}(\theta_{1},\ldots,\theta_{n})\label{eq:axbnd}$$
where $\tilde{F}$ is the form factor on the excited defect state. Although the axioms (\[eq:axperm\]-\[eq:axdyn\]) look the same as the form factor axioms without the defect, they are valid only for particles coming from the left (see eq. (\[eq:ffdef\])). For any particle coming from the right one has to include a transmission factor, see eq. (\[eq:axtrans\]).
The form factor of a bulk operator localized on the left of the defect, $\mathcal{O}_{-}$, is simply its bulk form factor $$F_{n}^{\mathcal{O}_{-}}(\theta_{1},\dots,\theta_{n})=B_{n}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n})$$ but when the same operator is localized on the right of the defect, $\mathcal{O}_{+},$ its form factor is $$F_{n}^{\mathcal{O}_{+}}(\theta_{1},\dots,\theta_{n})=\prod_{i}T_{-}(\theta_{i})B_{n}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n})$$ These apply for the left/right limits of the bulk fields at the defect as well.
As we assume that the bulk form factors are already determined in [@Zamolodchikov:1990bk] we focus on form factors of defect operators. In general, the solution compatible with the form factor axioms takes the form $$F_{n}^{\mathcal{O}}(\theta_{1},\dots,\theta_{n})=\langle\mathcal{O}\rangle H_{n}\prod_{i}d(\theta_{i})\prod_{i<j}\frac{f(\theta_{i}-\theta_{j})}{x_{i}+x_{j}}Q_{n}(x_{1},\dots,x_{n})$$ where $f(\theta)$ is the minimal bulk two particle form factor, which satisfies: $$f(\theta)=S(\theta)f(-\theta)\quad;\qquad f(i\pi-\theta)=f(i\pi+\theta)$$ The one particle minimal defect form factor $d(\theta)$ is responsible for defect bound-states, $H_{n}$ is some normalization constant and $Q_{n}$ is a symmetric polynomial in its arguments $x_{i}=e^{\theta_{i}}.$
Defect form factors in the scaling Lee-Yang model
--------------------------------------------------
The Lee-Yang model is the simplest conformal field theory, the $\mathcal{M}_{2,5}$ minimal model, whose central charge is $c=-\frac{22}{5}$ and which has only two irreducible Virasoro representations $V_{0}$ and $V_{h}$ with highest weights $0$ and $h=-\frac{1}{5}$, respectively. The periodic Lee-Yang model carries a representation of $Vir\otimes\overline{Vir}$ and its Hilbert space is decomposed as $$\mathcal{H}=V_{0}\otimes\bar{V}_{0}+V_{h}\otimes\bar{V}_{h}$$ We can associate a local field to every vector of the Hilbert space; for the highest weight states these fields are the identity $\mathbb{I}$ and $\Phi$, respectively. These local fields form an operator algebra with the operator product expansions $$\Phi\left(z,\bar{z}\right)\Phi\left(0,0\right)=C_{\Phi\Phi}^{\mathbb{I}}\vert z\vert^{-4h}\mathbb{I}+C_{\Phi\Phi}^{\Phi}\vert z\vert^{-2h}\Phi\left(0,0\right)+\dots$$ In order to have a real field $\Phi^{\dagger}=\Phi$, we normalized the field as $C_{\Phi\Phi}^{\mathbb{I}}=-1$. A consistent choice of the other structure constant is $C_{\Phi\Phi}^{\Phi}=\sqrt{\frac{2}{1+\sqrt{5}}}\frac{\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{6}{5}\right)}{\Gamma\left(\frac{3}{5}\right)\Gamma\left(\frac{4}{5}\right)}\approx1.91131\dots$
In this subsection we specify the previous defect form factor considerations for the scaling Lee-Yang model [@Bajnok:2007jg]. The scaling Lee-Yang model is the single relevant perturbation of the Lee-Yang model $$\mathcal{A}=\mathcal{A}_{LY}-\lambda\int d^{2}z\,\Phi(z,\bar{z})\label{eq:SLY}$$ and has a single particle in the spectrum with the two-particle scattering matrix: $$S(\theta)=\frac{\sinh\theta+i\sin\frac{\pi}{3}}{\sinh\theta-i\sin\frac{\pi}{3}}$$ There is a one parameter family of integrable defect perturbation of the defect Lee-Yang model [@Bajnok:2013] which has the transmission factor $$T_{\pm}(\theta)=S\left(\theta\pm i\frac{\pi}{6}(3-b)\right)$$ Formally it is like a particle with rapidity $\theta=i\frac{\pi}{6}(3-b)$ and the defect energy and momentum are indeed $$e_{d}=m\sin\frac{\pi b}{6}\qquad;\qquad p_{d}=im\cos\frac{\pi b}{6}\label{eq:defectenergyandmomentum}$$ This theory can be realized as a unique one parameter family of integrable perturbation of the defect Lee-Yang model: $$S^{d}=S_{LY}^{d}-\lambda\int d^{2}z\,\Phi(z,\bar{z})-\mu\int dy\,\varphi(y)-\bar{\mu}\int dy\,\bar{\varphi}(y)$$ where integrability forces the constraint: $$\lambda=\mu\bar{\mu}\xi^{-2}\quad;\qquad\xi^{-2}=2i\sqrt{\frac{1+\sqrt{5}}{2}}\frac{1-e^{i\pi/5}}{1+e^{i\pi/5}}=0.826608$$ The relation between the Lagrangian and scattering parameters is $$\lambda=\left(\frac{m}{\kappa}\right)^{\frac{12}{5}}\quad;\qquad\kappa=2^{\frac{19}{12}}\sqrt{\pi}\frac{\left(\Gamma(\frac{3}{5})\Gamma(\frac{4}{5})\right)^{\frac{5}{12}}}{5^{\frac{5}{16}}\Gamma(\frac{2}{3})\Gamma(\frac{5}{6})}=2.642944\label{eq:lambda}$$ $$\mu=\left(\frac{m}{\kappa}\right)^{\frac{6}{5}}\xi e^{-\frac{i\pi}{5}(3+b)}\qquad;\qquad\bar{\mu}=\left(\frac{m}{\kappa}\right)^{\frac{6}{5}}\xi e^{\frac{i\pi}{5}(3+b)}\label{eq:mu}$$ Due to the nontrivial defect, the Hilbert space of the defect Lee-Yang model $$\mathcal{H}=V_{0}\otimes\bar{V}_{h}+V_{h}\otimes\bar{V}_{0}+V_{h}\otimes\bar{V}_{h}=:[\bar{d}]+[d]+[D]$$ does not coincide with the operator space localized on the defect which is $$V_{0}\otimes\bar{V}_{0}+V_{h}\otimes\bar{V}_{0}+V_{0}\otimes\bar{V}_{h}+2V_{h}\otimes\bar{V}_{h}=:[\mathbb{I}]+[\varphi]+[\bar{\varphi}]+[\Phi_{\pm}]$$ We are going to compute the form factors of these operators.
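As a quick numerical cross-check (ours, not taken from the paper's own computations), the constants $\kappa$ and $\xi^{-2}$ quoted above can be evaluated directly:

```python
# Numerical evaluation of the constants in (eq:lambda) and in the
# integrability constraint; the printed values should reproduce the
# numbers quoted in the text (2.642944 and 0.826608).
import numpy as np
from scipy.special import gamma

kappa = 2**(19/12) * np.sqrt(np.pi) * (gamma(3/5) * gamma(4/5))**(5/12) \
        / (5**(5/16) * gamma(2/3) * gamma(5/6))
xi_minus2 = 2j * np.sqrt((1 + np.sqrt(5)) / 2) \
            * (1 - np.exp(1j * np.pi / 5)) / (1 + np.exp(1j * np.pi / 5))

print(kappa)        # ~ 2.642944
print(xi_minus2)    # ~ 0.826608 (the imaginary part vanishes up to rounding)
```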
### Form factor solutions
The ingredients of the form factor solutions are as follows: the minimal solution of the bulk two particle form factor is $$f(\theta)=\frac{x+x^{-1}-2}{x+x^{-1}+1}\, v(i\pi-\theta)\, v(-i\pi+\theta)\quad,\qquad x=e^{\theta}$$ where $$\log v(\theta)=2\int_{0}^{\infty}\frac{dt}{t}e^{\frac{i\theta t}{\pi}}\frac{\sinh\frac{t}{2}\sinh\frac{t}{3}\sinh\frac{t}{6}}{\sinh^{2}t}\quad,$$ which automatically includes the pole of the dynamical singularity. This minimal solution satisfies the identities $$\begin{aligned}
f\left(\theta\right)f\left(\theta+i\pi\right) & = & \frac{\sinh\left(\theta\right)}{\sinh\left(\theta\right)-i\sin\left(\frac{\pi}{3}\right)}\nonumber \\
f\left(\theta+\frac{i\pi}{3}\right)f\left(\theta-\frac{i\pi}{3}\right) & = & \frac{\cosh\left(\theta\right)+\frac{1}{2}}{\cosh\left(\theta\right)+1}f\left(\theta\right).\label{eq:fidentity}\end{aligned}$$ The normalization factor is the same as in the bulk $$H_{n}=\left(\frac{i3^{\frac{1}{4}}}{2^{\frac{1}{2}}v(0)}\right)^{n}\quad.$$ The one particle defect form factor, which accommodates the possible defect bound-state pole is $$d(\theta)=\frac{1}{\sqrt{3}+x\nu+x^{-1}\bar{\nu}}\label{eq:d}$$ where we introduced $\nu=e^{i\frac{\pi b}{6}}$ and $\bar{\nu}=\nu^{-1}$, satisfying $$\begin{aligned}
d\left(\theta+i\pi\right)d\left(\theta\right) & = & \frac{1}{1-2\cos\left(\frac{b\pi}{3}-2i\theta\right)}\nonumber \\
d\left(\theta+\frac{i\pi}{3}\right)d\left(\theta-\frac{i\pi}{3}\right) & = & \frac{1}{2\cos\left(\frac{b\pi}{6}-i\theta\right)}d\left(\theta\right).\label{eq:didentity}\end{aligned}$$ The singularity axioms provide recursion relations between the polynomials $Q_{n}$ as $$\begin{aligned}
Q_{n+2}(-x,x,x_{1},...,x_{n}) & = & K_{n}(x,x_{1},...,x_{n})Q_{n}(x_{1},...,x_{n})\\
Q_{n+1}(x\omega,x\bar{\omega},x_{1},...,x_{n-1}) & = & D_{n}(x,x_{1},...,x_{n-1})Q_{n}(x,x_{1},...,x_{n-1})\nonumber \end{aligned}$$ where $\omega=e^{\frac{i\pi}{3}}$, $\bar{\omega}=\omega^{-1}$ and we explicitly have $$\begin{aligned}
K_{n}(x,x_{1},...,x_{n}) & = & (-1)^{n}(x^{2}\nu^{2}-1+x^{-2}\nu^{-2})\\
& & \frac{x}{2(\omega-\bar{\omega})}\left(\prod_{i=1}^{n}(x\omega+x_{i}\bar{\omega})(x\bar{\omega}-x_{i}\omega)-\prod_{i=1}^{n}(x\omega-x_{i}\bar{\omega})(x\bar{\omega}+x_{i}\omega)\right)\nonumber \end{aligned}$$ for the kinematical recursion and $$\begin{aligned}
D_{n}(x,x_{1},...,x_{n-1}) & = & (\nu x+\nu^{-1}x^{-1})x\prod_{i=1}^{n-1}(x+x_{i})\end{aligned}$$ for the dynamical one. Since $Q_{n}(x_{1},...,x_{n})$ is a symmetric polynomial we use the elementary symmetric polynomials $\sigma_{k}^{(n)},\bar{\sigma}_{k}^{(n)}$, defined by the generating function: $$\prod_{i=1}^{n}(x+x_{i})=\sum_{k\in\mathbb{Z}}x^{n-k}\sigma_{k}^{(n)}(x_{1},...,x_{n})\quad;\qquad\bar{\sigma}_{k}^{(n)}\left(x_{1},\dots,x_{n}\right)=\sigma_{k}^{(n)}(x_{1}^{-1},\dots,x_{n}^{-1})$$ to formulate the results. Note that $\sigma_{k}^{(n)}=0$, if $k>n$ or if $k<0$. We sometimes abbreviate $\sigma_{k}^{(n)}(x_{1},...,x_{n})$ to $\sigma_{k}$ if it does not lead to any confusion.
The form factors of the left/right limits of the bulk operator, $\Phi_{\mp}$, follow from our previous considerations and are trivially related to the bulk form factors. The low lying form factors of the two chiral fields living only at the defect have already been calculated[^1] [@Bajnok:2009hp]. The results are summarized in Table \[tab:Q\].
Operator $Q_{1}$ $Q_{2}$
----------------- ------------------------------------------------------ ----------------------------------------------------------------------------------------------------------------------------------------------------
$\Phi_{-}$ *$\nu\sigma_{1}+\bar{\nu}\bar{\sigma}_{1}+\sqrt{3}$* $\sigma_{1}(\nu^{2}\sigma_{2}+\sqrt{3}\nu\sigma_{1}+\sigma_{1}\bar{\sigma}_{1}+1+\sqrt{3}\bar{\nu}\bar{\sigma}_{1}+\bar{\nu}^{2}\bar{\sigma}_{2})$
$\Phi_{+}$ *$\nu\sigma_{1}+\bar{\nu}\bar{\sigma}_{1}-\sqrt{3}$* $\sigma_{1}(\nu^{2}\sigma_{2}-\sqrt{3}\nu\sigma_{1}+\sigma_{1}\bar{\sigma}_{1}+1-\sqrt{3}\bar{\nu}\bar{\sigma}_{1}+\bar{\nu}^{2}\bar{\sigma}_{2})$
$\bar{\varphi}$ $\bar{\nu}\bar{\sigma}_{1}$ $\bar{\nu}\sigma_{1}(\bar{\nu}\bar{\sigma}_{2}+\nu)$
$\varphi$ $\nu\sigma_{1}$ $\nu\sigma_{1}(\nu\sigma_{2}+\bar{\nu})$
: The form factor solutions of the primary fields up to level 2[]{data-label="tab:Q"}
In Appendix A we explicitly derive all possible solutions of the form factor equations and extend these results to any order. Here we list only the outcome.
The form factors of the two limits of the bulk fields, $\Phi_{\pm},$ are described in terms of the bulk form factor solutions as $$Q_{n}^{\Phi_{\pm}}(x_{1},\dots,x_{n})=\prod_{i=1}^{n}(\nu x_{i}+\bar{\nu}x_{i}^{-1}\mp\sqrt{3})\sigma_{1}\sigma_{n-1}P_{n}\label{eq:QPhipm}$$ where $$P_{n}=\det\Sigma^{(n)}\quad,\qquad\Sigma_{ij}^{(n)}=\sigma_{3i-2j+1}^{(n)}(x_{1},\dots,x_{n})$$ The defect form factors have the generic structure: $$Q_{n}=R(\sigma_{1},\bar{\sigma}_{1})\sigma_{n}P_{n}S_{n}$$ where $S_{n}$ does not depend on the operator: $$S_{n}=\tau_{n-1}+\sum_{m\geq1}\left(-1\right)^{m}\left(\tau_{n+1-6m}+\tau_{n-1-6m}\right)\qquad;\qquad\tau_{k}=\sum_{l\in\mathbb{Z}}\nu^{2l-k}\bar{\sigma}_{k-l}\sigma_{l}\,,$$ while the operator dependent parts are $$R^{\bar{\varphi}}(\sigma_{1},\bar{\sigma}_{1})=\bar{\nu}\bar{\sigma}_{1}\qquad;\qquad R^{\varphi}(\sigma_{1},\bar{\sigma}_{1})=\nu\sigma_{1}\:.$$
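As an illustration of how the recursion relations tie the table entries together, the following small sympy sketch (ours, not part of the paper) verifies the dynamical recursion $Q_{2}(x\omega,x\bar{\omega})=D_{1}(x)Q_{1}(x)$ for the defect field $\varphi$, using the explicit polynomials listed in the table above:

```python
# Check of the dynamical recursion at n = 1 for the defect field varphi,
# using Q_1 = nu*sigma_1 and Q_2 = nu*sigma_1*(nu*sigma_2 + nubar) from the
# table, with sigma_k the elementary symmetric polynomials of x_i = e^{theta_i}.
import sympy as sp

x, nu = sp.symbols('x nu')
w = sp.Rational(1, 2) + sp.sqrt(3) / 2 * sp.I     # omega = e^{i pi/3}

x1, x2 = x * w, x / w                             # the fused pair (x omega, x omega^{-1})
s1, s2 = x1 + x2, x1 * x2                         # sigma_1 and sigma_2 of the pair

Q1 = nu * x                                       # Q_1^varphi = nu * sigma_1
Q2 = nu * s1 * (nu * s2 + 1 / nu)                 # Q_2^varphi from the table
D1 = (nu * x + 1 / (nu * x)) * x                  # D_1(x), the empty product being 1

print(sp.simplify(sp.expand(Q2 - D1 * Q1)))       # prints 0, confirming the recursion
```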
### Exact vacuum expectation values \[sub:Exact-vacuum-expectation\]
The form factors are normalized with their vacuum expectation values (VEVs), so here we determine them. The exact VEV of the bulk field of the scaling Lee-Yang theory is known from conformal perturbation theory and from TBA in the ultraviolet limit [@Zamolodchikov:1989cf].
Since the VEV of the bulk field does not depend on its location and the defect fields $\Phi_{\pm}$ correspond to its limits, we conclude that $$\langle\Phi\rangle=\langle\Phi_{+}\rangle=\langle\Phi_{-}\rangle=\frac{1}{\pi\lambda(1-h)}\frac{\pi}{4\sqrt{3}}m^{2}\simeq1.23939m^{2h}\label{eq:PhiVEV}$$
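A one-line numerical check of this value (ours, not the paper's code): inserting $\lambda$ from (\[eq:lambda\]) gives, in units $m=1$,

```python
# Check of the bulk VEV (eq:PhiVEV) using lambda from (eq:lambda), in units m = 1.
import numpy as np
from scipy.special import gamma

kappa = 2**(19/12) * np.sqrt(np.pi) * (gamma(3/5) * gamma(4/5))**(5/12) \
        / (5**(5/16) * gamma(2/3) * gamma(5/6))
lam, h = (1.0 / kappa)**(12/5), -1/5
print(1.0 / (np.pi * lam * (1 - h)) * np.pi / (4 * np.sqrt(3)))   # ~ 1.23939
```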
In order to obtain the expectation values of the chiral fields $\varphi$, $\bar{\varphi}$, we adopt an argument similar to [@Dorey:2000eh]. The vacuum energy of the system can be generically parametrized as [@Bajnok:2007jg]
$$E_{0}(L)=e_{d}(b)+LE_{bulk}+E_{0}^{TBA}(L)\label{eq:E0L}$$
where the bulk energy constant is $E_{bulk}=-\frac{1}{4\sqrt{3}}m^{2}$, the defect energy is given in (\[eq:defectenergyandmomentum\]) and $E_{0}^{TBA}(L)$ is the ground-state energy in the TBA scheme, given in (\[eq:DTBAE-1\]) below.
The presence of an integrable defect does not destroy the existence of momentum, merely modifies the momentum eigenvalues, yielding a nonzero value in the ground state. In the same way as for the defect energy, this defect momentum can be conveniently extracted in the ultraviolet limit, if one makes use of the expression $$P_{0}^{TBA}(L)=-\frac{m}{2\pi}\int d\theta\sinh\theta\log\left(1+e^{-\tilde{\varepsilon}(\theta)}\right)=P_{0}(L)-p_{d}(b)\label{eq:momentumdef}$$ for the exact finite-volume ground state momentum. Here $\tilde{\epsilon}$ satisfies the ground-state TBA equation (see also Section \[sec:Expectation-values-in\]) $$\tilde{\epsilon}(\theta)=mL\cosh\theta-\log T_{+}(\frac{i\pi}{2}-\theta)-\varphi(\theta)\star\log(1+e^{-\tilde{\epsilon}(\theta)})\quad;\qquad\varphi(\theta)=-\frac{i}{2\pi}\partial_{\theta}\log S(\theta)\label{eq:DTBA-2}$$ In the $L\to0$ limit, the solution of (\[eq:DTBA-2\]) develops two plateau regions, which grow as $\sim\log\frac{2}{mL}$, separated by a breather region around the origin. The plateaus end in two kink regions, which do not contribute to the ground state momentum. We therefore focus onto the central region and make use of the pseudo energy
$$\varepsilon_{0}(\theta)=\lim_{L\to0}\tilde{\epsilon}(\theta)\label{eq:epsilonzero}$$
describing the root distribution in the deep ultraviolet limit. Expanding the expression (\[eq:DTBA-2\]) with $L=0$ around $\theta\to\pm\infty$ as in [@Zamolodchikov:1990bk; @Bajnok:2007jg] produces $$\varepsilon_{0}(\theta\to\pm\infty)\simeq-A_{\pm}e^{\mp\theta}+\varepsilon_{*}\pm\frac{C}{2\pi}I_{\pm}e^{\mp\theta}\label{eq:asyepsilon}$$ where the asymptotic behavior of the kernel and of the source term of (\[eq:epsilonzero\]) define $\varphi(\theta\to\pm\infty)\sim Ce^{\mp\theta}$ and $\log T_{+}(i\frac{\pi}{2}-\theta\to\pm\infty)\sim A_{\pm}e^{\mp\theta}$. For the model at hand, $C=-2\sqrt{3}$ and $A_{\pm}=\mp2i(e^{\pm i\pi\frac{b+1}{6}}+e^{\pm i\pi\frac{b-1}{6}})$. Finally, $\varepsilon_{*}=\log\frac{1+\sqrt{5}}{2}$ is the plateau value of the limit solution for the pseudo energy and we introduce the notation: $$I_{\pm}=\int_{-\infty}^{\infty}d\theta e^{\pm\theta}\frac{d}{d\theta}\log(1+e^{-\varepsilon_{0}(\theta)})$$
We define $Y(\theta)=e^{-\varepsilon_{0}(\theta)}$, which satisfies the
$$Y\left(\theta+i\frac{\pi}{3}\right)Y\left(\theta-i\frac{\pi}{3}\right)=1+Y(\theta),$$
functional relation, i.e. the Lee-Yang $Y$ system [@Bajnok:2007jg]. This function has period $\frac{5i\pi}{3}$ in the imaginary direction, hence its asymptotic expansion can only contain powers of $e^{\pm\frac{6}{5}\theta}$: cancellation of the terms proportional to $e^{\pm\theta}$ in (\[eq:asyepsilon\]) allows one to compute $I_{\pm}$ exactly. From the expression (\[eq:momentumdef\]), one then has $$\lim_{L\to0}\frac{P_{0}^{TBA}}{m}=-\frac{A_{+}-A_{-}}{2C}$$ so that the defect momentum is indeed given by (\[eq:defectenergyandmomentum\]).
As a last step, we take the vacuum expectation values of the Hamiltonian and the momentum operator, (\[eq:HTCSA\]) and (\[eq:PTCSA\]), and differentiate them with respect to the defect parameter $b$ $$\frac{\partial}{\partial b}\langle H\rangle=i\frac{\pi}{5}(\mu\langle\varphi\rangle-\bar{\mu}\langle\bar{\varphi}\rangle)\qquad\qquad\frac{\partial}{\partial b}\langle P\rangle=i\frac{\pi}{5}(\mu\langle\varphi\rangle+\bar{\mu}\langle\bar{\varphi}\rangle)\label{eq:dbHP}$$ By virtue of (\[eq:mu\]) and (\[eq:defectenergyandmomentum\]), we obtain the vacuum expectation values of the fields $\varphi$ and $\bar{\varphi}$, $$\langle\varphi\rangle=-\frac{5i}{12\mu}e^{-i\frac{\pi b}{6}}\qquad\qquad\langle\bar{\varphi}\rangle=\frac{5i}{12\bar{\mu}}e^{i\frac{\pi b}{6}}\label{eq:varphiVEV}$$ which we compare with the numerical values in Section \[sec:Numerical-comparison\]. Observe that by differentiating in eq. (\[eq:dbHP\]) the exact finite volume ground state energy (\[eq:E0L\]) and ground state momentum (\[eq:momentumdef\]), instead of their asymptotic values (\[eq:defectenergyandmomentum\]), we could derive the complete exact finite volume one point functions of the defect operators $\varphi$ and $\bar{\varphi}$.
### Spectral expansion of two-point functions \[sub:Spectral-expansion-of\]
The form factor expansion for correlation functions has proven to be extremely rapidly convergent in the Lee-Yang theory [@Zamolodchikov:1990bk], providing a good estimate for the correlation function $$\langle\Phi(x,t)\Phi(0,0)\rangle$$ down to very small values of the separation between the fields. Repeating this analysis for the various two-point functions of $\varphi$, $\bar{\varphi}$, $\Phi_{\pm}$ therefore provides a legitimate check of the form factor expressions, as well as a comparison of the convergence properties of the spectral expansion in the case of operators living on the defect.
In the following we analyze the two point functions of local operators $\langle\hat{O}_{1}(r)\hat{O}_{2}(0)\rangle$ by inserting a resolution of the identity $$\langle\hat{O}_{1}(r)\hat{O}_{2}(0)\rangle=\sum_{n=0}^{\infty}\langle0\vert\hat{O}_{1}(0)\vert n\rangle\langle n\vert\hat{O}_{2}(0)\vert0\rangle e^{-E_{n}r}$$ The various matrix elements are the generic form factors (\[eq:genff\]), which all can be expressed in terms of the elementary ones. Truncation of the series up to two particle terms gives a good approximation valid even for very small separations, which can be compared to the short distance CFT predictions. In so doing we assume that the local fields in the perturbed theory are in one-to-one correspondence with the operator content of the CFT, apart from additive renormalization constants [@Zamolodchikov:1990bk]. The products of fields living on the defect $\hat{O}_{1}(r),\hat{O}_{2}(0)$, for small Euclidean time separation, $r$, can be treated by exploiting their operator product expansion in the short-distance CFT
$$\hat{O}_{1}(r)\hat{O}_{2}(0)\sim\sum_{j}\frac{C_{12}^{j}\hat{O}_{j}}{|r|^{h_{1}+h_{2}-h_{j}}}\label{eq:ope}$$
where $h_{i}$ denote the scaling dimension of the operators and the structure constants $C_{12}^{j}$ were given in [@Bajnok:2013]. Knowledge of the exact vacuum expectation values (\[eq:PhiVEV\]), (\[eq:varphiVEV\]), allows one to extract the behavior of the correlation function as $r\to0$.
We found that the spectral series reproduces the correlation functions to very good accuracy, even for small values of $r$, with only a restricted number of terms. In the following we focus on the one- and two-particle contributions.
The relevant structure constants are written in terms of the constants $\beta=\sqrt{\frac{2}{1+\sqrt{5}}}$, $\alpha=\sqrt{\frac{\Gamma(1/5)\Gamma(6/5)}{\Gamma(3/5)\Gamma(4/5)}}$ and $\eta=e^{i\frac{\pi}{5}}$. In particular, we show in Figure \[fig:Phippertphi\] the real and imaginary part of the $\Phi_{+}\varphi$ correlation function, which for short distance is expanded as $$\langle\Phi_{+}(r)\varphi(0)\rangle\sim C_{\Phi_{+}\varphi}^{\Phi_{+}}\langle\Phi_{+}\rangle r^{1/5}+C_{\Phi_{+}\varphi}^{\Phi_{-}}\langle\Phi_{-}\rangle r^{1/5}+C_{\Phi_{+}\varphi}^{\bar{\varphi}}\langle\bar{\varphi}\rangle r^{2/5}$$ with $C_{\Phi_{+}\varphi}^{\Phi_{+}}=\frac{\alpha}{2}(\beta+\beta^{-1}+\frac{i}{\sqrt[4]{5}})$, $C_{\Phi_{+}\varphi}^{\Phi_{-}}=\frac{\alpha\beta}{2}(1-\frac{i(\beta-\beta^{-1})}{\sqrt[4]{5}})$, $C_{\Phi_{+}\varphi}^{\bar{\varphi}}=-\frac{\eta}{\beta}$. The short distance CFT expansion is shown with continuous lines, while the form factor expansion with dots.
![$\Phi_{+}\varphi$ correlation function: blue and purple dots show the real and imaginary part calculated from the form factor expansion up to two particle terms, while black and green curves the first order CFT perturbation results\[fig:Phippertphi\]](PhiplusVarphi){width="70.00000%"}
The two point functions of other fields are analyzed in Appendix B.
Finite volume form factors: theoretical framework
==================================================
To verify the predictions made by the defect form factor bootstrap one can use the finite volume form factor formalism developed by Pozsgay and Takacs [@Pozsgay:2007kn; @Pozsgay:2007gx]. Based on the description of the finite volume spectrum provided by the Bethe-Yang equations, the formalism gives all finite volume corrections that decay as a power in the inverse volume. The remaining corrections are suppressed exponentially as the volume increases, and at present only a partial description is available for them [@Takacs:2011nb]. In this paper we confine ourselves to the power corrections, as these are sufficient to verify the validity of the defect form factor bootstrap.
Finite volume energy levels
---------------------------
The finite volume energy levels can be identified with multi-particle states $|\{I_{1},\dots,I_{n}\}\rangle_{L}$ containing $n$ particles, labeled by quantum numbers $I_{1},\dots,I_{n}$ which parametrize the quantization of particle momenta. The quantization conditions satisfied by the particle rapidities $\theta_{k}$ are the Bethe-Yang equations $$e^{imL\sinh\theta_{k}}T_{-}(\theta_{k})\prod_{j\neq k}S(\theta_{k}-\theta_{j})=1\qquad k=1,\dots,n\label{eq:expdybe}$$ Taking the logarithm, these equations can be rewritten as $$Q_{k}(\theta_{1},\dots,\theta_{n})_{L}=2\pi I_{k}\qquad k=1,\dots,n\label{eq:dby1}$$ where $$Q_{k}(\theta_{1},\dots,\theta_{n})_{L}=mL\sinh\theta_{k}-i\log T_{-}(\theta_{k})-\sum_{j\neq k}i\log S(\theta_{k}-\theta_{j})\label{eq:dby2}$$ and the energy is given by $$E(L)=E_{0}(L)+\sum_{k=1}^{n}m\cosh\theta_{k}+O(e^{-\mu L})$$ where $\mu$ is some characteristic scale, and $E_{0}(L)$ is the ground state (vacuum) energy.
The $k$-th equation in (\[eq:dby1\]) characterizes the monodromy of the wave function under moving the $k$-th particle to the right and around the circle; in doing so, one picks up the phase from crossing the defect and from the scattering with the other particles (note the order of the rapidity difference inside $S$, which corresponds to particle $k$ entering the scattering from the left). The reason why only $T_{-}(\theta)$ enters is that it is the phase-shift suffered by a particle with $\theta>0$ when crossing the defect from the left; on the other hand, if a given particle has $\theta<0$, its monodromy when crossing from the right to the left would be given by $T_{+}(-\theta)$; therefore, when crossing from the left to the right the phase-shift is given by $$T_{+}(-\theta)^{-1}$$ which is equal to $T_{-}(\theta)$ by defect unitarity. As a result, equations (\[eq:dby1\]), (\[eq:dby2\]) describe all states in the spectrum by leaving the sign of the rapidities free, i.e. by letting $I_{k}$ take any integer value. However, due to $S(0)=-1$ the multi-particle wave functions are non-vanishing only if all the rapidities, and therefore all the quantum numbers, take distinct values.
The density of states in finite volume can be obtained from the Jacobi determinant of the mapping from rapidity space to the space of the quantum numbers: $$\rho_{n}(\theta_{1},\dots,\theta_{n})_{L}=\det\left\{ \frac{\partial Q_{k}(\theta_{1},\dots,\theta_{n})_{L}}{\partial\theta_{j}}\right\} _{k,j=1,\dots,n}$$
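A minimal numerical sketch (ours, not the authors' implementation) of how these equations can be used in practice is given below: the logarithmic Bethe-Yang equations are solved by a complex Newton iteration and the density of states is obtained as the Jacobi determinant. The volume, defect parameter and quantum numbers are illustrative values only, and the principal branch of the logarithm is used, so the quantum numbers are defined up to the branch choices.

```python
# Sketch: solving the defect Bethe-Yang equations (dby1)-(dby2) for the
# scaling Lee-Yang model and evaluating the density of states rho_n.
import numpy as np

a = 1j * np.sin(np.pi / 3)

def log_S(t):                      # log of the Lee-Yang S-matrix (principal branch)
    return np.log(np.sinh(t) + a) - np.log(np.sinh(t) - a)

def log_T_minus(t, b):             # log T_-(theta) = log S(theta - i pi (3-b)/6)
    return log_S(t - 1j * np.pi * (3 - b) / 6)

def Q(theta, mL, b):               # the left-hand sides Q_k of eq. (dby2)
    q = mL * np.sinh(theta) - 1j * log_T_minus(theta, b)
    for k in range(len(theta)):
        for j in range(len(theta)):
            if j != k:
                q[k] -= 1j * log_S(theta[k] - theta[j])
    return q

def jacobian(theta, mL, b, eps=1e-6):   # numerical Jacobi matrix dQ_k / dtheta_j
    n = len(theta)
    J = np.zeros((n, n), dtype=complex)
    for j in range(n):
        dt = np.zeros(n); dt[j] = eps
        J[:, j] = (Q(theta + dt, mL, b) - Q(theta - dt, mL, b)) / (2 * eps)
    return J

def solve_bethe_yang(I, mL, b, steps=30):
    I = np.asarray(I, dtype=float)
    theta = np.arcsinh(2 * np.pi * I / mL).astype(complex)   # free-particle guess
    for _ in range(steps):                                   # complex Newton iteration
        res = Q(theta, mL, b) - 2 * np.pi * I
        theta = theta - np.linalg.solve(jacobian(theta, mL, b), res)
    return theta

mL, b = 10.0, -3 + 0.5j            # illustrative volume and defect parameter
th = solve_bethe_yang([1, -2], mL, b)
print("rapidities:", th)
print("rho_2     :", np.linalg.det(jacobian(th, mL, b)))
print("(E-E_0)/m :", np.sum(np.cosh(th)))
```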
Non-diagonal matrix elements
----------------------------
Using the arguments in the work [@Pozsgay:2007kn], the finite volume matrix elements of a defect operator can be obtained as $$\begin{aligned}
& & \left\langle \left\{ J_{1},\dots,J_{n}\right\} \right|\mathcal{O}\left(t=0\right)\left|\left\{ I_{1},\dots,I_{k}\right\} \right\rangle _{L}=\label{eq:Finite_volume_FF}\\
& & \pm\frac{F^{\mathcal{O}}\left(\tilde{\theta}_{n}'+i\pi,\dots,\tilde{\theta}_{1}'+i\pi,\tilde{\theta}_{1},\dots,\tilde{\theta}_{k}\right)}{\sqrt{\rho_{n}\left(\tilde{\theta}_{1}',\dots,\tilde{\theta}_{n}'\right)_{L}\rho_{k}\left(\tilde{\theta}_{1},\dots,\tilde{\theta}_{k}\right)_{L}}}\Phi\left(\tilde{\theta}_{1}',\dots,\tilde{\theta}_{n}'\right)^{*}\Phi\left(\tilde{\theta}_{1},\dots,\tilde{\theta}_{k}\right)+O\left(e^{-\mu L}\right)\nonumber \end{aligned}$$ where $F^{\mathcal{O}}$ is the infinite volume form factor, $\tilde{\theta}_{1},\dots,\tilde{\theta}_{k}$ and $\tilde{\theta}_{1}',\dots,\tilde{\theta}_{n}'$ are the solutions to the Bethe-Yang equations (\[eq:dby1\]), (\[eq:dby2\]) with quantum numbers $I_{1},\dots,I_{k}$ and $J_{1},\dots,J_{n}$, respectively, and the $\Phi$ are phase factors ensuring that the finite volume scattering state is symmetric in its arguments and is invariant under any particle crossing the defect. In finite volume the particles are not ordered and are on both sides of the defect at the same time. The correct phase factors with these properties take the form $$\Phi\left(\theta_{1},\dots,\theta_{n}\right)=\pm\left(\prod_{{j,l=1\atop j<l}}^{n}S(\theta_{j}-\theta_{l})\prod_{j=1}^{n}T_{-}(\theta_{j})\right)^{-\frac{1}{2}}$$ We remark that the phases $\Phi$ have no physical significance as they cancel in any correlation function computed in the theory, whether in finite or infinite volume; it is only necessary to keep track of them when analytically continuing (\[eq:Finite\_volume\_FF\]) to complex values of the rapidities, as in [@Pozsgay:2008bf; @Takacs:2011nb]. Finally, the $\pm$ sign results from the ambiguity in choosing the branch of the square root functions; the same ambiguity is present in TCSA, as the eigenvectors still have a residual sign ambiguity even after imposing a suitable reality condition.
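The following plumbing sketch (ours, not the paper's code) indicates how (\[eq:Finite\_volume\_FF\]) is assembled in practice from callables for the infinite volume form factor, the densities of states and the amplitudes $S$ and $T_{-}$; the overall sign and the branch of the square root are left undetermined, as discussed above, and real rapidities are assumed when taking the complex conjugate of the phase factor.

```python
# Assembling the finite volume matrix element of eq. (Finite_volume_FF) from
# infinite volume data.  F_inf, rho, S and T_minus are placeholder callables
# supplied by the user (e.g. rho can be obtained as the Jacobian determinant
# of the Bethe-Yang sketch above).
import numpy as np

def phase_factor(theta, S, T_minus):
    n = len(theta)
    prod = np.prod([T_minus(t) for t in theta] +
                   [S(theta[j] - theta[l]) for j in range(n) for l in range(j + 1, n)])
    return prod ** (-0.5)          # square-root branch chosen arbitrarily

def finite_volume_ff(F_inf, rho, S, T_minus, theta_out, theta_in):
    crossed = [t + 1j * np.pi for t in reversed(theta_out)]      # theta'_n + i pi, ...
    phases = np.conj(phase_factor(theta_out, S, T_minus)) * phase_factor(theta_in, S, T_minus)
    return F_inf(crossed + list(theta_in)) * phases / np.sqrt(rho(theta_out) * rho(theta_in))
```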
In the case of the Lee-Yang model we have a physical normalization of the TCSA eigenvectors. We chose the normalization of the UV primary defect creating fields as $$C_{dd}^{\mathbb{I}}=C_{\bar{d}\bar{d}}^{\mathbb{I}}=1\quad;\qquad C_{DD}^{\mathbb{I}}=-1,$$ in the same way as in [@Bajnok:2013]. In the UV limit the ground state of the perturbed theory flows to the lowest energy state, which corresponds to the field $D$ and has negative squared norm. It is therefore natural to choose the ground state TCSA vectors at any volume to be purely imaginary.
The second and the third lowest energy states in the TCSA (which correspond to the slowest left- and right-moving one particle states in the scattering theory point of view) never cross any other lines at any volume, therefore, they flow to the second and the third lowest energy states in the conformal theory, namely $\vert d\rangle$ and $\vert\bar{d}\rangle$. Consequently we normalized the corresponding TCSA vectors to be real.
In the UV limit the states of the Verma modules built over the highest weight states $d$ and $\bar{d}$ are related by parity transformation, while the Verma module of $D$ is parity symmetric. In the case of a parity symmetric defect the only parity symmetric states in the scattering theory are the even particle states composed of particles with opposite rapidities, so these states flow to some parity symmetric state in the $D$-module. These arguments suggest that the even particle TCSA states flow to the $D$-module, while the odd particle TCSA states flow either to the $d$-module or to the $\bar{d}$-module in the conformal limit. For this reason we normalized the even particle TCSA states to be purely imaginary, while the odd particle states to be real. This assumption can be tested: measuring the phases (or, equivalently, both the real and imaginary parts) of the form factors from the TCSA and comparing them to the theoretically computed finite volume form factors provides a non-trivial check, and we found perfect agreement for all the studied states.
For bulk operators, the extension of (\[eq:Finite\_volume\_FF\]) is simple. As noted in [@Bajnok:2009hp], the presence of an integrable defect only breaks translational invariance by having a defect momentum $p_{D}$ (in addition to a defect energy $E_{D}$). The total momentum, which includes the defect momentum as well, is conserved; therefore the space-time dependence of a bulk operator can be computed by multiplying (\[eq:Finite\_volume\_FF\]) by the phase factor $$e^{-it\Delta E+ix\Delta P}$$ where $$\begin{aligned}
\Delta E & = & \sum_{i=1}^{k}m\cosh\tilde{\theta}_{i}-\sum_{j=1}^{n}m\cosh\tilde{\theta}_{j}'\nonumber \\
\Delta P & = & \sum_{i=1}^{k}m\sinh\tilde{\theta}_{i}-\sum_{j=1}^{n}m\sinh\tilde{\theta}_{j}'\end{aligned}$$
Diagonal matrix elements
------------------------
Formula (\[eq:Finite\_volume\_FF\]) is valid if there are no disconnected terms in the matrix element. Disconnected terms arise when there is at least one rapidity value among the $\tilde{\theta}_{1},\dots,\tilde{\theta}_{k}$ which coincides with a value occurring in $\tilde{\theta}_{1}',\dots,\tilde{\theta}_{n}'$. Following two energy levels as the volume $L$ varies, this can occur at particular isolated values of $L$, but these cases are not interesting as the matrix element can be evaluated by taking the appropriate limit of (\[eq:Finite\_volume\_FF\]) in the volume. Therefore the only interesting cases are when disconnected terms are present for a continuous range of the volume $L$. Due to the presence of interactions (the $S$ terms) in (\[eq:dby1\]), (\[eq:dby2\]) this can only occur in very specific situations; the only generic class is when the matrix element is diagonal, i.e. the two states are actually identical, in which case disconnected terms are present for all values of $L$.
In this case, we can proceed by analogy to the bulk and boundary cases. For diagonal matrix elements $$\langle\{I_{1},\dots,I_{n}\}\vert\mathcal{O}\vert\{I_{1},\dots,I_{n}\}\rangle_{L}$$ equation (\[eq:Finite\_volume\_FF\]) shows that the relevant form factor expression is $$F^{\mathcal{O}}(\theta_{n}+i\pi,...,\theta_{1}+i\pi,\theta_{1},...,\theta_{n})$$ Due to the existence of kinematical poles this must be regularized; however the end result depends on the direction of the limit. The terms that are relevant in the limit can be written in the following general form: $$\begin{aligned}
F^{\mathcal{O}}(\theta_{n}+i\pi+\epsilon_{n},...,\theta_{1}+i\pi+\epsilon_{1},\theta_{1},...,\theta_{n})=\label{mostgeneral}\\
\prod_{i=1}^{n}\frac{1}{\epsilon_{i}}\cdot\sum_{i_{1}=1}^{n}...\sum_{i_{n}=1}^{n}\mathcal{A}_{i_{1}...i_{n}}(\theta_{1},\dots,\theta_{n})\epsilon_{i_{1}}\epsilon_{i_{2}}...\epsilon_{i_{n}}+\dots\nonumber \end{aligned}$$ where $\mathcal{A}_{i_{1}...i_{n}}$ is a completely symmetric tensor of rank $n$ in the indices $i_{1},\dots,i_{n}$, and the ellipsis denotes terms that vanish when taking $\epsilon_{i}\rightarrow0$ simultaneously.
The connected matrix element can be identified as the $\epsilon_{i}$ independent part of equation (\[mostgeneral\]), i.e. the part which does not diverge whenever any of the $\epsilon_{i}$ is taken to zero: $$F_{conn}^{\mathcal{O}}(\theta_{1},\theta_{2},...,\theta_{n})=n!\,\mathcal{A}_{1\dots n}(\theta_{1},\dots,\theta_{n})\label{eq:connected}$$ where the appearance of the factor $n!$ is simply due to the permutations of the $\epsilon_{i}$.
Following [@Pozsgay:2007gx; @Kormos:2007qx], we are led to the following expression $$\begin{aligned}
& & \,\langle\{I_{1}\dots I_{n}\}|\mathcal{O}|\{I_{1}\dots I_{n}\}\rangle_{L}=\label{eq:diaggenrulesaleur}\\
& & \frac{1}{\rho_{n}(\tilde{\theta}_{1},\dots,\tilde{\theta}_{n})_{L}}\,\sum_{A\subset\{1,2,\dots n\}}F_{conn}^{\mathcal{O}}(\{\tilde{\theta}_{k}\}_{k\in A})\tilde{\rho_{n}}(\tilde{\theta}_{1},\dots,\tilde{\theta}_{n}|A)_{L}+O(\mathrm{e}^{-\mu L})\nonumber \end{aligned}$$ where the summation runs over all subsets $A$ of $\{1,2,\dots n\}$, and $\left\{ \tilde{\theta}_{1},\dots,\tilde{\theta}_{n}\right\} $ are the Bethe-Yang rapidities corresponding to the set of quantum numbers $\left\{ I_{1},\dots,I_{n}\right\} $. For any such subset, we define the appropriate sub-determinant $$\tilde{\rho}_{n}(\tilde{\theta}_{1},\dots,\tilde{\theta}_{n}|A)=\det\mathcal{J}_{A}(\tilde{\theta}_{1},\dots,\tilde{\theta}_{n})$$ of the $n\times n$ Bethe-Yang Jacobi matrix $$\mathcal{J}(\tilde{\theta}_{1},\dots,\tilde{\theta}_{n})_{kl}=\frac{\partial Q_{k}(\theta_{1},\dots,\theta_{n})_{L}}{\partial\theta_{l}}\label{eq:jacsubmat}$$ where $\mathcal{J}_{A}$ is obtained by deleting the rows and columns corresponding to the subset of indices $A$. The determinant of the empty sub-matrix (i.e. when $A=\{1,2,\dots n\}$) is defined to equal to $1$ by convention. Note also that diagonal matrix elements have no space-time dependence at all, therefore (\[eq:diaggenrulesaleur\]) is true both for operators located on the defect and in the bulk.
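A compact sketch (ours, not the authors' code) of evaluating this sum over subsets is shown below; `F_conn` and `jacobian` are placeholder callables for the connected form factors and the Bethe-Yang Jacobi matrix (\[eq:jacsubmat\]), with `F_conn([])` returning the vacuum expectation value of the operator.

```python
# Diagonal finite volume matrix element of eq. (diaggenrulesaleur): sum over
# all subsets A of {1..n} of F_conn on the rapidities in A, multiplied by the
# determinant of the Jacobi matrix with the rows and columns in A deleted.
from itertools import combinations
import numpy as np

def diagonal_matrix_element(F_conn, jacobian, theta):
    n = len(theta)
    J = jacobian(theta)                              # n x n Bethe-Yang Jacobi matrix
    total = 0j
    for r in range(n + 1):
        for A in combinations(range(n), r):
            keep = [i for i in range(n) if i not in A]
            subdet = np.linalg.det(J[np.ix_(keep, keep)]) if keep else 1.0
            total += F_conn([theta[i] for i in A]) * subdet
    return total / np.linalg.det(J)                  # divide by rho_n
```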
We note that for bulk theories on a spatial circle without defects there exists another class of matrix elements with disconnected contributions, namely when there is a particle of exactly zero momentum in both states. However, due to the presence of the defect this class is absent here. For example, from (\[eq:expdybe\]) it follows that the existence of a state with a single stationary particle would require $$T_{-}(\theta=0)=1$$ but this is not satisfied for any finite value of the defect parameter $b$. The only class of matrix elements that has disconnected pieces at more than isolated values of $L$ is the diagonal one treated above.
Numerical comparison\[sec:Numerical-comparison\]
================================================
A valuable tool for investigating statistical field theories in the vicinity of the critical point was devised in [@Yurov:1989yu] and successfully tested on the scaling Lee-Yang theory. Being based on the knowledge of the Hilbert space at the conformal point, where the spectrum is discrete, and on the truncation of the constituting Verma modules at a given level, it has been dubbed the truncated conformal space approach (TCSA).
In the present case, the Hilbert space is spanned by the defect-creating operators, whose corresponding highest weight states are denoted by $|D\rangle$, $|d\rangle$, $|\bar{d}\rangle$ in [@Bajnok:2013], with conformal dimensions $(h,\bar{h})=(-\frac{1}{5},-\frac{1}{5}),(-\frac{1}{5},0),(0,-\frac{1}{5})$.
In order to evaluate matrix elements of the Hamiltonian, the theory is mapped on the plane by the transformation
$$z=e^{-i\frac{2\pi}{L}\zeta},\quad\bar{z}=e^{i\frac{2\pi}{L}\bar{\zeta}}\label{eq:expmap}$$
where $\zeta=x+iy$ and $\bar{\zeta}=x-iy$ are the Euclidean coordinates on the cylinder. Operators will appear as finite-size matrices on the states $\vert j\rangle$ of the conformal Hilbert space on the plane and the Hamiltonian and momentum operators read $$\begin{aligned}
\frac{H}{m} & = & \frac{2\pi}{mL}\biggl(L_{0}+\bar{L}_{0}+\frac{11}{30}+\xi\Bigl(\frac{L}{2\pi\kappa}\Bigr)^{1+\frac{1}{5}}\Bigl(a\left(G^{-1}\hat{\varphi}\right)_{jk}+\bar{a}\left(G^{-1}\hat{\bar{\varphi}}\right)_{jk}\Bigr)\label{eq:HTCSA}\\
 & & \qquad\qquad+\Bigl(\frac{L}{2\pi\kappa}\Bigr)^{2+\frac{2}{5}}\left(G^{-1}\hat{\Phi}\right)_{jk}\biggr)\nonumber \end{aligned}$$
$$\frac{P}{m}=\frac{2\pi}{mL}\left(L_{0}-\bar{L}_{0}+\xi\left(\frac{L}{2\pi\kappa}\right)^{1+\frac{1}{5}}\left(a(G^{-1}\hat{\varphi})_{jk}-\bar{a}(G^{-1}\hat{\bar{\varphi}})_{jk}\right)\right)\label{eq:PTCSA}$$
where $G_{jk}=\langle j|k\rangle$, $\hat{\varphi}_{jk}=\langle j|\varphi(1)|k\rangle$, $\hat{\bar{\varphi}}_{jk}=\langle j|\bar{\varphi}(1)|k\rangle$, $\hat{\Phi}_{jk}=\langle j|\Phi(1,1)|k\rangle P_{jk}$ and the matrix $$P_{jk}=\begin{cases}
-2\pi & \mbox{if }h_{k}-\bar{h}_{k}-h_{j}+\bar{h}_{j}=0\\
-2e^{-i\pi(h_{k}-\bar{h}_{k}-h_{j}+\bar{h}_{j})}\frac{\sin\pi(h_{k}-\bar{h}_{k}-h_{j}+\bar{h}_{j})}{(h_{k}-\bar{h}_{k}-h_{j}+\bar{h}_{j})} & \mbox{otherwise}
\end{cases}$$ is obtained by performing the integration on the spatial coordinate of the bulk perturbing field. Also, there appear the parameters $\kappa=\frac{2^{19/12}\sqrt{\pi}\left(\Gamma\left(\frac{3}{5}\right)\Gamma\left(\frac{4}{5}\right)\right)^{5/12}}{5^{5/16}\Gamma\left(\frac{2}{3}\right)\Gamma\left(\frac{5}{6}\right)}$, $\xi=\sqrt[4]{\frac{5}{8}+\frac{3\sqrt{5}}{8}}$, $a=e^{-i\frac{\pi}{5}(b-2)}$ and $\bar{a}=e^{i\frac{\pi}{5}(b-2)}$.
Diagonalization of the truncated Hamiltonian yields the energy levels, which can be compared with the corresponding quantities obtained from the Bethe-Yang equations (\[eq:expdybe\]); excellent agreement is found in all intermediate regimes [@Bajnok:2013].
Truncation of the Hilbert space introduces an error in the computation of energy levels and matrix elements. However, at least for an operator with weight $h,\bar{h}<\frac{1}{2}$, such an error is expected to decrease as the truncation level is increased. We found that, due to the rapid growth of the Hilbert space and the consequent processor memory usage, it was impossible for us to go beyond truncation level $n=18$, where the effects due to the finiteness of the space are still noticeable. This forced us to improve the precision by renormalization group (RG) methods.
A renormalization group for the truncation level dependence was introduced in [@Feverati:2006ni; @Konik:2007cb; @Giokas:2011ix] and extended to VEVs in [@Szecsenyi:2013gna]. Here we briefly repeat the basic steps of the derivation in [@Szecsenyi:2013gna] for matrix elements with the perturbed action (\[eq:SLY\]), and refer to the original derivation for more extensive explanation of the procedure.
The matrix element of a given local operator $\mathcal{O}$, represented on a finite-dimensional space, will be denoted by
$$O_{ab}(n)=\langle a|P_{n}\mathcal{O}(0,0)P_{n}e^{-\lambda\int d^{2}rV(\vec{r})-\eta\int dtW(t)}P_{n}|b\rangle$$
where the last factor in the expectation value represents a combined bulk and defect perturbation, as in (\[eq:SLY\]), the operator $P_{n}$ is a projector onto the states up to level $n$, and the operator products are assumed to be time ordered.
The idea is to study the difference $O_{ab}(n+1)-O_{ab}(n)$ by perturbation theory, using the fact that the couplings $\lambda$, $\eta$ are relevant: when the descendant states at the cutoff level $n$ have energy much higher than the mass gap, $\frac{4\pi n}{L}\gg m$, their effect can be taken into account perturbatively. The bulk perturbation has already been analyzed in the original literature and the method can be straightforwardly extended to a combined bulk and defect perturbation; we concentrate here on the defect part, and only present the main differences with respect to the original derivation in [@Szecsenyi:2013gna]. It is possible to consider the two components of the perturbation separately because the (projected) parts of the evolution operators associated to the two perturbations commute to first order in $\lambda,\eta$. As a first step, we write the matrix element in the form $$Q_{ab}(n)=\langle a|U_{+}^{(n)}(\eta)P_{n}O(0)P_{n}U_{-}^{(n)}(\eta)|b\rangle$$ where $U_{\pm}^{(n)}$ are past and future evolution operators at a given cutoff. We evaluate the difference
$$\begin{aligned}
Q_{ab}(n+1)-Q_{ab}(n) & = & -2\eta\left(\frac{L}{2\pi}\right)^{1-h_{W}-\overline{h}_{W}}\intop_{0}^{1}d\rho\rho^{-1+h_{W}+\overline{h}_{W}}\\
& & \qquad\langle a|U_{+}^{(n)}(\eta)O(1)(P_{n+1}-P_{n})W(\rho)U_{-}^{(n)}(\eta)|b\rangle+O(\eta^{2})\nonumber \end{aligned}$$
where we applied the exponential map (\[eq:expmap\]), with $z=\rho e^{i\gamma}$, and put the defect on the ray $\gamma=0$. We then use the operator product expansion
$$O(0)W(z)=\sum_{A}\frac{C_{OW}^{A}A(0)}{|1-\rho|^{h_{O}+h_{W}-h_{A}+\overline{h}_{O}+\overline{h}_{W}-\overline{h}_{A}}}$$
where the summation runs over the scaling fields of the Hilbert space at the conformal point, while the $C_{OW}^{A}$ are the associated structure constants. In the above expression, the projector on the $n$-th level singles out the corresponding descendants in the sum, and after further manipulations in the limit $n\gg1$, one obtains
$$\frac{d}{dn}Q_{ab}(n)\propto\sum_{A}K_{A}n^{h_{O}+h_{W}-h_{A}+\overline{h}_{O}+\overline{h}_{W}-\overline{h}_{A}-2}$$
which implies that truncation errors, in the case of a defect perturbation, decay as
$$Q_{ab}(n)=Q_{ab}(\infty)+\sum_{A}\tilde{K}_{A}n^{h_{O}+h_{W}-h_{A}+\overline{h}_{O}+\overline{h}_{W}-\overline{h}_{A}-1}$$
Conversely, in the case in which the action contains a bulk perturbation only, it was found that
$$Q(n)=Q(\infty)+\sum_{A}\tilde{K}_{A}n^{h_{O}+h_{W}-h_{A}+\overline{h}_{O}+\overline{h}_{W}-\overline{h}_{A}-2}\label{eq:cutcorr}$$
which shows why truncation effects from the defect perturbation are generically more relevant than those resulting from the bulk perturbation.
For practical reasons, since the sub-leading terms which are not taken into account in the formula above are $O(1/n)$, one retains in the sum only the most relevant defect fields, which correspond to the leading terms for large $n$.
Understanding the behavior of the matrix elements allows one to extrapolate their value as the cutoff in the state number tends to infinity, and allows comparison with the quantities computed in section 3. Explicit examples can be obtained by using the OPE given in [@Bajnok:2013]: in the case of $\varphi$ and $\bar{\varphi}$, the slowest correction arises from the presence of the $\Phi_{\pm}$ channel in the OPE of $\varphi$ and $\bar{\varphi}$ and gives a power of $n^{-1}$; conversely, from the fact that the structure constants $C_{\Phi_{\pm}\varphi}^{\bar{\varphi}},\: C_{\Phi_{\pm}\bar{\varphi}}^{\varphi}$ are different from zero, one obtains that the cut-off dependence vanishes as $n^{-6/5}$ for $\Phi_{\pm}$.
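In practice the extrapolation amounts to a simple least-squares fit of the leading power law in (\[eq:cutcorr\]). A minimal sketch follows (the fits in the paper were performed with Mathematica 9, so this Python version is only an illustration with dummy data):

```python
# Fit Q(n) = Q(inf) + K * n**(-alpha) to TCSA data at several cut levels;
# alpha is fixed by the OPE (alpha = 1 for varphi, varphi-bar; 6/5 for Phi_pm).
# The values below are dummy numbers, only illustrating the interface.
import numpy as np
from scipy.optimize import curve_fit

def extrapolate(levels, values, alpha):
    model = lambda n, q_inf, k: q_inf + k * n ** (-alpha)
    (q_inf, k), cov = curve_fit(model, np.asarray(levels, float), np.asarray(values, float))
    return q_inf, np.sqrt(cov[0, 0])       # central value and 1-sigma error

levels = [12, 14, 16, 18]                  # the TCSA cut levels used in the paper
values = [1.270, 1.274, 1.276, 1.278]      # dummy data for the real part of a VEV
print(extrapolate(levels, values, alpha=1.0))
```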
Vacuum expectation values \[sub:Form-factors-and\]
--------------------------------------------------
We would like to compare the expectation values of the fields $\Phi_{\pm}$, $\varphi$, $\bar{\varphi}$ derived in section \[sub:Exact-vacuum-expectation\], to the data extracted from TCSA. Throughout this section, we work at a fixed value of $b=-3+0.5i$. As explained in [@Bajnok:2013], such a value ensures a real spectrum, thereby facilitating the comparison between the energy and momentum data computed from the Bethe-Yang equations and from TCSA. We chose a value of the defect parameter with a small imaginary part in order to avoid the occurrence of degenerate subspaces (corresponding, in the infrared, to particles traveling with opposite rapidities). In the form factor comparison we are interested not only in the eigenvalues but also in the eigenvectors of the TCSA energy and momentum operators. As energy levels might be degenerate at specific volumes, the identification and systematic tracking of states can be problematic. To avoid this we combine the self-adjoint $H$ and $P$ into $H+i\, P$, which has a nondegenerate complex spectrum, and by following its eigenstates we could identify $127$ states (up to four-particle ones) from TCSA.
In the following we present the real and imaginary parts of measured, extrapolated and theoretically calculated form factors. All of them have the same consistent legend: green color shows the real part of measured form factors at cut level 12, 14, 16, 18 $(\bullet,{\scriptstyle \blacksquare},{\scriptstyle \blacklozenge},\blacktriangle)$; red squares show the real part of extrapolated data with confidence intervals; and black line shows the real part of theoretical values. The imaginary parts have the same structure, with colors purple, blue and orange, respectively.
Concerning the confidence intervals, we used Mathematica 9 to fit the leading cut dependence via (\[eq:cutcorr\]). We found that the real and imaginary parts are basically uncorrelated, thus we fitted them separately. On all the figures the confidence level is 95%. The theoretical data can be outside the confidence interval for two reasons. For small volumes the TCSA data are reliable, but the exponentially suppressed vacuum polarization effects of the form factor are no longer negligible. For large volumes the finite volume form factors are reliable, but the sub-leading TCSA cut dependence becomes relevant.
First, we present an example of how the extrapolation procedure works in the case of the defect field $\varphi$. In figure \[cutdep\], we show the TCSA data and the extrapolated value together with the theoretical value for cylinder sizes between $0.2$ and $20$.
![Comparison between the TCSA points at different cuts, the extrapolated values and the theoretical vacuum expectation value of $\varphi$.[]{data-label="cutdep"}](VEVvarphi){width="75.00000%"}
We report below the numerical comparison between the exact expectation values of the operators living on the defect and the ones extrapolated at a large enough volume $mL=15$, such that the exponential finite-size corrections can be safely neglected.
$\varphi$ $\bar{\varphi}$ $\Phi_{+}$
----------------------- ----------- ----------------- ------------
TCSA extrapolated VEV $1.2795$ $1.1529$ $1.2368$
Theoretical VEV $1.2814$ $1.154$ $1.2394$
: Numerical comparison of the TCSA extrapolated and the theoretical values of the vacuum expectation values of different operators at volume $mL=15$.
One particle form factors
--------------------------
We now turn to one-particle form factors, which, in the presence of a defect, already carry a nontrivial dependence on the rapidity of the particle due to the transmission factors from Section \[sub:Summary-of-the\] above.
We can collect extrapolated data from states labeled by different quantization numbers and from different volumes (from $4$ to $20$). The particle rapidity in a given state is determined from the BY equations (\[eq:expdybe\]). From this, it is known how to relate the finite volume matrix elements and the infinite volume form factors, through (\[eq:Finite\_volume\_FF\]).
We now examine the one-particle form factors of the fields $\varphi$ and $\overline{\varphi}$. Following the procedure outlined above, we obtain confirmation of the expected $\theta-$dependence.
{width="45.00000%"}{width="45.00000%"}
A similar analysis for the one particle form factors of the operators $\Phi_{\pm}$ can be found in Appendix B.
Multiparticle form factors
--------------------------
Analogously to the one-particle form factors above, we can check the form factors with two or more particles. However, these generally depend on more than one rapidity. For this reason it is easier to analyze the volume dependence of a form factor for a given state, identified by the quantization numbers of its rapidities in the Bethe-Yang equations (\[eq:expdybe\]). Such lines are identified from the corresponding energy and momentum levels. Below we present some examples for different states. A more exhaustive list of data can be found in Appendix B.
![Left: one-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum number $n_{1}=3$. Right: one-particle form factor of the operator $\bar{\varphi}$ on the state $n_{1}=3$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFPhiplus3 "fig:"){width="45.00000%"}![Left: one-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum number $n_{1}=3$. Right: one-particle form factor of the operator $\bar{\varphi}$ on the state $n_{1}=3$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFvarphibar3 "fig:"){width="45.00000%"}
We remark that the extrapolation fails for very large values of the dimensionless volume $mL$. The reason is that for relevant perturbing fields the dimensionless couplings in the Hamiltonian depend on a positive power of the volume, and therefore perturbation theory is only applicable when the condition $$\frac{4\pi n}{mL}\gg1$$ is satisfied [@Szecsenyi:2013gna]; if this is not the case, it is necessary to evaluate the cutoff dependence to higher orders in perturbation theory, which is a very complicated task and beyond the scope of the present work. These extra terms appear during the extrapolation procedure as systematic errors, and this is why the confidence intervals calculated during the data fits do not always contain the theoretical values (mainly for larger volumes). However, these confidence intervals were kept, because they usually reflect correctly the reliability of the TCSA method.
Diagonal form factors
---------------------
To calculate the diagonal form factors in finite volume we need the connected form factors defined in equation (\[eq:connected\]). We can make use of the identities (\[eq:fidentity\]) and (\[eq:didentity\]) to simplify these expressions. For operators $\Phi_{\pm}$ we get $$F_{2n,c}^{\Phi_{\pm}}\left(\theta_{1},\ldots,\theta_{n}\right)=\langle\Phi\rangle\left(\frac{\sqrt[4]{3}i}{\sqrt{2}v(0)}\right)^{2n}f\left(i\pi\right)^{n}\frac{Q_{n}^{\Phi_{\pm}}(\theta_{1},\ldots,\theta_{n})}{\prod_{j<k}^{n}\left(\sinh^{2}(\theta_{j}-\theta_{k})+\sin^{2}\left(\frac{\pi}{3}\right)\right)}$$ with $Q_{n}^{\Phi_{\pm}}$ given in equation (\[eq:QPhipm\]).
Here we show the one-particle diagonal matrix elements. The multiparticle matrix elements with up to four particles can be found in Appendix B.
![Left: one-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum number $n_{1}=1$. Right: one-particle diagonal form factor of the operator $\varphi$ on the state $n_{1}=1$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFPhiplus1 "fig:"){width="48.00000%"}![Left: one-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum number $n_{1}=1$. Right: one-particle diagonal form factor of the operator $\varphi$ on the state $n_{1}=1$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFvarphi1 "fig:"){width="48.00000%"}
Expectation values in finite volume/temperature \[sec:Expectation-values-in\]
=============================================================================
In this section we derive the exact finite volume vacuum expectation value of our fields. We follow the derivation in [@Pozsgay:2010xd] and indicate the slight modifications only. We start with an operator, $\mathcal{O}$, localized in the bulk at $x=-1$ and $y=0$. The defect is localized at $x=0$ and the size of our system is $L$. As usual we do the calculation on the torus by exchanging the role of space and time: $$_{L}\langle0\vert\mathcal{O}(-1,0)\vert0\rangle_{L}=\lim_{R\to\infty}\frac{\mbox{Tr}(\mathcal{O}(0,-1)e^{-H(R)L}D)}{\mbox{Tr}(e^{-H(R)L}D)}$$ In the mirror (exchanged) theory the defect acts like an operator, which we denote by $D$. As the location of the defect operator is irrelevant we can follow the derivation of the defect TBA equation [@Bajnok:2007jg] to redefine the Hamiltonian to be $$\tilde{H}(R)=H(R)-\frac{1}{L}\log D.$$ This will have no other effect than to change the dispersion relation as $$m\cosh\theta\to m\cosh\theta-\frac{1}{L}\log T_{+}\left(\frac{i\pi}{2}-\theta\right)$$ With these changes the TBA equation takes the form $$\tilde{\epsilon}(\theta)=mL\cosh\theta-\log T_{+}\left(\frac{i\pi}{2}-\theta\right)-\int_{-\infty}^{\infty}\frac{d\theta'}{2\pi}\varphi(\theta-\theta')\log(1+e^{-\tilde{\epsilon}(\theta')})\label{eq:DTBA-1}$$ giving the ground state energy as $$E_{0}^{TBA}(L)=-m\int_{-\infty}^{\infty}\frac{d\theta}{2\pi}\cosh(\theta)\log(1+e^{-\tilde{\epsilon}(\theta)})\label{eq:DTBAE-1}$$ The only difference compared to the periodic situation is that the pseudo energy has changed. Thus the derivation of [@Pozsgay:2010xd] leads to the result $$_{L}\langle0\vert\mathcal{O}\vert0\rangle_{L}=\sum_{n=0}^{\infty}\frac{1}{n!}\prod_{j=1}^{n}\int\frac{d\theta_{j}}{2\pi}\frac{1}{1+e^{\tilde{\epsilon}(\theta_{j})}}F_{2n}^{C}(\theta_{1},\dots,\theta_{n})\label{eq:DLM-1}$$ for the exact finite volume vacuum expectation value of any bulk operator. Clearly this result holds for the two limits $\Phi_{\pm}$ of the bulk field. The calculation of the exact finite volume vacuum expectation values of the defect fields $\varphi$ and $\bar{\varphi}$ follows from the exact ground state energy and momentum as we explained at the end of section 2.2.2.
As far as the fields $\Phi_{\pm}$ are concerned, the knowledge of the form factors of the trace of the bulk stress tensor [@Zamolodchikov:1990bk] is sufficient to obtain the connected multi-particle form factors.
The numerical comparison goes in two steps: first, we solve eq. (\[eq:DTBA-1\]) iteratively for the pseudo energy, starting from a trial solution containing only the hyperbolic term in large volume and decreasing the volume gradually. Then, we compute the series (\[eq:DLM-1\]) with the form factors given above. In Figure \[fig:DLM\], we show a comparison between the extrapolated one-point function of the field $\Phi_{\pm}$ and the corresponding defect LeClair-Mussardo series, up to three-particle contributions. Note that the outcome of the procedure is different from the bulk case.
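A minimal sketch of this two-step procedure is given below. It is not the code used for the paper: the TBA kernel, the transmission factor and the connected form factors are passed in as user-supplied functions, i.e. they are assumptions to be filled in from the formulas quoted above.

```python
import math
import numpy as np

def solve_dtba(mL, log_Tp, kernel, th_max=25.0, npts=2001, tol=1e-12):
    """Iterative solution of the defect TBA equation (eq:DTBA-1).

    log_Tp(th): log T_+(i*pi/2 - th) on real rapidities (assumed given).
    kernel(th): the TBA kernel phi(th) (assumed given).
    Returns the rapidity grid and the pseudo energy on it.
    """
    th = np.linspace(-th_max, th_max, npts)
    dth = th[1] - th[0]
    K = kernel(th[:, None] - th[None, :])   # matrix phi(theta - theta')
    eps = mL * np.cosh(th)                  # trial solution: hyperbolic term only
    while True:
        new = mL * np.cosh(th) - log_Tp(th) - (K @ np.log1p(np.exp(-eps))) * dth / (2 * np.pi)
        if np.max(np.abs(new - eps)) < tol:
            return th, new
        eps = new

def dlm_series(th, eps, vev, connected_ff, nmax=3):
    """Truncated defect LeClair-Mussardo series (eq:DLM-1).

    vev: the n = 0 term (infinite volume expectation value);
    connected_ff(*grids): F^C_{2n} on a mesh of n rapidity grids (assumed given).
    For n = 3 a coarser grid than the TBA grid should be used.
    """
    dth = th[1] - th[0]
    w = 1.0 / (1.0 + np.exp(eps))
    total = vev
    for n in range(1, nmax + 1):
        th_grids = np.meshgrid(*([th] * n), indexing="ij")
        w_grids = np.meshgrid(*([w] * n), indexing="ij")
        filling = np.prod(np.stack(w_grids), axis=0)
        total += (dth / (2 * np.pi)) ** n / math.factorial(n) * np.sum(filling * connected_ff(*th_grids))
    return total
```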
![Comparison of the TCSA data (red dots) to the prediction from the series (\[eq:DLM-1\]) up to 3 terms (black curve) and up to 2 terms (black dashed curve) for the finite-volume vacuum expectation value of the fields $\Phi_{\pm}$.\[fig:DLM\]](TBAFiplus){width="50.00000%"}
The numerical values of these points, for more precise comparison, are reported in Table \[tab:DLM\].
$mL$ DLM DTCSA
------ ----------- -----------
$1$ $1.12971$ $1.12944$
$2$ $1.21394$ $1.21368$
$3$ $1.23245$ $1.23216$
$4$ $1.23733$ $1.23705$
$5$ $1.23875$ $1.23851$
$6$ $1.23919$ $1.23901$
$7$ $1.23933$ $1.23925$
$8$ $1.23937$ $1.23942$
$9$ $1.23939$ $1.23957$
$10$   $1.23939$   $1.23972$

  : Numerical comparison of the defect LeClair-Mussardo series (DLM) and the extrapolated TCSA data (DTCSA) shown in Figure \[fig:DLM\].[]{data-label="tab:DLM"}
Conclusions
===========
We developed the theory of finite volume form factors in the presence of integrable defects. Our framework is valid for large volumes and takes into account all polynomial finite size corrections but neglects the exponentially small effects. We expressed these finite volume form factors in terms of the infinite volume form factors and the finite volume density of states, which depends on the scattering and transmission matrices.
We tested these ideas in the Lee-Yang model against the data of the truncated conformal space approach. Within this framework, we numerically diagonalized the Hamiltonian of the finite volume system on a truncated Hilbert space and evaluated the matrix elements of local operators. We performed a systematic comparison: first we compared the vacuum expectation values of all local fields. In so doing we derived exact explicit expressions for the vacuum expectation values of all of the defect fields. We then determined all form factors of the defect fields. We used these results to calculate the two-point functions of defect fields, which we compared to the short distance expansion containing information on the conformal structure constants and the vacuum expectation values. We also compared the finite volume form factors against the TCSA data. Finally, we derived an explicit expression for the exact finite volume vacuum expectation value of any defect operator in terms of the multi-particle form factors and the defect thermodynamic Bethe Ansatz pseudo energy, which we also checked numerically. In all of these comparisons we used a renormalization group-improved version of TCSA, which we adapted to the present case, and found excellent agreement.
Our methods developed for the Lee-Yang model have a much wider applicability and can be directly generalized to any diagonal scattering theory. In particular, the parametrization of the defect form factors in terms of the bulk form factors and extra polynomials should carry over to other models. The generalization of our approach to non-diagonal theories is a non-trivial and rather interesting problem. As the transmission matrix bootstrap program was completed for many purely transmitting defect theories [@Bowcock:2005vs; @Corrigan:2007gt; @Corrigan:2010ph], it would be interesting to formulate and solve their form factor bootstrap program, too.
Here we analyzed only the corrections polynomial in the inverse of the volume to the finite volume form factors. It is a challenging problem to calculate the exponentially small finite size corrections systematically.
Acknowledgments {#acknowledgments .unnumbered}
---------------
We thank OTKA 81461 and Lendulet grants LP2012-18/2012 and LP2012-50/2012 for support.
Exact Form factor solutions\[sec:AppendixA\]
============================================
In this appendix we present all form factors of primary operators. The two limits of the bulk fields $\Phi_{\pm}$ are the simplest as they can be expressed in terms of the bulk form factors. For this reason we recall the bulk form factors first.
Bulk form factors
-----------------
Bulk form factors of the operator $\Phi$ are parametrized as
$$B_{n}\left(\theta_{1},\dots,\theta_{n}\right)=\langle\Phi\rangle H_{n}\prod_{i<j}\frac{f(\theta_{i}-\theta_{j})}{x_{i}+x_{j}}Q_{n}^{bulk}(x_{1},\dots,x_{n})$$
In the Lee-Yang model $\Phi$ is the perturbing operator itself, thus it is proportional to the trace of the energy momentum tensor. As a consequence the form factor must have the form $$\begin{aligned}
Q_{1}^{bulk}(x_{1}) & = & 1\\
Q_{2}^{bulk}(x_{1},x_{2}) & = & \sigma_{1}^{(2)}(x_{1},x_{2})\\
Q_{n}^{bulk}(x_{1},\dots,x_{n}) & = & \sigma_{1}^{(n)}(x_{1},\dots,x_{n})\sigma_{n-1}^{(n)}(x_{1},\dots,x_{n})P_{n}(x_{1},\dots,x_{n})\qquad\mathrm{if}\ n\geq3\end{aligned}$$ where the $P_{n}$ polynomials satisfy the following recurrence relations $$\begin{aligned}
P_{n+2}\left(x,-x,x_{1},\dots,x_{n}\right) & = & \frac{\left(\prod_{i=1}^{n}\left(x+\omega x_{i}\right)\left(x-\bar{\omega}x_{i}\right)-\prod_{i=1}^{n}\left(x-\omega x_{i}\right)\left(x+\bar{\omega}x_{i}\right)\right)}{2x\left(\omega-\bar{\omega}\right)}\times\nonumber \\
& & \left(-1\right)^{n+1}P_{n}(x_{1},\dots,x_{n})\\
P_{n+1}\left(\omega x,\bar{\omega}x,x_{1},\dots,x_{n-1}\right) & = & \prod_{i=1}^{n-1}\left(x+x_{i}\right)P_{n}\left(x,x_{1},\dots,x_{n-1}\right)\end{aligned}$$ The solution of this recursion can be written in terms of a determinant: $$P_{n}\left(x_{1},\dots,x_{n}\right)=\det\Sigma^{(n)}\quad,\qquad\Sigma_{ij}^{(n)}\left(x_{1},\dots,x_{n}\right)=\sigma_{3i-2j+1}^{(n)}(x_{1},\dots,x_{n})$$
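A small symbolic sketch of this determinant representation is given below. The range of the indices $i,j$ is not spelled out above; taking it to be $1\le i,j\le n-3$ (so that $P_{3}=1$) is an assumption of this sketch that should be checked against the recursion relations.

```python
from itertools import combinations
from functools import reduce
import operator
from sympy import Matrix, symbols, expand

def sigma(xs, k):
    """Elementary symmetric polynomial sigma_k, with sigma_k = 0 outside 0 <= k <= n."""
    if k < 0 or k > len(xs):
        return 0
    if k == 0:
        return 1
    return sum(reduce(operator.mul, c, 1) for c in combinations(xs, k))

def P(xs, dim=None):
    """P_n = det Sigma^(n), Sigma_ij = sigma_{3i-2j+1}(x_1,...,x_n).

    dim is the assumed matrix size n-3 (placeholder convention, see text)."""
    n = len(xs)
    if dim is None:
        dim = n - 3
    if dim <= 0:
        return 1
    M = Matrix(dim, dim, lambda i, j: sigma(xs, 3 * (i + 1) - 2 * (j + 1) + 1))
    return M.det()

# Example: P_4 and P_5 as explicit symmetric polynomials
x = list(symbols('x1:6'))
print(expand(P(x[:4])), expand(P(x[:5])))
```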
Form factors of $\Phi_{\pm}$
----------------------------
Based on our previous discussion the form factor of $\Phi_{-}$ is the bulk form factor: $$F_{n}^{\Phi_{-}}(\theta_{1},\dots,\theta_{n})=B_{n}(\theta_{1},\dots,\theta_{n})$$ while the form factor of $\Phi_{+}$ is of the form $$F_{n}^{\Phi_{+}}(\theta_{1},\dots,\theta_{n})=\prod_{i=1}^{n}T_{-}(\theta_{i})B_{n}(\theta_{1},\dots,\theta_{n})$$ Comparing the parametrization of the bulk and defect form factors we can conclude that $$\begin{aligned}
Q_{n}^{\Phi_{\pm}}(x_{1},\dots,x_{n}) & = & \prod_{i=1}^{n}(\nu x_{i}+\bar{\nu}x_{i}^{-1}\mp\sqrt{3})Q_{n}^{bulk}(x_{1},\dots,x_{n})\\
& = & \prod_{i=1}^{n}(\nu x_{i}+\bar{\nu}x_{i}^{-1}\mp\sqrt{3})\sigma_{1}^{(n)}(x_{1},\dots,x_{n})\sigma_{n-1}^{(n)}(x_{1},\dots,x_{n})P_{n}(x_{1},\dots,x_{n})\nonumber \end{aligned}$$
Form factors of $\bar{\varphi}$ and $\varphi$
---------------------------------------------
Let’s focus on the form factor solutions for a generic defect field. Calculating explicitly the first few $Q$ polynomials we found for $\varphi$: $$\begin{aligned}
Q_{1}^{\varphi} & = & \nu\sigma_{1}\\
Q_{2}^{\varphi} & = & \sigma_{1}+\nu^{2}\sigma_{1}\sigma_{2}\nonumber \\
Q_{3}^{\varphi} & = & \bar{\nu}\sigma_{1}^{2}+\nu\sigma_{1}^{2}\sigma_{2}+\nu^{3}\sigma_{1}\sigma_{2}\sigma_{3}\nonumber \\
Q_{4}^{\varphi} & = & \bar{\nu}^{2}\sigma_{1}^{2}\sigma_{2}+\sigma_{1}^{2}\sigma_{2}^{2}+\nu^{2}\sigma_{1}\sigma_{2}^{2}\sigma_{3}+\nu^{4}\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}\nonumber \\
Q_{5}^{\varphi} & = & \bar{\nu}^{3}\left(\sigma_{1}^{2}\sigma_{2}\sigma_{3}-\sigma_{1}^{2}\sigma_{5}\right)+\bar{\nu}\left(\sigma_{1}^{2}\sigma_{2}^{2}\sigma_{3}-\sigma_{1}^{2}\sigma_{2}\sigma_{5}\right)+\nu\left(\sigma_{1}\sigma_{5}^{2}+\sigma_{1}\sigma_{2}^{2}\sigma_{3}^{2}-2\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{5}\right)+\nonumber \\
& & +\nu^{3}\left(\sigma_{1}\sigma_{2}\sigma_{3}^{2}\sigma_{4}-\sigma_{1}\sigma_{3}\sigma_{4}\sigma_{5}\right)+\nu^{5}\left(\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}\sigma_{5}-\sigma_{1}\sigma_{4}\sigma_{5}^{2}\right)\nonumber \end{aligned}$$ where at each level $n=1,\dots,5$ we abbreviated $\sigma_{k}^{(n)}$ by $\sigma_{k}$.
These explicit solutions suggest parametrizing the general form factors as $$Q_{n}\left(x_{1},\dots,x_{n}\right)=R\left(\sigma_{1}^{\left(n\right)},\bar{\sigma}_{1}^{\left(n\right)}\right)\sigma_{n}^{\left(n\right)}P_{n}S_{n}\label{eq:Qansatz}$$ for $n\geq3$, where $R$ is some polynomial which depends on the operator. Plugging this expression into the recursive relations we obtain the recursions for $S_{n}$: $$\begin{aligned}
S_{n+1}\left(\omega x,\bar{\omega}x,x_{1},\dots,x_{n-1}\right) & = & \left(\nu x+\bar{\nu}\bar{x}\right)S_{n}\left(x,x_{1},\dots,x_{n-1}\right)\nonumber \\
S_{n+2}\left(x,-x,x_{1},\dots,x_{n}\right) & = & -\left(x^{2}\nu^{2}-1+x^{-2}\nu^{-2}\right)S_{n}\left(x_{1},\dots,x_{n}\right)\end{aligned}$$ The first few solutions are: $$\begin{aligned}
S_{3} & = & \left(\nu^{-2}\bar{\sigma}_{2}+\bar{\sigma}_{1}\sigma_{1}+\nu^{2}\sigma_{2}\right)\\
S_{4} & = & \left(\nu^{-3}\bar{\sigma}_{3}+\nu^{-1}\bar{\sigma}_{2}\sigma_{1}+\nu\bar{\sigma}_{1}\sigma_{2}+\nu^{3}\sigma_{3}\right)\nonumber \\
S_{5} & = & \left(\nu^{-4}\bar{\sigma}_{4}+\nu^{-2}\bar{\sigma}_{3}\sigma_{1}+\bar{\sigma}_{2}\sigma_{2}+\nu^{2}\bar{\sigma}_{1}\sigma_{3}+\nu^{4}\sigma_{4}\right)-1\nonumber \\
S_{6} & = & \left(\nu^{-5}\bar{\sigma}_{5}+\nu^{-3}\bar{\sigma}_{4}\sigma_{1}+\nu^{-1}\bar{\sigma}_{3}\sigma_{2}+\nu\bar{\sigma}_{2}\sigma_{3}+\nu^{3}\bar{\sigma}_{1}\sigma_{4}+\nu^{5}\sigma_{5}\right)-\left(\nu^{-1}\bar{\sigma}_{1}+\nu\sigma_{1}\right)\nonumber \\
S_{7} & = & \left(\nu^{-6}\bar{\sigma}_{6}+\nu^{-4}\bar{\sigma}_{5}\sigma_{1}+\nu^{-2}\bar{\sigma}_{4}\sigma_{2}+\bar{\sigma}_{3}\sigma_{3}+\nu^{2}\bar{\sigma}_{2}\sigma_{4}+\nu^{4}\bar{\sigma}_{1}\sigma_{5}+\nu^{6}\sigma_{6}\right)-\nonumber \\
& & -\left(\nu^{-2}\bar{\sigma}_{2}+\bar{\sigma}_{1}\sigma_{1}+\nu^{2}\sigma_{2}\right)-1\nonumber \end{aligned}$$ These suggest that $S_{n}$’s are Laurent-polynomials, which are invariant under the simultaneous exchanges of $x\leftrightarrow x^{-1}$ and $\nu\leftrightarrow\nu^{-1}$. In $S_{n}$ the coefficient of $\nu^{k}$ is a homogeneous symmetric Laurent-polynomial of degree $k$, for $k\in\left\{ -n+1,-n+3,\dots,n-1\right\} .$ Let us define then $$\tau_{k}\left(x_{1},\dots,x_{n}\right)=\sum_{l\in\mathbb{Z}}\nu^{2l-k}\bar{\sigma}_{k-l}\left(x_{1},\dots,x_{n}\right)\sigma_{l}\left(x_{1},\dots,x_{n}\right)$$ Here we sum over all integers $l$, but note that this sum is always finite. These Laurent-polynomials satisfy the following useful identities: $$\begin{aligned}
\tau_{n+1}\left(x_{1},\dots,x_{n}\right) & = & \tau_{n-1}\left(x_{1},\dots,x_{n}\right)\\
\tau_{k+2}\left(x,-x,x_{1},\dots,x_{n}\right) & = & \tau_{k+2}\left(x_{1},\dots,x_{n}\right)+\tau_{k-2}\left(x_{1},\dots,x_{n}\right)\nonumber \\
& & -\left(x^{2}\nu^{2}+x^{-2}\nu^{-2}\right)\tau_{k}\left(x_{1},\dots,x_{n}\right)\\
\tau_{k}\left(x,x_{1},\dots,x_{n}\right) & = & (x\nu+x^{-1}\nu^{-1})\tau_{k-1}\left(x_{1},\dots,x_{n}\right)\nonumber \\
& & +\tau_{k-2}\left(x_{1},\dots,x_{n}\right)+\tau_{k}(x_{1},\dots,x_{n})\\
\tau_{k+1}\left(x\omega,x\omega^{-1},x_{1},\dots,x_{n-1}\right) & = & \left(x^{-1}\nu^{-1}+x\nu\right)\tau_{k}\left(x,x_{1},\dots,x_{n-1}\right)-\tau_{k-1}\left(x_{1},\dots,x_{n-1}\right)\nonumber \\
& & +\tau_{k-3}\left(x_{1},\dots,x_{n-1}\right)+\tau_{k+1}\left(x_{1},\dots,x_{n-1}\right)\end{aligned}$$ Using these properties it is easy to show by induction that the form factor solution is $$\begin{aligned}
S_{n}\left(x_{1},\dots,x_{n}\right) & = & \tau_{n-1}\left(x_{1},\dots,x_{n}\right)-\left(\tau_{n-5}\left(x_{1},\dots,x_{n}\right)+\tau_{n-7}\left(x_{1},\dots,x_{n}\right)\right)\\
& & +\left(\tau_{n-11}\left(x_{1},\dots,x_{n}\right)+\tau_{n-13}\left(x_{1},\dots,x_{n}\right)\right)-\dots\nonumber \\
& = & \tau_{n-1}\left(x_{1},\dots,x_{n}\right)+\sum_{m\geq1}\left(-1\right)^{m}\left(\tau_{n+1-6m}\left(x_{1},\dots,x_{n}\right)+\tau_{n-1-6m}\left(x_{1},\dots,x_{n}\right)\right)\nonumber \end{aligned}$$ The operator-dependent prefactors for $\varphi$ and $\bar{\varphi}$ in (\[eq:Qansatz\]) are $$R^{\bar{\varphi}}(\sigma_{1},\bar{\sigma}_{1})=\bar{\nu}\bar{\sigma}_{1}\qquad;\qquad R^{\varphi}(\sigma_{1},\bar{\sigma}_{1})=\nu\sigma_{1}$$
Form factors of descendant operators
------------------------------------
Descendant operators correspond to the kernels of the recursion equations. These kernels are the same as for the bulk theory: at level $n$ they are $$K_{n}^{kin}(x_{1},\dots,x_{n})=\prod_{1\leq i<j\leq n}(x_{i}+x_{j})\quad;\qquad K_{n}^{dyn}(x_{1},\dots,x_{n})=\prod_{1\leq i<j\leq n}(x_{i}^{2}+x_{j}^{2}+x_{i}x_{j})$$ The common kernel is $$K_{n}(x_{1},\dots,x_{n})=K_{n}^{kin}(x_{1},\dots,x_{n})K_{n}^{dyn}(x_{1},\dots,x_{n})$$ Adding formally $K_{1}=1$, all solutions of the form factor equations originate from a top representative at level $n$ of the form $$\sigma_{1}^{a_{1}}\sigma_{2}^{a_{2}}\dots\sigma_{n}^{a_{n}}K_{n}\qquad;\qquad a_{1},\dots,a_{n-1}\in\mathbb{N}\quad,\quad a_{n}\in\mathbb{Z}$$ but it is a highly nontrivial task to relate them to the space of local defect operators.
Numerical Data
==============
In this appendix we extensively present our numerical data.
Short distance expansion of the two point function
--------------------------------------------------
In this subsection we show the real and imaginary parts of various correlation functions. The short distance CFT expansion is shown with continuous lines, while the form factor expansion with dots. In Figure \[fig:PhipPhim\] we compare $$\langle\Phi_{-}(r)\Phi_{+}(0)\rangle\sim C_{\Phi_{-}\Phi_{+}}^{I}r^{4/5}+C_{\Phi_{-}\Phi_{+}}^{\Phi_{+}}\langle\Phi_{+}\rangle r^{2/5}+C_{\Phi_{-}\Phi_{+}}^{\Phi_{-}}\langle\Phi_{-}\rangle r^{2/5}+C_{\Phi_{-}\Phi_{+}}^{\varphi}\langle\varphi\rangle r^{3/5}+C_{\Phi_{-}\Phi_{+}}^{\bar{\varphi}}\langle\bar{\varphi}\rangle r^{3/5}$$
where $C_{\Phi_{-}\Phi_{+}}^{I}=(1+\beta^{-2})$, $C_{\Phi_{-}\Phi_{+}}^{\varphi}=-i\alpha\eta^{-2}\sqrt{\sqrt{5}(1+\beta^{-2})}$, $C_{\Phi_{-}\Phi_{+}}^{\bar{\varphi}}=i\alpha\eta^{2}\sqrt{\sqrt{5}(1+\beta^{-2})}$, $C_{\Phi_{-}\Phi_{+}}^{\Phi_{+}}=C_{\Phi_{-}\Phi_{+}}^{\Phi_{-}}=\alpha^{2}/\beta$
![$\Phi_{-}\Phi_{+}$ correlation function: blue and purple dots show the real and imaginary part calculated from the form factor expansion up to two particle terms, while black and green curves the first order CFT perturbation results\[fig:PhipPhim\]](PhiminusPhiplus){width="70.00000%"}
Also, in Figure \[fig:Phimpertphi\], we present the correlator $$\langle\Phi_{-}(r)\varphi(0)\rangle\sim C_{\Phi_{-}\varphi}^{\Phi_{+}}\langle\Phi_{+}\rangle r^{1/5}+C_{\Phi_{-}\varphi}^{\Phi_{-}}\langle\Phi_{-}\rangle r^{1/5}+C_{\Phi_{-}\varphi}^{\bar{\varphi}}\langle\bar{\varphi}\rangle r^{2/5}$$ with $C_{\Phi_{-}\varphi}^{\Phi_{-}}=\frac{\alpha}{2}(\beta+\beta^{-1}-\frac{i}{\sqrt[4]{5}})$, $C_{\Phi_{-}\varphi}^{\Phi_{+}}=\frac{\alpha\beta}{2}(1+(\beta-\beta^{-1})\frac{i}{\sqrt[4]{5}})$, $C_{\Phi_{-}\varphi}^{\bar{\varphi}}=-\frac{1}{\eta\beta}$
![$\Phi_{+}\bar{\varphi}\equiv\Phi_{-}\varphi$ correlation function: blue and purple dots show the real and imaginary part calculated from the form factor expansion up to two particle terms, while black and green curves the first order CFT perturbation results\[fig:Phimpertphi\]](PhiplusVarphibar){width="70.00000%"}
Figure \[fig:pertphipertphi\], finally, exhibits the correlation functions $$\langle\varphi(r)\varphi(0)\rangle\sim C_{\varphi\varphi}^{I}r^{2/5}+C_{\varphi\varphi}^{\varphi}\langle\varphi\rangle r^{1/5}\qquad\quad\langle\bar{\varphi}(r)\bar{\varphi}(0)\rangle\sim C_{\bar{\varphi}\bar{\varphi}}^{I}r^{2/5}+C_{\bar{\varphi}\bar{\varphi}}^{\bar{\varphi}}\langle\bar{\varphi}\rangle r^{1/5}$$ where $C_{\varphi\varphi}^{I}=C_{\bar{\varphi}\bar{\varphi}}^{I}=-1$ and $C_{\varphi\varphi}^{\varphi}=C_{\bar{\varphi}\bar{\varphi}}^{\bar{\varphi}}=\frac{\alpha}{\beta}$. In all of these formulas, the vacuum expectation values of defect operators from section \[sub:Exact-vacuum-expectation\] are used.
![$\varphi\varphi$ correlation function: blue and purple dots show the real and imaginary part calculated by form factor expansion up to two particle terms, while black and green curves the first order CFT perturbation results\[fig:pertphipertphi\]](VarphiVarphi){width="70.00000%"}
One particle form factors
-------------------------
In this subsection we present data for the various one-particle form factors. We start with the one-particle form factors of the fields $\Phi_{\pm}$. Data are collected from various Bethe-Yang quantization numbers and volumes.
{width="45.00000%"}{width="45.00000%"}
\[fig:PhipPhimFF\]
Multiparticle form factors
--------------------------
In this subsection we present the comparison of data for multiparticle form factors.
![Left: two-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-2,\, n_{2}=2$. Right: two-particle form factor of the operator $\varphi$ on the state $n_{1}=-2,\, n_{2}=2$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFPhiplus-22 "fig:"){width="45.00000%"}![Left: two-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-2,\, n_{2}=2$. Right: two-particle form factor of the operator $\varphi$ on the state $n_{1}=-2,\, n_{2}=2$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFvarphi-22 "fig:"){width="45.00000%"}
![Left: three-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-2,\, n_{2}=0,\, n_{3}=1$. Right: three-particle form factor of the operator $\varphi$ on the state $n_{1}=-2,\, n_{2}=0,\, n_{3}=1$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFPhiplus-201 "fig:"){width="45.00000%"}![Left: three-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-2,\, n_{2}=0,\, n_{3}=1$. Right: three-particle form factor of the operator $\varphi$ on the state $n_{1}=-2,\, n_{2}=0,\, n_{3}=1$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFvarphi-201 "fig:"){width="45.00000%"}
![Left: four-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-1,\, n_{2}=0,\, n_{3}=1,\, n_{4}=3$. Right: four-particle form factor of the operator $\varphi$ on the state $n_{1}=-1,\, n_{2}=0,\, n_{3}=1,\, n_{4}=3$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFPhiplus-1013 "fig:"){width="45.00000%"}![Left: four-particle form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-1,\, n_{2}=0,\, n_{3}=1,\, n_{4}=3$. Right: four-particle form factor of the operator $\varphi$ on the state $n_{1}=-1,\, n_{2}=0,\, n_{3}=1,\, n_{4}=3$. The solid lines are computed from formula (\[eq:Finite\_volume\_FF\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](FFvarphi-1013 "fig:"){width="45.00000%"}
Diagonal form factors
---------------------
This subsection contains some data for diagonal form factors with various particle numbers.
![Left: two-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=0,\, n_{2}=1$. Right: two-particle diagonal form factor of the operator $\varphi$ on the state $n_{1}=0,\, n_{2}=1$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFPhiplus01 "fig:"){width="48.00000%"}![Left: two-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=0,\, n_{2}=1$. Right: two-particle diagonal form factor of the operator $\varphi$ on the state $n_{1}=0,\, n_{2}=1$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFvarphi01 "fig:"){width="48.00000%"}
![Left: three-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-3,\, n_{2}=0,\, n_{3}=1$. Right: three-particle diagonal form factor of the operator $\varphi$ on the state $n_{1}=-3,\, n_{2}=0,\, n_{3}=1$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFPhiplus-301 "fig:"){width="48.00000%"}![Left: three-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=-3,\, n_{2}=0,\, n_{3}=1$. Right: three-particle diagonal form factor of the operator $\varphi$ on the state $n_{1}=-3,\, n_{2}=0,\, n_{3}=1$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFvarphi-301 "fig:"){width="48.00000%"}
![Left: four-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=0,\, n_{2}=1,\, n_{3}=2,\, n_{4}=3$. Right: four-particle diagonal form factor of the operator $\bar{\varphi}$ on the state $n_{1}=0,\, n_{2}=1,\, n_{3}=2,\, n_{4}=3$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFPhiplus0123 "fig:"){width="48.00000%"}![Left: four-particle diagonal form factor of the operator $\Phi_{+}$ on the state labeled by quantum numbers $n_{1}=0,\, n_{2}=1,\, n_{3}=2,\, n_{4}=3$. Right: four-particle diagonal form factor of the operator $\bar{\varphi}$ on the state $n_{1}=0,\, n_{2}=1,\, n_{3}=2,\, n_{4}=3$. The solid lines are computed from formula (\[eq:diaggenrulesaleur\]), while the dots with confidence bars are obtained by extrapolated TCSA data. (The consistent legend is described in the beginning of subsection \[sub:Form-factors-and\])](DFFvarphibar0123 "fig:"){width="48.00000%"}
[10]{}
Jean Avan and Anastasia Doikou. . , 1211:008, 2012.
Z. Bajnok and A. George. . , A21:1063–1078, 2006.
Z. Bajnok and Zs. Simon. . , B802:307–329, 2008.
Zoltan Bajnok and Omar el Deeb. . , B832:500–519, 2010.
Zoltan Bajnok, Laszlo Hollo, and Gerard Watts. , July 2013.
P. Bowcock, Edward Corrigan, and C. Zambon. . , 0401:056, 2004.
P. Bowcock, Edward Corrigan, and C. Zambon. . , 0508:023, 2005.
E. Corrigan and C. Zambon. . , B848:545–577, 2011.
Edward Corrigan and C. Zambon. . , 0707:001, 2007.
G. Delfino, G. Mussardo, and P. Simonetti. . , B432:518–550, 1994.
Anastasia Doikou and Nikos Karaiskos. . , B867:872–886, 2013.
P. Dorey, M. Pillin, R. Tateo, and G.M.T. Watts. . , B594:625–659, 2001.
Giovanni Feverati, Kevin Graham, Paul A. Pearce, Gabor Zs. Toth, and Gerard Watts. . 2006.
Philip Giokas and Gerard Watts. . 2011.
Robert M. Konik and Yury Adamov. . , 98:147205, 2007.
M. Kormos and G. Takacs. . , B803:277–298, 2008.
M. Luscher. . , 104:177, 1986.
M. Luscher. . , 105:153–188, 1986.
Martin Luscher. . , B354:531–578, 1991.
L. Maiani and M. Testa. . , B245:585–590, 1990.
B. Pozsgay. . , B802:435–457, 2008.
B. Pozsgay. . 2013.
B. Pozsgay and G. Takacs. . , B788:167–208, 2008.
B. Pozsgay and G. Takacs. . , B788:209–251, 2008.
Balazs Pozsgay. . , 1101:P01011, 2011.
I.M. Szecsenyi, G. Takacs, and G.M.T. Watts. . , 1308:094, 2013.
G. Takacs. . , 1111:113, 2011.
V.P. Yurov and A.B. Zamolodchikov. . , A5:3221–3246, 1990.
A.B. Zamolodchikov. . , B342:695–720, 1990.
A.B. Zamolodchikov. . , B348:619–641, 1991.
[^1]: To match with the TCSA calculation we choose a different normalization for $\varphi$ and $\bar{\varphi}$ than in [@Bajnok:2009hp]
|
---
abstract: |
Charged particle transverse momentum ($p_{\rm T}$) spectra have been measured by CMS for pp and PbPb collisions at the same $\sqrt{s_{_{\rm NN}}}=2.76$ TeV collision energy per nucleon pair. Calorimeter-based jet triggers are employed to enhance the statistical reach of the high-$p_{\rm T}$ measurements. The nuclear modification factor ($R_{\rm AA}$) is obtained in bins of collision centrality for the PbPb data sample by dividing by the measured pp reference spectrum. In the range $p_{\rm T} = 5-10$ GeV/c, the charged particle yield in the most central PbPb collisions is suppressed by up to a factor of 7. At higher $p_{\rm T}$, this suppression is significantly reduced, approaching a factor of 2 for particles with $p_{\rm T} = 40 - 100$ GeV/c.
address: 'Massachusetts Institute of Technology, Cambridge, MA 02139, USA'
author:
- Krisztián Krajczár on behalf of the CMS Collaboration
title: 'Measurement of charged particle $R_{\rm AA}$ at high $p_{\rm T}$ in PbPb collisions at $\sqrt{s_{_{\rm NN}}}=2.76$ TeV with CMS'
---
Nuclear modification factor, particle suppression, CMS, LHC
Introduction
============
The inclusive charged particle $p_{\rm T}$ spectrum in nucleus-nucleus (AA) collisions is an important tool for studying high-$p_{\rm T}$ particle suppression in the dense QCD medium produced in high-energy AA collisions [@DdE:JetQuenching]. The suppression (or enhancement) of high-$p_{\rm T}$ particles can be quantified by the ratio of charged particle $p_{\rm T}$ spectra in AA collisions to those in pp collisions scaled by the number of binary nucleon-nucleon collisions ($N_{\rm coll}$), known as the nuclear modification factor $R_{\rm AA}$ [@DdE:JetQuenching]:
$$R_{\rm AA}(p_{\rm T}) = \frac{1}{T_{\rm AA}}\frac{d^{2}N^{\rm AA}/dp_{\rm T}d\eta}{d^{2}\sigma^{NN}/dp_{\rm T}d\eta},
\label{eqn:def_raa}$$
where $T_{\rm AA} = \langle N_{\rm coll} \rangle /\sigma^{\rm NN}_{\rm inel}$ can be calculated from a Glauber model accounting for the nuclear collision geometry [@Glauber].
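For concreteness, the binned version of this definition amounts to the following minimal sketch (illustrative only; names and units are placeholders):

```python
import numpy as np

def nuclear_modification_factor(yield_aa, cross_pp, n_coll_avg, sigma_nn_inel):
    """R_AA per p_T bin, following the definition above.

    yield_aa      : per-event d^2N^AA / dp_T deta, binned in p_T
    cross_pp      : d^2sigma^NN / dp_T deta in the same bins
    n_coll_avg    : <N_coll> from the Glauber model for the centrality class
    sigma_nn_inel : inelastic nucleon-nucleon cross section
    All inputs are assumed to be given in mutually consistent units.
    """
    t_aa = n_coll_avg / sigma_nn_inel      # nuclear overlap function T_AA
    return np.asarray(yield_aa) / (t_aa * np.asarray(cross_pp))
```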
Data samples and analysis procedure
===================================
The measurement presented here is based on $\sqrt{s_{_{NN}}}=2.76$ TeV PbPb data samples corresponding to integrated luminosities of 7 $\mathrm{\mu b}^{-1}$ and 150 $\mathrm{\mu b}^{-1}$, collected by the CMS experiment in 2010 and 2011, respectively. The pp reference spectrum measured at the same nucleon-nucleon collision energy corresponds to an integrated luminosity of 230 $\mathrm{\mu b}^{-1}$.
A detailed description of the CMS detector can be found in Ref. [@JINST]. The central feature of the CMS apparatus is a superconducting solenoid, providing a magnetic field of 3.8 T. Immersed in the magnetic field are the silicon pixel and strip tracker, which are designed to provide a transverse momentum resolution of about 0.7 (2.0)% for 1 (100) GeV/c charged particles at normal incidence, the lead-tungstate electromagnetic calorimeter, the brass/scintillator hadron calorimeter, and the gas ionization muon detectors.
In this analysis the coincidence signals of the beam scintillator counters ($3.23<|\eta|<4.65$) or the hadron forward calorimeters (HF; $2.9<|\eta|<5.2$) were used for triggering on minimum bias events. In order to extend the statistical reach of the $p_{\rm T}$ spectra, single-jet triggers with calibrated transverse energy thresholds were applied. The collision event centrality, specified as a fraction of the total inelastic cross section, is determined from the event-by-event total energy deposition in the HF calorimeters.
Results
=======
The inclusive charged particle invariant differential yield averaged over the pseudorapidity $|\eta|<1$ in pp collisions is shown in Fig. \[fig:spectra\](a) [@CMS_raa]. Also shown are the ratios of the data to various generator-level predictions from the <span style="font-variant:small-caps;">pythia</span> MC [@pythia]. The PbPb spectrum is shown in Fig. \[fig:spectra\](b) [@CMS_raa] for six centrality bins compared to the measured pp reference spectrum scaled by $T_{\rm AA}$. By comparing the PbPb measurements to the dashed lines representing the scaled pp reference spectrum, it is clear that the charged particle spectrum is strongly suppressed in central PbPb events compared to pp, with the most pronounced suppression at around 5–10 GeV/c.
The computed nuclear modification factor $R_{\rm AA}$ is shown in Fig. \[fig:raa\] [@CMS_raa]. The yellow boxes around the points show the systematic uncertainties, including those from the pp reference spectrum. An additional systematic uncertainty from the $T_{\rm AA}$ normalization, common to all points, is displayed as the shaded band around unity. In the case of the peripheral 70–90% centrality bin, a moderate suppression of about a factor of 2 is observed at low $p_{\rm T}$, with $R_{\rm AA}$ rising slightly with increasing transverse momentum. The suppression becomes more pronounced with increasing collision centrality. In the most central (0–5%) bin, $R_{\rm AA}$ reaches a minimum value of about 0.13 at $p_{\rm T}$ = 6–7 GeV/c, corresponding to a suppression factor of 7. At higher $p_{\rm T}$, the value of $R_{\rm AA}$ rises, approaching roughly a suppression factor of 2 between 40 and 100 GeV/c.
{width="49.00000%"} {width="49.00000%"}
{width="98.00000%"}
Acknowledgements
================
The author wishes to thank the Hungarian Scientific Research Fund (K 81614 and NK 81447) and the Swiss National Science Foundation (128079) for their support.
[10]{}
D. d’Enterria, in [*Springer Materials - The Landolt-Börnstein Database*]{}, vol. 23: Relativistic Heavy Ion Physics, ch. 6.4, 2010
B. Alver et al., [*Phys. Rev. C*]{} **77** (2008) 014906
CMS Collaboration, [*JINST*]{} **3** (2008) S08004
CMS Collaboration, [*Eur. Phys. J. C*]{} **72** (2012) 1945
CMS Collaboration, [*JHEP*]{} **08** (2011) 086
T. Sjöstrand et al., [*JHEP*]{} **05** (2006) 026
|
---
address:
- |
Faculty of Mathematics and Physics, Charles University\
Sokolovská 83, Praha 8 – Karlín, CZ 18675, Czech Republic
- |
Institute of Applied Mathematics, Faculty of Mathematics and Computer Sciences, Heidelberg University\
Im Neuenheimer Feld 205, Heidelberg, DE 69120, Germany
- |
Faculty of Mathematics and Physics, Charles University\
Sokolovská 83, Praha 8 – Karlín, CZ 18675, Czech Republic
- |
Institute of Applied Mathematics, Faculty of Mathematics and Computer Sciences, Heidelberg University\
Im Neuenheimer Feld 205, Heidelberg, DE 69120, Germany
author:
- Karel Tůma
- Judith Stein
- Vít Průša
- Elfriede Friedmann
bibliography:
- 'vit-prusa.bib'
- 'literature.bib'
title: 'Motion of the vitreous humour in a deforming eye – fluid-structure interaction between a nonlinear elastic solid and a nonlinear viscoelastic fluid'
---
Introduction {#sec:introduction}
============
Outline {#sec:outline}
=======
Problem description {#sec:problem-description}
===================
Constitutive relations {#sec:const-relat}
======================
Full system of governing equations {#sec:full-syst-govern}
==================================
Numerical solution of the fluid-structure interaction problem {#sec:numer-solut-fluid}
=============================================================
Results {#sec:results}
=======
Conclusion {#sec:conclusion}
==========
|
---
abstract: 'The [*Kepler*]{} mission, launched in March 2009, has revolutionized asteroseismology, providing detailed observations of thousands of stars. This has allowed in-depth analysis of stars ranging from compact hot subdwarfs to red giants, and including the detection of solar-like oscillations in hundreds of stars on or near the main sequence. Here I mainly consider solar-like oscillations in red giants, where [*Kepler*]{} observations are yielding results of a perhaps unexpected richness. In addition to giving a brief overview of the observational and numerical results for these stars, I present a simple analysis which captures some of the properties of the observed frequencies.'
author:
- 'J[ø]{}rgen Christensen-Dalsgaard$^{1,2}$'
title: '[*Kepler*]{} asteroseismology of red-giant stars'
---
Introduction {#sec:intro}
============
High-precision photometry from space is emerging as an extremely important astrophysical tool, particularly in two areas: asteroseismology and the search for extra-solar planets using the transit technique. The small Canadian MOST satellite [e.g., @Walker2003] has shown the way, providing very interesting results on stellar oscillations but also, remarkably, detecting the transit of a planet of just twice the size of the Earth in orbit around a star of magnitude 6 [@Winn2011]. The CoRoT mission [e.g., @Baglin2009] was designed with the dual purpose of asteroseismology and exo-planet research. The mission, launched in late 2006, has been highly successful in both areas. The results include the detection of solar-like oscillations in several main-sequence stars [e.g., @Michel2008] and the detection of a ‘super-earth’ with a radius around $1.7 \,R_\oplus$ and a mass around $5 \,M_\oplus$ [e.g., @Leger2009; @Queloz2009].
The [*Kepler*]{} mission [@Boruck2009] uses a telescope with a diameter more than three times as large as that of CoRoT. Also, unlike CoRoT which is in a low-Earth orbit, [*Kepler*]{} is in an Earth-trailing heliocentric orbit and hence is able to observe the same field almost continuously throughout the mission. The main goal of the mission is to characterize extra-solar planets, particularly planets of roughly Earth size in the habitable zone. This is done through photometric detection of transits of a planet in front of the host star. The transit detections of exo-planet candidates must be followed up by extensive additional analyses and observations to eliminate possible false positives [e.g., @Brown2003]. To ensure satisfactory detection statistics [*Kepler*]{} continuously observes around 150,000 stars in a field of 110 square degrees in the Cygnus-Lyra region. Further details on the spacecraft and mission were provided by @Koch2010.
The mission was launched in March 2009 and has been spectacularly successful. Amongst the remarkable results have been the statistics of a large number of exo-planet candidates [@Boruck2011], the detection of the first confirmed rocky exo-planet [@Batalh2011] and the detection of a system with 6 planets, with masses determined from the planet-planet interactions [@Lissau2011].
The photometric requirements of the detection of the transit of an Earth-size planet in front of a star like the Sun, with a reduction in intensity of around $10^{-4}$, make the mission ideally suited for asteroseismology. This has led to the establishment of the [*Kepler*]{} Asteroseismic Investigation [KAI, @Christ2008]. Most objects are observed at a cadence of around 30 min, but a subset of up to 512 stars, the selection of which can be changed on a monthly basis, are observed at a one-minute cadence. A large fraction of these short-cadence slots have been made available to asteroseismology of stars near the main sequence, while the long-cadence data can be used for asteroseismology of larger and more evolved stars, particularly red giants. To make full use of the huge amount of data from the mission, the Kepler Asteroseismic Science Consortium (KASC)[^1] has been set up [@Kjelds2010]; at the time of writing (September 2011) this has more than 500 members, organized into 13 working groups dealing with different types of objects. The asteroseismic data are made available to the members of the KASC through the Kepler Asteroseismic Science Operations Centre (KASOC) in Aarhus, which also deals with the management and internal refereeing of the papers resulting from the KAI. In the early part of the mission a large number of stars were observed for typically one month each, in a survey phase designed to characterize the properties of stars in the field and select targets for the now ongoing detailed studies of specific objects.
The analysis of the asteroseismic data benefits greatly from additional information about the stars. Basic data for a very large number of stars in the [*Kepler*]{} field, obtained from multi-colour photometry, are available in the Kepler Input Catalogue [KIC, @Brown2011]. However, extensive efforts are under way to secure more accurate data for selected objects [e.g., @Molend2010; @Molend2011; @Uytter2010].
A review of the early [*Kepler*]{} asteroseismic results was given by @Gillil2010, while more recent reviews have been provided by @ChristT2011 and @Christ2011a [@Christ2011b]. Here I give a very brief overview of some of the key findings; also, I discuss in somewhat more detail [*Kepler*]{} asteroseismology of red giants, which has turned out to be perhaps the most fascinating area.
Properties of stellar oscillations {#sec:oscprop}
==================================
As a background for the discussion of individual types of stars below, it is probably useful to provide a brief overview of the properties of stellar oscillations. More detailed expositions have been provided, for example, by @Unno1989, @Christ2004, @Aerts2010 and @Christ2011a.
We consider only small-amplitude adiabatic oscillations of spherically symmetric stars. The dependence of a mode of oscillation on co-latitude $\theta$ and longitude $\phi$ can be written as a spherical harmonic $Y_l^m(\theta, \phi)$, where the degree $l$ provides a measure of the total number of nodal lines on the stellar surface and the azimuthal order $m$ measures the number of nodal lines crossing the equator. In the absence of rotation and other departures from spherical symmetry the frequencies are independent of $m$. From a physical point of view there are two dominant restoring forces at work: pressure perturbations and gravity working on density perturbations. The former case dominates in acoustic waves and leads to modes of oscillation characterized as p modes, while the latter corresponds to internal gravity waves and leads to modes characterized as g modes.
A very good understanding of the properties of the modes can be obtained from simple asymptotic analysis which in fact turns out to have surprisingly broad applicability in the study of asteroseismically interesting stars. By generalizing an analysis by @Lamb1909, @Deubne1984 showed that the oscillations approximately satisfy $${\dd^2 X \over \dd r^2} = -K(r) X \; ;
\label{eq:asymp}$$ here $r$ is distance to the centre and $X = c^2 \rho^{1/2} \div \bolddelr$, where $\bolddelr$ is the displacement vector, $c$ is the adiabatic sound speed and $\rho$ is density. Also $$K = {1 \over c^2} \left[ S_l^2 \left({N^2 \over \omega^2} - 1\right)
+ \omega^2 - \omegaac^2 \right] \; ,
\label{eq:kasymp}$$ $\omega$ being the frequency of oscillation, is determined by three characteristic frequencies of the star: the [*Lamb frequency*]{} $S_l$, with $$S_l^2 = {l(l+1) c^2 \over r^2} \; ,
\label{eq:lamb}$$ the [*buoyancy frequency*]{} (or [*Brunt-Väisälä frequency*]{}) $N$,[^2] $$N^2 = g \left( {1 \over \Gamma_1} {\dd \ln p \over \dd r}
- {\dd \ln \rho \over \dd r} \right) \; ,
\label{eq:buoy}$$ where $g$ is the local gravitational acceleration, and the [*acoustic cut-off frequency*]{} $\omegaac$, $$\omegaac^2 = {c^2 \over 4 H^2} \left(1 - 2 {\dd H \over \dd r} \right)
\; ,$$ where $H = - (\dd \ln \rho / \dd r)^{-1}$ is the density scale height. The properties of a mode are largely determined by the regions in the star where the eigenfunction oscillates as a function of $r$, i.e., where $K > 0$. In regions where $K < 0$ the eigenfunction locally increases or decreases exponentially with $r$. There may be several oscillatory regions, but typically the amplitude is substantially larger in one of these than in the rest, defining the region where the mode is said to be trapped.
The acoustic cut-off frequency is generally large near the stellar surface and small in the interior. For the low-degree modes relevant here the term in $S_l^2$ in Eq. (\[eq:kasymp\]) is small near the surface, and the properties of the oscillations are determined by the magnitude of $\omega$ relative to the atmospheric value of $\omegaac$: when $\omega$ is less than $\omegaac$ the eigenfunction decreases exponentially in the atmosphere, and the mode is trapped in the stellar interior; otherwise, the mode is strongly damped by the loss of energy through running waves in the atmosphere.
The properties of the oscillations in the stellar interior are controlled by the behaviour of $S_l$ and $N$ (see Fig. \[fig:charfreq\]). In unevolved stars $N$ is typically small compared with the characteristic frequencies of p modes. For these, therefore, $K \simeq (\omega^2 - S_l^2)/c^2$ (neglecting $\omegaac$), and the modes are trapped in the region where $\omega > S_l$ in the outer parts of the star, with a lower turning point, $r = r_{\rm t}$, such that $${c(r_{\rm t}) \over r_{\rm t}} = {\omega \over \sqrt{l(l+1)}} \; .$$ At low degree the cyclic frequencies of p modes approximately satisfy $$\nu_{nl} = {\omega_{nl} \over 2 \pi} \simeq
\Delta \nu \left(n + {l \over 2} + \epsilon \right) - d_{nl} \;
\label{eq:pasymp}$$ [@Vandak1967; @Tassou1980; @Gough1993]; here $n$ is the radial order of the mode, $$\Delta \nu = \left(2 \int_0^R {\dd r \over c} \right)^{-1}
\label{eq:delnu}$$ is the inverse sound travel time across a stellar diameter, $R$ being the surface radius, $\epsilon$ is a frequency-dependent phase that reflects the behaviour of $\omegaac$ near the stellar surface and $d_{nl}$ is a small correction that in main-sequence stars predominantly depends on the sound-speed gradient in the stellar core. On the other hand, g modes have frequencies below $N$ and typically such that $\omega \ll S_l$ in the relevant part of the star; in this case the modes are trapped in a region defined by $\omega < N$. Here the oscillation [*periods*]{} satisfy a simple asymptotic relation: $$\Pi_{nl} = {2 \pi \over \omega_{nl}}
\simeq \Delta \Pi_l (n + \epsilon_{\rm g}) \;
\label{eq:gasymp}$$ [@Vandak1967; @Tassou1980], where $\epsilon_{\rm g}$ is a phase and $$\Delta \Pi_l = {2 \pi^2 \over \sqrt{l(l+1)}}
\left(\int_{r_1}^{r_2} N {\dd r \over r} \right)^{-1} \; ,
\label{eq:gperspac}$$ the integral being over the region where the modes are trapped.
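Given a stellar model profile, both separations follow from simple quadratures; the sketch below (with hypothetical input arrays for radius, sound speed and squared buoyancy frequency) approximates the g-mode cavity by the region where $N^2>0$.

```python
import numpy as np

def large_separation(r, c):
    """Delta nu = (2 * integral_0^R dr/c)^(-1); in Hz if r is in m and c in m/s."""
    return 1.0 / (2.0 * np.trapz(1.0 / c, r))

def period_spacing(r, N2, l=1):
    """Asymptotic g-mode period spacing Delta Pi_l.

    The integral of N/r should run over the g-mode trapping region [r_1, r_2];
    restricting it to N^2 > 0 is a simplifying assumption of this sketch."""
    mask = N2 > 0.0
    return 2.0 * np.pi ** 2 / np.sqrt(l * (l + 1)) / np.trapz(np.sqrt(N2[mask]) / r[mask], r[mask])

# r, c, N2 are hypothetical arrays taken from a stellar model profile.
```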
As discussed in Section \[sec:redgiant\] (see also Bedding, these proceedings) the situation is considerably more complicated in evolved stars with a compact core; this leads to a high value of $g$ and hence $N$ in the interior of the star, such that modes may have a g-mode character in the deep interior, and a p-mode character in the outer, parts of the star. Such [*mixed modes*]{} have a very substantial diagnostic potential.
Solar-like oscillations {#sec:solarlike}
=======================
The solar oscillations are most likely intrinsically damped [@Balmfo1992] and excited stochastically by the turbulent near-surface convection [e.g., @Goldre1977]. Thus such oscillations can be expected in all stars with significant outer convection zones [@Christ1983; @Houdek1999]. An interesting example is the very recent detection, based on [*Kepler*]{} data, of solar-like oscillations in a $\delta$ Scuti star [@Antoci2011]. The combination of damping and excitation leads to a well-defined, bell-shaped envelope of power with a maximum at a cyclic frequency $\nu_{\rm max} = \omega_{\rm max}/2 \pi$ which approximately scales as the acoustic cut-off frequency $\omegaac$ in the atmosphere [@Brown1991]. Assuming adiabatic oscillations in an isothermal atmosphere, this yields $$\nu_{\rm max} \propto \omegaac \propto M R^{-2} T_{\rm eff}^{-1/2} \; ,
\label{eq:numax}$$ where $M$ is the mass, and $T_{\rm eff}$ the effective temperature, of the star. This scaling has substantial observational support [e.g., @Beddin2003; @Stello2008], while it is still not fully understood from a theoretical point of view [but see @Belkac2011]. For stars on or near the main sequence the frequencies approximately satisfy the asymptotic relation (\[eq:pasymp\]). Here the large frequency separation roughly scales, in accordance with homology scaling, as the square root of the mean stellar density, i.e., $$\Delta \nu \propto M^{1/2} R^{-3/2} \; ,
\label{eq:dnuscale}$$ although with some departures from strict homology that largely depend on $T_{\rm eff}$ [@White2011]. A detailed discussion of the observational properties of solar-like oscillations was provided by @Beddin2011.
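Inverting the two proportionalities gives the usual direct estimates $R\propto\nu_{\rm max}\,\Delta\nu^{-2}\,T_{\rm eff}^{1/2}$ and $M\propto\nu_{\rm max}^{3}\,\Delta\nu^{-4}\,T_{\rm eff}^{3/2}$ in solar units. A minimal sketch, with typical solar reference values inserted as assumptions:

```python
def seismic_mass_radius(nu_max, delta_nu, t_eff,
                        nu_max_sun=3090.0, dnu_sun=135.1, teff_sun=5777.0):
    """Mass and radius in solar units from the scaling relations above.

    nu_max and delta_nu in muHz, t_eff in K; the solar reference values are
    typical literature numbers, used here only as placeholders."""
    r = (nu_max / nu_max_sun) * (delta_nu / dnu_sun) ** -2 * (t_eff / teff_sun) ** 0.5
    m = (nu_max / nu_max_sun) ** 3 * (delta_nu / dnu_sun) ** -4 * (t_eff / teff_sun) ** 1.5
    return m, r

# Example: nu_max ~ 30 muHz, Delta nu ~ 4 muHz, T_eff ~ 4800 K gives
# roughly M ~ 1.0 M_sun and R ~ 10 R_sun (illustrative numbers only).
```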
In the early phases of the [*Kepler*]{} project a survey of potential asteroseismic targets was carried out, each typically observed for one month. This led to the detection of a huge number of main-sequence and subgiant stars showing solar-like oscillations, increasing the number of the known cases by more than a factor of 20, relative to earlier ground-based observations and the observations with the CoRoT mission. As discussed by @Chapli2011a this provides information about basic stellar properties, through the application of the scaling laws for $\nu_{\rm max}$ and $\Delta \nu$ (Eqs \[eq:numax\] and \[eq:dnuscale\]), yielding a very interesting overview of the distribution, in mass and radius, of stars in the solar neighbourhood. These extensive data also allow a calibration of scaling relations that can be used to predict the detectability of solar-like oscillations for a given star [@Chapli2011b]; this is important for the scheduling of further [*Kepler*]{} observations and will, suitably generalized, be very valuable for the definition of other observing campaigns.
A selected set of these stars are now being observed for much longer, to improve the precision and level of detail of the observed frequencies and hence allow detailed characterization of the stellar properties and internal structure, through fits to the individual observed frequencies. Interesting examples of such analyses were carried out by @Metcal2010 and @DiMaur2011 (see also Di Mauro et al., these proceedings), in both cases for stars evolved somewhat beyond the main sequence where modes of mixed p- and g-mode character provided more sensitive information about the stellar properties (see also Bedding, these proceedings). A very interesting analysis of the diagnostic potentials of mixed modes, based on CoRoT observations, was presented by @Deheuv2011.
An important aspect of the asteroseismology of solar-like stars is the application to stars found by [*Kepler*]{} to be planet hosts. Here the analysis allows the precise determination of the stellar mass and radius, essential for characterizing the properties of the planets detected through the transit technique and follow-up radial velocity measurements, and determination of the stellar age through model fits to the observed frequencies provides a measure of the age of the planetary system. An early example of such analyses, for the previously known planet host HAT-P-7, was provided by @Christ2010, and asteroseismic results were important for the characterization of the first rocky planet detected outside the solar system, Kepler-10b [@Batalh2011].
Asteroseismology of red giants {#sec:redgiant}
==============================
Perhaps the most striking aspect of [*Kepler*]{} asteroseismology has been the study of solar-like oscillations in red-giant stars. These have evolved beyond the phase of central hydrogen burning, developing a helium core surrounded by a thin region, the hydrogen-burning shell, providing the energy output of the star. The core contracts while the outer layers expand greatly, accompanied by a reduction of the effective temperature, until the star reaches and evolves up the Hayashi track, at nearly constant $T_{\rm eff}$. At the tip of the red-giant branch the temperature in the core reaches around 100 MK, enough to start efficient helium burning. Helium ignition is followed by a readjustment of the internal structure, leading to some reduction in the surface radius, and maintaining the hydrogen-burning shell as a substantial contributor to the total energy output of the star. This central helium-burning phase is rather long-lived, with the stars changing relatively little; in stellar clusters this leads to an accumulation of stars in this phase, which is therefore known as the ‘clump phase’.
![Lamb frequency $S_l$ for $l = 1$ (dashed line) and buoyancy frequency $N$ (solid line), against fractional radius, in a $1\,{\rm M}_\odot$ red-giant model with heavy-element abundance $Z = 0.02$, radius $7\,{\rm R}_\odot$ and luminosity $18.4\,{\rm L}_\odot$. The horizontal dotted line is at $0.7 \omegaac(R)/2 \pi$, corresponding roughly to the predicted frequency $\nu_{\rm max}$ of maximum power. The vertical dot-dashed lines mark the outer limit $r_1$ of the g-mode, and the lower limit $r_2$ of the p-mode, propagating regions at this frequency. \[fig:charfreq\] ](\fig/jcd_fig_1.eps){width="9cm"}
A red giant has a very compact core, containing a substantial fraction of the stellar mass, and a very extensive convective envelope. This is reflected in the characteristic frequencies, shown in Fig. \[fig:charfreq\] for a typical red-giant model. As is clear from the behaviour of the buoyancy frequency $N$, the outer convective envelope extends over more than 90 % of the stellar radius. The border of the helium core, with a mass of $0.21 \, M$, is marked by the small local maximum in $N$ near $r = 0.0044 R$; the large mass within a small region leads to a very large local gravitational acceleration, causing the huge buoyancy frequency (cf. Eq. \[eq:buoy\]). At a typical oscillation frequency, marked by the horizontal dotted line, the mode behaves as a p mode in the region outside $r = r_2$, marking the bottom of the outer acoustic cavity, and as a g mode inside $r = r_1$. This causes a mixed character of the modes (see also Bedding, these proceedings).
Properties of red-giant oscillations
------------------------------------
Early predictions of solar-like oscillations in red giants were made by @Christ1983. This was followed by detections in ground-based observations [e.g., @Frands2002]; however, it was only with the CoRoT observations by @DeRidd2009 that the presence of nonradial solar-like oscillations in red giants was definitively established. Very extensive observations of large numbers of stars have been made with both CoRoT and [*Kepler*]{} [e.g., @Hekker2009; @Beddin2010; @Mosser2010; @Kallin2010; @Hekker2011]. This has allowed the determination of global stellar properties from fits to the large frequency separation $\Delta \nu$ (cf. Eq. \[eq:pasymp\]) and the frequency $\nu_{\rm max}$ at maximum power, possibly supplemented by additional observations in analyses based on stellar model grids, allowing population studies of the red giants [e.g., @Miglio2009; @Miglio2011]. Particularly interesting are the possibilities for studying the open clusters in the [*Kepler*]{} field [@Stello2010; @Stello2011a; @Basu2011]. The large number and variety of stars observed have allowed the study of the overall properties of the oscillation frequencies, along the lines of the p-mode asymptotic relation (Eq. \[eq:pasymp\]) [@Huber2010; @Mosser2011a], in what the latter authors call the universal pattern of the frequencies, and of the diagnostic potential of this pattern [@Montal2010]. Similarly, it has been possible to test scaling relations for mode properties, including their amplitudes, over a broad range of stellar parameters [@Huber2011; @Mosser2011b; @Stello2011b]. @DiMaur2011 demonstrated the diagnostic potential of analysing the individual frequencies in a relatively unevolved red giant, while @Jiang2011 carried out a detailed analysis of a somewhat more evolved star.
![ Computed frequencies for a short segment of a $1\,{\rm M}_\odot$ evolution sequence, plotted against luminosity in solar units. The age goes from 11.765 to 11.777Gyr. The vertical dotted line indicates the model illustrated in Figs \[fig:charfreq\], \[fig:inertia\] and \[fig:perspac\]. \[fig:freqs\] ](\fig/jcd_fig_2.eps){width="9cm"}
To understand the observed oscillation spectra it is informative to consider in more detail the oscillation properties of red-giant models; here I consider models in the vicinity of the model illustrated in Fig. \[fig:charfreq\]. The huge buoyancy frequency in the core of the model leads to very small g-mode period spacings (cf. Eq. \[eq:gperspac\]): for $l = 1$ the result is $\Delta \Pi_1 = 1.23$min. The full range of radial modes, up to the acoustical cut-off frequency, extends in cyclic frequency from 11 to $107 \muHz$. In that interval we therefore expect around 1100 g modes of degree $l = 1$, the number scaling like $\sqrt{l(l+1)/2}$ for higher degree.[^3]
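This number follows directly from the uniform period spacing: the frequency interval corresponds to periods between $(107\,\muHz)^{-1} \simeq 156$min and $(11\,\muHz)^{-1} \simeq 1515$min, so that the number of $l = 1$ modes is roughly $$N_{l=1} \simeq \frac{(11\,\muHz)^{-1} - (107\,\muHz)^{-1}}{\Delta \Pi_1}
\simeq \frac{(1515 - 156)\,{\rm min}}{1.23\,{\rm min}} \approx 1.1\times10^3 \; .$$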
Most of these modes have their highest amplitude in the core of the model and hence predominantly have the character of g modes. However, there are frequencies where the eigenfunctions decrease with increasing depth in the region between $r_2$ and $r_1$ (cf. Fig. \[fig:charfreq\]). These frequencies define [*acoustic resonances*]{} where the modes have their largest amplitude in the outer acoustic cavity and predominantly have the character of p modes.
The properties of the frequencies are illustrated in Fig. \[fig:freqs\], showing the evolution of $l = 1$ modes with age, in a relatively limited range in frequency. The extremely high density of modes is evident, as are the branches of acoustic resonances, with frequency decreasing roughly as $R^{-3/2}$, i.e., as the square root of the mean density of the star. As discussed by Bedding (these proceedings), these undergo avoided crossings with the g modes, whose frequencies increase somewhat with age as the buoyancy frequency in the core increases. The acoustic resonances, together with the frequencies of radial modes, approximately satisfy the asymptotic relation given in Eq. (\[eq:pasymp\]).
![ Power spectra of two red giants observed by [*Kepler*]{} for 13 and 10 months, respectively; the stars are identified by the Kepler Input Catalog (KIC) numbers. As indicated in the figure, the stars have similar $\Delta \nu$ and $\nu_{\rm max}$ and hence similar overall properties. The horizontal bars marked ‘1’ indicate the dipole forests from which rather different observed period spacings, $\Delta P_{\rm obs}$, of 52 and 96 s were inferred for the upper and lower spectrum, respectively. Further analysis determined the asymptotic g-mode spacings as 74 and 147 s, respectively. KIC6779699 is in the ascending red-giant phase, whereas KIC4902641 is in the clump phase, with central helium burning. Adapted from @Beddinetal2011. \[fig:obsspec\] ](\fig/jcd_fig_3.eps){width="9cm"}
A major breakthrough in the asteroseismic study of red giants was the detection of ’dipole forests’ at the acoustic resonances, i.e., groups of $l = 1$ modes excited to observable amplitudes [@Beck2011; @Beddinetal2011; @Mosser2011c]. These showed an approximately uniform period spacing, as expected for g modes from Eq. (\[eq:gasymp\]), and in some cases it was possible from the observed spacings to extrapolate to the period spacing $\Delta \Pi_1$ for the pure g modes. Strikingly, @Beddinetal2011 demonstrated a substantial difference in the period spacing between stars in the shell hydrogen burning phase and ‘clump stars’ which in addition had core helium burning. This is illustrated in the observed spectra shown in Fig. \[fig:obsspec\]. It was argued by @Christ2011a that the larger period spacing in the helium-burning stars is a natural consequence of the very temperature-dependent energy generation near the centre in these stars: this leads to a convective core which restricts the range of the integral in Eq. (\[eq:gperspac\]), thus reducing its value and hence increasing $\Delta \Pi$.
![ Mode inertias (cf. Eq. \[eq:inertia\]) for the $1\,{\rm M}_\odot$ model illustrated in Fig. \[fig:charfreq\]. Radial modes, with $l = 0$, are shown with circles connected by a line, $l = 1$ modes are shown with triangles and $l = 2$ modes with squares. \[fig:inertia\] ](\fig/jcd_fig_4.eps){width="9cm"}
![ The pluses connected by solid lines show period spacings for modes with $l = 1$ in the $1\,{\rm M}_\odot$ model illustrated in Fig. \[fig:charfreq\]. The heavy horizontal dashed line shows the asymptotic period spacing (cf. Eq. \[eq:gperspac\]). The thin dashed curve shows, on an arbitrary scale, the logarithm of the mode inertia divided by the inertia of radial modes (see text). \[fig:perspac\] ](\fig/jcd_fig_5.eps){width="9cm"}
An important quantity characterizing the properties of the modes is the normalized mode inertia $$E = {\int_V \rho |\bolddelr|^2 \dd V \over M |\bolddelr_{\rm s}|^2 } \; ,
\label{eq:inertia}$$ where $\bolddelr_{\rm s}$ is the surface displacement, and the integral is over the volume $V$ of the star. Modes of predominantly acoustic nature, including the radial modes, have their largest displacement in the outer parts of the star and hence a relatively low inertia, whereas g-dominated modes have a large amplitude in the deep interior and hence a high inertia. This is illustrated in Fig. \[fig:inertia\] for modes of degree $l = 0$, 1 and 2. The acoustic resonances, where the inertia decreases to a value close to that of a radial mode of corresponding frequency, are evident, while for the intermediate g-dominated modes the inertia is much higher.
On the assumption of damped modes excited stochastically by near-surface convection, the mode inertia is important for understanding the observed amplitudes. A very illuminating discussion of this was provided by @Dupret2009. In the frequent case where the damping is predominantly near the surface, the mode lifetime is proportional to $E$, with the g-dominated modes having lifetimes of years. Also, the mean square amplitude is inversely proportional to $E$. However, the visibility of the modes in the power spectrum is determined by the [*peak height*]{}, which under these circumstances is independent of $E$, assuming observations extending for substantially longer than the mode lifetimes [see also @Chapli2005]. The present observations are much shorter than the typical lifetimes of the g-dominated modes but still allow several modes to be seen close to the acoustic resonances, particularly for $l = 1$, leading to the dipolar forests of peaks.
As noted by @Beck2011 and @Beddinetal2011 the signature of modes affected by the g-mode behaviour is an approximately uniform period spacing. This is illustrated for the model in Fig. \[fig:perspac\]. For the bulk of the modes, which are predominantly g modes, the spacing is close to the asymptotic value, indicated by the horizontal dashed line. Near the acoustic resonances the avoided crossings cause a reduction in the period spacing, in a characteristic ‘V’-shaped pattern. This is closely related to the behaviour of the mode inertia, illustrated in the figure by the scaled mode inertia $Q_{nl} = E_{nl}/ \bar E_0(\omega_{nl})$ where $\bar E_0(\omega_{nl})$ is the inertia of the radial modes, interpolated to the frequency $\omega_{nl}$ of the mode considered.
A simple asymptotic analysis
----------------------------
To understand the properties of the modes it is useful to illustrate them with a crude approximation to the asymptotic description provided by Eqs (\[eq:asymp\]) and (\[eq:kasymp\]). We neglect $\omegaac^2$, whose main effect is to ensure reflection of the modes at the stellar surface; this reflection causes a phase shift which can easily be included. From the behaviour of the characteristic frequencies illustrated in Fig. \[fig:charfreq\] we can approximately divide the star into three regions, separated at $r = r_1$ and $r_2$ (see figure):
1. $r < r_1$: here $S_l^2 \gg \omega^2$ and $N^2 \gg \omega^2$, except near the turning points. Here we approximate $K$ by $$K \simeq {l(l+1) \over r^2} \left( {N^2 \over \omega^2} - 1 \right)
\simeq {l(l+1) \over r^2} {N^2 \over \omega^2} \; ,$$ where the last approximation is valid except near the turning points where $\omega = N$.
2. $r_1 < r < r_2$: here $S_l^2 \gg \omega^2 \gg N^2$. Thus $K$ is simply approximated by $$K \simeq - {l(l+1) \over r^2} \; .$$
3. $r_2 < r$: Here $N^2 \ll \omega^2$ and $S_l^2 \ll \omega^2$ except near the turning point where $\omega = S_l$. Thus we approximate $K$ by $$K \simeq {1 \over c^2} \left( \omega^2 - S_l^2 \right)
\simeq {\omega^2 \over c^2} \; ,$$ where the last approximation is valid except near the turning point.
With these approximations we can write down the solutions in the different regions based on the JWKB approximation, neglecting, however, the details of the behaviour near the turning points.[^4] In region 1) the solution can be written as $$X(r) \simeq A^{\rm (g)} (r) \cos (\Psi^{\rm (g)}(r) + \phi^{\rm (g)}) \; ,
\label{eq:gapprox}$$ where $$\Psi^{\rm (g)} = \int_0^r {L \over r'} {N \over \omega} \dd r' \; ,$$ with $L = \sqrt{l(l+1)}$, and $\phi^{\rm (g)}$ is a phase that depends on the behaviour near the centre and the turning points. In region 2) the approximate solution is simply $$X(r) \simeq a_+ \left( {r \over r_1} \right)^L
+ a_- \left( {r \over r_1} \right)^{-L} \; ,$$ where $a_+$ and $a_-$ are integration constants determined by fitting the solutions. Finally, in region 3) the solution can be written as $$X(r) \simeq A^{\rm (p)} (r) \cos (\Psi^{\rm (p)}(r) + \phi^{\rm (p)}) \; ,
\label{eq:papprox}$$ where $$\Psi^{\rm (p)} = \int_r^R \omega {\dd r' \over c} \; ,$$ and $\phi^{\rm (p)}$ is a phase that depends on the behaviour near the surface (thus including the contribution from the reflection caused by $\omegaac$) and the turning point. In Eqs (\[eq:gapprox\]) and (\[eq:papprox\]) the amplitude functions $A^{\rm (g)}(r)$ and $A^{\rm (p)}(r)$ can be obtained from the JWKB analysis; since they are irrelevant for the subsequent analysis, I do not present them explicitly.
The full solution is obtained by requiring continuity of $X$ and $\dd X/\dd r$ at $r_1$ and $r_2$. Neglecting the derivatives of $A^{\rm (g)}(r)$ and $A^{\rm (p)}(r)$ this yields $$\begin{aligned}
&& \sin(\Psi^{\rm (g)}(r_1) + \phi^{\rm (g)} - \pi/4)
\sin(\Psi^{\rm (p)}(r_2) + \phi^{\rm (p)} - \pi/4) + \nonumber \\
&& \zeta \cos(\Psi^{\rm (g)}(r_1) + \phi^{\rm (g)} - \pi/4)
\cos(\Psi^{\rm (p)}(r_2) + \phi^{\rm (p)} - \pi/4) = 0 \; ,
\label{eq:fulldisp}\end{aligned}$$ where $\zeta = (r_1/r_2)^{2 L}$ is a measure of the coupling between regions 1) and 3). If $\zeta =0$, so that the regions are completely decoupled, the eigenfrequencies obviously satisfy $$\Psi^{\rm (g)}(r_1) = \int_0^{r_1} {L \over r} {N \over \omega} \dd r
= n \pi - \tilde \phi^{\rm (g)} \; ,
\label{eq:pureg}$$ or $$\Psi^{\rm (p)}(r_2) = \int_{r_2}^R \omega {\dd r \over c}
= k \pi - \tilde \phi^{\rm (p)} \; ,
\label{eq:purep}$$ where $n$ and $k$ are integers and $\tilde \phi^{\rm (g)} = \phi^{\rm (g)} - \pi/4$, $\tilde \phi^{\rm (p)} = \phi^{\rm (p)} - \pi/4$. Equation (\[eq:pureg\]) leads to Eqs (\[eq:gasymp\]) and (\[eq:gperspac\]), while Eq. (\[eq:purep\]) essentially corresponds to Eq. (\[eq:pasymp\]), taking into account that the behaviour near the lower turning point leads to the dependence on $l$.[^5]
![ The upper panel shows solutions to the dispersion relation in Eq. (\[eq:fulldisps\]), as a function of $\omega^{\rm (g)}$, keeping $\omega^{\rm (p)} = 0.3$ and $\zeta = 0.01$ fixed. The lower panel shows a small segment of the solution; the solid curves show the approximate solution near the avoided crossing, given by Eq. (\[eq:avcross\]). \[fig:avcross\] ](\fig/jcd_fig_6.eps){width="9cm"}
To analyse the full dispersion relation, Eq. (\[eq:fulldisp\]), we neglect the dependence of the turning points $r_1$ and $r_2$ on frequency and write the relation as $$\CD =
\sin(\omega^{\rm (g)}/\omega + \tilde \phi^{\rm (g)})
\sin(\omega/\omega^{\rm (p)} + \tilde \phi^{\rm (p)})
+ \zeta \cos(\omega^{\rm (g)}/\omega + \tilde \phi^{\rm (g)})
\cos(\omega/\omega^{\rm (p)} + \tilde \phi^{\rm (p)}) = 0 \; ,
\label{eq:fulldisps}$$ with[^6] $$\omega^{\rm (g)} = L \int_0^{r_1} {N \over r} \dd r \; , \quad
\omega^{\rm (p)} = \left( \int_{r_2}^R {\dd r \over c} \right)^{-1} \; .$$ To illustrate the properties of the solution, Fig. \[fig:avcross\] shows the result of increasing $\omega^{\rm (g)}$, keeping $\omega^{\rm (p)}$ fixed; for simplicity I assume that $\tilde \phi^{\rm (g)} = \tilde \phi^{\rm (p)} = 0$. It is obvious that the increase in $\omega^{\rm (g)}$ leads to avoided crossings between frequencies satisfying Eq. (\[eq:pureg\]), increasing with $\omega^{\rm (g)}$, and the constant frequencies satisfying Eq. (\[eq:purep\]). Details of an avoided crossing are shown in the lower panel of Fig. \[fig:avcross\]. At the corresponding crossing of the uncoupled modes, we have both $\omega \equiv \omega_{\rm c} = k \pi \omega^{\rm (p)}$ and $\omega^{\rm (g)}/ \omega_{\rm c} = n \pi$ for suitable integers $k$ and $n$; thus the crossing is defined by $\omega^{\rm (g)} \equiv \omega^{\rm (g)}_{\rm c} = n k \pi^2 \omega^{\rm (p)}$. With $$\omega^{\rm (g)} = \omega^{\rm (g)}_{\rm c} + \delta \omega^{\rm (g)} \; ,
\quad
\omega = \omega_{\rm c} + \delta \omega \; ,$$ and expanding Eq. (\[eq:fulldisps\]) to second order in $\delta \omega$, $\delta \omega^{\rm (g)}$ and $\zeta$ we obtain $$\delta \omega^2 -
{\omega_{\rm c} \over \omega^{\rm (g)}_{\rm c}} \delta \omega^{\rm (g)} \,
\delta \omega
- \zeta {\omega_{\rm c}^2 \omega^{\rm (p)} \over \omega^{\rm (g)}_{\rm c}}
=0 \; ,$$ with the solution $$\delta \omega =
{1 \over 2} {\omega_{\rm c} \over \omega^{\rm (g)}_{\rm c}}
\delta \omega^{\rm (g)}
\pm \left[
{1 \over 4} \left( {\omega_{\rm c} \over \omega^{\rm (g)}_{\rm c}}
\delta \omega^{\rm (g)}\right)^2
+ \zeta {\omega_{\rm c}^2 \omega^{\rm (p)} \over \omega^{\rm (g)}_{\rm c}}
\right]^{1/2} \;.
\label{eq:avcross}$$ This is also shown in Fig. \[fig:avcross\] for one of the avoided crossings and clearly provides an excellent fit to the numerical solution. The minimum separation between the two branches, for $\delta \omega^{\rm (g)} = 0$, is $$\Delta \omega_{\rm min} = 2 \omega_{\rm c}
\left( \omega^{\rm (p)} \over \omega^{\rm (g)} \right)^{1/2} \zeta^{1/2} \; .$$
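The simplified dispersion relation, Eq. (\[eq:fulldisps\]), is easy to explore numerically. The following Python sketch is purely illustrative (it is not the code used to produce Fig. \[fig:avcross\]; the resonance orders $k$ and $n$ and the scan window are arbitrary choices): it locates the roots of $\CD$ nearest an acoustic resonance by scanning for sign changes, and evaluates the two branches of the avoided-crossing approximation, Eq. (\[eq:avcross\]), for comparison.

```python
import numpy as np
from scipy.optimize import brentq

zeta, w_p = 0.01, 0.3                    # coupling and p-mode scale, as in Fig. [fig:avcross]

def CD(w, w_g):
    """Dispersion function of Eq. (fulldisps) with phi_g = phi_p = 0."""
    return (np.sin(w_g / w) * np.sin(w / w_p)
            + zeta * np.cos(w_g / w) * np.cos(w / w_p))

def roots(w_g, w_lo, w_hi, n_scan=20001):
    """Zeros of CD in [w_lo, w_hi], located by scanning for sign changes."""
    w = np.linspace(w_lo, w_hi, n_scan)
    f = CD(w, w_g)
    idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    return [brentq(CD, w[i], w[i + 1], args=(w_g,)) for i in idx]

def branches(dw_g, w_c, w_g_c):
    """Frequencies of the avoided-crossing approximation, Eq. (avcross)."""
    a = 0.5 * (w_c / w_g_c) * dw_g
    return w_c + a + np.array([-1.0, 1.0]) * np.sqrt(a**2 + zeta * w_c**2 * w_p / w_g_c)

k, n = 3, 500                            # resonance orders chosen arbitrarily for illustration
w_c = k * np.pi * w_p                    # acoustic resonance, Eq. (purep) with zero phase
w_g_c = n * np.pi * w_c                  # corresponding crossing value of omega^(g), Eq. (pureg)

for dw_g in (-2.0, 0.0, 2.0):
    # compare the two numerical roots nearest w_c with the two analytic branches
    numeric = sorted(roots(w_g_c + dw_g, w_c - 0.05, w_c + 0.05),
                     key=lambda w: abs(w - w_c))[:2]
    print(dw_g, sorted(numeric), branches(dw_g, w_c, w_g_c))
```

The two roots nearest $\omega_{\rm c}$ should track the two branches of Eq. (\[eq:avcross\]), with the minimum separation at $\delta \omega^{\rm (g)} = 0$ given by the expression above.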
![ Period spacings for solutions to the asymptotic dispersion relation, Eq. (\[eq:fulldisps\]), with $\omega^{\rm (p)} = 0.3$, $\omega^{\rm (g)} = 5000$ and $\zeta = 0.04$. The period spacing $2 \pi^2 / \omega^{\rm (g)}$ for pure g modes is shown by the horizontal dotted line. \[fig:asperspac\] ](\fig/jcd_fig_7.eps){width="9cm"}
![ Detailed properties of the case illustrated in Fig. \[fig:asperspac\]. a) The frequency-dependent phase defined by Eq. (\[eq:phasedef\]), to within an arbitrary multiple of $2 \pi$. b) The pluses and solid curve show a blow-up of Fig. \[fig:asperspac\]. The dot-dashed curve shows the period spacing determined from Eq. (\[eq:asperspac\]), using the phase $\Phi$ shown in panel a). The dashed curve shows the approximation in Eq. (\[eq:asperspac1\]). \[fig:asperspac1\] ](\fig/jcd_fig_8.eps){width="9cm"}
As in the case of the full solution (cf. Fig. \[fig:perspac\]) the avoided crossings cause dips in the otherwise uniform period spacings. This is illustrated in Fig. \[fig:asperspac\]. To understand this behaviour in more detail we write Eq. (\[eq:fulldisps\]) as $$\CD = C(\omega) \sin(\omega^{\rm (g)}/\omega + \Phi(\omega)) \; ,$$ where (assuming again $\tilde \phi^{\rm (g)} = \tilde \phi^{\rm (p)} = 0$) $C(\omega) = \sqrt{\sin^2(\omega/\omega^{\rm (p)})
+ \zeta^2 \cos^2(\omega/\omega^{\rm (p)})}$ and $\Phi(\omega)$ satisfies $$\begin{aligned}
C(\omega) \cos(\Phi) &=& \sin(\omega / \omega^{\rm (p)}) \nonumber \\
C(\omega) \sin(\Phi) &=& \zeta \cos(\omega / \omega^{\rm (p)}) \; .
\label{eq:phasedef}\end{aligned}$$ It is obvious that the eigenfrequencies satisfy $$\CG(\omega) = \omega^{\rm (g)}/\omega + \Phi(\omega) = n \pi \; ,$$ for integer $n$. $\Phi$ is illustrated in Fig. \[fig:asperspac1\]a; except near acoustic resonances, where $\omega/\omega^{\rm (p)} \simeq k \pi$ for integer $k$, $\Phi$ is almost constant and the period spacing is determined by the first term in $\CG$, essentially corresponding to the pure g-mode case defined by Eq. (\[eq:pureg\]). Near the acoustic resonances $\Phi$ changes rapidly, causing a strong variation in the period spacing.
This behaviour can be made more precise by noting that the frequency spacing between adjacent modes approximately satisfies $\Delta \omega \simeq \pi (\dd \CG / \dd \omega)^{-1}$ and hence the period spacing is, approximately, given by $$\Delta \Pi \simeq - {2 \pi^2 \over \omega^2}
\left( {\dd \CG \over \dd \omega} \right)^{-1}
= {2 \pi^2 / \omega^{\rm (g)} \over
\displaystyle 1 - {\omega^2 \over \omega^{\rm (g)}}
{\dd \Phi \over \dd \omega}} \; .
\label{eq:asperspac}$$ Here the numerator is the period spacing for pure g modes, and the denominator causes a reduction in $\Delta \Pi$ near an acoustic resonance. This is illustrated in Fig. \[fig:asperspac1\]b where the actual period spacings near an acoustic resonance are compared with Eq. (\[eq:asperspac\]).
At the centre of an acoustic resonance $\omega/\omega^{\rm (p)} = k \pi$ and $\Phi = \pi/2$, up to an irrelevant multiple of $\pi$. In the vicinity of this point we can expand $\Phi$ as $\delta \Phi = \Phi - \pi/2 $ in terms of $\delta x = \omega/\omega^{\rm (p)} - k \pi$, to obtain $$\delta \Phi \simeq - {\delta x \over \sqrt{\zeta^2 + \delta x^2}} \; ,$$ and hence the period spacing $$\Delta \Pi \simeq
{2 \pi^2 / \omega^{\rm (g)} \over
\displaystyle 1 + {\omega^2 \over \omega^{\rm (g)} \omega^{\rm (p)}}
{\zeta^2 \over (\zeta^2 + \delta x^2)^{3/2}}} \; .
\label{eq:asperspac1}$$ This approximation is also shown in Fig. \[fig:asperspac1\]b. In particular, we obtain the minimum period spacing $$\Delta \Pi_{\rm min} \simeq
{2 \pi^2 / \omega^{\rm (g)} \over
\displaystyle 1 + {\omega^2 \over \zeta \omega^{\rm (g)} \omega^{\rm (p)}}
} \; .$$ It is interesting that the maximum reduction in the period spacing increases with decreasing $\zeta$ and hence weaker coupling. On the other hand, the width of the decrease in $\Delta \Pi$ also decreases and so therefore do the chances of finding a g mode in the vicinity of the minimum.
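To give a feel for the magnitudes involved, consider the parameters used for Fig. \[fig:asperspac\] ($\omega^{\rm (g)} = 5000$, $\omega^{\rm (p)} = 0.3$, $\zeta = 0.04$) and, purely for illustration, an acoustic resonance near $\omega \simeq 3$ (in the dimensionless units of this simplified model). Then $$\Delta \Pi_{\rm min} \simeq
{2 \pi^2 / \omega^{\rm (g)} \over
\displaystyle 1 + {\omega^2 \over \zeta \omega^{\rm (g)} \omega^{\rm (p)}}}
\simeq \frac{3.9\times10^{-3}}{1 + 9/60} \approx 3.4\times10^{-3} \; ,$$ a dip of about 13% below the pure g-mode spacing; halving $\zeta$ to 0.02 would deepen the dip to about 23% but also roughly halve its width in $\delta x$.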
Although the present analysis is highly simplified (and consequently quite simple), it does appear to capture many of the aspects of the numerical solutions for stellar models. It would probably not be difficult to generalize it to take the behaviour near the turning points properly into account, making a more detailed comparison with the numerical results reasonable. A comparison with the behaviour of the mode inertia (Fig. \[fig:inertia\]) would similarly be interesting. Such efforts are beyond the scope of the present paper, however. I also note that there is clearly some symmetry between the treatment of the g- and p-dominated modes in how the frequency-dependent phase function $\Phi$ is introduced. In the present case, with a dense spectrum of g modes undergoing avoided crossings with a smaller number of p modes, it was natural to regard the effect of the latter as a modification, in terms of $\Phi$, to the dispersion relation for the former. For subgiants, on the other hand, with a comparatively dense spectrum of p modes and few g modes, a similar phase could be introduced in the p-mode dispersion relation. This might be interesting in connection with the analysis presented by Bedding (these proceedings).
Future prospects
================
It is obvious that [*Kepler*]{} has already been an overwhelming success, resulting in observations that will be analysed for years to come. Even so, it is crucial to ensure that the mission continues beyond the nominal end in late 2012. For the exo-planet research this is required to reach the planned sensitivity to Earth-like planets, given a stellar background noise that has been found to be somewhat higher than expected. For asteroseismology the gains will be numerous. A longer mission will allow further searches for rare types of oscillating stars, exemplified by the recent detection in the [*Kepler*]{} field of an oscillating white dwarf [@Ostens2011]. The possibilities for detecting frequency variations associated with stellar cycles in solar-like stars [@Karoff2009] will be much improved. Also, longer time series will allow additional peaks, from modes with longer lifetimes, to be resolved in the dipole forests of red giants, and hence greatly strengthen the possibilities for investigating the properties of their cores.
However, there is clearly a need to consider observational facilities beyond [*Kepler*]{}. The proposed ESA PLATO mission [@Catala2011] for transit searches for exo-planets would allow asteroseismic characterization of a large fraction of the stars where planet systems are detected; in addition, it would cover a much larger area on the sky than [*Kepler*]{}, and hence provide a greater variety of asteroseismic targets.[^7] Also, ground-based observations of stellar oscillations in radial velocity still present great advantages over the photometric observations by [*Kepler*]{}. This is particularly important for solar-like stars, where the ‘noise’ from other processes in the stellar atmosphere provides a background that is much higher, relative to the oscillations, in intensity than in Doppler velocity [@Harvey1988; @Grunda2007]. This is the motivation for establishing the SONG[^8] network [@Christ2011b; @Grunda2011] of one-metre telescopes, distributed to ensure almost continuous observations of targets anywhere in the sky. The first node in the network, built largely with Danish funding, will start operations in 2012; a Chinese node is under construction, and partners are being sought to establish additional nodes.
I wish to express my deep sympathy with the Japanese people for the losses sustained in the disaster immediately preceding this conference. I am very grateful to the organizers for carrying through, in these difficult circumstances, with the conference, with great success and in beautiful surroundings. I thank Tim Bedding for providing Fig. \[fig:obsspec\], and Travis Metcalfe and Dennis Stello for comments on earlier versions of the manuscript. NCAR is supported by the National Science Foundation.
Aerts, C., Christensen-Dalsgaard, J. & Kurtz, D. W. 2010, [Asteroseismology]{}, (Heidelberg: Springer) Antoci, V., Handler, G., Campante, T. L., et al. 2011, . [Nature]{}, [477]{}, 570 Baglin, A., Auvergne, M., Barge, P., Deleuil, M., Michel, E. and the CoRoT Exoplanet Science Team 2009, . in [Proc. IAU Symp. 253, Transiting Planets]{}, edited by F. Pont, D. Sasselov and M. Holman, (Cambridge: IAU and Cambridge University Press), 71 Balmforth, N. J. 1992, . [MNRAS]{}, [255]{}, 603 Basu, S., Grundahl, F., Stello, D., et al. 2011, . [ApJ]{}, [729]{}, L10 Batalha, N. M., Borucki, W. J., Bryson, S. T., et al. 2011, . [ApJ]{}, [729]{}, 27 Beck, P. G., Bedding, T. R., Mosser, B., et al. 2011, . [Science]{}, [332]{}, 205. Bedding, T. R. 2011, . to appear in [Asteroseismology]{}, Canary Islands Winter School of Astrophysics, Volume XXII, edited by P. L. Pall[é]{}, (Cambridge: Cambridge University Press)
Bedding, T. R., Huber, D., Stello, D., et al. 2010, . [ApJ]{}, [713]{}, L176 Bedding, T. R. & Kjeldsen, H. 2003, . [PASA]{}, [20]{}, 203 Bedding, T. R., Mosser, B., Huber, D., et al. 2011, . [Nature]{}, [471]{}, 608 Belkacem, K., Goupil, M. J., Dupret, M. A., Samadi, R., Baudin, F., Noels, A. & Mosser, B. 2011, . [A&A]{}, [530]{}, A142 Borucki, W. J., Koch, D. G., Basri, G., et al. 2011, . [ApJ]{}, [736]{}, 19 Borucki, W., Koch, D., Batalha, N., Caldwell, D., Christensen-Dalsgaard, J., Cochran, W. D., Dunham, E., Gautier, T. N., Geary, J., Gilliland, R., Jenkins, J., Kjeldsen, H., Lissauer, J. J. & Rowe, J. 2009, . in [Proc. IAU Symp. 253, Transiting Planets]{}, edited by F. Pont, D. Sasselov and M. Holman, (Cambridge: IAU and Cambridge University Press), 289 Brown, T. M. 2003, . [ApJ]{}, [593]{}, L125 Brown, T. M., Gilliland, R. L., Noyes, R. W. & Ramsey, L. W. 1991, . [ApJ]{}, [368]{}, 599 Brown, T. M., Latham, D. W., Everett, M. E. & Esquerdo, G. A. 2011, . [AJ]{}, [142]{}, 112 Catala, C., Appourchaux, T. and the PLATO Mission Consortium 2011, . [J. Phys.: Conf. Ser.]{}, [271]{}, 012084 Chaplin, W. J., Houdek, G., Elsworth, Y., Gough, D. O., Isaak, G. R. & New, R. 2005, . [MNRAS]{}, [360]{}, 859 Chaplin, W. J., Kjeldsen, H., Christensen-Dalsgaard, J., et al. 2011a, . [Science]{}, [332]{}, 213 Chaplin, W. J., Kjeldsen, H., Bedding, T. R., et al. 2011b, . [ApJ]{}, [732]{}, 54 Christensen-Dalsgaard, J. 2004, . [Solar Phys.]{}, [220]{}, 137 Christensen-Dalsgaard, J. 2011a, . to appear in [Asteroseismology]{}, Canary Islands Winter School of Astrophysics, Volume XXII, edited by P. L. Pall[é]{}, (Cambridge: Cambridge University Press)
Christensen-Dalsgaard, J. 2011b, . in [Proc. MEARIM II. The 2nd Middle-East and Africa Regional IAU Meeting]{}, edited by P. Charles, [African Skies / Cieux Africains]{}, in the press
Christensen-Dalsgaard, J. & Frandsen, S. 1983, . [Solar Phys.]{}, [82]{}, 469 Christensen-Dalsgaard, J. & Thompson, M. J. 2011, . in [Proc. IAU Symposium 271: Astrophysical dynamics: from stars to planets]{}, edited by N. Brummell, A. S. Brun, M. S. Miesch and Y. Ponty, (Cambridge: IAU and Cambridge University Press), 32 Christensen-Dalsgaard, J., Arentoft, T., Brown, T. M., Gilliland, R. L., Kjeldsen, H., Borucki, W. J. & Koch, D. 2008, . [J. Phys.: Conf. Ser.]{}, [118]{}, 012039
Christensen-Dalsgaard, J., Kjeldsen, H., Brown, T. M., Gilliland, R. L., Arentoft, T., Frandsen, S., Quirion, P.-O., Borucki, W. J., Koch, D. & Jenkins, J. M. 2010, . [ApJ]{}, [713]{}, L164 Deheuvels, S. & Michel, E. 2011, . [A&A]{}, in the press
De Ridder, J., Barban, C., Baudin, F., et al. 2009, . [Nature]{}, [459]{}, 398 Deubner, F.-L. & Gough, D. O. 1984, . [ARA&A]{}, [22]{}, 593 Di Mauro, M. P., Cardini, D., Catanzaro, G., et al. 2011, . [MNRAS]{}, [415]{}, 3783 Dupret, M.-A., Belkacem, K., Samadi, R., Montalban, J., Moreira, O., Miglio, A., Godart, M., Ventura, P., Ludwig, H.-G., Grigahc[è]{}ne, A., Goupil, M.-J., Noels, A. & Caffau, E. 2009, . [A&A]{}, [506]{}, 57 Frandsen, S., Carrier, F., Aerts, C., Stello, D., Maas, T., Burnet, M., Bruntt, H., Teixeira, T. C., de Medeiros, J. R., Bouchy, F., Kjeldsen, H., Pijpers, F. & Christensen-Dalsgaard, J. 2002, . [A&A]{}, [394]{}, L5 Gilliland, R. L., Brown, T. M., Christensen-Dalsgaard, J., et al. 2010, . [PASP]{}, [122]{}, 131 Goldreich, P. & Keeley, D. A. 1977, . [ApJ]{}, [212]{}, 243 Gough, D. O. 1993, . in [Astrophysical fluid dynamics, Les Houches Session XLVII]{}, edited by J.-P. Zahn and J. Zinn-Justin, (Amsterdam: Elsevier), 399 Gough, D. O. 2007, . [AN]{}, [328]{}, 273 Grundahl, F., Kjeldsen, H., Christensen-Dalsgaard, J., Arentoft, T. & Frandsen, S. 2007, . [Comm. in Asteroseismology]{}, [150]{}, 300 Grundahl, F., Christensen-Dalsgaard, J., J[ø]{}rgensen, U. G., Kjeldsen, H., Frandsen, S. & Kj[æ]{}rgaard Rasmussen, P. 2011, . [J. Phys.: Conf. Ser.]{}, [271]{}, 012083 Harvey, J. W. 1988, . in [Proc. IAU Symposium No 123, Advances in helio- and asteroseismology]{}, edited by J. Christensen-Dalsgaard and S. Frandsen, (Dordrecht: Reidel), 497 Hekker, S., Kallinger, T., Baudin, F., De Ridder, J., Barban, C., Carrier, F., Hatzes, A. P., Weiss, W. W. & Baglin, A. 2009, . [A&A]{}, [506]{}, 465 Hekker, S., Gilliland, R. L., Elsworth, Y., Chaplin, W. J., De Ridder, J., Stello, D., Kallinger, T., Ibrahim, K. A., Klaus, T. C. & Li, J. 2011, . [MNRAS]{}, [414]{}, 2594 Houdek, G., Balmforth, N. J., Christensen-Dalsgaard, J. & Gough, D. O. 1999, . [A&A]{}, [351]{}, 582 Huber, D., Bedding, T. R., Stello, D., et al. 2010, . [ApJ]{}, [723]{}, 1607 Huber, D., Bedding, T. R., Stello, D., et al. 2011, . [ApJ]{}, in the press
Jiang, C., Jiang, B. W., Christensen-Dalsgaard, J., et al. 2011, [ApJ]{}, in the press
Kallinger, T., Mosser, B., Hekker, S., et al. 2010, . [A&A]{}, [522]{}, A1 Karoff, C., Metcalfe, T. S., Chaplin, W. J., Elsworth, Y., Kjeldsen, H., Arentoft, T. & Buzasi, D. 2009, . [MNRAS]{}, [399]{}, 914 Kjeldsen, H., Christensen-Dalsgaard, J., Handberg, R., Brown, T. M., Gilliland, R. L., Borucki, W. J. & Koch, D. 2010, . [AN]{}, [331]{}, 966 Koch, D. G., Borucki, W. J., Basri, G., et al. 2010, . [ApJ]{}, [713]{}, L79 Lamb, H. 1909, . [Proc. London Math. Soc.]{}, [7]{}, 122 L[é]{}ger, A., Rouan, D., Schneider, J., et al. 2009, . [A&A]{}, [506]{}, 287 Lissauer, J. J., Fabrycky, D. C., Ford, E. B., et al. 2011, . [Nature]{}, [470]{}, 53 Metcalfe, T. S., Monteiro, M. J. P. F. G., Thompson, M. J., et al. 2010, . [ApJ]{}, [723]{}, 1583 Michel, E., Baglin, A., Auvergne, M., et al. 2008, . [Science]{}, [322]{}, 558 Miglio, A., Montalb[á]{}n, J., Baudin, F., Eggenberger, P., Noels, A., Hekker, S., De Ridder, J., Weiss, W. & Baglin, A. 2009, . [A&A]{}, [503]{}, L21 Miglio, A., Brogaard, K., Stello, D., et al. 2011, . [MNRAS]{}, in the press .
Molenda-[Ż]{}akowicz, J., Bruntt, H., Sousa, S., et al. 2010, . [AN]{}, [331]{}, 981 Molenda-Żakowicz, J., Latham, D. W., Catanzaro, G., Frasca, A. & Quinn, S. N. 2011, . [MNRAS]{}, [412]{}, 1210 Montalb[á]{}n, J., Miglio, A., Noels, A., Scuflaire, R. & Ventura, P. 2010, . [ApJ]{}, [721]{}, L182 Mosser, B., Belkacem, K., Goupil, M.-J., et al. 2010, . [A&A]{}, [517]{}, A22 Mosser, B., Belkacem, K., Goupil, M. J., et al. 2011a, . [A&A]{}, [525]{}, L9 Mosser, B., Elsworth, Y., Hekker, S., et al. 2011b, . [A&A]{}, in the press
Mosser, B., Barban, C., Montalb[á]{}n, J., et al. 2011c, . [A&A]{}, [532]{}, A86 Østensen, R. H., Bloemen, S., Vu[č]{}kovi[ć]{}, M., Aerts, C., Oreiro, R., Kinemuchi, K., Still, M. & Koester, D. 2011, . [ApJ]{}, [736]{}, L39 Queloz, D., Bouchy, F., Moutou, C., et al. 2009, . [A&A]{}, [506]{}, 303 Stello, D., Bruntt, H., Preston, H. & Buzasi, D. 2008, . [ApJ]{}, [674]{}, L53 Stello, D., Basu, S., Bruntt, H., et al. 2010, . [ApJ]{}, [713]{}, L182 Stello, D., Meibom, S., Gilliland, R. L., et al. 2011a, . [ApJ]{}, [739]{}, 13 Stello, D., Huber, D., Kallinger, T., et al. 2011b, . [ApJ]{}, [737]{}, L10 Tassoul, M. 1980, . [ApJS]{}, [43]{}, 469 Unno, W., Osaki, Y., Ando, H., Saio, H. & Shibahashi, H. 1989, [Nonradial Oscillations of Stars, 2nd Edition]{} (Tokyo: University of Tokyo Press)
Uytterhoeven, K., Briquet, M., Bruntt, H., et al. 2010, . [AN]{}, [331]{}, 993 Vandakurov, Yu. V. 1967, . [AZh]{}, [44]{}, 786 (English translation: [Soviet Ast.]{}, [11]{}, 630)
Walker, G., Matthews, J., Kuschnig, R., et al. 2003, . [PASP]{}, [115]{}, 1023 White, T. R., Bedding, T. R., Stello, D., Christensen-Dalsgaard, J., Huber, D. & Kjeldsen, H. 2011, . [ApJ]{}, in the press
Winn, J. N., Matthews, J. M., Dawson, R. I., et al. 2011, . [ApJ]{}, [737]{}, L18
[^1]: see [http://astro.phys.au.dk/KASC/]{}.
[^2]: Note the relation of $N$ to convective instability: in convectively unstable regions $N^2 < 0$.
[^3]: This also places heavy constraints on the numerical precision of the calculation of the oscillations. In the numerical calculations illustrated here I used 19,200 mesh points, distributed to match the asymptotic properties of the eigenfunctions, which secured reasonable precision of the results (Christensen-Dalsgaard et al., in preparation).
[^4]: These can be treated with a proper JWKB analysis [e.g., @Gough2007]; the effect, however, would essentially just be to change the phases of the solution, with no qualitative change in the behaviour found here.
[^5]: For purely acoustic modes extending over the whole star, this follows from an analysis of the oscillations near the centre [e.g., @Gough1993]. In the present case of a red giant, where the lower turning point is in a region outside the compact core, this behaviour is in fact not fully understood.
[^6]: Since the integral in Eq. (\[eq:delnu\]) is dominated by the outer layers, $\omega^{\rm (p)} \simeq 2 \Delta \nu$.
[^7]: Regrettably, PLATO failed to get selected in October 2011; it remains to be seen whether it will enter into the competition for a later selection.
[^8]: Stellar Observations Network Group
---
abstract: 'Agent-based models are a powerful tool for studying the behaviour of complex systems that can be described in terms of multiple, interacting “agents”. However, because of their inherently discrete and often highly non-linear nature, it is very difficult to reason about the relationship between the state of the model, on the one hand, and our observations of the real world on the other. In this paper we consider agents that have a discrete set of states and that, at any instant, act with a probability that may depend on the environment or the state of other agents. Given this, we show how the mathematical apparatus of quantum field theory can be used to reason probabilistically about the state and dynamics of the model, and describe an algorithm to update our belief in the state of the model in the light of new, real-world observations. Using a simple predator-prey model on a 2-dimensional spatial grid as an example, we demonstrate the assimilation of incomplete, noisy observations and show that this leads to an increase in the mutual information between the actual state of the observed system and the posterior distribution given the observations, when compared to a null model.'
author:
- |
Daniel Tang\
Leeds Institute for Data Analytics\
University of Leeds\
Leeds, UK\
`[email protected]`\
bibliography:
- 'references.bib'
title: 'Data assimilation in Agent-based models using creation and annihilation operators'
---
Introduction
============
When modelling real-world phenomena we often use the model to make forecasts, only to be later faced with new observations that weren’t available at the time of the forecast. This leads to the problem of how to “assimilate” the new observations into an updated forecast and, perhaps, another forecast for a later time. There exists an extensive literature on this problem as applied to weather forecasting models[@kalnay2003atmospheric][@carrassi2018data] which describes how the problem can be reduced to an application of Bayes’ rule: $$P(s|\Omega) \propto P(\Omega|s)P(s)
\label{Bayes}$$ Here, $P(s)$ is our original forecast (this is a probability distribution since there is uncertainty in the forecast), $P(\Omega|s)$ is the likelihood of the observation, $\Omega$, given a forecast, $s$, and $P(s|\Omega)$ is our updated forecast. However, the algorithms developed to do this in weather forecasting models depend on the model’s governing equations having some degree of smoothness, a condition not generally satisfied by agent-based models.
If there are only a very small number of observations, particle filtering or Markov Chain Monte Carlo methods may have some success[@russell2009artificial], but as the number of observations increases, these methods tend to fail quickly. This is because as the number of observations increases, the proportion of the model’s phase-space that has non-zero likelihood often shrinks exponentially, so if the dimensionality of the phase-space is not trivially small, finding a non-zero start point for a Markov chain becomes exponentially harder. Similarly for particle filters: the number of particles needed to prevent all particles ending up with zero probability increases exponentially with the number of observations.
For these reasons, the problem of data assimilation in agent-based models remains an unsolved problem, and this imposes a severe restriction on the utility of these models when applied to real world problems.
Quantum field theory applied to agent-based models
==================================================
In order to apply Bayes’ rule to agent-based models (ABMs) we must first devise a way to represent a probability distribution over the possible states of an agent-based model. The task is complicated somewhat by the fact that, in general, the exact number of agents in the model is unknown and variable, so the set of model states must span models with any number of agents. Luckily, we can use the mathematical formalism of quantum field theory to do this. In a quantum field there are an unknown number of interacting particles and the quantum field is represented as a quantum amplitude over the possible configurations of particles; to apply this to ABMs, we just need to interpret the *particles* as agents and replace the *quantum amplitudes* with probabilities.
The creation operator
---------------------
Let $\emptyset$ represent the empty model state (i.e. the state with no agents present), and let $a^\dag_i$ be an operator on model states that has the effect of adding an agent in state $i$ to the model state; this is called the *creation operator*. So, for example, $a_i^\dag\emptyset$ is the state with a single agent in state $i$. Since the result of a creation operator is itself a state, we can apply another creation operator, for example, $a_j^\dag a_i^\dag\emptyset$ to represent a state with two agents, one in state $i$ and one in state $j$, or even $a_i^{\dag 2}\emptyset$ to represent a state with two agents in state $i$. We can go on applying operators in this way to construct any desired state.
If we now allow states to be multiplied by real numbers and added together, a vector space is induced which can represent probability distributions over model states. For example $0.5a_i^\dag\emptyset + 0.5a_j^\dag\emptyset$ would represent a probability distribution with $0.5$ probability that there’s a single agent in state $i$ and $0.5$ probability that there’s a single agent in state $j$ (with no probability that both agents are present at the same time).
Since each term in a probability distribution always ends in the empty state, $\emptyset$, we can just as easily consider the additions and multiplications to be part of the operator, so the example distribution above could be equivalently written $0.5(a_i^\dag + a_j^\dag)\emptyset$. Given this, composition of operators works in the expected way, for example, $a_z^\dag\left( 0.5(a_i^\dag + a_j^\dag)\right)\emptyset = 0.5(a_z^\dag a_i^\dag + a_z^\dag a_j^\dag)\emptyset$.
With this representation, we need a way to express the fact that it doesn’t matter in what order the creation operators are applied, so $a_j^\dag a_i^\dag \emptyset = a_i^\dag a_j^\dag \emptyset$. Since this is true for all states, not just $\emptyset$, we can drop the $\emptyset$ and say[^1] $$a_i^\dag a_j^\dag - a_j^\dag a_i^\dag = 0$$ This form of equation is known as a *commutation relation* or *commutator* and has a shorthand notation: $$[a_i^\dag, a_j^\dag] = 0$$
The annihilation operator
-------------------------
Now let $a_i$ be the *annihilation operator* which has the following properties: $$a_i\emptyset = 0
\label{annihilation0}$$ $$[a_i,a_j] = 0$$ and $$[a_i,a_j^\dag] = \delta_{ij}
\label{annihilationcommute}$$ where $\delta_{ij}$ is the Kronecker delta function.
Given just these properties, we can show that $$a_ia_i^{\dag n}\emptyset = a_ia_i^\dag a_i^{\dag n-1}\emptyset = (a_i^\dag a_i + [a_i,a_i^\dag])a_i^{\dag n-1}\emptyset$$ This is a recurrence relationship which, using equations \[annihilation0\] and \[annihilationcommute\], can be solved to give $$a_ia_i^{\dag n}\emptyset = na_i^{\dag n-1}\emptyset$$ So, the annihilation operator has the effect of multiplying by the number of agents in a state, then removing one of those agents.
Notice that, using the commutation relations in this way we can transform any sequence of operations on $\emptyset$ into an equivalent form that contains no annihilation operators. We’ll call this the *reduced form* of the operator.
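These rules are easy to mechanise. The following Python sketch is purely illustrative (the representation and the function names are chosen for illustration only, they are not part of the formalism): a distribution is stored as a dictionary mapping occupation-number tuples to coefficients, i.e. the reduced form of an operator applied to $\emptyset$; `create` and `annihilate` implement $a_i^\dag$ and $a_i$, and the last lines check that $a_ia_i^{\dag 4}\emptyset = 4a_i^{\dag 3}\emptyset$.

```python
# A distribution over model states: keys are occupation-number tuples
# (number of agents in each agent state), values are coefficients.
S = 3                                    # number of distinct agent states (illustrative)
empty = {(0,) * S: 1.0}                  # the empty state, "emptyset"

def create(i, dist):
    """Apply the creation operator a_i^dagger: add one agent in state i."""
    out = {}
    for occ, c in dist.items():
        occ2 = list(occ); occ2[i] += 1
        out[tuple(occ2)] = out.get(tuple(occ2), 0.0) + c
    return out

def annihilate(i, dist):
    """Apply a_i: multiply by the occupation number of state i, then remove one agent."""
    out = {}
    for occ, c in dist.items():
        if occ[i] == 0:                  # a_i on a state with no agents in state i gives zero
            continue
        occ2 = list(occ); occ2[i] -= 1
        out[tuple(occ2)] = out.get(tuple(occ2), 0.0) + occ[i] * c
    return out

# check: a_0 (a_0^dagger)^4 emptyset = 4 (a_0^dagger)^3 emptyset
state = empty
for _ in range(4):
    state = create(0, state)
print(annihilate(0, state))              # expected: {(3, 0, 0): 4.0}
```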
Equations of motion of a probabilistic ABM
==========================================
Armed with just the creation and annihilation operators we now show that if the behaviour of agents in an ABM can be described as a set of propensities to act and interact at any instant then we can transform the ABM into a “Hamiltonian” operator, containing only addition, multiplication, creation and annihilation operators. The Hamiltonian operator turns a probability distribution over ABM states into a rate of change of that distribution. This is significant in that it allows us to express the equations of motion of a probability distribution as an operator, and so describe probabilistically how the ABM evolves through time. That is, we’ve defined a “probabilistic ABM” that captures all the behaviour of the original ABM in a smooth, differentiable way which, as we’ll show, makes it much easier to perform inference on the ABM.
To do this, start with the definition of an agent’s behaviour: this must be representable as a set of actions and interactions. An action consists of a pair of functions, $(\rho(i), A(i))$ where $\rho$ is a rate per unit time and $A$ is a set of agents. An agent in state $i$ is said to have behaviour $(\rho(i), A(i))$ if in any infinitesimal time-slice, $\delta t$, it replaces itself by the agents in $A(i)$ with probability $\rho(i) \delta t$. An interaction consists of a pair of functions, $(\rho(i,j), A(i,j))$ such that an agent in state $i$, in the presence of another agent in state $j$ replaces both itself and the other agent with $A(i,j)$ with probability $\rho(i,j)\delta t$. Although this is not a common way to think about agent-based models, a very large class of models can easily be described in this way.
Without loss of generality, we assume all agents in a model have the same set of behaviours (different behaviours among agents can be achieved by putting an “agent type” marker in the agent’s state and making the behaviour rates zero for certain agent types).
Consider now an action, $(\rho(i), A(i))$ and let $[k_{i1}\dots k_{in_i}] = A(i)$ be the results of the action. The Hamiltonian for this action would be $$H_{(\rho, A)} = \rho(i)\left(\prod_{k\in A(i)} a_k^\dag - a_i^\dag\right)a_i$$ The product term expresses the rate of increase in probability in the state resulting from the action, while the $a_i^\dag$ term expresses the (equal in magnitude) rate of decrease in probability in the initial state due to the agent being removed when it acts. Because the annihilation operator, $a_i$, multiplies by the number of agents in a given state, the rate of change is also multiplied if there are multiple agents to which the behaviour applies (or multiplied by zero if there are no applicable agents).
Now consider an interaction $(\rho(i,j), I(i,j))$. The Hamiltonian for this interaction would be $$H_{(\rho, I)} = \rho(i,j)\left(\left(\prod_{k\in I(i,j)} a_k^\dag\right) - a_i^\dag a_j^\dag\right)a_ja_i$$ The double annihilation operator, $a_ja_i$, has the effect of multiplying by the number of $i,j$ agent pairs so that each agent in state $i$ gets a chance to interact with each agent in state $j$. This also works when $j=i$, since $a_i^2a_i^{\dag n}\emptyset = n(n-1)a_i^{\dag n-2}\emptyset$, and the number of $i,i$ pairs is $n(n-1)$.
This pattern can easily be extended to interactions between three or more agents, but in this paper we’ll consider only binary interactions.
If an agent has more than one behaviour, the Hamiltonian is simply the sum of the individual behaviours, so in this way we can build a Hamiltonian for a whole ABM. So, an ABM whose agents have a set of action behaviours $\left\{(\rho_1(i), A_1(i)) \dots (\rho_\alpha(i), A_\alpha(i))\right\}$ and a set of interaction behaviours $\left\{(\rho_1(i,j), I_1(i,j)) \dots (\rho_\beta(i,j), I_\beta(i,j))\right\}$ has the Hamiltonian $$H = \sum_{n=1}^\alpha\sum_i \rho_n(i)\left(\left(\prod_{k\in A_n(i)} a_k^\dag\right) - a_i^\dag\right)a_i
+ \sum_{m=1}^\beta \sum_i \sum_j \rho_m(i,j)\left(\left(\prod_{k\in I_m(i,j)} a_k^\dag\right) - a_i^\dag a_j^\dag\right)a_ja_i$$ Although the number of terms in the Hamiltonian can become very large, we’ll see later that we only ever need to deal computationally with the much smaller commutation relations $[X,H]$.
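To make the action of the Hamiltonian concrete, the sketch below (again illustrative and self-contained; it applies the behaviour rules directly to occupation-number states rather than manipulating operators symbolically, and the two-state "predator-prey" example at the end uses arbitrary illustrative rates with movement omitted) computes $H\psi$ for a distribution $\psi$ stored as a dictionary.

```python
def apply_hamiltonian(dist, actions, interactions):
    """
    dist:         {occupation tuple: probability}
    actions:      list of (i, rate, products) -- an agent in state i acts at
                  'rate' per unit time and is replaced by the states in 'products'
    interactions: list of (i, j, rate, products) -- an (i, j) pair interacts at
                  'rate' per unit time and both agents are replaced by 'products'
    Returns d(psi)/dt as a dictionary of the same form.
    """
    ddt = {}
    def add(occ, x):
        ddt[occ] = ddt.get(occ, 0.0) + x
    for occ, p in dist.items():
        for i, rate, products in actions:
            flux = rate * occ[i] * p                           # a_i multiplies by n_i
            if flux == 0.0:
                continue
            new = list(occ); new[i] -= 1
            for k in products:
                new[k] += 1
            add(tuple(new), flux)                              # probability flows into the product state
            add(occ, -flux)                                    # ... and out of the current state
        for i, j, rate, products in interactions:
            pairs = occ[i] * (occ[j] - (1 if i == j else 0))   # pair count produced by a_j a_i
            flux = rate * pairs * p
            if flux == 0.0:
                continue
            new = list(occ); new[i] -= 1; new[j] -= 1
            for k in products:
                new[k] += 1
            add(tuple(new), flux)
            add(occ, -flux)
    return ddt

# toy example: one prey (state 0) and one predator (state 1)
dist = {(1, 1): 1.0}
actions = [(0, 0.1, []), (0, 0.15, [0, 0])]           # prey death; prey reproduction
interactions = [(1, 0, 0.5, [1, 1])]                  # predator eats prey and reproduces
print(apply_hamiltonian(dist, actions, interactions))
```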
Given the Hamiltonian, the time evolution of a probability distribution over ABM states, $\psi$, is by definition $$\frac{d\psi}{dt} = H\psi$$ This has the analytical solution $$\psi_t = e^{tH}\psi_0
\label{timeoperator}$$ where $\psi_0$ is the initial state, $\psi_t$ is the state at time $t$ and the exponential of an operator, $tH$, can be defined as $$e^{tH} = \sum_{n=0}^\infty \frac{t^nH^n}{n!}$$
Equation \[timeoperator\] gives us a prior forecast which can be inserted into Bayes’ rule (equation \[Bayes\]) in order to do data assimilation: $$P(s_{t}|\Omega_{t}) = \frac{P(\Omega_{t}|s_{t})P(e^{tH}\psi_0 = s_{t})}{\sum_{s}P(\Omega_{t}|s)P(e^{tH}\psi_0 = s)}
\label{assimilation}$$ where $s_{t}$ is a state at time $t$, $\Omega_{t}$ are the observations at time $t$ and $\psi_0$ is the probability distribution of the ABM at time $t=0$ given all observations up to that time.
So, we have a succinct, mathematical solution to the problem of data assimilation in an ABM. We now need to translate equation \[assimilation\] into a numerical algorithm that will execute in a reasonable time.
The Deselby distribution
========================
As a first step towards a practical numerical algorithm, we introduce the *Deselby distribution* which we define as $$D_{i\lambda\Delta} = \sum_{k=0}^\infty \frac{(k)_\Delta \lambda^{k-\Delta} e^{-\lambda}}{k!} a_i^{\dag k}\emptyset$$ where $\Delta$ is an integer, $\lambda$ is a real number and $(k)_\Delta$ denotes the $\Delta^{th}$-order falling factorial of $k$.
The Deselby distributions have the useful property that $$a_i^\dag D_{i\lambda\Delta} = D_{i\lambda(\Delta+1)}$$ and so $$a_i^{\dag \Delta}D_{i\lambda 0} = D_{i\lambda\Delta}
\label{deselbycreate}$$
It can also be seen that $$a_i D_{i\lambda0} = \lambda D_{i\lambda0}
\label{deselbyannihilate}$$ Now imagine a product of Deselby distributions over all agent states $$D_\Lambda = \prod_i D_{i\lambda_i0}$$ so that, for any $i$ $$a_iD_\Lambda = \lambda_iD_\Lambda$$ So, in the same way as with $\emptyset$, any sequence of operations on $D_\Lambda$ can be reduced to a sequence of creation operations on $D_\Lambda$, i.e. a reduced form, and we can consider $D_\Lambda$ as a “ground” state in just the way we have been using $\emptyset$ so far.
So, if we express a probability distribution in terms of operators on $D_\Lambda$ we can apply the Hamiltonian to this to get a rate of change, again in terms of $D_\Lambda$. This will be much more computationally efficient than dealing directly with operators on $\emptyset$.
Note also that when $\Delta=0$, the Deselby distribution is just the Poisson distribution, so it is particularly convenient if we can express our prior beliefs about the number of agents in each state of an ABM in terms of Poisson distributions.
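As a quick check of equation \[deselbyannihilate\] in this Poisson case: the coefficient of $a_i^{\dag k}\emptyset$ in $D_{i\lambda0}$ is $\lambda^k e^{-\lambda}/k!$, so applying $a_i$ term by term and using $a_ia_i^{\dag k}\emptyset = ka_i^{\dag k-1}\emptyset$ gives $$a_i D_{i\lambda0} = \sum_{k=1}^\infty \frac{\lambda^{k} e^{-\lambda}}{k!}\, k\, a_i^{\dag k-1}\emptyset
= \lambda \sum_{k'=0}^\infty \frac{\lambda^{k'} e^{-\lambda}}{k'!}\, a_i^{\dag k'}\emptyset = \lambda D_{i\lambda0} \; ,$$ with $k' = k-1$.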
Data assimilation with Deselby distributions
--------------------------------------------
As a concrete example, suppose there are a number of identical agents moving around a 2-dimensional grid, and we observe the number of agents in the $i^{th}$ grid-square. To make this slightly more general, suppose our method of observation may be imperfect and there is a probability, $r$, that an agent is detected given that it is there, such that the likelihood function is just the binomial distribution $$P(m|k) = {k \choose m}r^m(1-r)^{k-m}$$ where $m$ is the observed number of agents and $k$ is the actual number present. Appendix \[binomialderivation\] shows that, if our prior is a Deselby distribution $D_{i\lambda\Delta}$, then the posterior is given by $$P \propto a_i^{\dag m}a_i^m Lg_{ir} D_{i\lambda\Delta}
\label{binomialposterior}$$ Where we introduce the operator $Lg_{ir}$ which has the properties $$[Lg_{ir}, a_i^\dag] = -ra_i^\dag Lg_{ir}$$ $$[Lg_{ir}, a_i] = \frac{ra_i}{1-r} Lg_{ir}$$ and $$Lg_{ir}D_\Lambda = D_{\Lambda'}$$ where $\Lambda'$ is $\Lambda$ with the $i^{th}$ component, $\lambda_i$, replaced by $(1-r)\lambda_i$.
Since equation \[binomialposterior\] is true for any Deselby distribution, it is also true for any state described as an operator on $D_\Lambda$. So, given $\psi_0$ (a distribution at time $t=0$ expressed as an operator on $D_\Lambda$) and an observation of $m$ agents in state $i$ at time $t$, the posterior distribution at time $t$ is given by $$\psi_t \propto a_i^{\dag m}a_i^m Lg_{ir}e^{tH}\psi_0
\label{deselbyposterior}$$
Multiple observations $\Omega = \left\{(i_1,m_1)...(i_n,m_n)\right\}$ can be treated by applying operators for each observation $$\psi_t \propto \left(\prod_{(i,m)\in\Omega} a_i^{\dag m}a_i^m Lg_{ir}\right)\,e^{tH}\psi_0
\label{deselbyposterior}$$
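It is worth spelling out what equation \[binomialposterior\] gives in the simplest case of a Poisson prior, $\Delta = 0$ (a consistency check using only the properties above): $Lg_{ir}$ rescales $\lambda$ to $(1-r)\lambda$, the $m$ annihilation operators then contribute a constant factor $\left((1-r)\lambda\right)^m$ which is absorbed by the normalisation, and the $m$ creation operators raise $\Delta$ from $0$ to $m$, so that $$a_i^{\dag m}a_i^m Lg_{ir} D_{i\lambda0} \propto D_{i,(1-r)\lambda,m} \; .$$ In other words, the posterior number of agents in state $i$ is the observed count $m$ plus a Poisson-distributed remainder with mean $(1-r)\lambda$, exactly as an elementary Bayesian update of a Poisson prior under a binomial detection model would give.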
Deselby state approximation
---------------------------
Even when working with Deselby distributions, as time progresses the size of the reduced form of the distribution will, sooner or later, become too large to handle. In order to maintain computational tractability, we’ll need to somehow approximate the distribution. There are many ways of doing this, but a common way is to minimise the Kullback-Leibler divergence between the original distribution and the approximation[@murphy2012machine] over a family of possible approximations. This has the effect of minimising the loss of information due to the approximation.
If $P\emptyset$ is the original distribution and $A\emptyset$ the approximation, and we choose to minimise $D_{KL}(P\emptyset||A\emptyset)$ the resulting approximation has a particularly simple form.
By definition the KL-divergence is given by $$D_{KL}(P\emptyset||A\emptyset) = \emptyset\cdot P^* \left(\log(P\emptyset) - \log(A\emptyset)\right)$$ where we define $$\log\left((c_1a^{\dag l_1} + \dots + c_na^{\dag l_n})\emptyset\right) = (\log(c_1)a_1^{\dag l_1} + \dots + \log(c_n)a_n^{\dag l_n})\emptyset$$ and $$\emptyset \cdot a^{\dag n} \emptyset = \delta_{n0}$$ where $\delta_{n0}$ is the Kronecker delta function.
$P^*$ is defined to be the *conjugate* of $P$, where the conjugation operator is defined as having the properties $$(a_i^{\dag n})^* = \frac{a_i^n}{n!}$$ $$(A+B)^* = A^* + B^*$$ and $$(AB)^* = A^*B^*$$ and $c^* = c$ if $c$ is a constant.
If we choose $A$ to be a Deselby distribution, and minimise the KL divergence with respect to $\Lambda$ we have, for all $i$ $$\forall_i: 0 = \emptyset\cdot P^*\frac{\partial \log(D_{i\lambda_i\Delta_i}\emptyset)}{\partial \lambda_i}$$ Solving this for $\lambda_i$ gives
$$\forall_i: \lambda_i = \emptyset \cdot e^{\sum_j a_j} a_iP\emptyset - \Delta_i
\label{mean}$$
Where the operator $$\emptyset\cdot e^{\sum_j a_j} = \emptyset\cdot \prod_j\sum_{n=0}^\infty \frac{a_j^n}{n!}$$ has the effect of summing over a distribution (i.e. summing the coefficients of an operator expressed in reduced form on $\emptyset$).
This can be interpreted as saying that the Deselby distribution, $D$, that minimises $D_{KL}(P\emptyset||D)$ is the one whose mean number of agents in state $i$ matches the mean of $P\emptyset$ for all $i$.
This gives us values for $\lambda_i$ given $\Delta_i$. To calculate $\Delta_i$, notice that for the KL-divergence to be finite, the support of $P\emptyset$ must be a subset of the support of $D$. So if $\Delta_i$ has value $n$, the probability that there are any less than $n$ agents in state $i$ in $P\emptyset$ must be zero. So, we can choose $\Delta_i$ to be the highest value for which there are definitely at least that many agents in state $i$ in $P\emptyset$.
A practical algorithm for data assimilation
===========================================
Using equations \[deselbyposterior\] and \[mean\] we can perform a data assimilation cycle that consists of the following steps: take an initial state $\psi_0$, integrate forward in time, apply the set of observations $\Omega = \left\{(i_1, m_1)\dots(i_n,m_n)\right\}$, then finally approximate to the Deselby distribution that minimises the KL-divergence.
Since we’ve just observed $m$ agents in state $i$ for each $(i,m)\in\Omega$ we can define the $\Delta$ of the approximation to be $$\Delta_i =
\begin{cases}
m & \text{if }(i,m)\in \Omega \\
0 & \text{otherwise}
\end{cases}
\label{deltaassimilate}$$ given this, $\lambda_i$ is given by $$\forall_i: \lambda_i = \frac{ \emptyset \cdot e^{\sum_j a_j} a_i \left(\prod_{(j,m)\in\Omega} a_j^{\dag m}a_j^{m} Lg_{jr}\right)e^{tH}\psi_0}{\emptyset \cdot e^{\sum_j a_j} \left(\prod_{(j,m)\in\Omega} a_j^{\dag m}a_j^{m} Lg_{jr}\right)e^{tH}\psi_0}
- \Delta_i
\label{lambdaassimilate}$$
To calculate this, we need an algorithm to calculate expressions of the form $$y = \emptyset \cdot e^{\sum_j a_j} X e^{tH}\psi_0$$ To increase numerical stability, we transform this to $$y = e^{-t}\emptyset \cdot e^{\sum_j a_j} X e^{t(I + H)}\psi_0$$ where $I$ is the identity operator. Expanding the exponential in $H$ gives $$y = e^{-t}\sum_{n=0}^\infty \frac{t^n}{n!} \emptyset \cdot e^{\sum_j a_j} Z_n \psi_0
\label{expansion}$$ where $Z_n = X(I+H)^n$. However, $$Z_{n+1} = Z_n(I+H) = Z_n + HZ_n + [Z_n,H]$$ but since $H$ is the Hamiltonian, and the Hamiltonian preserves probability mass, $$\emptyset \cdot e^{\sum_j a_j} HZ_n\psi_0=0$$ for all $Z_n$ and $\psi_0$ so we can replace $Z$ by $Z'$ in equation \[expansion\] where $Z'_0 = X$ and $$Z'_{n+1} = Z'_n + [Z'_n,H]
\label{zrecurrence}$$ without affecting the value of $y$.
This is very convenient computationally, because although $H$ is in general very large, $[Z'_n,H]$ is considerably smaller so we have a much more efficient way to calculate the terms of the expansion.
The computation can be made even more efficient by noting that $$\emptyset \cdot e^{\sum_j a_j} a_i^\dag X = \emptyset \cdot e^{\sum_j a_j} X$$ for all $i$. So when calculating each $Z'_n$ we can use the commutation relations to remove all creation operators by moving them to the left, leaving only annihilation operators.
In practice, we only need to calculate the first few terms in the expansion in \[expansion\] so we end up with $$y = e^{-t}\sum_{n=0}^N \frac{t^n}{n!} \emptyset \cdot e^{\sum_j a_j} Z'_n \psi_0
\label{sumexpansion}$$ The $Z'_n$ terms, can be calculated consecutively using equation \[zrecurrence\] with $$Z'_0 = a_i \prod_{(j,m)\in\Omega} a_j^{\dag m}a_j^{m} Lg_{jr}$$ for the numerator in equation \[lambdaassimilate\] and $$Z'_0 = \prod_{(j,m)\in\Omega} a_j^{\dag m}a_j^{m} Lg_{jr}$$ for the denominator.
We then just need to multiply $Z'_n$ by $\psi_0$ (i.e. the Deselby distribution from the previous assimilation cycle), convert to reduced form and sum the coefficients (in order to perform the $\emptyset \cdot e^{\sum_j a_j}$ operation)[^2].
Numerical experiments showed that, when calculating the terms in equation \[sumexpansion\], a good estimate of the error obtained by truncating the series at the $N^{th}$ term can be calculated as $$\epsilon \approx e^{-t}\left( e^{\bar{x}} - \sum_{n=0}^N\frac{\bar{x}^n}{n!}\right)
\label{truncation}$$ where $$\bar{x} = \frac{t}{N}\sum_{n=1}^N \left|\emptyset \cdot e^{\sum_j a_j} Z'_n \psi_0\right|^{\frac{1}{n}}$$ This can be calculated along with the terms of the expansion until the error estimate falls below some desired threshold.
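The resulting stopping rule is simple to implement. The sketch below is illustrative only: in the real computation the terms $\emptyset\cdot e^{\sum_j a_j}Z'_n\psi_0$ come from the operator algebra described above, whereas here they are supplied by whatever placeholder function the caller passes in (a made-up geometric sequence in the final line), and the tolerance is a relative tolerance on the accumulated sum.

```python
import math

def truncated_expansion(term, t, rel_tol=0.002, n_max=100):
    """
    Evaluate  y = exp(-t) * sum_n t^n/n! * term(n)   (equation sumexpansion),
    stopping once the truncation-error estimate of equation (truncation)
    falls below rel_tol times the accumulated sum.
    term(n) should return the scalar  emptyset . e^{sum_j a_j} Z'_n psi_0 .
    """
    y = term(0)                                  # n = 0 term: t^0/0! = 1
    nth_roots = []                               # the |y_n|^{1/n} values entering x-bar
    for n in range(1, n_max + 1):
        y_n = term(n)
        y += t**n / math.factorial(n) * y_n
        nth_roots.append(abs(y_n) ** (1.0 / n))
        x_bar = t * sum(nth_roots) / n
        partial = sum(x_bar**k / math.factorial(k) for k in range(n + 1))
        eps = math.exp(-t) * (math.exp(x_bar) - partial)
        if eps < rel_tol * abs(math.exp(-t) * y):
            break
    return math.exp(-t) * y, eps

# toy illustration with made-up terms y_n = 0.9^n (not from any real model)
print(truncated_expansion(lambda n: 0.9**n, t=0.5))
```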
A final computational simplification can be made by noticing that when calculating the numerator of equation \[lambdaassimilate\], observations far away from the $i^{th}$ grid-square will have very little influence on its posterior (depending on $t$ and the speed of information flow through the ABM).
This intuition is formalised in Appendix \[separationderivation\] where we show that if we let $\Phi(X)$ be the set of all states operated on by operator $X$, and let $$[^n X,H] = [...[[X,H],H]...,H]$$ denote the $n$-fold application of a commutation, then if the observations in $\Omega$ can be partitioned into two sets $\Omega_A$ and $\Omega_B$ in such a way that $$\Phi\left(\sum_{n=0}^N[^na_i\prod_{(m,j)\in\Omega_A}a_j^{\dag m}a_j^m,H]\right) \cap \Phi\left(\sum_{n=0}^N[^n\prod_{(m,j)\in\Omega_B}a_j^{\dag m}a_j^m,H]\right) = \emptyset$$ (where $\emptyset$ above is the empty set, not the ground state) we can factorise out the effect of all observations in $\Omega_B$ (to $N^{th}$-order accuracy) in both the numerator and denominator so that they cancel and we can effectively remove the observations in $\Omega_B$ (note that in the above equation we’ve removed the $Lg_{jr}$ operator as it has no effect under the $\Phi$ operation).
Spatial predator-prey example model
===================================
In order to provide a numerical demonstration of the above techniques, we simulated a 32x32 grid of agents. Each agent could be either a “predator” or a “prey”. At any time a prey had a propensity to move to an adjacent grid-square (i.e. up, down, left or right), to die or to give birth to another prey on an adjacent grid-square. Predators also had propensities to move to adjacent grid-squares or die, but in the presence of a prey on an adjacent grid-square, they also had a propensity to eat the prey and reproduce into an adjacent grid-square. Perhaps surprisingly, even a model with such a simple set of behaviours exhibits quite complex emergent properties that are difficult to predict. The rate per unit time of each behaviour is shown in table \[rates\].
Agent type Description Rate per unit time
----------- ----------------------------------- --------------------
Prey death 0.1
reproduction 0.15
movement 1.0
Predator death 0.1
predation/reproduction (per prey) 0.5
movement 1.0
: The rates of each behaviour in the predator-prey model[]{data-label="rates"}
A non-probabilistic, forward simulation of the ABM was performed in order to create simulated observation data. The initial state of the simulation was constructed by looping over all grid-squares and choosing the number of predators in that square from a Poisson distribution with rate parameter $\lambda = 0.03$ and the number of prey from a Poisson distribution with $\lambda = 0.06$. The model was then simulated forward in time and at time intervals of 0.5 units an observation of the state of the simulation was recorded. An observation was performed as follows: for each grid-square, with 0.02 probability, the number of prey was observed. If an observation was made, each prey in that grid-square was counted with a 0.9 probability. The same procedure was then repeated for predators.
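The initial-state sampling and the observation process just described can be sketched as follows (Python/NumPy; the grid size, rate parameters and probabilities are those quoted in the text, but the function and variable names are our own and the forward ABM dynamics itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 32

# Initial state: independent Poisson counts per grid-square.
predators = rng.poisson(0.03, size=(GRID, GRID))
prey = rng.poisson(0.06, size=(GRID, GRID))

def observe(counts, p_square=0.02, p_detect=0.9):
    """Each grid-square is observed with probability p_square; each agent in an
    observed square is counted independently with probability p_detect.
    Returns a list of ((row, col), observed_count) pairs."""
    observed = rng.random(counts.shape) < p_square
    return [((int(r), int(c)), int(rng.binomial(counts[r, c], p_detect)))
            for r, c in zip(*np.nonzero(observed))]

prey_observations = observe(prey)
predator_observations = observe(predators)
```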
Once the observations were generated, a sequence of data assimilation cycles was performed as described in equations \[deltaassimilate\] and \[lambdaassimilate\], starting from the Deselby ground state $D_\Lambda$ with $\lambda_i = 0.03$ for all $i$ representing a predator and $\lambda_i = 0.06$ for all $i$ representing a prey. The terms in the expansion were calculated until the expected truncation error, calculated using equation \[truncation\], fell below 0.2% of the sum of terms up to that point.
Results
-------
The mutual information between the assimilated state and the real state (where the real state is represented by a Dirac delta distribution) was calculated at the end of each assimilation cycle. In order to distinguish between the information accumulated by the data assimilation cycles and the information contained in the current time’s observation and the prior at the start of the simulation, we also calculated a reference value equal to the mutual information between $$\frac{\left(\prod_{(j,m)\in\Omega_t} a^{\dag m}_ja_j^mLg_{jr}\right)D_\Lambda}{\emptyset \cdot e^{\sum_i a_i}\left(\prod_{(j,m)\in\Omega_t} a^{\dag m}_ja_j^mLg_{jr}\right)D_\Lambda}$$ and the real state (i.e. the posterior given only the initial prior and the current window’s observations).
Figure \[information\] shows the difference between the mutual information of the assimilated state and the reference value for each assimilation cycle of 10 separate simulations. The upward trend in this plot shows that the assimilation cycle is successfully accumulating information about the real state.
Figure \[snapshots\] shows snapshots of the real state of the non-probabilistic simulation superimposed on the assimilated state, for three of the simulations after 64 assimilation cycles. Inspection of these snapshots seems to suggest that the mutual information tends to be higher when agents are arranged in clusters rather than being more scattered, which is not surprising. Also, when a grid-square containing an agent is observed but (because of the 0.9 probability of observation) the number of observed agents happens to be zero, this will reduce the mutual information.
Previous work
=============
Applying the methods of quantum field theory to other domains is not new; for example Dodd and Ferguson[@dodd2009many] suggest applying it to epidemiology, O’Dwyer and Green[@o2010field] apply it to simulations of biodiversity, while Santos et al.[@santos2015fock] apply it to enzymatic reactions. Abarbanel[@abarbanel2013predicting] comes closest to our work in that he talks about quantum field theory in the context of data assimilation. However, to our knowledge, nobody has previously developed quantum field theory to apply to such a general class of ABMs, nor have they developed it into an algorithm for data assimilation. Also, the various numerical strategies presented here that make the technique computationally tractable, including our presentation of the Deselby distribution, are, we believe, new.
Discussion and Conclusion
=========================
This paper presents some early results of a promising new technique in data assimilation for agent-based models. However, there’s plenty of work left to be done and many opportunities for further development. For example, more research into flexible ways of approximating the state, or computationally efficient ways of representing a state as a data structure, would be worthwhile. The example model presented here is very small and simple; the next step would be to apply this technique to larger and more complex models. In addition, in this paper we only consider agents with a discrete set of states; the extension to states including continuous variables is fairly straightforward and work on this is currently under way.
At present there are very few techniques that can be used to successfully perform data assimilation in agent based models. We believe the results presented here provide a significant first step towards filling that gap and hope that it will form the foundation of a fruitful approach to the problem.
The code used to create the results in this paper can be found at <https://github.com/deselby-research/ProbabilisticABM>, where you’ll also find a library of code we’re developing in order to experiment with new techniques in this area.
Appendix: Binomial observation {#binomialderivation}
==============================
The product of a Deselby distribution and a binomial distribution is given by $$P(k|m) \propto {k \choose m}r^m(1-r)^{k-m}\frac{(k)_\Delta \lambda^{k-\Delta} e^{-\lambda}}{k!}$$ rearranging and dropping constants $$P(k|m) \propto (1-r)^\Delta \, (k)_m \, \frac{(k)_\Delta ((1-r)\lambda)^{k-\Delta} e^{-(1-r)\lambda}}{k!}
\label{posterior}$$ expanding the product of falling factorials $$P(k|m) = (1-r)^\Delta \sum_{q=0}^{\min(m,\Delta)} \frac{m!\Delta!\left((1-r)\lambda\right)^{(m-q)}}{q!(m-q)!(\Delta-q)!} \frac{(k)_{m + \Delta - q} ((1-r)\lambda)^{k-(m + \Delta - q)} e^{-(1-r)\lambda}}{k!}$$ Expressing this as an operator on $\emptyset$ $$P \propto \sum_{k=0}^\infty (1-r)^\Delta \sum_{q=0}^{\min(m,\Delta)} \frac{m!\Delta!\left((1-r)\lambda\right)^{(m-q)}}{q!(m-q)!(\Delta-q)!} \frac{(k)_{m + \Delta - q} ((1-r)\lambda)^{k-(m + \Delta - q)} e^{-(1-r)\lambda}}{k!}a^{\dag k}\emptyset$$ so $$P \propto (1-r)^\Delta \sum_{q=0}^{\min(m,\Delta)} \frac{m!\Delta!\left((1-r)\lambda\right)^{(m-q)}}{q!(m-q)!(\Delta-q)!} a^{\dag (m+\Delta-q)}D_{i((1-r)\lambda)\,0}$$ But, from the commutation relations, we have the identity $$a^m a^{\dag \Delta} = \sum_{q=0}^{\min(m,\Delta)} \frac{m!\Delta!}{q!(m-q)!(\Delta-q)!} a^{\dag \Delta-q}a^{m-q}$$ so $$P \propto (1-r)^\Delta a^{\dag m}a^m a^{\dag\Delta}D_{((1-r)\lambda)\,0}$$ Let $Lg_r$ be an operator that has the properties $$[Lg_r, a^\dag] = -ra^\dag Lg_r$$ $$[Lg_r, a] = \frac{ra}{1-r} Lg_r$$ and $$Lg_rD_{\lambda0} = D_{(1-r)\lambda0}$$ then $$Lg_ra^{\dag \Delta} = (1-r)^\Delta a^{\dag \Delta}Lg_r$$ So $$P \propto a_i^{\dag m}a_i^m Lg_{ir} D_{i\lambda\Delta}$$
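Because the commutation identity used above relies only on $[a,a^\dag]=1$, it can be checked numerically on a truncated occupation-number basis; the short sketch below (Python/NumPy, purely illustrative, with the truncation kept well above the occupation numbers probed) verifies it for small $m$ and $\Delta$.

```python
import numpy as np
from math import factorial
from numpy.linalg import matrix_power

K = 30                                   # basis truncation: states |0> .. |K-1>
adag = np.diag(np.ones(K - 1), -1)       # a†|k> = |k+1>
a = np.diag(np.arange(1.0, K), 1)        # a|k> = k|k-1>

def lhs(m, delta):
    return matrix_power(a, m) @ matrix_power(adag, delta)

def rhs(m, delta):
    out = np.zeros((K, K))
    for q in range(min(m, delta) + 1):
        coeff = (factorial(m) * factorial(delta)
                 / (factorial(q) * factorial(m - q) * factorial(delta - q)))
        out += coeff * matrix_power(adag, delta - q) @ matrix_power(a, m - q)
    return out

# Compare on the low-occupation block, away from the truncation boundary.
m, delta = 3, 2
block = slice(0, K - m - delta)
assert np.allclose(lhs(m, delta)[block, block], rhs(m, delta)[block, block])
```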
Appendix: Factorisation of compound observations {#separationderivation}
================================================
Consider an expression of the form $$\emptyset\cdot e^{\sum_j a_j} ABe^{tH} = \sum_{n=0}^\infty\frac{t^n}{n!}\emptyset\cdot e^{\sum_j a_j}ABH^n$$ where $A$, $B$ and $H$ are operators, and $H$ is a Hamiltonian.
Due to the form of a Hamiltonian, $\emptyset\cdot e^{\sum_j a_j}HX = 0$ for all $X$, so $$\emptyset\cdot e^{\sum_j a_j}ABH^n = \emptyset\cdot e^{\sum_j a_j}[^n AB, H]$$ where we introduce the notation $$[^n X,H] = [...[[X,H],H]...,H]$$ to denote the $n$-fold application of a commutation.
So $$\emptyset\cdot e^{\sum_j a_j} ABe^{tH} = \sum_{n=0}^\infty\frac{t^n}{n!}\emptyset\cdot e^{\sum_j a_j}[^nAB,H]$$ Now let $$C_n(A,B) = \sum_{m=0}^n {n \choose m}[^mA,H][^{n-m}B,H]$$ so $$[C_n(A,B),H] = \sum_{m=0}^n {n \choose m}\left([^mA,H][^{n-m+1}B,H] + [^{m+1}A,H][^{n-m}B,H]\right)$$ $$= \sum_{m=0}^n {n \choose m}[^mA,H][^{n+1-m}B,H] + \sum_{m'=1}^{n+1}{n \choose m'-1}[^{m'}A,H][^{n+1-m'}B,H]$$ $$= \sum_{m=0}^{n+1} {n+1 \choose m}\frac{n+1-m}{n+1}[^mA,H][^{n+1-m}B,H] + \sum_{m'=0}^{n+1}{n+1 \choose m'}\frac{m'}{n+1}[^{m'}A,H][^{n+1-m'}B,H]$$ $$= \sum_{m=0}^{n+1} {n+1 \choose m}[^mA,H][^{n+1-m}B,H]$$ So $$[C_n(A,B),H] = C_{n+1}(A,B)$$ But $$[AB,H] = A[B,H] + [A,H]B = \sum_{m=0}^1 {1 \choose m}[^mA,H][^{1-m}B,H] = C_1(A,B)$$ so $$[^n AB,H] = \sum_{m=0}^n {n \choose m}[^mA,H][^{n-m}B,H]$$ So $$\sum_{n=0}^\infty \frac{[^nAB,H]t^n}{n!}
= \sum_{n=0}^\infty \sum_{m=0}^n \frac{[^mA,H]t^m[^{n-m}B,H]t^{n-m}}{m!(n-m)!}
= \sum_{j=0}^\infty \frac{[^j A,H]t^j}{j!} \sum_{k=0}^\infty \frac{[^k B,H]t^k}{k!}$$ So $$\emptyset\cdot e^{\sum_j a_j} ABe^{tH}\psi D_\Lambda \approx \emptyset\cdot e^{\sum_j a_j}\sum_{j=0}^N \frac{[^j A,H]t^j}{j!} \sum_{k=0}^N \frac{[^k B,H]t^k}{k!}\psi D_\Lambda$$ where $\psi$ is a sequence of creation operators and $D_\Lambda$ is the Deselby ground state.
If we let $\Phi(X)$ be the set of all states operated on by operator $X$, then if $$\Phi\left(\sum_{n=0}^N[^nA,H]\right) \cap \Phi\left(\sum_{n=0}^N[^nB,H]\right) = \emptyset
\label{nointersect}$$ where $\emptyset$ above is the empty set (not the ground state) then $\psi$ can easily be factorised into $\psi_A\psi_B$ such that $$\emptyset\cdot e^{\sum_j a_j}\sum_{j=0}^N \frac{[^j A,H]t^j}{j!} \sum_{k=0}^N \frac{[^k B,H]t^k}{k!}\psi D_\Lambda = \emptyset\cdot e^{\sum_j a_j}\sum_{j=0}^N \frac{[^j A,H]t^j}{j!}\psi_A \sum_{k=0}^N \frac{[^k B,H]t^k}{k!}\psi_B D_\Lambda$$
But if $X$ consists only of annihilation operators (which we can arrange by using commutation relations to move all creation operators to the left and then removing them), then $$\emptyset\cdot e^{\sum_j a_j}XYD_0 = \left(\emptyset\cdot e^{\sum_j a_j}XD_0\right) \left(\emptyset\cdot e^{\sum_j a_j}YD_0\right) + \emptyset\cdot e^{\sum_j a_j}[X,Y]D_0$$ but if equation \[nointersect\] is satisfied then $$\left[\sum_{n=0}^N[^nA,H]\psi_A, \sum_{n=0}^N[^nB,H]\psi_B\right] = 0$$ irrespective of whether we strip the first term of creation operators, so, if \[nointersect\] is satisfied $$\emptyset\cdot e^{\sum_j a_j} ABe^{tH}\psi D_\Lambda \approx \left(\emptyset\cdot e^{\sum_j a_j}\sum_{j=0}^N \frac{[^j A,H]t^j}{j!}\psi D_\Lambda\right) \left(\emptyset\cdot e^{\sum_j a_j}\sum_{k=0}^N \frac{[^k B,H]t^k}{k!}\psi D_\Lambda\right)$$ where we’ve removed the factorisation of $\psi$ as it doesn’t affect the sum.
[^1]: To be precise, the 0 is interpreted as the “multiply by zero” operator, or alternatively as $0I$ where $I$ is the identity operator
[^2]: We can also sum coefficients when working with a Deselby ground state since the sum of the coefficients of a Deselby distribution, expressed as an operator on $\emptyset$, is 1
|
---
abstract: |
We present the results of an extensive campaign of coordinated X-ray (ROSAT) and UV (IUE) observations of the symbiotic star AG Dra during a long period of quiescence followed recently by a remarkable phase of activity characterized by two optical outbursts. The major optical outburst in June 1994 and the secondary outburst in July 1995 were covered by a number of target-of-opportunity observations (TOO) with both satellites. Optical photometry is used to establish the state of evolution along the outburst.
Our outburst observations are supplemented by a substantial number of X-ray observations of AG Dra during its quiescent phase between 1990 and 1993. Near-simultaneous observations at the end of 1992 are used to derive the spectral energy distribution from the optical to the X-ray range. The X-ray flux remained constant over this three-year quiescent phase. The hot component (i.e. X-ray emitting compact object) turns out to be very luminous: a blackbody fit to the X-ray data in quiescence with an absorbing column equal to the total galactic N$_{\rm H}$ in this direction gives (9.5$\pm$1.5)$\times$10$^{36}$ (D/2.5 kpc)$^2$ erg/s. This suggests that the compact object is burning hydrogen-rich matter on its surface even in the quiescent (as defined optically) state at a rate of (3.2$\pm$0.5)$\times$10$^{-8}$ (D/2.5 kpc)$^2$ M$_\odot$/yr. Assuming a steady state, i.e. burning at precisely the accretion supply rate, this high rate suggests a Roche-lobe-filling cool companion, though Bondi-Hoyle accretion from the companion wind cannot be excluded.
With the ROSAT observations we have discovered a remarkable decrease of the X-ray flux during both optical maxima, followed by a gradual recovery to the pre-outburst flux. In the UV these events were characterized by a large increase of the emission line and continuum fluxes, comparable to the behaviour of AG Dra during the 1980-81 active phase. The anticorrelation of X-ray/UV flux and optical brightness evolution is very likely due to a temperature decrease of the hot component. Such a temperature decrease could be the result of an increased mass transfer to the burning compact object, causing it to slowly expand to about twice its original size during each optical outburst.
author:
- 'J. Greiner, K. Bickert, R. Luthardt, R. Viotti, A. Altamore, R. González-Riestra[^1], R.E. Stencel'
date: 'Received May 3, 1996; accepted November 22, 1996'
title: |
The UV/X-ray emission of the symbiotic star AG Draconis\
during quiescence and the 1994/1995 outbursts[^2]
---
Introduction
============
Symbiotic stars are characterized by the simultaneous occurrence in an apparently single object of two temperature regimes differing by a factor of 30 or more. The spectrum of a symbiotic star consists of a late type (M-) absorption spectrum, highly excited emission lines and a blue continuum. Generally, symbiotic stars are interpreted as interacting binary systems consisting of a cool luminous visual primary and a hot compact object (white dwarf, subdwarf) as secondary component. Because of mass loss of the giant there is often a common nebulous envelope. Mass transfer from the cool to the hot component is expected. An extensive review on the properties of symbiotic stars can be found in Kenyon (1986).
Many symbiotic stars show outburst events at optical and UV wavelengths. Though several models have been proposed to explain these outbursts, it is the combination of quiescent properties and these outburst properties which make it difficult to derive a consistent picture for some symbiotics.
The symbiotic star AG Draconis (BD +67$^{\circ}$922) plays an outstanding role within this group of stars because of its high galactic latitude, its large radial velocity of v$_{\rm r}$=–148 km/s and its relatively early spectral type (K). AG Dra is probably a metal-poor symbiotic binary in the galactic halo.
Here, we use all available data to document the X-ray light curve of AG Dra over the past 5 years. In addition, we report on the results of the coordinated ROSAT/IUE campaign during the 1994/1995 outbursts. Preliminary results on the IUE and optical observations were given by Viotti (1994a, 1994b). After a description of the relevant previous knowledge of the AG Dra properties in the optical, UV and X-ray range in the remaining part of this paragraph, we present our observational results in paragraphs 2–4, discuss the various implications for the AG Dra system parameters in paragraph 5 and end with a summary in paragraph 6.
AG Dra: The optical picture
---------------------------
As in most symbiotic stars, the historical light curve of AG Dra is characterized by a sequence of active and quiescent phases (e.g. Robinson 1969, Viotti 1993). The activity is represented by 1–2 mag light maxima (commonly called [*outbursts*]{} or [*eruptions*]{}) frequently followed by one or more secondary maxima. It has been noted (Robinson 1969, Iijima 1987) that the major outbursts occur at $\approx$15 yr intervals. This recurrence period has continued with the recent major outbursts of 1981 and 1994. Iijima (1987) suggests that the major outbursts seem to occur at about the same orbital phase, shortly after the photometric maximum, i.e. shortly after the spectroscopic conjunction of the companion (hot component in front of the cool component). Occasionally, AG Dra also undergoes smaller-amplitude outbursts, such as those in February 1985 and January 1986.
Between the active phases AG Dra spends long periods (a few years to decades) at minimum light (V$\approx$9.8-10.0, Mattei 1995), with small (0.1 mag) semiregular photometric variations in B and V with pseudo-periods of 300-400 days (Luthardt 1990). However, in the U band regular variations with amplitudes of 1 mag and a period of 554 days have been discovered by Meinunger (1979), and confirmed by later observations (e.g. Kaler 1987, Hric 1994, Skopal 1994). This periodicity is associated with the orbital motion of the system, as confirmed by the radial velocity observations of Kenyon & Garcia (1986), Mikolajewska (1995) and Smith (1996). Because of the color dependence, these U band variations certainly reflect the modulation of the Balmer continuum emission. The photometric variability is continuous, suggesting periodic eclipses of an extended hot region by the red giant.
In June 1994 Granslo (1994) announced that AG Dra was starting a new active phase which was marked by a rapid brightening from V=9.9 to V=8.4 on June 14th, and to 8.1 on July 6-10, 1994. After July 1994 the brightness gradually declined, reaching the quiescent level (V$\approx$9.8) in November 1994. As in the 1981–82 and 1985–86 episodes, AG Dra underwent a secondary outburst in July 1995, reaching the light maximum of V=8.9 by the end of the month.
The optical spectrum of AG Dra has been extensively investigated, especially in recent years, during both the active phases and quiescence. The spectrum is typical of a symbiotic star, with a probably stable cool component which dominates the yellow-red region, and a largely variable “nebular" component with a strong blue-ultraviolet continuum and a rich emission line spectrum (e.g. Boyarchuk 1966). According to most authors the cool component is a K3 giant, which together with its large radial velocity (–148 km s$^{-1}$) and high galactic latitude (b$^{\rm II}$=+41$^{\circ}$), would place AG Dra in the halo population at a distance of about 1.2 kpc, 0.8 kpc above the galactic plane. While Huang (1994) suggested that the cool component of AG Dra may be of spectral type K0Ib, which would place it at a distance of about 10 kpc, Mikolajewska (1995), from a comparison with the near-infrared colors of M3 and M13 giants, concluded that the cool component is a bright giant with M$_{\rm bol}\approx$–3.5, placing AG Dra at about 2.5 kpc. Most recently, Smith (1996) performed a detailed abundance analysis and found a low metallicity (\[Fe/H\]=–1.3), a temperature of T$_{\rm eff}$=4300$\pm$100 K and log g = 1.6$\pm$0.3 consistent with the classification of AG Dra’s cool component as an early K giant. In the following we shall assume a distance of 2.5 kpc for the AG Dra system.
AG Dra: The UV picture
----------------------
The hot dwarf companion is a source of intense ultraviolet radiation which produces a rich high-temperature emission line spectrum and a strong UV continuum. AG Dra is in fact a very bright UV target which has been intensively studied with IUE (e.g. Viotti 1983, Lutz 1987, Kafatos 1993, Mürset 1991, Mikolajewska 1995).
Ultraviolet high-resolution spectra with the satellite revealed high-ionization permitted emission lines such as the resonance doublets of N, C and Si. The strongest emission line is He $\lambda$1640 which is composed of narrow and broad (FWHM = 6 Å, or equivalently $\approx$1000 km/s) components. The N line has a P Cygni absorption component displaced 120 km/s from the emission peak (Viotti 1983).
The origin of this feature is unclear. It can either arise in the red-giant wind ionized by the hot dwarf radiation, or in some low-velocity regions of the hot component’s wind.
The UV continuum and line fluxes are highly variable with the star’s activity. Viotti (1984) studied the IUE spectra of AG Dra during the major 1980–1983 active phase, and found that the outburst was most energetic in the ultraviolet with an overall rise of about a factor of 10 in the continuum, much larger than in the visual, and of a factor of 2–5 in the emission line flux. A large UV variation was also a characteristic of the minor 1985 and 1986 outbursts (e.g. Mikolajewska 1995).
AG Dra: The X-ray picture
-------------------------
First X-ray observations of AG Dra during the quiescent phase were done with the HEAO-2 satellite (Einstein Observatory) before the 1981-1985 series of eruptions (0.27 IPC counts/sec). The spectrum was found to be very soft (Anderson 1981). The data are consistent with a blackbody source of kT=0.016 keV (Kenyon 1988) in addition to the bremsstrahlung source (kT=0.1 keV) suggested by Anderson (1981). The X-ray temperature of $\approx$ 200 000 K is in fair agreement with the source temperature of $\approx$100000–150000 K inferred from IUE observations, although the blackbody radius deduced from IUE data is nearly a factor of 10 larger than the X-ray value (Kenyon 1988). Unfortunately, the major 1980 outburst was “lost" by the HEAO-2 satellite because of a failure of the high voltage power.
A comparison of the X-ray flux with the observed He $\lambda$4686 flux indicates that the luminosity in absorbed He$^+$ photons is larger by a factor of two (Kenyon 1988). It has been concluded that the X-rays are degraded by the surrounding nebula thus causing an underestimate of the actual X-ray luminosity.
EXOSAT was pointed at AG Dra four times during the 1985–86 minor active phase, which was characterized by two light maxima in February 1985 and January 1986. These observations revealed a large X-ray fading with respect to quiescence (Piro 1986), the source being at least 5–6 times weaker in the EXOSAT thin Lexan filter in March 1985, and not detected in February 1986 (Viotti 1995). Simultaneous IUE observations have, on the contrary, shown an increase of the continuum and emission line flux, especially of the high temperature N 1240 Å and He 1640 Å lines at the time of the light maxima. According to Friedjung (1988) this behaviour might be due to a temperature drop of a non-black body component, or to a continuous absorption of the X-rays shortwards of the N$^{+3}$ ionization limit. No eclipse of the X-ray source was found at phase 0.5 (beginning of November 1985) of the orbital motion of the AG Dra system, implying a limit in the orbit inclination. The weakness of the countrate in the EXOSAT Boron filter (filter \#6) during quiescence implies a low (2–3$\times$10$^5$ K) temperature of the source (Piro 1985).
[cccrcr]{} HJD & V & HJD & B & HJD & U \
&$\!$(mag)$\!$ & &$\!$(mag)$\!$ & & $\!$(mag)$\!$\
8127.3876 & 9.76 & 8127.3913 & 11.15 & 8127.3991 & 11.77\
8179.3537 & 9.85 & 8179.3592 & 11.25 & &\
8179.3537 & 9.85 & 8179.3613 & 11.26 & 8179.3696 & 11.66\
8274.6330 & 9.83 & 8274.6384 & 11.20 & 8274.6432 & 11.37\
8329.6163 & 9.76 & 8329.6212 & 11.10 & 8329.6265 & 11.32\
8332.5781 & 9.76 & 8332.5840 & 11.11 & 8332.5908 & 11.34\
8353.5254 & 9.69 & 8353.5317 & 11.06 & 8353.5380 & 11.36\
8359.5031 & 9.70 & 8359.5076 & 11.07 & 8359.5135 & 11.31\
8362.5344 & 9.72 & 8362.5425 & 11.07 & 8362.5498 & 11.33\
8409.4444 & 9.73 & 8409.4491 & 11.08 & 8409.4544 & 11.28\
8440.4711 & 9.71 & 8440.4755 & 11.09 & 8440.4809 & 11.43\
8494.3874 & 9.77 & 8494.3916 & 11.15 & 8494.3978 & 11.67\
8500.4155 & 9.74 & 8500.4208 & 11.11 & 8500.4275 & 11.64\
8681.4692 & 9.71 & 8681.4732 & 11.08 & 8681.4785 & 11.54\
8691.6060 & 9.73 & 8691.6099 & 11.08 & 8691.6153 & 11.59\
8747.4630 & 9.72 & 8747.4699 & 11.02 & 8747.4767 & 11.33\
8801.4260 & 9.75 & 8801.4301 & 11.08 & 8801.4362 & 11.35\
8803.4410 & 9.74 & 8803.4480 & 11.08 & 8803.4540 & 11.33\
8830.3997 & 9.85 & 8830.4040 & 11.19 & 8830.4107 & 11.41\
8843.3743 & 9.83 & 8843.3781 & 11.18 & 8843.3831 & 11.31\
9213.3667 & 9.41 & 9213.3746 & 10.40 & 9213.3818 & 9.89\
9214.4254 & 9.48 & 9214.4333 & 10.53 & 9214.0000 & 10.42\
9215.3892 & 9.54 & 9215.3981 & 10.62 & 9215.4061 & 10.28\
9217.3517 & 9.49 & 9217.3587 & 10.57 & 9217.3648 & 10.15\
9249.3329 & 9.63 & 9249.3385 & 10.83 & 9249.3448 & 10.56\
9250.3517 & 9.64 & 9250.3581 & 10.82 & 9250.3639 & 10.54\
9534.4792 & 8.89 & 9534.4863 & 9.56 & 9534.4931 & 8.60\
9535.4286 & 8.77 & 9535.4350 & 9.34 & 9535.4419 & 8.43\
9537.4377 & 8.67 & 9537.4436 & 9.22 & 9537.4494 & 8.32\
9541.4588 & 8.46 & 9541.4740 & 8.96 & 9541.4790 & 8.04\
9545.4204 & 8.55 & 9545.4266 & 9.02 & 9545.4327 & 8.15\
9568.4060 & 8.61 & 9568.4117 & 9.09 & 9568.4182 & 8.30\
9569.4107 & 8.62 & 9569.4172 & 9.11 & 9569.4229 & 8.28\
9599.3582 & 8.85 & 9599.3688 & 9.50 & 9599.3760 & 8.72\
9623.4056 & 9.06 & 9623.4124 & 9.78 & 9623.4187 & 9.07\
9625.4310 & 9.07 & 9625.4376 & 9.80 & 9625.4438 & 9.09\
9638.3671 & 9.17 & 9638.3732 & 9.92 & 9638.3791 & 9.19\
9639.3749 & 9.15 & 9639.3824 & 9.96 & 9639.3885 & 9.25\
9644.4397 & 9.19 & 9644.4462 & 9.97 & 9644.4544 & 9.20\
\[phot\]
[cccr]{} HJD & m$_{\rm pg}$ & HJD & m$_{\rm pg}$ \
& (mag) & & (mag)$\!$\
9002.317 & 10.6 & 9476.471 & 10.6\
9005.310 & 10.6 & 9476.491 & 10.5\
9027.258 & 10.6 & 9476.491 & 10.4\
9028.259 & 10.5 & 9480.480 & 10.6\
9029.263 & 10.7 & 9480.480 & 10.5\
9030.298 & 10.7 & 9480.480 & 10.5\
9031.303 & 10.5 & 9481.488 & 10.5\
9032.300 & 10.8 & 9481.488 & 10.4\
9041.276 & 10.8 & 9481.488 & 10.6\
9066.531 & 11.0 & 9482.487 & 10.5\
9094.472 & 11.0 & 9482.487 & 10.5\
9094.472 & 10.5 & 9482.487 & 10.4\
9098.500 & 11.0 & 9484.494 & 10.4\
9098.500 & 10.7 & 9484.494 & 10.4\
9099.486 & 11.0 & 9484.494 & 10.5\
9099.486 & 10.9 & 9486.456 & 10.5\
9101.507 & 11.0 & 9486.456 & 10.3\
9101.507 & 10.9 & 9486.456 & 10.4\
9154.458 & 11.0 & 9488.493 & 10.5\
9154.458 & 10.9 & 9488.493 & 10.5\
9249.496 & 10.8 & 9488.493 & 10.5\
9250.487 & 10.6 & 9504.419 & 9.4\
9278.519 & 10.5 & 9511.453 & 9.2\
9279.468 & 10.5 & 9518.458 & 9.1\
9310.486 & 10.7 & 9535.417 & 9.0\
9416.304 & 10.5 & 9536.422 & 9.0\
9422.549 & 10.7 & 9537.423 & 8.9\
9422.549 & 10.7 & 9541.415 & 8.9\
9422.549 & 10.6 & 9574.422 & 8.7\
9457.489 & 10.5 & 9836.477 & 10.0\
9457.499 & 10.5 & 9839.479 & 10.0\
9457.499 & 10.5 & 9840.483 & 10.2\
9458.517 & 10.5 & 9841.511 & 10.3\
9458.517 & 10.4 & 9842.522 & 10.1\
9458.517 & 10.6 & 9862.443 & 10.1\
9462.550 & 10.5 & 9888.422 & 10.3\
9462.550 & 10.4 & 9889.422 & 10.0\
9462.550 & 10.6 & 9894.422 & 9.9\
9475.478 & 10.5 & 9895.422 & 9.8\
9475.478 & 10.4 & 9896.417 & 9.9\
9475.478 & 10.6 & &\
\[shue\]
Optical observations
====================
AG Dra is included in a long-term programme of optical photometry at the 60 cm Cassegrain telescope of Sonneberg Observatory. A computer-controlled photoelectric photometer with a diaphragm of 20 diameter is used to obtain consecutive UBV measurements of AG Dra, the comparison star BD+67 952 and a third star (SAO 16935) used for check purposes. Typical integration times were 20 sec and the brightnesses were reduced to the international system according to Johnson. The mean errors in the brightness determination are $\pm$0.01 mag in V and B and $\pm$0.03 mag in U. The photometric observations since June 1990 (for which near-simultaneous X-ray measurements with ROSAT are available) are tabulated in Tab. \[phot\] and the U and B magnitudes are also plotted in Fig. \[light\]. Earlier Sonneberg photometric data of AG Dra are included in Hric (1993).
In addition to the photoelectric data we have also used the photographic sky patrol of Sonneberg Observatory to fill up gaps in the photoelectric coverage. The sky patrol is performed simultaneously in two colors (red and blue). The brightness measurements of AG Dra from the blue plates covering the time interval of the two optical outbursts are given in Tab. \[shue\] and are also plotted in the top panel of Fig. \[light\].
IUE observations
================
[*IUE*]{} observations were mostly obtained as a Target-of-Opportunity (TOO) programme in coordination with the ROSAT observations. The last observations were made within the approved programme SI047 for the 19th IUE observing episode. Tab. \[IUEobs\] summarizes the [*IUE*]{} low resolution observations made in the period June 1994 – September 1995.
[lcccc]{} Image$^a$ & Disp & Aper$^b$ & Date & T$_{\rm exp}^c$\
& & & dd/mm/yy & min:sec\
SWP51315 & L & L,S & 04/07/94 & 15:00,5:00\
LWP28542 & L & L,S & 04/07/94 & 7:00,3:00\
SWP51331 & L & L,S & 07/07/94 & 2:00,2:00\
LWP28556 & L & L,S & 07/07/94 & 1:30,1:00\
SWP51415 & L & L,S & 12/07/94 & 1:00,1:00\
LWP28628 & L & L,S & 12/07/94 & 1:00,1:00\
SWP51416 & L & L,S & 12/07/94 & 1:35,1:35\
SWP51632 & L & L,S & 28/07/94 & 2:00,1:00\
LWP28752 & L & L,S & 28/07/94 & 2:00,1:00\
SWP52143 & L & L & 19/09/94 & 1:30\
LWP29192 & L & L & 19/09/94 & 1:00\
SWP52145 & L & L & 19/09/94 & 3:00\
SWP52994 & L & L,S & 06/12/94 & 3:00,2:00\
LWP29654 & L & L,S & 06/12/94 & 2:00,2:00\
SWP54632 & L & L,S & 08/05/95 & 3:00,1:30\
LWP30642 & L & L,S & 08/05/95 & 2:00,1:00\
SWP55371 & L & L,S & 28/07/95 & 1:00,1:00\
LWP31175 & L & L,S & 28/07/95 & 2:00,1:00\
SWP55929 & L & L,S & 14/09/95 & 2:00,1:00\
LWP31476 & L & L,S & 14/09/95 & 3:00,1:30\
\[IUEobs\]
[ $^a$ $SWP$: 1200-1950 Å. $LWP$: 1950-3200 Å.\
$^b$ $L, S$: both large and small apertures were used. $L$: only the large aperture was used.\
$^c$ Two exposure times correspond to Large and Small aperture, respectively. ]{}
[cclrccr]{} Date$^b$ & N& 1400$^c$ & C& He& F$_{1340}$ & F$_{2860}$\
quiescence & 10 & 8 & 9 & 26 & 29 & 12\
9537 & 52 & 32 & 114 & 237 & 607 & 368\
9538 & 41 & 26 & 99 & 277 & 535 & 367\
9545 & 35 & 25 & 80 & 221 & 656 & 407\
9560 & 19 & 17 & 49 & 197 & 526 & 303\
9612 & 39 & 23 & 50 & 150 & 299 & 191\
9692 & 23 & 11 & 32 & 121 & 166 & 83\
9844 & 29 & 11 & 25 & 195 & 144 & 64\
9926 & 36 & 61$^d$ & 59 & 183 & 137 & 158\
9975 & 36 & 31 & 68 & 172 & 155 & 55\
[ $^a$ Emission line (in 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$) and continuum fluxes (in 10$^{-14}$ erg cm$^{-2}$ s$^{-1}$ Å$^{-1}$), dereddened for E$_{\rm B-V}$=0.06.\
$^b$ JD – 2440000.\
$^c$ Blend of O \] and Si . O \] $\lambda$1401.16 is the main contributor to the blend.\
$^d$ Broad feature.]{} \[iueflux\]
In general, priority was given to the low resolution spectra for line and continuum fluxes. High resolution spectra were also taken at all dates, but they will not be discussed in this paper. For most of the low resolution images, both IUE apertures were used in order to have in the Large Aperture the continuum and the emission lines well exposed and in the Small Aperture (not photometric) the strongest emission lines (essentially He 1640 Å) not saturated. The fluxes from the Small Aperture spectra were corrected for the smaller aperture transmission by comparison with the non-saturated parts of the Large Aperture data. Some representative IUE spectra are shown in Fig. \[iue2\].
The fluxes of the strongest emission lines and of the continuum at 1340 and 2860 Å are given in Tab. \[iueflux\]. All the fluxes are corrected for an interstellar absorption of E(B-V)=0.06 (Viotti 1983). The feature at 1400 Å is a blend, unresolved at low resolution, of the Si doublet at 1393.73 and 1402.73 Å, the O\] multiplet (1399.77, 1401.16, 1404.81 and 1407.39 Å), and S\] 1402.73 Å. O\] 1401.16 Å is the dominant contributor to the blend. Because of the variable intensity of the lines, at low resolution the 1400 Å feature shows extended wings with variable extension and strength (Fig. \[iue2\]). The two continuum regions were chosen, as in previous works, because they are less affected by the emission lines, with the 1340 Å region representative of the continuum of the hot source (nearly the Rayleigh-Jeans tail of a 10$^{5}$ K blackbody), and the 2860 Å region representative of the H Balmer continuum emission (see for instance Fernández-Castro 1995).
The behaviour of the continuum and line intensities is shown in Fig. \[iue1\]. In July 1994 the UV continuum flux increase was larger than that of the He 1640 Å line, suggesting a decrease of the Zanstra temperature at the time of the outburst, and a recovering of the high temperature in September 1994. In addition, the N/continuum (1340 Å) and the He/continuum (1340 Å) ratios reached a maximum at the time of the second outburst (July 1995). The He Zanstra temperatures are given in Fig. \[light\].
ROSAT observations
==================
Observational details
---------------------
### All-Sky-Survey
AG Dra was scanned during the All-Sky-Survey over a time span of 10 days. The total observation time resulting from 95 individual scans adds up to 2.0 ksec. All the data analysis described in the following has been performed using the dedicated EXSAS package (Zimmermann 1994).
Due to the scanning mode the source has been observed at all possible off-axis angles, and hence with different widths of the point spread function. For the temporal and spectral analysis we have used a 5 extraction radius to ensure that no source photons are missed. No other source down to the 1$\sigma$ level is within this area. Each photon event has been corrected for its corresponding effective area. The background was determined from an equivalent area of sky on the scanning circle, located 13 south in ecliptic latitude from AG Dra.
### Pointed Observations
[lrcrr]{} ROR & Date & P/H & T$_{\rm Nom}$ & N$_{\rm cts}$\
No. & & & (sec) &\
Survey & Nov.24–Dec.3, 1990 & P & 2004 & 1988\
200686 & April 16, 1992 & P & 2178 & 2260\
200687 & May 16, 1992 & P & 1836 & 1761\
200688 & June 13, 1992 & P & 2210 & 2224\
201041 & June 16, 1992 & P & 1829 & 1893\
201042 & Sep. 20, 1992 & P & 1596 & 1676\
201043 & Dec. 16, 1992 & P & 1296 & 1344\
200689 & March 13, 1993 & P & 1992 & 2029\
201044 & March 16, 1993 & P & 1677 & 1761\
200690 & April 15, 1993 & P & 2450 & 2324\
200691 & May 12, 1993 & P & 2064 & 1669\
180063 & Aug. 28, 1994 & H & 1273 & 14\
180063F & Sep. 9, 1994 & P(B) & 1237 & 19\
180073 & Dec. 7, 1994 & H & 2884 & 209\
180073-1 & Mar. 7, 1995 & H & 2330 & 272\
180081 & Jul. 31, 1995 & H & 1191 & 22\
180081-1 & Aug. 23, 1995 & H & 916 & 5\
180081-2 & Sep. 14, 1995 & H & 1383 & 0\
180081-3 & Feb. 6, 1996 & H & 1551 & 88\
\[poin\]
Several dedicated pointings on AG Dra have been performed in 1992 and 1993 with the ROSAT PSPC (Tab. \[poin\] gives a complete log of the observations). All pointings were performed with the target on-axis. During the last observation of AG Dra with the PSPC in the focal plane (already as a TOO) the Boron filter was erroneously left in front of the PSPC after a scheduled calibration observation.
For each pointed observation with the PSPC in the focal plane, X-ray photons have been extracted within 3 of the centroid position. This relatively large size of the extraction circle was chosen because the very soft photons (below channel 20) have a much larger spread in their measured detector coordinates. As usual, the background was determined from a ring well outside the source (there are no other X-ray sources within 5 of AG Dra). Before subtraction, the background photons were normalized to the same area as the source extraction area.
When AG Dra was reported to go into outburst (Granslo 1994) we immediately proposed a target-of-opportunity observation (TOO) with ROSAT. AG Dra was scheduled to be observed during the last week of regular PSPC observations on July 7, 1994, but due to star tracker problems no photons were collected. For all the later observations only the HRI could be used after the PSPC gas had been almost completely exhausted. Consequently, no spectral information is available for these observations. The first HRI observation took place on August 28, 1994, about 4 weeks after the optical maximum. All the following HRI observations and the single PSPC observation with the Boron filter (described above) were performed as TOOs to determine the evolution of the X-ray emission after the first outburst. With the knowledge of the results of the first outburst the frequency of observations was increased for the second optical outburst.
Source photons of the HRI observations have been extracted within 1 and were background and vignetting corrected in the standard manner using EXSAS tasks. In order to compare the HRI intensities with those measured with the PSPC, we use a PSPC/HRI countrate ratio of 7.8, appropriate for the supersoft X-ray spectrum of AG Dra, as described in Greiner (1996).
The X-ray position of AG Dra
----------------------------
We derive a best-fit X-ray position from the on-axis HRI pointing 180073 of R.A. (2000.0) = 16$^{\rm h}$ 01$^{\rm m}$ 40.8$^{\rm s}$, Decl. (2000.0) = +66$^{\circ}$ 48$'$ 08$''$ with an error of $\pm$8$''$. This position is only 2$''$ off the optical position of AG Dra (R.A. (2000.0) = 16$^{\rm h}$ 01$^{\rm m}$ 40.94$^{\rm s}$, Decl. (2000.0) = +66$^{\circ}$ 48$'$ 09.7$''$).
The X-ray lightcurve of AG Dra
------------------------------
The mean ROSAT PSPC countrate of AG Dra during the all-sky survey was determined (as described in paragraph 4.1.1) to be (0.99$\pm$0.15) cts/sec. Similar countrates were detected in several PSPC pointings during the quiescent time interval 1991–1993.
The X-ray light curve of AG Dra as deduced from the All-Sky-Survey data taken in 1990, and 11 PSPC pointings (mean countrate over each pointing) as well as 7 HRI pointings taken between 1991 and 1996 is shown in Fig. \[light\]. The countrates of the HRI pointings have been converted with a factor of 7.8 (see paragraph 4.1.2) and are also included in Fig. \[light\].
This 5-yr X-ray light curve displays several features:
1. The X-ray intensity has been more or less constant between 1990 and the last observation (May 1993) before the optical outburst. Using the mean best fit blackbody model with kT = 14.5 eV and the galactic column density N$_{\rm H}$ = 3.15$\times$10$^{20}$ cm$^{-2}$ (fixed during the fit, see below), the unabsorbed intensity in the ROSAT band (0.1–2.4 keV) is 1.2$\times$10$^{-9}$ erg cm$^{-2}$ s$^{-1}$.
2. During the times of the optical outbursts the observed X-ray flux drops substantially. The observed maximum amplitude of the intensity decrease is nearly a factor of 100. With the lowest intensity measurement being an upper limit, the true amplitude is certainly even larger. Due to the poor sampling we can not determine whether the amplitudes of the two observed X-ray intensity drops are similar.
3. Between the two X-ray minima the X-ray intensity nearly reached the pre-outburst level, i.e. the relaxation of whatever parameter caused these drops was nearly complete. We should, however, note that the relaxation to the pre-outburst level was faster at optical wavelengths than at X-rays, and also was faster in the B band than in the U band. In December 1994 the optical V brightness ($\approx$9.5) was nearly back to the pre-outburst magnitude, while the X-ray intensity was still a factor of 2 lower than before the outburst.
4. The quiescent X-ray light curve shows two small, but significant intensity drops in May 1992 and April/May 1993. The latter and deeper dip coincides with the orbital minimum in the U lightcurve (Meinunger 1979, Skopal 1994) and might suggest a periodic, orbital flux variation.
The X-ray spectrum of AG Dra in quiescence
------------------------------------------
For spectral fitting of the all-sky-survey data the photons in the amplitude channels 11–240 (though there are almost no photons above channel 50) were binned with a constant signal/noise ratio of 9$\sigma$. The fit of a blackbody model with all parameters left free results in an effective temperature of kT$_{\rm bb}$ = 11 eV (see Tab. \[xfitpar\]).
Since the number of counts detected during the individual PSPC pointings allows high signal-to-noise spectra, we investigated the possibility of X-ray spectral changes with time. First, we kept the absorbing column fixed at its galactic value and determined the temperature being the only fit parameter. We find no systematic trend of a temperature decrease (lower panel of Fig. \[light\]). Second, we kept the temperature fixed (at 15 eV in the first run and at the best fit value of the two parameter fit in the second run) and checked for changes in N$_{\rm H}$, again finding no correlation. Thus, no variations of the X-ray spectrum could be found along the orbit. The rather small degree of observed variation of temperature and N$_{\rm H}$ (Tab. \[xfitpar\]) over more than three years during quiescence (including the All-Sky-Survey data) are not regarded to be significant due to the correlation of these quantities (Fig. \[xspec\]).
The independent estimate of the absorbing column towards AG Dra from the X-ray spectral fitting indicates that the detected AG Dra emission experiences the full galactic absorption. While fits with N$_{\rm H}$ as free parameter (see Tab. \[xfitpar\]) systematically give values slightly higher than the galactic value (which might lead to speculations of intrinsic absorption), we assess the difference not to be significant due to the strong interrelation of the fit parameters (see lower right panel of Fig. \[xspec\]) given the energy resolution of the PSPC and the softness of the X-ray spectrum. We will therefore use the galactic N$_{\rm H}$ value (3.15$\times$10$^{20}$ cm$^{-2}$ according to Dickey and Lockman 1990) in the following discussion.
[ccccccc]{} ROR & Date & \multicolumn{3}{c}{N$_{\rm H}$ free} & \multicolumn{2}{c}{N$_{\rm H}$ fixed at galactic value}\
No. & & N$_{\rm H}$ ($10^{20}$ cm$^{-2}$) & Flux & kT (eV) & Flux & kT (eV)\
Survey & 901129 & 5.0 & 7040 & 10.6 & 172.5 & 14.6\
200686 & 920416 & 5.1 & 4680 & 11.0 & 144.2 & 15.0\
200687 & 920516 & 4.6 & 1280 & 11.7 & 147.0 & 15.0\
200688 & 920613 & 4.9 & 2750 & 11.2 & 145.0 & 15.0\
201041 & 920616 & 4.7 & 1520 & 11.6 & 146.7 & 15.0\
201042 & 920920 & 3.5 & 860 & 13.2 & 304.1 & 14.0\
201043 & 921216 & 4.2 & 1360 & 11.3 & 421.1 & 13.6\
200689 & 930313 & 5.2 & 9060 & 10.6 & 160.1 & 14.9\
201044 & 930316 & 4.5 & 1240 & 11.6 & 205.8 & 14.5\
200690 & 930415 & 5.3 & 9760 & 10.6 & 147.7 & 14.9\
200691 & 930512 & 5.1 & 9750 & 10.4 & 227.9 & 14.1\
\[xfitpar\]
With N$_{\rm H}$ fixed at its galactic value the mean temperature during quiescence is about 14–15 eV, corresponding to 160000–175000 K. These best fit temperatures are plotted in a separate panel below the X-ray intensity (Fig. \[light\]). The small variations in temperature during the quiescent phase are consistent with a constant temperature of the hot component of AG Dra.
The X-ray spectrum of AG Dra in outburst
----------------------------------------
As noted already earlier (e.g. Friedjung 1988), the observed fading of the X-ray emission during the optical outbursts of AG Dra can be caused either by a temperature decrease of the hot component or an increased absorbing layer between the X-ray source and the observer. In order to evaluate the effect of these possibilities, we have performed model calculations using the response of the HRI. In a first step, we assume a 15 eV blackbody model and determine the increase of the absorbing column density necessary to reduce the HRI countrate by a factor of hundred. The result is a factor of three increase. In a second step we start from the two parameter best fit and determine the temperature decrease which is necessary to reduce the HRI countrate at a constant absorbing column (3.15$\times$10$^{20}$ cm$^{-2}$). We find that the temperature of the hot component has to decrease from 15 to 10 eV, or correspondingly from 175000 K to 115000 K.
The only ROSAT PSPC observation (i.e. with spectral resolution) during optical outburst is the one with Boron filter. The three parameter fit as well as the two parameter fit give a consistently lower temperature. But since the Boron filter cuts away the high-end of the Wien tail of the blackbody, and we have only 19 photons to apply our model to, we do not regard this single measurement as evidence for a temperature decrease during the optical outburst.
What seems to be excluded, however, is any enhanced absorbing column during the Boron filter observation. The best fit absorbing column of the three parameter fit is 4.4$\times$10$^{20}$ cm$^{-2}$, consistent with the best fit absorbing column during quiescence. Since the low energy part of the spectrum in the PSPC is not affected by the Boron filter except a general reduction in efficiency by roughly a factor of 5, any increase of the absorbing column would still be easily detectable. For instance, an increase of the absorbing column by a factor of two (to 6.3$\times$10$^{20}$ cm$^{-2}$) would absorb all photons below 0.2 keV and would drop the countrate by a factor of 50 contrary to what is observed.
It is interesting to note that the decrease of the X-ray flux is similarly strong in both the 1994 and 1995 outbursts, while the optical amplitude of the secondary outburst in 1995 was considerably smaller than that of the first outburst. We note in passing that the intensity of the He and N lines also showed a comparably large increase in the main 1981/1982 outburst and the minor outburst in 1985/1986, while the optical and the short wavelength UV continuum amplitudes again were smaller in the latter outburst (Mikolajewska 1995). Since the short wavelength UV continuum is the Rayleigh-Jeans tail of the hot (blackbody) component, this behaviour suggests that a temperature decrease is the cause of the reduced X-ray intensity during outburst rather than increased absorption, with the temperature decrease being smaller in the secondary outbursts.
Discussion
==========
Quiescent X-ray emission
------------------------
### The X-ray spectrum and luminosity
In previous analyses, the X-ray emission of symbiotic stars has been interpreted either with blackbody (Kenyon and Webbink 1984) or NLTE atmosphere (Jordan 1994) models representing the hot component, or with bremsstrahlung emission from a hot, gaseous nebula (Kwok and Leahy 1984). Our ROSAT PSPC spectra of AG Dra show no hint of any hard X-ray emission. The soft spectrum is well fitted with a blackbody model. A thermal bremsstrahlung model does not give an acceptable fit to this soft energy distribution. There is also no need for a second component as proposed by Anderson (1981): a fit of a blackbody and thermal bremsstrahlung model with the bremsstrahlung temperature limited to values above 0.1 keV results in an unabsorbed flux ratio between blackbody and bremsstrahlung of 85000:1 in the 0.1–2.4 keV band. A hard X-ray component could be expected from an accretion disk both for radial and disk accretion (Livio 1988) at the relatively low accretion rate implied for this system. This absence of any hard emission also rules out the possibility of interpreting the soft spectrum as arising in the boundary layer. The most extreme ratios for soft to hard emission observed so far in magnetic cataclysmic variables are of the order of 1000:1. An example of the blackbody fit is given in Fig. \[xspec\] using the observation of April 1993 (having the largest number of photons).
The simple model of explaining the soft X-ray emission by an accretion disk is ruled out not only due to the soft spectrum, but also due to the observed luminosity, which is much higher (see below) than can be produced by an accretion disk around a compact dwarf at the extremely low implied accretion rate. If the X-ray spectrum were to be explained by a standard accretion disk model (Shakura & Sunyaev 1973) then the necessary accretion rate would be of the order of 1.7$\times$10$^{-7}$ [$\dot M$]{}$_{\rm edd}$ (or 4.5$\times$10$^{-15}$ M$_\odot$/yr) for a 1 M$_\odot$ compact object. This is unreasonably small, basically due to the low effective temperature (T$_{\rm max}^{\rm eff}$ = 24 eV) of the X-ray emission coupled with a low-mass compact object.
Using the blackbody fit parameters while N$_{\rm H}$ was fixed at its galactic value, i.e. kT = 14.5 eV and a normalization parameter of 206 photons/cm$^2$/s corresponding to observation 201044 (from experience with soft ROSAT spectra we know that this procedure helps to avoid overpredicting the flux using blackbody models), the unabsorbed bolometric luminosity of the hot component in AG Dra during quiescence (1990–1993) is (9.5$\pm$1.5)$\times$10$^{36}$ (D/2.5 kpc)$^2$ erg/s (or equivalently 2500$\pm$400 (D/2.5 kpc)$^2$ L$_\odot$) with an uncertainty of a factor of a few due to the errors in the absorbing column and the temperature (see Fig. \[qnufnu\] for a visualization of the fact that the ROSAT measurement actually covers only the tail of the spectral energy distribution). The blackbody radius is derived to be R$_{\rm bb}$ = (4.1$\pm$1.5)$\times$10$^9$ cm (D/2.5 kpc). (Previous estimates arrived at much lower values due to the overestimate of the temperature.)
This high luminosity during quiescence and the size of the emitting region, comparable to a white dwarf radius, suggest that the primary is a white dwarf in the state of surface hydrogen burning. Assuming that the bolometric luminosity of the hot component equals the nuclear burning luminosity, the burning rate is $\approx$ (3.2$\pm$0.5)$\times$10$^{-8}$ (D/2.5 kpc)$^2$ M$_\odot$/yr. Under the assumption of a steady state the same amount of matter should be accreted onto the white dwarf from the companion (via wind or an accretion disk, see below). Indeed, this burning rate lies within the stable-burning regime for a white dwarf of less than 0.6 M$_\odot$ (Iben and Tutukov 1989), consistent with the core mass-luminosity relation L/L$_\odot \approx$ 4.6$\times$10$^4$ (M$_{\rm core}$/M$_\odot$ – 0.26) (Yungelson 1996). From the orbital parameters (Mikolajewska 1995, Smith 1996) and an inclination less than about 85$^{\circ}$ (see paragraph 5.1.2) the donor mass is estimated to be lower than 2 M$_\odot$, in agreement with the estimated surface gravity of Smith (1996).
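For orientation, the short calculation below (Python; the Stefan-Boltzmann law, an assumed hydrogen mass fraction X $\approx$ 0.7 and the $\approx$6.4$\times$10$^{18}$ erg/g released by hydrogen burning are standard inputs inserted by us, not quantities derived in this paper) reproduces the blackbody radius and burning rate quoted above from kT = 14.5 eV and L $\approx$ 9.5$\times$10$^{36}$ erg/s.

```python
import math

sigma_SB = 5.670e-5            # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
eV_to_K = 1.1605e4             # 1 eV in Kelvin
M_sun, yr = 1.989e33, 3.156e7  # g, s

L_bol = 9.5e36                 # erg/s, quiescent luminosity for D = 2.5 kpc
kT_eV = 14.5                   # blackbody temperature from the PSPC fits

T = kT_eV * eV_to_K                                    # ≈ 1.7e5 K
R_bb = math.sqrt(L_bol / (4 * math.pi * sigma_SB * T**4))
print(f"R_bb ≈ {R_bb:.1e} cm")                         # ≈ 4e9 cm, close to the quoted value

eps_H, X_H = 6.4e18, 0.7                               # erg per g of hydrogen burnt; assumed H mass fraction
Mdot = L_bol / (eps_H * X_H)                           # g/s of accreted hydrogen-rich matter
print(f"Mdot ≈ {Mdot * yr / M_sun:.1e} Msun/yr")       # ≈ 3e-8 Msun/yr, close to the quoted value
```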
### Orbital flux modulation?
The observational coverage of AG Dra over 1992/1993 is extensive enough to also investigate possible temporal and spectral changes of the X-ray flux along the orbital phase. Using the ephemeris from Skopal (1994) the minimum phases of AG Dra occurred in November 1991 and June 1993.
The coincidence of the U minimum and the drop in X-ray flux in mid-1993 is surprising, suggesting the possible discovery of orbital modulation of the X-ray flux. Unfortunately, we have no full coverage of the orbital period, and thus the duration of the flux depression can not be determined. But even with the available data this duration is a matter of concern: The X-ray intensity decrease occurs very slowly over a time interval of two months, suggesting a total duration of at least 4 months if this modulation is symmetric in shape.
The radial velocity data demonstrate that during the U minima the cool companion lies in front of the hot component (cf. Mikolajewska 1995). Depending on the luminosity class, the maximum duration for an eclipse of the WD by the giant (bright giant) companion is 10 (24) days, i.e. much shorter than the observed time scale at X-rays. Thus, the drop in X-ray intensity cannot be due to a WD eclipse. As evidenced by near-simultaneous IUE spectra between April and June 1993, the far-UV continuum associated with the tail of the hot component’s continuum emission is not occulted, supporting the previous argument. A similar conclusion was reached when considering the nearly complete lack of orbital variations of the short wavelength UV continuum (Mikolajewska 1995). By inverting the arguments, the lack of an X-ray eclipse implies that the inclination should be less than 87$^{\circ}$ (82$^{\circ}$) for a giant (bright giant) with a 20 (70) R$_\odot$ radius.
Even in the reflection model of Formiggini & Leibowitz (1990), in which the eclipse of the hot component is short and its depth is expected to be considerably larger in the UV (and possibly in the soft X-ray band) than in the optical, it is difficult to imagine that a possible UV eclipse has gone undetected.
As an alternative to an eclipse interpretation of the X-ray intensity drop just before the 1993 U band minimum, one could think of a pre-outburst which was missed in the optical region except for the marginal flux increase in the B band after JD = 2449000. In this case the interpretation would be similar to that of the major X-ray intensity drops during the optical outbursts in 1994 and 1995 (see below). Independent of the actual behaviour around JD = 9100, it is interesting to note that there is a secure detection of a 2 mag U brightness jump coincident with a nearly 1 mag B brightness increase shortly after JD = 9200. A slight brightening by about 0.2–0.3 mag is also present in the AAVSO light curve around JD 9205–9215 (Mattei 1995).
### Wind mass loss of the donor and accretion
Accepting the high luminosity during quiescence and consequently assuming that the hot component in the AG Dra system is in (or near to) the surface hydrogen burning regime, the companion has to supply the matter at the high rate of consumption by the hot component.
It is generally assumed (and supported by three-dimensional gas dynamical calculations of the accretion from an inhomogeneous medium) that the compact objects in symbiotic systems accrete matter from the wind of the companion according to the classical Bondi-Hoyle formula. The wind mass loss rate of the companion depends on several parameters. One of the important ones is the luminosity which governs the location with respect to the Linsky and Haisch dividing line. Accordingly, a bright giant (luminosity class II) is expected to have a considerably larger mass loss rate than a giant (luminosity class III).
As already mentioned, the spectral classification and the luminosity class of the companion in the AG Dra binary system are not yet securely determined. According to the formula of Reimers (1975): $$- \dot M = 4 \times 10^{-13} \eta {R L \over M} M_{\odot}/yr$$ where R, L and M are the radius, luminosity and mass of the star in solar units, and $\eta$ a constant between 0.1 and 1, depending primarily on the initial stellar mass, the commonly used K3III classification would imply a mass loss rate of (0.2–7)$\times$10$^{-10}$ M$_\odot$/yr, i.e. a factor of 100 lower than the rate necessary for steady state burning.
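As an illustration, evaluating Reimers' formula with nominal K3III parameters (the values R $\approx$ 20 R$_\odot$, L $\approx$ 150 L$_\odot$, M $\approx$ 1.5 M$_\odot$ below are typical textbook numbers adopted by us for the sketch, not values derived in this paper) gives rates of the same order as the range quoted above:

```python
def reimers_mdot(R, L, M, eta):
    """Reimers (1975) wind mass-loss rate in Msun/yr; R, L, M in solar units."""
    return 4e-13 * eta * R * L / M

R, L, M = 20.0, 150.0, 1.5          # assumed, illustrative K3III parameters
for eta in (0.1, 1.0):
    print(f"eta = {eta}: Mdot ≈ {reimers_mdot(R, L, M, eta):.1e} Msun/yr")
# prints ≈ 8e-11 and ≈ 8e-10 Msun/yr, i.e. of the order of the quoted (0.2-7)e-10 Msun/yr
```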
However, there are two additional observational hints which might help resolve the discrepancy between the expected mass loss of the cool donor and the necessary rate for a steady state burning white dwarf: (1) The cool component of AG Dra might be brighter than an average solar-metallicity giant. A comparison of the luminosity functions of K giants with 4000 T$_{\rm eff}$ 4400 in a low-metallicity versus solar-metallicity population has shown convincingly that low-metallicity K giants are nearly 2 mag brighter than the solar-metallicity giants (Smith 1996). (2) Low-metallicity giants might have larger mass-loss rates than usual giants. From observed larger 12$\mu$m excess in symbiotics as compared to that in normal giants a larger mass loss of giants in symbiotics has been deduced (Kenyon 1988). The recent comparison of IR excesses of d-type symbiotics with that of CH and barium stars supports this evidence (Smith 1996). Both of these observational hints argue for a mass loss of the cool component in AG Dra which is higher than the value derived from Reimer’s formula. At present it seems premature, however, to conclude that the wind mass loss of the donor in AG Dra is large enough to supply the matter for a steady state surface burning on the white dwarf.
An additional problem with spherical accretion at very high rates is the earlier recognized fact (Nussbaumer & Vogel 1987) that the density of matter in the vicinity of the accretor is n$_{\rm H} \approx$ 2$\times$10$^7$ ($\dot M$/10$^{-7}$ $M_{\odot}$/yr)(V/10 km/s)$^3$(M$_{\rm WD}$/$M_{\odot}$)$^{-2}$ cm$^{-3}$ (Yungelson 1996), i.e. the soft X-rays of the burning accretor will be heavily absorbed.
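For orientation, a quick numerical evaluation of this scaling relation (our own sketch; the inputs below, and the reading of the stripped symbols as $\dot M$ and $M_{\odot}$, are assumptions) gives:

```python
# Wind density near the accretor, n_H ~ 2e7 (Mdot/1e-7 Msun/yr)(v/10 km/s)^3
# (M_WD/Msun)^-2 cm^-3, evaluated for nominal, assumed parameters.

def wind_density_cm3(mdot_msun_yr, v_kms, m_wd_msun):
    return 2e7 * (mdot_msun_yr / 1e-7) * (v_kms / 10.0) ** 3 / m_wd_msun ** 2

# Nominal donor wind: Mdot ~ 1e-7 Msun/yr, v ~ 10 km/s, and M_WD ~ 0.5 Msun.
print(f"n_H ~ {wind_density_cm3(1e-7, 10.0, 0.5):.1e} cm^-3")
# ~8e7 cm^-3 for these inputs, dense enough to absorb the soft X-rays heavily.
```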
Alternatively, one might consider the possibility that the donor overfills its Roche lobe and that the matter supply to the white dwarf occurs via an accretion disk. Several issues have to be considered: (1) Roche-lobe filling: In order to fill its Roche lobe, the donor has to be either rather massive (as compared to a usual K giant) or rather luminous. However, a donor mass larger than $\approx$2 $M_{\odot}$ seems difficult due to the high galactic latitude and proper motion as well as on evolutionary grounds. Similarly, a binary system with a bright giant implies a larger distance as compared to a giant companion. (2) Evidence for the existence of an accretion disk: Indeed, Garcia (1986) has raised the suspicion that AG Dra contains an accretion disk. Circumstantial evidence for this comes from predictions of Roche lobe overflow, rapid flickering on timescales of the order of minutes in the optical band, and the observation of double-peaked Balmer emission lines. Robinson et al. (1994) have investigated in detail the double-peaked profiles of the Balmer lines in several symbiotic stars. In the case of AG Dra they find double-peaked emission lines only in one out of three observations, which were three and one year apart, respectively. Using an inclination of i=32$^{\circ}$ as proposed by Garcia (1986), the double-peaked profile gives an acceptable fit for an accretion disk with an inner radius of 1.1$\times$10$^8$ cm and an outer radius of 1.3$\times$10$^{10}$ cm, though the asymmetry may also be explained by self-absorption (Tomov & Tomova 1996). Recent high-resolution spectroscopy of AG Dra in the red wavelength region using AURELIE at OHP, performed in December 1990 and January 1995 (i.e. during very different phases in the AG Dra orbit), revealed only a single-peaked H$\alpha$ profile (Rossi 1996), and Dobrzycka (1996) failed to find evidence for flickering. Both of these new observational results argue against a steady accretion disk in the AG Dra system.
Quiescent UV emission
---------------------
UV observations show that the hot components of symbiotic stars are located in the same quarters of the Hertzsprung-Russell diagram as the central stars of planetary nebulae (Mürset 1991). Due to the large binary separation in symbiotic systems, the present hot component (or evolved component) should have evolved nearly undisturbed through the red giant phase. However, the outermost layers of the white dwarf might be enriched in hydrogen-rich material accreted from the cool companion.
Presently the far-UV radiation of the WD is ionizing a circumstellar nebula, mostly formed (or filled in) by the cool star wind. The UV spectroscopy suggests that the CNO composition of the nebula is nitrogen enriched (C/N=0.63, and (C+N)/O=0.43), which is typical of the composition of a metal poor giant atmosphere after CN cycle burning and a first dredge-up phase (Schmidt & Nussbaumer 1993). We recall that recently a low metal abundance of the K-star photosphere of \[Fe/H\]=–1.3 was derived (Smith 1996), in agreement with it being a halo object. The UV spectrum in quiescence is the usual one for symbiotic systems: a continuum increasing toward the shortest wavelengths with strong narrow emission lines superimposed. The most prominent lines in the quiescence UV spectrum of AG Dra are, in order of decreasing intensity: He II 1640 Å, C IV 1550 Å, N V 1240 Å, the blend of Si IV and O IV\] at 1400 Å, N IV\] 1486 Å and O III\] 1663 Å. The continuum becomes flatter longward of approximately 2600 Å due to the contribution of the recombination continuum originating in the nebula. Although both continuum and emission lines have been found to be variable during quiescence (e.g. Mikolajewska 1995), there is no clear relation with the orbital period of the system. The intensity ratio of the He II 1640 Å recombination line to the far-UV continuum at 1340 Å suggests a Zanstra temperature of around 1.0–1.1$\times$10$^5$ K during quiescence (see Fig. \[light\]).
The large X-ray luminosity together with its soft spectrum allows one to understand the large flux from He II, most notably $\lambda$4686 Å and $\lambda$1640 Å, at quiescence (which could not be explained in earlier models; see e.g. Kenyon & Webbink 1984) as being due to X-ray ionization.
X-ray emission during the optical outbursts
-------------------------------------------
### Previously proposed models
Three different basic mechanisms have been proposed to explain the drastic intensity changes of symbiotic stars during the outburst events: (1) Thermonuclear runaways on the surface of the hot component (WD) after the accreted envelope has reached a critical mass (Tutukov & Yungelson 1976, Paczynski & Rudak 1980). The characteristic features are considerable changes in effective temperature at constant bolometric luminosity, and the appearance of an A–F supergiant spectrum thought to be produced by the expanding WD shell. (2) Instabilities in an accretion disk after an increase of mass transfer from the companion (Bath & Pringle 1982, Duschl 1986). (3) Ionization changes of the H II region (from density-bounded to radiation-bounded) around the hot component caused by an abrupt change in the mass loss rate of the companion (Nussbaumer & Vogel 1987, Mikolajewska & Kenyon 1992). In the case of AG Dra, all three scenarios have problems with some observational facts. The thermonuclear runaway is rejected by the fact that the quiescent luminosity is already at a level which strongly suggests burning before the outbursts. The disk instability scenario predicts substantial variation of the bolometric luminosity during the outbursts, for which no hints are available in AG Dra. The application of ionization changes to AG Dra is questionable because the nebula is apparently not in ionization equilibrium (Leibowitz & Formiggini 1992).
Specifically for AG Dra an additional scenario has been proposed, namely (4) the liberation of mechanical energy in the atmosphere of the companion (Leibowitz and Formiggini 1992).
### Expanding and contracting white dwarf
Using our finding of an anticorrelation of the optical and X-ray intensity and the lack of considerable changes in the temperature of the hot component during the 1994/95 outburst of AG Dra, we propose the following rough scenario. (1) The white dwarf is already burning hydrogen stably on its surface before the optical outburst(s). (2) Increased mass transfer, possibly episodic, from the cool companion results in a slow expansion of the white dwarf. (3) The expansion is restricted either due to the finite excess mass accreted or by the wind driven mass loss from the expanding photosphere of the accretor. This wind possibly also suppresses further accretion onto the white dwarf. The photosphere is expected to get cooler with increasing radius. (4) The white dwarf is contracting back to its original state once the accretion rate drops to its pre-outburst level. Since the white dwarf is very sensitive to its boundary conditions, it is not expected to return into a steady state immediately (Paczynski & Rudak 1980). Instead, it might oscillate around the equilibrium state giving rise to secondary or even a sequence of smaller outbursts following the first one.
The scenario of an expanding white dwarf photosphere due to the increase in mass of the hydrogen-rich envelope has been proposed already by Sugimoto (1979) on theoretical grounds without application to a specific source class. The expansion velocity was shown to be rather low (Fujimoto 1982)
$${dR \over dt} = R\, {d \ln R \over d \ln \Delta M_1} \,
\left ( {\dot M - \dot M_{\rm RG} \over \Delta M_1} \right )
\approx 8 \,
\left ( {\dot M - \dot M_{\rm RG} \over 10^{-6}\, M_{\odot}\, {\rm yr}^{-1}} \right )
\left ( {\Delta M_1 \over 10^{-5}\, M_{\odot}} \right )^{-1} {\rm m\,s}^{-1}$$
where $d \ln R / d \ln \Delta M_1 \approx 3-4$ at $R \approx 1 R_{\odot}$ was adopted, appropriate for a low mass white dwarf accreting at high rates (Fujimoto 1982). We note in passing that this value could be different at smaller radii, i.e. near white dwarf dimensions (see e.g. Neo et al. 1977). In particular, depending on the location in the H-R diagram it would be around 2.3 on the high-luminosity plateau of a 0.6 $M_{\odot}$ white dwarf, but rising up to 7 around the high-temperature knee (Blöcker 1996), thus suggesting a non-linear expansion rate. If we assume that the luminosity remains constant during the expansion we can determine the expansion velocity simply by folding the corresponding temperature decrease through the response of the ROSAT detector. Fitting the countrate decrease by a factor of 3.5 within 23 days (from 0.14 cts/s on HJD 9929 to 0.04 cts/s on HJD 9952, corresponding to a concordant temperature decrease from 12.3 eV to 11.1 eV) we find that $$\left ( {\dot M - \dot M_{\rm RG} \over 10^{-6}\, M_{\odot}\, {\rm yr}^{-1}} \right )
\left ( {\Delta M_1 \over 10^{-5}\, M_{\odot}} \right )^{-1} \approx 0.8$$ for the input parameters (blackbody radius) corresponding to 2.5 kpc (note that the input parameters change with distance). The mass of the envelope $\Delta M_1$ is difficult to assess due to the non-stationarity in the AG Dra system. In thermal equilibrium at our derived mean temperature it should be larger than 5$\times$10$^{-5}$ $M_{\odot}$ for a white dwarf with mass M $<$ 0.6 $M_{\odot}$ in the steady hydrogen-burning regime (Fujimoto 1982, Iben & Tutukov 1989). Inserting this in the above equation implies that the accretion rate necessary to trigger an expansion which is consistent with the X-ray measurements has to be (8.1/2.5/1.4)$\times$10$^{-6}$ $M_{\odot}$/yr for white dwarf masses of 0.4/0.5/0.6 $M_{\odot}$. This is a factor of 40–250 larger than the quiescent burning/accretion rate. The corresponding mean expansion rate is $dR/dt$ $\approx$6.5 m/s. With a duration of the X-ray decline in 1995 of about 100 days (which is consistent with the 1994 rise time in the optical region) we find that during the 1995 outburst (and probably also during the 1994 outburst) the radius of the accretor roughly doubled while the temperature decreased by about 35% (from 14.5 eV to 9.5 eV for the two parameter fit). The countrate decrease modelled with the above parameters is shown as the dotted line in Fig. \[light\], and the extrapolation to the quiescent countrate level gives an expected onset of the expansion around HJD = 9902.
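As a rough consistency check of these numbers (our own back-of-the-envelope estimate, not part of the original fit), one can recover the quiescent blackbody radius from the quoted luminosity and temperature and ask what mean expansion rate a doubling of that radius over the ~100 day rise implies:

```python
# Back-of-the-envelope check: blackbody radius from the quoted quiescent
# luminosity and temperature, and the mean expansion rate if that radius
# roughly doubles during a ~100 day outburst rise.
import math

SIGMA_SB = 5.6704e-5      # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
EV_TO_K  = 1.1605e4       # 1 eV expressed in Kelvin

def blackbody_radius_cm(lum_erg_s, kT_eV):
    temp = kT_eV * EV_TO_K
    return math.sqrt(lum_erg_s / (4.0 * math.pi * SIGMA_SB * temp ** 4))

R0 = blackbody_radius_cm(9.5e36, 14.5)        # quoted quiescent L and kT
delta_R = R0                                   # radius doubles: Delta R = R0
v_mean = delta_R / (100 * 86400.0) / 100.0     # cm/s -> m/s over ~100 days

print(f"quiescent blackbody radius ~ {R0:.1e} cm")
print(f"implied mean expansion rate ~ {v_mean:.1f} m/s")
# This gives a few m/s, the same order as the ~6.5 m/s quoted in the text.
```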
A completely different and independent estimate of the response time of a white dwarf gives a similar result; thus it seems quite reasonable that a white dwarf can indeed expand and contract on a timescale (see e.g. Kovetz & Prialnik 1994, Kato 1996) which is observed as the optical outburst rise and fall time in AG Dra. The contraction timescale can be approximated by the duration of the mass-ejection phase (Livio 1992), similar to the application to the supersoft transient RX J0513.9–6951 (Southwell 1996), where $\tau \approx 51/(m^{-2/3} - m^{2/3})^{3/2}$ days, with $m$ being the ratio of the white dwarf mass to the Chandrasekhar mass (Livio 1992). This relation gives a timescale of the order of 100 days (as observed) for a white dwarf mass of 0.5–0.6 $M_{\odot}$, consistent with observations of the AG Dra optical outbursts.
Given the substantial accretion rate triggering the optical outbursts (which has to be supplied by the donor), mass loss via a wind from the cool companion seems to be too low to power the outbursts. Depending on the donor state in quiescence (where wind accretion is possible, though we prefer Roche lobe filling; see paragraph 5.1.3), two possibilities are conceivable: either the companion fills its Roche lobe all the time, and the outbursts (i.e. the increased mass transfer) are triggered by fluctuations of the cool companion (e.g. in radius), or a more massive and/or more luminous companion produces a strong enough wind to sustain the burning in the quiescent state without filling its Roche lobe and only occasionally overfills its Roche lobe, thus triggering the outbursts. As noted above, a Roche lobe filling giant implies a distance larger than the adopted 2.5 kpc. While this poses no problems for the interpretation of our UV and X-ray data (in fact a larger distance implies a larger intrinsic luminosity and thus shifts AG Dra even further into the stable burning region for even higher white dwarf masses), the distance dependent numbers derived here have to be adapted accordingly.
### Wind from the accretor
If the accretion indeed is spherical (i.e. as wind from the donor), it may occasionally be suppressed by the wind from the accretor (Inaguchi et al. 1986). Hot stars with radiative envelopes are thought to suffer intense (radiation-driven) stellar winds at a rate of $\dot M_{\rm wind}$ = L/(v$_{\rm esc}$ c), where L is the burning luminosity, v$_{\rm esc}$ is the escape velocity and c the speed of light. This wind becomes increasingly effective at larger radii. Thus, a radiatively driven wind from the WD (Prialnik, Shara & Shaviv 1978, Kato 1983a,b) might restrict the expansion of a RG-like envelope or an expelled shell, or might even remove the envelope (Yungelson 1995). An optically thick wind can be even more powerful by up to a factor of 10 (Kato and Hachisu 1994).
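As an illustration of the numbers involved (our own estimate; the white dwarf mass and the expanded photospheric radius adopted below are assumptions), the wind rate implied by this relation for the quiescent luminosity is:

```python
# Radiation-driven wind rate Mdot_wind = L / (v_esc c) for the hot component,
# using the quoted quiescent luminosity and assumed M_WD and photospheric radius.
import math

G, C    = 6.674e-8, 2.998e10        # cgs
MSUN    = 1.989e33                  # g
RSUN    = 6.957e10                  # cm
YEAR    = 3.156e7                   # s

L_hot   = 9.5e36                    # erg/s (quoted quiescent luminosity)
M_wd    = 0.5 * MSUN                # assumed white dwarf mass
R_phot  = 0.5 * RSUN                # assumed expanded photospheric radius

v_esc   = math.sqrt(2.0 * G * M_wd / R_phot)   # escape velocity, cm/s
mdot    = L_hot / (v_esc * C)                   # g/s

print(f"v_esc ~ {v_esc / 1e5:.0f} km/s")
print(f"Mdot_wind ~ {mdot * YEAR / MSUN:.1e} Msun/yr")
# ~1e-7 Msun/yr for these assumptions; the rate grows as the photosphere
# expands and v_esc drops, as stated in the text.
```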
The wind of the hot white dwarf has typical velocities of a few hundreds km/s (Kato & Hachisu 1994). At these velocities it would take only several days until this white dwarf wind reaches the Roche lobe of the cool component, i.e. the wind of the cool component. This timescale is short enough to possibly cause the variations in the intensity of those lines which are thought to arise at the illuminated side of the wind zone of the cool component.
### Alternative scenario
An alternative to increased mass transfer would be to invoke a mild He flash on the hydrogen-burning white dwarf, which again might cause the photosphere to expand. Previous investigations of mild flashes have yielded timescales of the order of 15–20 yrs even for the most massive white dwarfs (Paczynski 1975). Recent calculations of H and He flashes have shown that under certain circumstances the flash ignition can be rather mild without leading to drastic explosive phenomena, and shorter than about 1 yr already for white dwarf masses below 1 $M_{\odot}$ (Kato 1996).
UV emission during the optical outbursts
----------------------------------------
There is an evident mismatch between the outburst UV spectra and the extrapolation toward shorter wavelengths of the blackbodies with temperatures inferred from the ROSAT data (Fig. \[qnufnu\]). The most plausible explanation is the existence of an additional emission mechanism in the UV, namely recombination continuum in the nebula surrounding the hot star. While during the quiescent phase the contribution of the nebular continuum is not very large (although it is not negligible), it increases substantially during the outburst. As an example, the change in the strength of the Balmer jump from July 1994 to December 1995 can be seen in Fig. 4 of Viotti (1996).
Comparison with other supersoft X-ray sources
---------------------------------------------
Supersoft X-ray sources (SSS) are characterized by very soft X-ray radiation of high luminosity. The spectra are well described by blackbody emission at a temperature of about kT $\approx$ 25–40 eV and a luminosity close to the Eddington limit (Greiner 1991, Heise 1994). After the discovery of supersoft X-ray sources with Einstein observations, the ROSAT satellite has discovered more than a dozen new SSS. Most of these have been observed in nearby galaxies (see Greiner (1996) for a recent compilation). Among the optically identified objects there are several different types: close binaries like the prototype CAL 83 (Long 1981), novae, planetary nebulae, and symbiotic systems.
In addition to AG Dra, two other X-ray luminous symbiotic systems with supersoft X-ray spectra are known (RR Tel, SMC3), both being symbiotic novae. RR Tel was shown to exhibit a similar soft X-ray spectrum (Jordan 1994) and luminosity (Mürset & Nussbaumer 1994). RR Tel is one of the only seven known symbiotic nova systems. It went into outburst in 1945 with a brightness increase of 7$^{\rm m}$, and since then has declined only slowly. SMC3 (= RX J0048.4–7332) in the Small Magellanic Cloud was found to be supersoft in X-rays by Kahabka (1994), and was observed to be in optical outburst in 1981 (Morgan 1992). Simultaneous fitting of the X-ray and UV data was possible only after inclusion of a wind mass loss from the hot component, and gives a temperature above 260000 K (Jordan 1996). Symbiotic novae are generally believed to be due to a thermonuclear outburst after the compact object has accreted enough material from the (wind of the) companion or an accretion disk.
The similarity of the quiescent X-ray properties of AG Dra to those of the symbiotic novae RR Tel and SMC3 might support the speculation that AG Dra is a symbiotic nova in the post-outburst stage for which the turn-on is not documented. We have checked some early records such as the Bonner Durchmusterung, but always find AG Dra at the 10–11 mag intensity level. Thus, if AG Dra should be a symbiotic nova, its hypothetical turn-on would have occurred before 1855. We note, however, that there are a number of observational differences between AG Dra and RR Tel which would be difficult to understand if AG Dra were a symbiotic nova.
If AG Dra is not a symbiotic nova, it would be the first wide-binary supersoft source (as opposed to the classical close-binary supersoft sources) the existence of which has been predicted recently (DiStefano 1996). These systems are believed to have donor companions more massive than the accreting white dwarf which makes the mass transfer unstable on a thermal timescale.
Summary
=======
The major results and conclusions of the present paper can be summarised as follows:
- The X-ray spectrum in quiescence is very soft, with a blackbody temperature of about 14.5$\pm$1 eV.
- The quiescent bolometric luminosity of the hot component in the AG Dra binary system is (9.5$\pm$1.5)$\times$10$^{36}$ (D/2.5 kpc)$^2$ erg/s, thus suggesting stable surface hydrogen burning in quiescence. Adopting a distance of AG Dra of 2.5 kpc, the X-ray luminosity suggests a low-mass white dwarf (M $<$ 0.6 $M_{\odot}$).
- In order to sustain the high luminosity, the cool companion in AG Dra either has to have a wind mass loss substantially larger than that of usual K giants or is required to fill its Roche lobe, and consequently has to be brighter than a usual K giant. The mass of the cool component should be smaller than $\approx$2 $M_{\odot}$ for the given orbital parameters, and the mass ratio would be 2–5.
- The monitoring of AG Dra at X-rays and UV wavelengths did not yield any hints for the predicted eclipse during the times of the U light minima.
- The recent optical outbursts of AG Dra have given us the rare opportunity to study the rapid evolution of the X-ray and UV emission with time. During the optical outburst in 1994 the UV continuum increased by a factor of 10, the UV line intensity by a factor of 2 (see Fig. \[iue1\]), and the X-ray intensity dropped by at least a factor of 100 (see Fig. \[light\]). There is no substantial time lag between the variations in the different energy bands compared to the optical variations.
- There is no hint for an increase of the absorbing column during the one ROSAT PSPC X-ray observation (with spectral resolution) performed during the decline phase of the 1994 optical outburst. Instead, a temperature decrease is consistent with the X-ray data which is also supported by the spectral results.
- Modelling the X-ray intensity drop by a slowly expanding white dwarf with concordant cooling, we find that the accretion rate has to rise only slightly above $\dot M_{\rm RG}$. Accordingly, the white dwarf expands to approximately double its size within the roughly three month rise of the optical outburst. The cooling during this expansion is moderate: the temperature decreases by only about 35%.
- The UV continuum emission during quiescence consists of two components which both match perfectly to the neighbouring wavelength regions (see Fig. \[qnufnu\]). Shortwards of $\approx$2000Å it corresponds to the tail of the 14–15 eV hot component as derived from the X-ray observations. However, during the optical outbursts the UV continuum shortwards of $\approx$2000Å does not correspond to the tail of the cooling hot component (9–11 eV), but instead is completely dominated by another emission mechanism, possibly a recombination continuum in the wind zone around the hot component.
- AG Dra could either be a symbiotic nova for which the turn on would have occurred before 1855, or the first example of the wide-binary supersoft source class.
We are grateful to J. Trümper and W. Wamsteker for granting the numerous target of opportunity observations with ROSAT and IUE. We thank Janet A. Mattei for providing recent AAVSO observations of AG Dra, M. Friedjung for discussions on AG Dra properties, and D. Schönberner and T. Blöcker for details on the expansion/contraction rate of small core-mass stars. We also thank the referee, J. Mikolajewska, for a careful reading of the manuscript and helpful comments. JG is supported by the Deutsche Agentur für Raumfahrtangelegenheiten (DARA) GmbH under contract FKZ 50 OR 9201. RV is partially supported by the Italian Space Agency under contract ASI 94-RS-59 and is grateful to Dr. Willem Wamsteker for hospitality at the IUE Observatory of ESA at Villafranca del Castillo (VILSPA). RES thanks NASA for partial support of this research through grants NAG5-2103 (IUE) and NAG5-2094 (ROSAT) to the University of Denver. The ROSAT project is supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF/DARA) and the Max-Planck-Society.
Anderson C.M., Cassinelli J.P., Sanders W.T., 1981, ApJ 247, L127
Bath G.T., Pringle J.E., 1982, MNRAS 201, 345
Beuermann K., Reinsch K., Barwig H., Burwitz V., de Martino D., Mantel K.-H., Pakull M.W., Robinson E.L., Schwope A.D., Thomas H.-C., Trümper J., van Teeseling A., Zhang E., 1995, A&A 294, L1
Blöcker T., 1996 (priv. comm.)
Boyarchuk A.A., 1966, Astrofizika 2, 101
Dickey J.M., Lockman F.J., 1990, Ann. Rev. Astron. Astrophys. 28, 215
DiStefano R., 1996, ApJ (in press)
Dobrzycka D., Kenyon S.J., Milone A.A.E., 1996, AJ 111, 414
Duschl W.J., 1986, A&A 163, 56
Fernández-Castro T., González-Riestra R., Cassatella A., Taylor A.R., Seaquist E.R., 1995, ApJ 442, 366
Formiggini L., Leibowitz E.M., 1990, A&A 227, 121
Friedjung M., 1988, in “The symbiotic phenomenon", eds. J. Mikolajewska , IAU Coll. 103, ASSL 145, p. 199
Fujimoto M.Y., 1982, ApJ 257, 767
Garcia M.R., 1986, AJ 91, 1400
Granslo B.H., Poyner G., Takenaka Y., Schmeer P., 1994, IAU Circ. 6009
Greiner J., Hasinger G., Kahabka P. 1991, A&A 246, L17
Greiner J., 1996, in Supersoft X-ray Sources, ed. J. Greiner, Lecture Notes in Physics 472, Springer, p. 299
Greiner J., Schwarz R., Hasinger G., Orio M., 1996, A&A 312, 88
Heise J., van Teeseling A., Kahabka P., 1994, A&A 288, L45
Hric L., Skopal A., Urban Z., Komzik R., Luthardt R., Papoušek J., Hanzl D., Blanco C., Niarchos P., Velic Z., Schweitzer E., 1993, Contrib. Astron. Obs. Skalnaté Pleso 23, 73
Hric L., Skopal A., Chochol D., Komzik R., Urban Z., Papoušek J., Blanco C., Niarchos P., Rovithis-Livaniou H., Rovithis P., Chinarova L.L., Pikhun A.I., Tsvetkova K., Semkov E., Velic Z., Schweitzer E., 1994, Contrib. Astron. Obs. Skalnaté Pleso 24, 31
Huang C.C., Friedjung M., Zhou Z.X., 1994, A&AS 106, 413
Iben I., 1982, ApJ 259, 244
Iben I., Tutukov A.V., 1989, ApJ 342, 430
Iijima T., Vittone A., Chochol D., 1987, A&A 178, 203
Inaguchi T., Matsuda T., Shima E., 1986, MNRAS 223, 129
Jordan S., Mürset U., Werner K., 1994, A&A 283, 475
Jordan S., Schmutz W., Wolff B., Werner K., Mürset U., 1996, A&A 312, 897
Kafatos M., Meyer S.R., Martin I., 1993, ApJS 84, 201
Kahabka P., Pietsch W., Hasinger G., 1994, A&A 288, 538
Kaler J.B., 1987, AJ 94, 437
Kato M., 1983a, PASJ 35, 33
Kato M., 1983b, PASJ 35, 507
Kato M., 1996, in Supersoft X-ray Sources, ed. J. Greiner, Lecture Notes in Physics 472, Springer, p. 15
Kato M., Hachisu I., 1994, ApJ 437, 802
Kenyon S.J., 1986, The symbiotic stars, Cambridge Univ. Press
Kenyon S.J., 1988, in “The symbiotic phenomenon”, eds. J. Mikolajewska , IAU Coll. 103, ASSL 145, p. 11
Kenyon S.J., Webbink R.F., 1984, ApJ 279, 252
Kenyon S.J., Garcia M.R., 1986, AJ 91, 125
Kovetz A., Prialnik D., 1994, ApJ 424, 319
Kwok S., Leahy D.A., 1984, ApJ 283, 675
Leibowitz E.M., Formiggini L., 1992, A&A 265, 605
Livio M. 1988, in “The symbiotic phenomenon”, eds. J. Mikolajewska , IAU Coll. 103, ASSL 145, p. 149
Long K.S., Helfand D.J., Grabelsky D.A., 1981, ApJ 248, 925
Luthardt R., 1983, Mitt. Veränd. Sterne 9.3, p. 129
Luthardt R., 1985, IBVS 2789
Luthardt R., 1990, Contr. of the Astr. Obs. Skalnate Pleso, CSFR, vol. 20, p. 129
Lutz J.H., Lutz T.E., Dull J.D., Kolb D.D., 1987, AJ 94, 463
Mattei J.A., 1995 (priv. comm.)
Meier S.R., Kafatos M., Fahey R.P., Michalitsianos A.G., 1994, ApJ Suppl. 94, 183
Meinunger L., 1979, IBVS 1611
Mikolajewska J., Kenyon S.J., 1992, MNRAS 256, 177
Mikolajewska J., Kenyon S.J., Mikolajewski M., Garcia M.R., Polidan R.S., 1995, AJ 109, 1289
Montagni F., Maesano M., Viotti R., Altamore A., Tomova M., Tomov N., 1996, IBVS 4336
Morgan D.H., 1992, MNRAS 258, 639
Mürset U., Nussbaumer H., Schmid H.M., Vogel M., 1991, A&A 248, 458
Mürset U., Nussbaumer H., 1994, A&A 282, 586
Neo S., Miyaji S., Nomoto K., Sugimoto D., 1977, PASJ 29, 249
Nussbaumer H., Vogel M., 1987, A&A 182, 51
Paczynski B., 1975, ApJ 202, 558
Paczynski B., Rudak B., 1980, A&A 82, 349
Piro L., 1986 (priv. comm.)
Piro L., Cassatella A., Spinoglio L., Viotti R., Altamore A., 1985, IAU Circ. 4082
Prialnik D., Shara M.M., Shaviv G., 1978, A&A 62, 339
Reimers D., 1975, Mem. Soc. R. Sci. Liege 8, 369
Robinson L.J., 1969, Peremennye Zvezdy 16, 507
Robinson K., Bode M.F., Skopal A., Ivison R.J., Meaburn J., 1994, MNRAS 269, 1
Rossi C., Viotti R., Muratorio G., Friedjung M., 1996, A&AS (to be subm.)
Schmidt H.M., Nussbaumer H., 1993, A&A 268, 159
Shakura N.I., Sunyaev R.A., 1973, A&A 24, 337
Skopal A., 1994, IBVS 4096
Smith S.E., Bopp B.W., 1981, MNRAS 195, 733
Smith V.V., Cunha K., Jorissen A., Boffin H.M.J., 1996, A&A 315, 179
Southwell K.A., Livio M., Charles P.A., O’Donoghue D., Sutherland W.J., 1996, ApJ (subm.)
Sugimoto D., Fujimoto M.Y., Nariai K., Nomoto K., 1979, in White dwarfs and degenerate variable stars, IAU Coll. 53, eds. H.M. van Horn and V. Weidemann, Rochester: University of Rochester Press, p. 280
Tomov N.A., Tomova M.T., 1996, in Workshop on Symbiotic Stars, Poland, July 1996 (in press)
Trümper J., Hasinger G., Aschenbach B., Bräuninger H., Briel U.G., Burkert W., Fink H., Pfeffermann E., Pietsch W., Predehl P., Schmitt J.H.M.M., Voges W., Zimmermann U., Beuermann K., 1991, Nat. 349, 579
Tutukov A.V., Yungelson L.R., 1976, Afz 12, 521
Viotti R., 1993, in Cataclysmic Variables and Related Objects, eds. M. Hack and C. la Dous, NASA Monograph Series, NASA SP-507, Washington, USA, p. 634 and 674
Viotti R., Ricciardi O., Ponz D., Giangrande A., Friedjung M., Cassatella A., Baratta G.B., Altamore A., 1983, A&A 119, 285
Viotti R., Altamore A., Baratta G.B., Cassatella A., Friedjung M., 1984, ApJ 283, 226
Viotti R., González-Riestra R., Rossi C., 1994a, IAU Circ. 6039
Viotti R., Mignoli M., Comastri A., Bedalo C., Ferluga S., Rossi C., 1994b, IAU Circ. 6044
Viotti R., Giommi P., Friedjung M., Altamore A., 1995, Proc. Abano-Padova Conference on Cataclysmic Variables: eds. A. Bianchini, M. Della Valle, M. Orio, ASSL 205, 195
Viotti R., González-Riestra R., Montagni F., Mattei J., Maesano M., Greiner J., Friedjung M., Altamore A., 1996, in Supersoft X-ray Sources, ed. J. Greiner, Lecture Notes in Physics 472, Springer, p. 259
Yungelson L., Livio M., Tutukov A., Kenyon S.J., 1995, ApJ 447, 656
Yungelson L., Livio M., Truran J.W., Tutukov A., Federova A., 1996, ApJ 466, 890
Zimmermann H.U., Becker W., Belloni T., Döbereiner S., Izzo C., Kahabka P., Schwentker O., 1994, MPE report 257
[^1]: $\!\!\!$Also Astronomy Division, Space Science Department, ESTEC.
[^2]: $\!\!\!$Based on observations made with the [*International Ultraviolet Explorer*]{} collected at the Villafranca Satellite Tracking Station of the European Space Agency, Spain.
[**$\delta$-SUPERDERIVATIONS OF SEMISIMPLE JORDAN SUPERALGEBRAS**]{}\
[**Ivan Kaygorodov**]{}
[**Abstract:** ]{}
We describe the $\delta$-derivations and $\delta$-superderivations of simple and semisimple finite-dimensional Jordan superalgebras over algebraically closed fields of characteristic $p\neq2$. We construct new examples of $\frac{1}{2}$-derivations and $\frac{1}{2}$-superderivations of the simple Zelmanov superalgebra $V_{1/2}(Z,D).$
$\delta$-(super)derivation, Jordan superalgebra.
**Introduction**

Antiderivations, i.e. linear maps $\mu$ of an algebra $A$ such that $$\mu(xy)=-\mu(x)y-x\mu(y),$$ were studied in [@hop2; @fi]. Subsequently, in the papers of V. T. Filippov the notion of a $\delta$-derivation was introduced, that is, a linear map $\phi$ of an algebra $A$ such that, for a fixed element $\delta$ of the ground field and arbitrary elements $x,y \in A,$ $$\phi(xy)=\delta(\phi(x)y+x\phi(y)).$$ He considered $\delta$-derivations of prime Lie, alternative and non-Lie Malcev algebras [@Fil; @Fill; @Filll]. Later, $\delta$-derivations of Jordan algebras and superalgebras, as well as of simple Lie superalgebras, were considered by I. B. Kaygorodov in [@kay; @kay_lie; @kay_lie2; @kay_ob_kant]; $\delta$-superderivations of superalgebras of Jordan brackets were considered by I. B. Kaygorodov and V. N. Zhelyabin in [@kay_zh]; P. Zusmanovich considered $\delta$-superderivations of prime Lie superalgebras in [@Zus]. We note that [@kay_obzor] contains a detailed survey devoted to the study of $\delta$-derivations and $\delta$-superderivations.

In this paper we give a complete description of the $\delta$-derivations and $\delta$-superderivations of simple finite-dimensional non-unital Jordan superalgebras over an algebraically closed field of characteristic $p>2.$ As a result, we obtain a complete description of the $\delta$-derivations and $\delta$-superderivations of semisimple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic different from 2. In particular, it is shown that semisimple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic zero have no nontrivial $\delta$-derivations or $\delta$-superderivations.
**§1 Basic facts and definitions.**

Let $F$ be a field of characteristic $p \neq 2$. An algebra $A$ over $F$ is called Jordan if it satisfies the identities $$\begin{aligned}
xy=yx,&& (x^{2}y)x=x^{2}(yx).\end{aligned}$$ Let $G$ be the Grassmann algebra over $F$ given by the generators $1,\xi_{1},\ldots ,\xi_{n},\ldots $ and the defining relations $\xi_{i}^{2}=0, \xi_{i}\xi_{j}=-\xi_{j}\xi_{i}.$ The elements $1, \xi_{i_{1}}\xi_{i_{2}}\ldots \xi_{i_{k}}, i_{1}<i_{2}< \ldots
<i_{k},$ form a basis of the algebra $G$ over $F$. Denote by $G_{0}$ and $G_{1}$ the subspaces spanned by the products of even and odd length, respectively; then $G$ is the direct sum of these subspaces, $G = G_{0}\oplus G_{1}$, and the relations $G_{i}G_{j} \subseteq G_{i+j(mod 2)},
i,j=0,1,$ hold. In other words, $G$ is a $\mathbb{Z}_{2}$-graded algebra (a superalgebra) over $F$.
Now let $A=A_{0} \oplus A_{1}$ be an arbitrary superalgebra over $F$. Consider the tensor product of $F$-algebras $G \otimes
A$. Its subalgebra $$\begin{aligned}
G(A)&=&G_{0} \otimes
A_{0} + G_{1} \otimes A_{1}\end{aligned}$$ is called the Grassmann envelope of the superalgebra $A$.

Let $\Omega$ be a variety of algebras over $F$. A superalgebra $A=A_{0} \oplus A_{1}$ is called an $\Omega$-superalgebra if its Grassmann envelope $G(A)$ is an algebra in $\Omega$. In particular, $A=A_{0}\oplus A_{1}$ is called a Jordan superalgebra if its Grassmann envelope $G(A)$ is a Jordan algebra. In what follows, for a homogeneous element $x$ of a superalgebra $A=A_0 \oplus A_1$ we set $p(x)=i$ if $x\in A_i$, and for an element $x \in A=A_0 \oplus A_1$ we denote by $x_i$ its projection onto $A_i.$

The classification of simple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic zero was given in the papers of I. Kantor and V. Kac [@kant; @Kacc]. Later, M. Racine and E. Zelmanov [@RZ] described the simple finite-dimensional Jordan superalgebras with semisimple even part over an algebraically closed field of characteristic different from 2. In the paper of E. Zelmanov and C. Martinez [@Zelmanov-Martines], a classification of the simple unital finite-dimensional Jordan superalgebras with non-semisimple even part over an algebraically closed field of characteristic $p>2$ was given. The simple non-unital finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic $p>2$ were described in the paper of E. Zelmanov [@Zel_ssjs]; that paper also presented the structure of the semisimple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic $p \neq 2.$
Let us give some examples of Jordan superalgebras.

**1.1. The superalgebra of vector type $J(\Gamma, D)$.** Let $\Gamma=\Gamma_0 \oplus \Gamma_1$ be an associative supercommutative unital superalgebra with a nonzero even derivation $D$. Consider $J(\Gamma, D)=\Gamma \oplus \Gamma x$, the direct sum of spaces, where $\Gamma x$ is an isomorphic copy of the space $\Gamma.$ The multiplication on $J(\Gamma, D),$ which we denote by $\cdot$, is defined by the formulas $$a\cdot b=ab, \ a\cdot bx=(ab)x, \ ax\cdot b=(-1)^{p(b)}(ab)x, \ ax \cdot bx = (-1)^{p(b)}(D(a)b-aD(b)),$$ where $a,b$ are homogeneous elements of $\Gamma$ and $ab$ is the product in $\Gamma$.

In what follows we will be interested in the case $\Gamma=B(m),$ where $B(m)=F[a_1, \ldots, a_m | a_i^p=0]$ is the algebra of truncated polynomials in $m$ even variables over a field of characteristic $p>2$.
**1.2. The Kac superalgebra.** The simple nine-dimensional Kac superalgebra $K_{9}$ over a field of characteristic 3 is defined as follows:

$K_{9} = A \oplus M, (K_{9})_{0}=A, (K_{9})_{1}=M$,\
$A=Fe + Fuz + Fuw + Fvz + Fvw,$\
$M= Fz + Fw + Fu + Fv$.

The multiplication is given by the conditions:

$e^{2}= e$, $e$ is the unit of the algebra $A$, $em= \frac{1}{2} m$ for every $m \in M,$

$ [u,z] = uz, [u,w]=uw, [v,z]=vz, [v,w]=vw,$

$ [z,w]= e, [u,z]w= -u, [v,z]w= -v, [u,z] [v,w]=2e$,

and all remaining nonzero products are obtained from those listed either by applying one of the skew-symmetries $z
\leftrightarrow w, u \leftrightarrow v$, or by applying the simultaneous substitution $z \leftrightarrow u, w \leftrightarrow
v$. Note that the superalgebra $K_9$ is not unital.
**1.3. The superalgebra $V_{1/2}(Z,D)$.** Let $Z$ be an associative commutative $F$-algebra with a unit $e$ and a derivation $D: Z \rightarrow Z$ satisfying the two conditions

i\) $Z$ has no proper $D$-invariant ideals,

ii\) $D$ annihilates only the elements of $Fe.$

Consider $Zx$ as an isomorphic copy of the algebra $Z$. We define a superalgebra structure on the vector space $V(Z,D)=Z+Zx$. Let $A=Z$ and $M=Zx$ be the even and the odd part, respectively. The multiplication $\cdot$ is given by $$a\cdot b =ab, a\cdot bx =\frac{1}{2} (ab)x, ax \cdot bx =D(a)b-aD(b),$$ for arbitrary elements $a,b \in Z.$ The resulting superalgebra will be denoted by $V_{1/2}(Z,D).$
As noted above, for a fixed element $\delta$ of the ground field, by a $\delta$-derivation of a superalgebra $A$ we mean a linear map $\phi:A\rightarrow A$ which for arbitrary $x,y \in A$ satisfies $$\phi(xy)=\delta(\phi(x)y+x\phi(y)).$$

By the centroid $\Gamma(A)$ of a superalgebra $A$ we mean the set of all linear maps $\chi: A \rightarrow A$ satisfying, for arbitrary $a,b \in A,$ $$\chi(ab)=\chi(a)b=a\chi(b).$$

The definition of a 1-derivation coincides with the usual definition of a derivation; a 0-derivation is any endomorphism $\phi$ of the algebra $A$ such that $\phi(A^{2})=0$. Clearly, every element of the centroid of a superalgebra is a $\frac{1}{2}$-derivation.
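Indeed (a routine verification, included here only for convenience), if $\chi \in \Gamma(A)$, then for all $a,b \in A$ $$\frac{1}{2}\left(\chi(a)b+a\chi(b)\right)=\frac{1}{2}\left(\chi(ab)+\chi(ab)\right)=\chi(ab),$$ so $\chi$ satisfies the defining identity of a $\frac{1}{2}$-derivation.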
A nonzero $\delta$-derivation $\phi$ will be called a nontrivial $\delta$-derivation if $\delta \neq 0,1$ and $\phi \notin \Gamma(A).$

By a superspace we mean a $\mathbb{Z}_{2}$-graded space. Considering the space of endomorphisms $End(A)$ of a superalgebra $A$, we may introduce a $\mathbb{Z}_2$-grading by declaring even those maps $\phi$ for which $\phi(A_i) \subseteq A_i,$ and odd those maps $\phi$ for which $\phi(A_i) \subseteq A_{i+1}.$ A homogeneous element $\psi$ of the superspace $End(A)$ is called a superderivation if for homogeneous $x,y \in A_0 \cup A_1$ $$\psi(xy)=\psi(x)y+(-1)^{p(x)p(\psi)}x\psi(y).$$

For a fixed element $\delta \in F$ we define the notion of a $\delta$-superderivation of a superalgebra $A=A_0 \oplus A_1$. A homogeneous linear map $\phi : A \rightarrow A$ is called a *$\delta$-superderivation* if for homogeneous $x,y \in A_0 \cup A_1$ $$\begin{aligned}
\phi (xy)&=&\delta(\phi(x)y+(-1)^{p(x)p(\phi)}x\phi(y)).\end{aligned}$$

Consider a Lie superalgebra $A=A_0 \oplus A_1$ with multiplication $[ . , . ]$ and fix an element $x \in A_i$. Then $R_x: y \rightarrow [x,y]$ is a superderivation of the superalgebra $A$ of parity $p(R_{x})=i.$

By the supercentroid $\Gamma_{s}(A)$ of a superalgebra $A$ we mean the set of all homogeneous linear maps $\chi: A \rightarrow A$ satisfying, for arbitrary homogeneous elements $a,b,$

$$\chi(ab)=\chi(a)b=(-1)^{p(a)p(\chi)}a\chi(b).$$

The definition of a 1-superderivation coincides with the definition of an ordinary superderivation; a 0-superderivation is any endomorphism $\phi$ of the superalgebra $A$ such that $\phi(A^{2})=0$. A nonzero $\delta$-superderivation will be called nontrivial if $\delta\neq 0,1$ and $\phi \notin \Gamma_s(A).$

It is easy to see that an even $\delta$-superderivation is a $\delta$-derivation. Below we will use this without further mention.
**§2 $\delta$-derivations and $\delta$-superderivations of simple finite-dimensional Jordan superalgebras.**

In this section we consider the $\delta$-derivations and $\delta$-superderivations of the non-unital simple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic $p>2,$ i.e. of the superalgebra $K_9$ over a field of characteristic 3 and of the superalgebra $V_{1/2}(Z,D)$ over a field of characteristic $p>2.$ As a result, taking into account the results of [@kay; @kay_lie2], we obtain a complete description of the $\delta$-derivations and $\delta$-superderivations of simple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic different from 2.

In what follows, $\langle x,y \rangle$ denotes the vector space spanned by the vectors $x,y$.
**LEMMA 1.** *The superalgebra $K_9$ has no nontrivial $\delta$-derivations or $\delta$-superderivations.*

Let $\phi$ be a nontrivial $\delta$-derivation of the superalgebra $K_9$. It is easy to see that for $m \in (K_9)_1$ we have $$0=\phi(me-em)=2\delta m\phi(e)_1.$$ Hence, taking into account $\phi(e)_{1}u=0$ and $\phi(e)_{1}v=0,$ we get $\phi(e)_{1}=0$.

Consider the restriction of $\phi$ to $(K_9)_0$. Note that $(K_{9})_{0}$ is the Jordan algebra of a bilinear form; by what was proved in [@kay], for $\delta=\frac{1}{2}$ we obtain $\phi(x)_0=\alpha x, x \in
(K_{9})_{0}, \alpha \in F,$ and in particular $\phi(e)_0=\alpha e,$ while for $\delta \neq \frac{1}{2}$ we have $\phi((K_9)_0)_0=0.$

If $\delta \neq \frac{1}{2}$ and $m \in (K_9)_1,$ then $$\phi(m)=2\phi(e m)=2\delta e\phi(m)=2\delta\phi(m)_0+\delta\phi(m)_1.$$ Hence $\phi(m)=0$ and $\phi(m_1m_2)=\delta\phi(m_1)m_2+\delta m_1\phi(m_2)=0$ for arbitrary $m_1,m_2 \in (K_9)_1$, which implies that $\phi$ is trivial.

If $\delta=\frac{1}{2}$ and $m \in (K_9)_1,$ then $2\phi(m)=4\phi(em)=2e\phi(m)+\alpha m,$ which gives $\phi(m)=\phi(m)_0+\alpha m.$ In what follows we may assume that $\alpha=0$ and $\phi(m)=\phi(m)_0$. Note that $0=\phi(m^2)=\phi(m)m$, whence $$\phi(w)=\alpha_w uw +\beta_w vw, \phi(z)=\alpha_z uz + \beta_z vz,$$ $$\phi(u)=\alpha_u uz+\beta_u uw, \phi(v)=\alpha_v vw +\beta_v vz.$$

It is easy to see that $$\phi(uz)=-(\phi(u)z+u\phi(z))=\beta_z z-\beta_u u,$$ whence $$\alpha_u uz+\beta_u uw =\phi(u)=-\phi((uz)w)=(\beta_z z-\beta_u u)w+uz(\alpha_w uw+\beta_w vw)=
\beta_z e-\beta_u uw +2\beta_w e,$$ i.e. $\alpha_u=\beta_u=0.$ Similarly we get $\alpha_{\gamma}=\beta_{\gamma}=0$ for $\gamma \in \{z,w,u,v\}.$ Thus $\phi$ is trivial.
If $\phi$ is an odd $\delta$-superderivation, then it is easy to obtain $\phi(e)=\delta\phi(e)e+\delta e\phi(e)=\delta \phi(e)$, i.e. $\phi(e)=0.$

We also see that for $\delta\neq \frac{1}{2}$ we have $\phi(a)=\phi(ea)=\frac{\delta}{2}\phi(a)$ and $\phi(m)=2\phi(em)=2\delta\phi(m),$ which gives the triviality of $\phi$.

Let $\delta=\frac{1}{2}$ and, for $q \in \{u,z,v,w\},$ let $$\phi(q)=\chi_q e+ \alpha_q uz+\beta_q uw +\gamma_q vz +\mu_q vw.$$ Now it is easy to obtain $$\phi(u)=-\phi([u,z]w)=\phi(uz)w+(uz)\phi(w)=-(\phi(u)z-u\phi(z))w+(uz)\phi(w),$$ $$\phi(u)=\phi([u,w]z)=-(\phi(uw)z+(uw)\phi(z))=(\phi(u)w-u\phi(w))z-(uw)\phi(z).$$ Clearly $(\phi(u)w-u\phi(w))z-(uw)\phi(z) \in \zeta \mu_u vw +\langle e,uw, uz, vw \rangle, \zeta=\pm1,$ while $-(\phi(u)z-u\phi(z))w+(uz)\phi(w) \in \langle e,uw, uz, vw \rangle,$ so $\mu_u=0.$ Similarly we get $\gamma_u=0.$ Carrying out the analogous arguments for $v,z,w$, we obtain $\phi(q)=\chi_q e+ y_qq, y_q \in (K_9)_1.$

It is easy to see that $$0=2\phi(e)=2\phi([z,w])=\phi(z)w-z\phi(w)=-\chi_zw+(y_zz)w+\chi_wz-z(y_ww),$$ and also that $(y_zz)w-z(y_ww) \in \langle u, v \rangle,$ so $\chi_z=\chi_w=0.$ An analogous computation gives $\chi_u=\chi_v=0.$

Next, note that $$0=\phi(e)=-\phi([u,z][v,w])=\phi(uz)vw+uz\phi(vw)=$$ $$\frac{1}{2}((\phi(u)z-u\phi(z))vw+uz(\phi(v)w-v\phi(w)))=
\zeta_1\beta_uw + \zeta_2\gamma_zv + \zeta_3\gamma_vz + \zeta_4\beta_wu,$$ where each $\zeta_i$ equals $+1$ or $-1$. Hence $\beta_u=\beta_w=\gamma_z=\gamma_v=0.$

It remains to note that $$0=-\phi(e)=-\phi(uv)=\phi(u)v-u\phi(v)=(\alpha_u uz)v-u(\mu_v vw),$$ which gives $\mu_v=\alpha_u=0.$ In the same way one shows that $\phi(z)=\phi(w)=0.$ Thus $\phi$ is trivial. The lemma is proved.
**LEMMA 2.** [*Let $V_{1/2}(Z,D)$ be a superalgebra over a field of characteristic 3, let $Z$ be its even part, and let $\psi$ be a derivation of the algebra $Z$ related to $D$ by $\psi(a)D(b)=D(a)\psi(b)$ for all $a,b \in Z.$ Then there exists $z \in Z$ such that $D(z)$ is invertible, and $\psi=cD$ for some $c \in Z$.*]{}

It is easy to see that $D(a)\psi=\psi(a)D.$ We will show that for some $z \in Z$ the element $D(z)$ is invertible; this yields $\psi=(D(z)^{-1}\psi(z))D,$ which is what is required.

Note that $D(a^{3})=3D(a)a^{2}=0.$ From this it is easy to see that $a^{3}Z$ is a $D$-invariant ideal of $Z$. Thus either $a^{3}Z=Z$ or $a^{3}Z=0.$ If $a^{3}Z=Z$ for every $a \in Z,$ then $Z$ is a field. Otherwise there is an element $a \in Z$ such that $a^{3}=0.$ Consider $R= \{ a \in Z | a^{3} =0 \} \neq 0$, an ideal of $Z$ different from $Z.$ Let $I \lhd Z$; then no element $a \in I$ is invertible, hence $a^{3}$ is not invertible, and moreover $a^{3}Z \lhd Z.$ Since $a^{3}Z$ is a $D$-invariant ideal, we obtain $a^{3}Z=0,$ which implies $a^{3}=0$ and $I \subseteq R$. Therefore $R$ is the largest ideal of $Z$. It is easy to see that $D(R) \nsubseteq R$; otherwise $R$ would be a $D$-invariant ideal and would have to coincide with $Z$. Thus for some $z \in R$ we have $D(z) \notin R.$ Consequently $D(z)$ is invertible, which proves the lemma.
**LEMMA 3.** *Let $\phi$ be an odd $\frac{1}{2}$-derivation or an odd $\frac{1}{2}$-superderivation of the superalgebra $V_{1/2}(Z,D)$ over a field of characteristic 3, given by $\phi(a)=\psi(a)x$ and $\phi(ax)=\mu(a)$. Then $\mu(a)=D(\psi(a))+a\mu(e)$ and $\psi=cD$ for some $c\in A.$*

Note that for arbitrary $a,b \in A$ we have $$\psi(ab)x=\phi(ab)=2(\phi(a)\cdot b +a \cdot \phi(b))=(\psi(a)b+a\psi(b))x,$$ i.e. $\psi$ is a derivation of $A.$ Moreover, $$2\mu(ab)=\phi(a\cdot bx)=2(\phi(a)\cdot bx+a\cdot \phi(bx))=2(\psi(a)x \cdot bx+a\mu(b))=2D(\psi(a))b-2\psi(a)D(b)+2a\mu(b),$$ that is, $$\begin{aligned}
\label{k}
\mu(ab)=D(\psi(a))b-\psi(a)D(b)+a\mu(b),\end{aligned}$$ which, for $b=e$, implies $$\begin{aligned}
\label{k2}
\mu(a)=D(\psi(a))+a\mu(e).\end{aligned}$$ Thus, substituting (\[k2\]) into (\[k\]), we have $$D(\psi(ab))+ab\mu(e)=D(\psi(a))b-\psi(a)D(b)+aD(\psi(b))+ab\mu(e),$$ which, by a direct computation, gives $$D(a)\psi(b)=\psi(a)D(b).$$ Hence, by Lemma 2, we obtain $\psi=cD$ for some $c \in A$. The lemma is proved.
**LEMMA 4.** *Let $\phi$ be a nontrivial $\delta$-derivation of the superalgebra $V_{1/2}(Z,D)$ over a field of characteristic $p\neq 2$. Then $\delta=\frac{1}{2}$ and $\phi(y)=(1+p(y))z \cdot y$ for a fixed $z \in A \setminus \{Fe\}.$*

It is easy to see that $\phi(e)=2\delta\phi(e)e=2\delta\phi(e)_0+\delta\phi(e)_1,$ i.e. either $\delta \neq \frac{1}{2}$ and $\phi(e)=0,$ or $\delta=\frac{1}{2}$ and $\phi(e) \in A.$

If $mx \in M,$ then $$\phi(mx)=2\phi(e \cdot mx)=2\delta\phi(e)\cdot mx+2\delta e\cdot \phi(mx)=\delta(\phi(e)m)x+2\delta \phi(mx)_0+\delta\phi(mx)_1.$$

Hence for $\delta \neq \frac{1}{2}$ we get $\phi(M)=0,$ while for $\delta=\frac{1}{2}$ we obtain $\phi(mx)_1=(\phi(e)m)x.$

For arbitrary $a \in A$ we have $$\phi(a)=\phi(e\cdot a)=\delta\phi(e)\cdot a+\delta e\cdot \phi(a)=\delta\phi(e)\cdot a+\delta\phi(a)_0+\frac{\delta}{2}\phi(a)_1.$$

Hence one of the following conditions holds:\
1) $\delta\neq \frac{1}{2},2$ and $\phi=0;$\
2) $\delta=2, p\neq 2,3$ and $\phi(a) \in M;$\
3) $\delta=\frac{1}{2}, p\neq 2,3$ and $\phi(a)=\phi(e)a;$\
4) $\delta=\frac{1}{2}$ and $p=3.$

Let us show that the second case yields no nontrivial 2-derivations. Let $\phi$ be a 2-derivation with $\phi(a)=\phi_a x$; then $$0=\phi(a\cdot x)=2\phi(a)\cdot x=2\phi_a x \cdot x =2D(\phi_a),$$ whence, using the fact that $D$ annihilates only elements of the form $\alpha e,$ we get $\phi_a=\alpha e.$

Note that $$\phi_{a^{2}}x=\phi(a^{2})=4\phi(a)\cdot a =2(\phi_aa)x,$$ i.e. $\phi_a=0$ for $a \in A \setminus \{Fe\}.$ This gives the triviality of $\phi$.
Consider the third case, i.e. $\delta=\frac{1}{2}$ and $p\neq 2,3$. Clearly, the map $\psi$ defined by the rule $$\begin{aligned}
\label{12der}
\psi(a)=za, \mbox{ } \psi(mx)=(zm)x, \mbox{ for }z,a,m \in A\end{aligned}$$ is a $\frac{1}{2}$-derivation. Therefore the map $\chi$ defined by $\chi=\phi-\psi$ (with $z=\phi(e)$) is also a $\frac{1}{2}$-derivation. Let us show that $\chi=0.$

It is easy to see that $\chi(A)=0$ and $\chi(mx)=\phi(mx)_0.$ Note that $$0=2\chi(x \cdot x)=2\chi(x)\cdot x=\chi(x)x,$$ i.e. $\chi(x)=0.$ Consequently we obtain $\chi(ax)=\chi(a)\cdot x+a\cdot \chi(x)=0.$ Hence $\chi$ is trivial and $\phi$ has the required form.

Note that for $\phi$ defined by the rule (\[12der\]) we have $$\phi(ax \cdot bx)=\phi(D(a)b-aD(b))=z(D(a)b-aD(b))=(ax)\cdot ((bz)x)+abD(z)=(ax)\cdot\phi(bx)+abD(z).$$ Hence $\phi$ is a nontrivial $\frac{1}{2}$-derivation only if $D(z)\neq 0,$ i.e. only if $z \in A \setminus \{ Fe\}.$ Clearly, a map defined in this way is a $\frac{1}{2}$-derivation over a field of characteristic $p=3$ as well.

Consider the fourth case, i.e. $\delta=\frac{1}{2}$ and $p=3$. Since the map given by the formulas (\[12der\]) is a $\frac{1}{2}$-derivation of this superalgebra over a field of characteristic 3, we may assume that $\phi(A) \subseteq M, \phi(M) \subseteq A.$ Suppose $\phi(a)=\psi(a)x$ and $\phi(ax)=\mu(a).$ By Lemma 3 we obtain $\psi=cD$ for some $c \in A$.

In order for the map $\phi$ given by the rule $\phi(a)=(cD(a))x, \phi(ax)=D(cD(a))+az$ for some $c,z \in A$ to be a $\frac{1}{2}$-derivation, we have to verify the relation $$\phi(ax \cdot bx)=\frac{1}{2} (\phi(ax) \cdot bx + ax \cdot \phi(bx)).$$ By a direct computation we have $$(cD(D(a)b-aD(b)))x=\phi(ax \cdot bx)=
\frac{1}{2} (\phi(ax) \cdot bx + ax \cdot \phi(bx))=
((D(cD(a))+az)b) x +(a(D(cD(b))+bz))x,$$ whence $$acD^2(b)=D(c)D(a)b+aD(c)D(b)+2azb,$$ which for $b=a=e$ gives $z=0,$ and for $b=e$ gives $D(c)D(a)=0$. Since, by Lemma 2, there exists $w \in A$ such that $D(w)$ is invertible, we get $D(c)=0$ and $c=\alpha e, \alpha \in F.$ Accordingly, we obtain the relation $\alpha a D^2(b)=0.$ Suppose $\alpha \neq 0$; then $D^2(b)=0.$ It follows that $D(b) =\beta_b e, \beta_b \in F.$ Note that $\beta_{b^2} e=D(b^2)=2D(b)b=2\beta_{b}b,$ so $\beta_b=0$, which gives $D=0$, a contradiction with $D$ being a nonzero map. Thus $\alpha=0$ and $\psi=\mu=0.$ The lemma is proved.
**LEMMA 5.** *Let $\phi$ be a nontrivial odd $\delta$-superderivation of the superalgebra $V_{1/2}(Z,D)$ over an algebraically closed field of characteristic $p \neq 2.$ Then $\delta=\frac{1}{2}$ and\
1) if $p \neq 3$, then $\phi(A)=0, \phi(ax)=az$ for some $z \in A \setminus \{ 0\} ;$\
2) if $p = 3$, then $\phi(a)=(\alpha D(a))x, \phi(ax)=D(\alpha D(a))+az$ for some $z \in A, \alpha \in F,$ such that $z$ and $\alpha$ do not both vanish.*

Clearly $\phi(A) \subseteq M$ and $\phi(M) \subseteq A.$ It is easy to see that $$\phi(e)=2\delta\phi(e)\cdot e=\delta\phi(e),$$ i.e. $\phi(e)=0.$ Let $a \in A$; then $$\phi(a)=\phi(e\cdot a)=\delta e\phi(a)=\frac{\delta}{2}\phi(a), \quad \phi(ax)=2\phi(e \cdot ax)=2\delta e\cdot \phi(ax)=2\delta \phi(ax).$$ Hence one of the following conditions holds:\
1) $\delta=\frac{1}{2}, p=3;$\
2) $\delta=\frac{1}{2}, p\neq3;$\
3) $\delta=2, p\neq 3;$\
4) $\phi=0.$

The third case gives $\phi(M)=0$ and $0=\phi(a\cdot x)=2 \phi(a) \cdot x$, i.e. $\phi(A)=0.$

The second case gives $\phi(A)=0$ and $\phi(ax)=\phi_a.$ Then $$0=2\phi(ax \cdot bx)=\phi_a \cdot bx - ax \cdot \phi_b,$$ which implies $\phi_a b=a\phi_b,$ i.e. $\phi_a=a\phi_e.$ In order for the map $\phi$ defined by $\phi(A)=0, \phi(ax)=za$ to be a $\frac{1}{2}$-superderivation, the relation $$\phi(a \cdot bx)=\frac{1}{2}\phi((ab)x)=\frac{1}{2}abz=\frac{1}{2}(0\cdot bx+ a\cdot (bz))=\frac{1}{2}(\phi(a)\cdot bx+a\cdot\phi(bx))$$ must hold, and we see that it does; hence $\phi$ is a $\frac{1}{2}$-superderivation. Note that $$\phi(x \cdot x)=0\neq -\frac{1}{2}zx= -x \cdot \phi(x),$$ i.e. $\phi$ is a nontrivial $\frac{1}{2}$-superderivation.
Let us consider the first case, i.e. $p=3$ and $\delta=\frac{1}{2},$ in more detail. Suppose $\phi(a)=\psi(a)x$ and $\phi(ax)=\mu(a).$ By Lemma 3 we obtain $\psi=cD$ for some $c \in A$.

In order for the map $\phi$ given by the rule $\phi(a)=(cD(a))x, \phi(ax)=D(cD(a))+az$ for some $c,z \in A$ to be a $\frac{1}{2}$-superderivation, we have to verify the relation $$\phi(ax \cdot bx)=\frac{1}{2} (\phi(ax) \cdot bx - ax \cdot \phi(bx)).$$ By a direct computation we have $$(cD(D(a)b-aD(b)))x=\phi(ax \cdot bx)=
\frac{1}{2} (\phi(ax) \cdot bx - ax \cdot \phi(bx))=
((D(cD(a))+az)b) x -(a(D(cD(b))+bz))x,$$ which implies $D(c)(D(a)b-aD(b))=0,$ and hence, for $b=e,$ we easily get $D(c)D(a)=0.$ By Lemma 2 there exists an element $w \in A$ such that $D(w)$ is invertible, which implies $D(c)=0$, and from the conditions defining the derivation $D$ of the algebra $A$ we conclude that $c=\alpha e$. It is easy to check that the map $\phi$ given by the rules $$\phi(a)=(\alpha D(a))x, \phi(ax)=D(\alpha D(a))+az$$ for $z \in A, \alpha \in F$ is an odd $\frac{1}{2}$-superderivation. Clearly, if $\alpha\neq 0$ and $a \neq \beta e, \beta \in F$, then $$\phi(a\cdot e) - a \cdot \phi(e) = \phi(a) \neq 0,$$ i.e. $\phi$ is a nontrivial $\frac{1}{2}$-superderivation. The lemma is proved.
Note that Lemmas 4 and 5 provide examples of new nontrivial $\frac{1}{2}$-derivations and $\frac{1}{2}$-superderivations which are not right multiplication operators.

Recall that in [@kay_zh] new examples of nontrivial $\frac{1}{2}$-derivations were constructed for the simple unital superalgebras of vector type $J(\Gamma, D)$ built on superalgebras with trivial odd part. It turned out that every right multiplication operator $R_z$ with $D(z) \neq 0, z \in \Gamma,$ is a nontrivial $\frac{1}{2}$-derivation and that, in turn, all nontrivial $\delta$-derivations of these superalgebras are exhausted by right multiplication operators of this form.
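For the reader's convenience we sketch the computation behind this statement (a routine check, not reproduced from [@kay_zh]; here $\Gamma$ has trivial odd part, so all signs disappear, and we use the multiplication rules from 1.1). For $a,b \in \Gamma$ one has $$\frac{1}{2}\bigl(R_z(ax)\cdot bx+ax\cdot R_z(bx)\bigr)=\frac{1}{2}\bigl(D(za)b-zaD(b)+D(a)zb-aD(zb)\bigr)=z\bigl(D(a)b-aD(b)\bigr)=R_z(ax\cdot bx),$$ since the two terms containing $D(z)$ cancel, while the analogous identities on the products $a\cdot b$ and $a\cdot bx$ are immediate; hence $R_z$ is a $\frac{1}{2}$-derivation. On the other hand, $$R_z(ax)\cdot bx-R_z(ax\cdot bx)=D(z)ab,$$ so $R_z$ does not belong to the centroid precisely when $D(z)\neq 0,$ i.e. it is then a nontrivial $\frac{1}{2}$-derivation.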
**THEOREM 6.** *Let $J$ be a simple finite-dimensional Jordan superalgebra over an algebraically closed field of characteristic $p \neq 2$ which possesses a nontrivial $\delta$-derivation or $\delta$-superderivation. Then $p>2, \delta=\frac{1}{2}$ and either $J=V_{1/2}(Z,D)$ or $J=J(B(m), D).$*

Note that the $\delta$-derivations and $\delta$-superderivations of simple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic zero were completely described in the papers of I. B. Kaygorodov [@kay; @kay_lie2], where the absence of nontrivial $\delta$-derivations and $\delta$-superderivations for these superalgebras was established. By the paper of I. B. Kaygorodov and V. N. Zhelyabin [@kay_zh], nontrivial $\delta$-(super)derivations of simple unital finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic $p > 2$ are possible only for $\delta=\frac{1}{2}$ and the superalgebra of vector type $J(B(m),D).$ By the paper of E. Zelmanov [@Zel_ssjs], the simple non-unital finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic $p>2$ are exhausted by the superalgebras $K_{3}$ and $V_{1/2}(Z,D),$ together with the superalgebra $K_9$ in characteristic 3. For the Kaplansky superalgebra $K_3$, the absence of nontrivial $\delta$-derivations and $\delta$-superderivations follows from the papers of I. B. Kaygorodov [@kay; @kay_lie2], independently of the characteristic of the field. Lemma 1 shows that the superalgebra $K_9$ has no nontrivial $\delta$-derivations or $\delta$-superderivations. Lemmas 4–5 give examples of nontrivial $\delta$-derivations and $\delta$-superderivations of the superalgebra $V_{1/2}(Z,D),$ which are possible only for $\delta=\frac{1}{2}.$ In view of these remarks, the theorem is proved.

**§3 $\delta$-derivations and $\delta$-superderivations of semisimple finite-dimensional Jordan superalgebras.**

In this section we obtain a complete description of the $\delta$-derivations and $\delta$-superderivations of semisimple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic different from 2. In particular, we show that semisimple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic zero have no nontrivial $\delta$-derivations or $\delta$-superderivations.

By a superalgebra with a non-zero-divisor we mean a superalgebra $A$ for which there exists an element $a$ such that the equality $ax=0$ for an element $x \in A$ implies $x=0.$ Examples of such superalgebras are unital superalgebras, the superalgebras $K_9$ and $V_{1/2}(Z,D)$, and others.
**LEMMA 7.** *Let $\phi$ be a $\delta$-derivation or a $\delta$-superderivation of a superalgebra $A,$ where $A=A_1 \oplus A_2$ and the $A_i$ are superalgebras with a homogeneous non-zero-divisor. Then $\phi(A_i) \subseteq A_i.$*

Let $e_i$ be a homogeneous non-zero-divisor of the superalgebra $A_i,$ and let $x_j$ be an arbitrary element of $A_j$, $j\neq i$. Then

$$0=\phi(e_i x_j)=\delta(\phi(e_i) x_j +(-1)^{p(e_i)p(\phi)}e_i\phi(x_j))=\delta\phi(e_i)_jx_j+(-1)^{p(e_i)p(\phi)}\delta e_i\phi(x_j)_i.$$

Hence $e_i\phi(x_j)_i=0$ and $\phi(x_j)_i=0$, which implies $\phi(A_j) \subseteq A_j.$ The lemma is proved.

Let $J_{i}$ be a non-unital superalgebra satisfying the conditions:

i\) the algebra $(J_i)_0$ has a unit $e_i$;

ii\) for every element $z \in (J_i)_1$ we have $2e_i\cdot z=z.$

Examples of such superalgebras are the superalgebras $K_9, K_3$ and $V_{1/2}(Z,D).$

Then the following holds.

**LEMMA 8.** *The superalgebra $J=J_1 \oplus \ldots \oplus J_n+F\cdot 1$ has no nontrivial $\delta$-derivations or $\delta$-superderivations.*

Let $\phi$ be a nontrivial $\delta$-derivation or $\delta$-superderivation. According to [@kay], unital superalgebras can have nontrivial $\delta$-derivations and $\delta$-superderivations only for $\delta=\frac{1}{2}.$ Consider the case $n=1,$ i.e. $J=J_1 +F\cdot 1$, and let $e$ be the unit of the algebra $(J_1)_0$; then $\phi(1)=\alpha\cdot 1 +j_0+j_1,$ where $j_i \in (J_1)_i.$ Note that $$2\alpha e +2j_0+ \frac{1}{2} j_1=2(\phi(1)\cdot e)\cdot e=\phi(e)\cdot e+e\cdot \phi(e)=2\phi(e)=2\phi(1)\cdot e=2\alpha e+2j_0 +j_1,$$ and therefore $\phi(1)=\alpha\cdot 1+j_0.$ Note that for $x \in (J_1)_1$ we have $$\alpha x+ j_0 \cdot x=\phi(x)=2\phi(e \cdot x)=\phi(e)\cdot x+e\cdot \phi(x)=\frac{1}{2}\alpha x+j_0\cdot x +\frac{1}{2}\alpha x+\frac{1}{2} j_0 \cdot x,$$ whence $j_0=0.$ This implies the triviality of $\phi$ in the case $n=1.$ In the general case, considering the subsuperalgebras $I_i=J_i+F\cdot 1,$ we obtain the triviality of the restriction of $\phi$ to these subsuperalgebras, which implies the triviality of $\phi$ on the whole superalgebra $J.$ The lemma is proved.
According to the work of E. Zelmanov [@Zel_ssjs], if $J$ is a semisimple finite-dimensional Jordan superalgebra over an algebraically closed field $F$ of characteristic $p \neq 2$, then $J=\bigoplus_{i=1}^{s}(J_{i1}\oplus\ldots \oplus J_{ir_i}+K_i \cdot 1)\oplus J_1 \oplus \ldots \oplus J_t,$ where $J_1,\ldots,J_t$ are simple Jordan superalgebras, $K_1, \ldots, K_s$ are extensions of the field $F$, and $J_{i1},\ldots , J_{ir_i}$ are simple non-unital Jordan superalgebras over the field $K_i$. Using this result, Lemmas 7-8, and the papers [@kay; @kay_lie2], where the absence of nontrivial $\delta$-derivations and $\delta$-superderivations of simple finite-dimensional Jordan superalgebras over an algebraically closed field of characteristic zero was established, we obtain the following theorem.
**THEOREM 9.** *A semisimple finite-dimensional Jordan superalgebra over an algebraically closed field of characteristic zero has no nontrivial $\delta$-derivations or $\delta$-superderivations.*

Using the results of E. Zelmanov [@Zel_ssjs], Lemmas 7-8, and Theorem 6, we obtain the following theorem.

**THEOREM 10.** *Let a semisimple finite-dimensional Jordan superalgebra $J$ over an algebraically closed field of characteristic $p>2$ have a nontrivial $\delta$-derivation or $\delta$-superderivation. Then $\delta=\frac{1}{2}$ and $J= J^* \oplus J_*$, where either $J_*=J(B(m), D)$ or $J_*=V_{1/2}(Z,D)$.*

In conclusion, the author expresses his gratitude to V. N. Zhelyabin for his attention to this work and for the proposed proof of Lemma 2.
[10]{}
Hopkins N. C., *Generalized Derivations of Nonassociative Algebras*, Nova J. Math. Game Theory Algebra **5** (1996), 3, 215–224.
Filippov V. T., *On Lie algebras satisfying an identity of degree 5*, Algebra and Logic **34** (1995), 6, 681–705.

Filippov V. T., *On $\delta$-derivations of Lie algebras*, Siberian Math. J. **39** (1998), 6, 1409–1422.

Filippov V. T., *On $\delta$-derivations of prime Lie algebras*, Siberian Math. J. **40** (1999), 1, 201–213.

Filippov V. T., *On $\delta$-derivations of prime alternative and Mal'tsev algebras*, Algebra and Logic **39** (2000), 5, 618–625.

Kaygorodov I. B., *On $\delta$-derivations of simple finite-dimensional Jordan superalgebras*, Algebra and Logic **46** (2007), 5, 585–605. \[ http://arxiv.org/abs/1010.2419 \]

Kaygorodov I. B., *On $\delta$-derivations of classical Lie superalgebras*, Siberian Math. J. **50** (2009), 3, 547–565. \[ http://arxiv.org/abs/1010.2807 \]

Kaygorodov I. B., *On $\delta$-superderivations of simple finite-dimensional Jordan and Lie superalgebras*, Algebra and Logic **49** (2010), 2, 195–215. \[ http://arxiv.org/abs/1010.2423 \]

Kaygorodov I. B., *On the generalized Kantor double*, Vestnik of Samara State University **78** (2010), 4, 42–50. \[ http://arxiv.org/abs/1101.5212 \]

Zhelyabin V. N., Kaygorodov I. B., *On $\delta$-superderivations of simple superalgebras of Jordan brackets*, Algebra i Analiz **23** (2011), 4, 40–58.
Zusmanovich P., *On $\delta$-derivations of Lie algebras and superalgebras*, J. of Algebra [**324**]{} (2010), 12, 3470–3486. \[ http://arxiv.org/abs/0907.2034 \]
Kaygorodov I. B., *On $\delta$-derivations of algebras and superalgebras*, in: “Problems of Theoretical and Applied Mathematics”, Proceedings of the 41st All-Russian Youth School-Conference, 29–33. URL: http://home.imm.uran.ru/digas/School-2010\_A5.pdf

Kantor I. L., *Jordan and Lie superalgebras determined by a Poisson algebra*, in: Algebra and Analysis, Tomsk State University Press, Tomsk (1989), 89–126.
Kac V. G., *Classification of simple $\mathbb{Z}$-graded Lie superalgebras and simple Jordan superalgebras*, Comm. Algebra **13** (1977), 1375–1400.
Racine M. L., Zelmanov E. I., *Simple Jordan superalgebras with semisimple even part*, J. of Algebra [**270**]{} (2003), 2, 374–444.
Martínez C., Zelmanov E., *Simple finite-dimensional Jordan superalgebras of prime characteristic*, J. of Algebra **236** (2001), 2, 575–629.
Zelmanov E., *Semisimple finite dimensional Jordan superalgebras*, (English) \[A\] Fong, Yuen (ed.) et al., Lie algebras, rings and related topics. Papers of the 2nd Tainan-Moscow international algebra workshop ’97, Tainan, Taiwan, January 11–17, 1997. Hong Kong: Springer (2000), 227–243.
---
abstract: 'Waveguide-integrated plasmonics is a growing field with many innovative concepts and demonstrated devices in the visible and near-infrared. Here, we extend this body of work to the mid-infrared for the application of surface-enhanced infrared absorption (SEIRA), a spectroscopic method to probe molecular vibrations in small volumes and thin films. Built atop a silicon-on-insulator (SOI) waveguide platform, two key plasmonic structures useful for SEIRA are examined using computational modeling: gold nanorods and coaxial nanoapertures. We find resonance dips of 80% in near diffraction-limited areas due to arrays of our structures and up to 40% from a single resonator. Each of the structures is evaluated using the simulated SEIRA signal from poly(methyl methacrylate) and an octadecanethiol self-assembled monolayer. The platforms we present allow for a compact, on-chip SEIRA sensing system with highly efficient waveguide coupling in the mid-IR.'
author:
- 'Daniel A. Mohr, Daehan Yoo, Che Chen, Mo Li, and Sang-Hyun Oh'
title: 'Waveguide-Integrated Mid-Infrared Plasmonics with High-Efficiency Coupling for Ultracompact Surface-Enhanced Infrared Absorption Spectroscopy'
---
Introduction
============
Much work has previously been performed on integrating plasmonics[@ref1; @ref2] with low-loss dielectric photonic integrated circuits (PICs) in an effort to miniaturize free-space optical components into a compact, chip-based system orders of magnitude smaller and at lower cost. Broad potential applications have already been explored, such as surface-enhanced Raman spectroscopy (SERS)[@ref3], hybrid lasers[@ref4], optical switching[@ref5], directional waveguide coupling[@ref6], plasmon-enhanced optical forces in waveguides[@ref7; @ref8], two-plasmon quantum interference[@ref9], nanofocusing[@ref10; @ref11], integration with two-dimensional (2D) materials[@ref12; @ref13; @ref14], and refractive index sensing[@ref15]. While most of this work is in the visible and near-infrared (NIR), the mid-infrared (MIR) has yet to be greatly explored. The MIR (typically 2-10 $\mu$m in wavelengths) is an important regime for biochemical spectroscopies thanks to the vast number of chemical resonances present that can give detailed information regarding molecular structure[@ref16; @ref17; @ref18; @ref19]. The vibrational spectra of molecules are often measured with Fourier-Transform Infra-Red (FTIR) spectroscopy using broadband sources and free-space optics. However, with the advent of bright coherent MIR laser sources, MIR plasmonic antennas[@ref20], the maturation of silicon photonics technology, and the growing interest in ultrasensitive chemical identification and diagnostics, there is tremendous potential for waveguide-integrated MIR systems.
In this letter, we theoretically examine silicon waveguide-integrated plasmonics for surface-enhanced infrared absorption (SEIRA)[@ref21; @ref22; @ref23; @ref24; @ref25; @ref26; @ref27; @ref28] spectroscopy using two common plasmonic resonator building blocks: nanorods and coaxial apertures (Figure \[fig1\]). These structures have been studied in a free-space context for SEIRA, and have shown large performance improvements over standard infrared absorption techniques thanks to the field enhancement afforded by plasmonics, highly advantageous for thin films. Here, these same building blocks are arranged on a Si waveguide to obtain high coupling with a minimized footprint. The Si waveguide presented is designed based on a 600 nm silicon-on-insulator (SOI) wafer platform built for operation around 3 $\mu$m, a common location for resonances based on C-H bond stretching. To maintain single-mode operation, the width of the waveguide is limited to 1.6 $\mu$m, and a 200 nm thick support is used for releasing the waveguide from the oxide below. A schematic of the waveguide designed and the resonators proposed is available in Figure \[fig1\].
![\[fig1\] Schematics of the proposed devices for MIR waveguides integrated with plasmonics. (a) Nanorod pairs coupled together for high-efficiency coupling with minimal off-resonance scattering. (b) Coaxial nanoaperture embedded in a gold pad atop the waveguide for high coupling with a single resonator.](Figure1_diagram.png){width="3.3in"}
Theory
======
Metallic nanorods are a versatile building block for both waveguide-integrated plasmonics and resonant SEIRA. They are compact and can yield high field enhancements despite having such a simple geometry. While structures with few nanorods atop waveguides have already been fabricated and well-studied, we take this structure and tune it for use on MIR waveguides using dimensions similar to previous work in this area[@ref22; @ref29]. Using these parameters combined with our waveguide design, we find that pairs of nanorods placed on the waveguide yield efficient structures with lower scattering outside of the resonant wavelength compared to arrays of single rods (Figures \[fig2\]a and \[figS1\]a). To further increase the resonance dip in waveguide transmission, multiple pairs are placed in close proximity to allow coupling (Figure \[fig2\]a). We find that this design shows similar field enhancement to the far-field array version of this device (Figures \[fig2\]b,c, and \[figS2\]), while drastically reducing the excitation area from the spot-size of a typical FTIR light source ($\sim$100$\lambda^2$) to a diffraction-limited area ($\lambda^2/3$ when measured outside the waveguide). This allows for both the miniaturization to a chip-based device while also pushing the limit of detection closer to single-particle levels. In studying the device, we found that the antenna length, intra-antenna pair distance, and inter-antenna pair distance (i.e. period) are all critical in determining the resonance and coupling efficiency of the structure (Figures \[figS1\]b-c).
![\[fig2\] Nanorod-pair arrays integrated on a Si waveguide designed for the MIR. (a) Waveguide transmission of nanorod-pair arrays with different number of elements. Increasing the number of nanorod pairs increases the coupling obtained from the device. (b) Electric field distribution in a plane normal to the direction of waveguide propagation. (c) Same as (b), except in a plane taken at the top surface of the waveguide. Field enhancements are similar to those observed for far-field array devices (Figure \[figS1\]). The nanorods have dimensions of 500 nm x 275 nm x 100 nm (LxWxH) with a radius of curvature of 50 nm for corners pictured in (b). The nanorod intra-pair spacing is 500 nm with a periodicity of 475 nm.](Figure2.png){width="3.3in"}
A complementary structure we propose for waveguide-integrated plasmonics is the annular aperture with a very small gap ($\sim$10s of nm), which is also promising for MIR SEIRA applications[@ref30; @ref31]. These structures can support many different resonances[@ref30; @ref32; @ref33; @ref34; @ref35; @ref36], including a TEM mode[@ref37; @ref38] that does not exhibit a cut-off, but the most convenient for excitation from a single TE-mode Si waveguide is the TE$_{11}$-like cutoff resonance. This cutoff resonance can be explained simply as a zeroth-order Fabry-Perot resonance. As the excitation wavelength increases to the cutoff wavelength, the real part of the propagation constant, $\beta$, decreases to zero, while the imaginary component increases, turning the mode from propagative to evanescent. Examining the transmission intensity for a Fabry-Perot cavity, $T=|t_1t_2|^2/|1-r_1r_2e^{i(2\beta d +\phi_1+\phi_2)}|^2$, where $t_{1,2}$ are the transmission amplitude coefficients of the cavity ends, $r_{1,2}$ are the reflection amplitude coefficients with corresponding phases $\phi_{1,2}$, and $d$ is the thickness, we find that if the reflection phase is negative, it can cancel with $2\beta d$ and the Fabry-Perot condition is fulfilled, leading to resonant transmission and field enhancement. Since the propagation phase in the cavity is canceled upon reflection at the ends (and $2\beta d<2\pi$), the field profile is uniform, as seen in Figure \[fig3\]c. Alternatively, this zeroth-order Fabry-Perot resonance can be considered as an example of epsilon-near-zero (ENZ) phenomena[@ref39; @ref40] and the high transmission resulting from ‘super-coupling’. As the radius of the aperture is increased, the resonance red-shifts (toward longer wavelengths) and the coupling efficiency increases (Figure \[figS3\]a). As the gap is decreased, the resonance also red-shifts (Figure \[figS3\]b), but the strength of the resonance remains approximately the same, suggesting that mode-overlap area is more critical in coupling than the open area of the aperture.
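To make the zeroth-order Fabry-Perot argument concrete, the short Python sketch below evaluates the transmission formula quoted above for illustrative end-facet coefficients; the numerical values (thickness, amplitudes, phases) are assumptions chosen only to show the trend, not parameters fitted to our structures. The transmission peaks where the negative reflection phase cancels the propagation phase $2\beta d$.

```python
import numpy as np

def fabry_perot_T(beta, d, t1, t2, r1, r2, phi1, phi2):
    """Fabry-Perot transmission intensity T = |t1 t2|^2 / |1 - r1 r2 e^{i(2*beta*d + phi1 + phi2)}|^2."""
    denom = 1.0 - r1 * r2 * np.exp(1j * (2 * beta * d + phi1 + phi2))
    return np.abs(t1 * t2) ** 2 / np.abs(denom) ** 2

# Illustrative (assumed) parameters for a 100-nm-thick aperture cavity:
d = 100e-9                     # cavity thickness [m]
t1 = t2 = 0.6                  # end transmission amplitudes (assumed)
r1 = r2 = 0.8                  # end reflection amplitudes (assumed)
phi1 = phi2 = -0.05            # small negative reflection phases (assumed)

# Sweep the propagation constant towards cutoff (beta -> 0); the zeroth-order
# resonance occurs where 2*beta*d + phi1 + phi2 = 0.
betas = np.linspace(0.0, 2e6, 500)        # [1/m]
T = fabry_perot_T(betas, d, t1, t2, r1, r2, phi1, phi2)
beta_res = betas[np.argmax(T)]
print(f"resonant beta ~ {beta_res:.3e} 1/m (expected {-(phi1 + phi2) / (2 * d):.3e})")
```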
{width="7in"}
Placing a single aperture on the waveguide with a 20 nm gap, we find coupling efficiencies of $>$40% around 3.5 $\mu$m (Figure \[fig3\]a), much higher than those reported for individual nanorods[@ref41]. Placing multiple structures on the same gold pad increases the coupling as expected, with little dependence on the periodicity of the devices (Figure \[figS3\]c). A fabrication method for these structures that leaves the gaps open, providing the greatest SEIRA signal, leverages atomic layer deposition (ALD) of sacrificial Al$_2$O$_3$ layers to define nanometer-wide gaps, a scheme called atomic layer lithography[@ref30; @ref42; @ref43]. In placing this structure on the waveguide, a metallic pad is needed to act as the cladding of the coaxial aperture, which we found to introduce relatively small scattering losses into the signal. Failure to properly design the cladding, however, can lead to quite significant pad resonances (Figure \[figS3\]d).
Results and Discussion
======================
To evaluate the SEIRA performance of each of these structures, we simulated the structures with both a uniform 200 nm layer of PMMA and a 2 nm octadecanethiol (ODT) self-assembled monolayer and examined the absorption of the C-H bond bending resonances around 3.4 $\mu$m. The 200 nm layer of PMMA is intended to probe the entire electric field distribution available for sensing, while the 2 nm layer yields information regarding the confinement of the field to the surface. As can be seen in Figure \[fig4\], both nanorods and coaxial nanoapertures yield clear absorption responses of 2% and 7%, respectively, for the 200 nm PMMA layer. With a 2 nm layer of ODT, the normalized absorption for a coaxial nanoaperture is over 0.7%, while absorption in the nanorod case is much less, at just over 0.1%. This is due to the respective resonance modes of the two structures. For nanorods, a significant portion of the electric field distribution on resonance is contained inside the silicon, and is therefore not available for sensing (Figure \[fig2\]b), while in etched coaxial nanoapertures, nearly the entire field distribution is available (Figure \[fig3\]b). This provides some intuition as to why coaxial nanoapertures yield higher absorption responses in the 200 nm PMMA case. When the analyte thickness is reduced to only 2 nm, the metal nanorods yield greatly reduced signals due to the poor field confinement, while coaxial nanoapertures maintain higher absorption levels since the field confinement is essentially defined by the gap size, which is 20 nm in this case.
![\[fig4\] (a) Simulated SEIRA of PMMA and ODT coated on the two waveguide-integrated plasmonic structures (b) and corresponding difference signals calculated from the reflection spectra. The solid lines in (a) correspond to full PMMA and ODT dielectric functions, while the dashed lines correspond to lossless dielectric functions. The spectra in (b) were calculated by subtracting the lossy spectra from the lossless spectra and then normalizing to the lossless spectra.](Figure4.png){width="3.3in"}
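For completeness, the difference-signal post-processing described in the caption of Figure \[fig4\]b can be written in a few lines. The arrays below are placeholders standing in for the simulated lossless and lossy reflection spectra; only the subtraction-and-normalization step reflects the procedure described above.

```python
import numpy as np

# Placeholder reflection spectra on a common wavelength grid [um]:
# R_lossless -- analyte modeled with a lossless dielectric function
# R_lossy    -- analyte modeled with its full (lossy) dielectric function
wavelength = np.linspace(3.0, 4.0, 501)
R_lossless = 0.5 + 0.1 * np.sin(wavelength)                                   # placeholder data
R_lossy = R_lossless - 0.02 * np.exp(-((wavelength - 3.4) / 0.05) ** 2)       # placeholder data

# Difference signal: subtract the lossy spectrum from the lossless one,
# then normalize to the lossless spectrum.
seira_signal = (R_lossless - R_lossy) / R_lossless
print(f"peak normalized absorption: {seira_signal.max():.3%}")
```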
These findings suggest that coaxial nanoapertures generally exhibit higher performance, which is not surprising when taking into account the recent work of Huck, et al.[@ref44], which describes why aperture-based structures are often superior to antennas. Essentially, the bulk of the nanorod signal is sourced from the ends of the device, while gaps generally have a more uniform, higher electric field distribution which increases sensitivity to analyte. The gap-defined field distribution in coaxial nanoapertures can be precisely controlled below 10 nm (e.g. by using atomic layer lithography). This allows one to both tailor the field distribution to the sensing analyte and increase the sensitivity of the device per analyte particle. With the devices considered in our work, we also found greater coupling per single resonator when comparing the two resonator designs, important for pushing to low limits of detection.
Up to this point, structures with only one geometrical set of parameters have been examined. By placing multiple structures on the same waveguide with varying parameters, it is possible to create very broad resonances. As seen in Figure \[fig5\]a, coaxial nanoapertures with varying radii can be combined into one small Au pad (3 $\mu$m long) to create a broadband resonance. Figure \[fig5\]b contains the spectra simulated for five nanorod-pair triplets placed atop the same waveguide. Many other groups have been interested in creating plasmonic resonators for SEIRA that contain multiple resonances to obtain broad spectra of the analyte[@ref45]. While many of those same structures can still be used here (depending on their polarization sensitivity), by integrating devices serially on the waveguide, the same effect can be shown. While the work here has been targeted for SEIRA near 3 $\mu$m, the platform can easily be scaled to longer (or shorter) wavelength regimes, such as 6 $\mu$m, where a wide range of organic molecules have many strong vibrational modes.
![\[fig5\] Plasmonic resonators with differing geometrical parameters can be fabricated atop a single MIR waveguide leading to broadband resonances, useful for probing a larger spectral area for SEIRA. (a) Resonance due to only three separate coaxial nanoapertures placed in a serial array made in one Au pad acting as the cladding. Associated field maps of the absorption dips are shown to the right. (b) Resonance dip from an array of five nanorod-pair triplets on the same waveguide. Scale bar in (a) is 200 nm.](Figure5.png){width="3.3in"}
Conclusion
==========
Waveguide-integrated plasmonics is a growing field with many promising aspects for the lab-on-a-chip and photonics communities. While visible/NIR sensing applications of both SERS and refractometric sensing have previously been demonstrated, in this work we computationally push such devices to the MIR for the application of SEIRA. Comparing gold nanorods and coaxial nanoapertures using simulated PMMA and ODT layers, we find that ring-shaped coaxial nanoapertures, a gap-based design exhibiting super-coupling effects, yield superior performance to nanorods, but both can produce $\sim$90% absorption in a near diffraction-limited size. Furthermore, using a single coaxial nanoaperture, we find up to 40% absorption, pushing toward ultrasensitive detection of small numbers of molecular analytes. While the devices are only theoretically studied here, their fabrication is achievable, which we plan to demonstrate in a future publication where we will evaluate real-world performance.
Methods
=======
Computational modeling was performed using COMSOL Multiphysics in the frequency domain. To decrease simulation time, only half of the geometry was simulated, taking advantage of the plane of symmetry parallel to the waveguide, while scattering boundary conditions were used perpendicular to the waveguide direction (except for the boundary of symmetry). Perfectly matched layers were used at the ends of the waveguide to increase the accuracy of the simulation. Reflection and transmission were measured across a simulation plane before and after the plasmonic devices. Only the optical power in the fundamental TE modes was included in these calculations. Optical constants for silicon, gold, alumina, PMMA, and ODT were obtained from previous works[@ref46; @ref47; @ref48; @ref49; @ref50].
Acknowledgments
===============
This work was supported primarily by the grant from the National Science Foundation (NSF ECCS No. 1610333 to D.Y. and S.-H.O.; NSF ECCS No. 1708768 to C.C. and M.L.); Seagate Technology through the University of Minnesota MINT consortium (D.A.M. and S.- H.O.). D.A.M. also acknowledges support from the NIH Biotechnology Training Grant (T32 GM008347) and Jackie. Computational modeling was carried out in part using resources provided by the University of Minnesota Supercomputing Institute.
[1]{} Maier, S. A.; Kik, P. G.; Atwater, H. A.; Meltzer, S.; Harel, E.; Koel, B. E.; Requicha, A. A. G. Local Detection of Electromagnetic Energy Transport Below the Diffraction Limit in Metal Nanoparticle Plasmon Waveguides. Nature Mater. 2003, 2, 229-232. Barnes, W. L.; Dereux, A.; Ebbesen, T. W. Surface Plasmon Subwavelength Optics. Nature 2003, 424, 824-830. Peyskens, F.; Dhakal, A.; Van Dorpe, P.; Le Thomas, N.; Baets, R. Surface Enhanced Raman Spectroscopy Using a Single Mode Nanophotonic-Plasmonic Platform. ACS Photonics 2015, 3, 102-108. Oulton, R. F.; Sorger, V. J.; Zentgraf, T.; Ma, R.-M.; Gladden, C.; Dai, L.; Bartal, G.; Zhang, X. Plasmon Lasers at Deep Subwavelength Scale. Nature 2009, 461, 629-632. Haffner, C.; Heni, W.; Fedoryshyn, Y.; Niegemann, J.; Melikyan, A.; Elder, D. L.; Baeuerle, B.; Salamin, Y.; Josten, A.; Koch, U.; Hoessbacher, C.; Ducry, F.; Juchli, L.; Emboras, A.; Hillerkuss, D.; Kohl, M.; Dalton, L. R.; Hafner, C.; Leuthold, J. All-Plasmonic Mach–Zehnder Modulator Enabling Optical High-Speed Communication at the Microscale. Nature Photonics 2015, 9, 525-528. Vercruysse, D.; Neutens, P.; Lagae, L.; Verellen, N.; Van Dorpe, P. Single Asymmetric Plasmonic Antenna as a Directional Coupler to a Dielectric Waveguide. ACS Photonics 2017, 4, 1398-1402. Li, H.; Noh, J. W.; Chen, Y.; Li, M. Enhanced Optical Forces in Integrated Hybrid Plasmonic Waveguides. Opt. Express 2013, 21, 11839-11851. Thijssen, R.; Verhagen, E.; Kippenberg, T. J.; Polman, A. Plasmon Nanomechanical Coupling for Nanoscale Transduction. Nano Lett. 2013, 13, 3293-3297. Fakonas, J. S.; Lee, H.; Kelaita, Y. A.; Atwater, H. A. Two-Plasmon Quantum Interference. Nature Photonics 2014, 8, 317-320. Choo, H.; Kim, M.-K.; Staffaroni, M.; Seok, T. J.; Bokor, J.; Cabrini, S.; Schuck, P. J.; Wu, M. C.; Yablonovitch, E. Nanofocusing in a Metal–Insulator–Metal Gap Plasmon Waveguide with a Three-Dimensional Linear Taper. Nature Photonics 2012, 6, 838-844. Nielsen, M. P.; Lafone, L.; Rakovich, A.; Sidiropoulos, T. P. H.; Rahmani, M.; Maier, S. A.; Oulton, R. F. Adiabatic Nanofocusing in Hybrid Gap Plasmon Waveguides on the Silicon-on-Insulator Platform. Nano Lett. 2016, 16, 1410-1414. Liu, M.; Yin, X.; Ulin-Avila, E.; Geng, B.; Zentgraf, T.; Ju, L.; Wang, F.; Zhang, X. A Graphene-Based Broadband Optical Modulator. Nature 2011, 474, 64-67. Youngblood, N.; Chen, C.; Koester, S. J.; Li, M. Waveguide-Integrated Black Phosphorus Photodetector with High Responsivity and Low Dark Current. Nature Photonics 2015, 9, 247-252. Chen, C.; Youngblood, N.; Peng, R.; Yoo, D.; Mohr, D. A.; Johnson, T. W.; Oh, S.-H.; Li, M. Three-Dimensional Integration of Black Phosphorus Photodetector with Silicon Photonics and Nanoplasmonics. Nano Lett. 2017, 17, 985-991. Février, M.; Gogol, P.; Barbillon, G.; Aassime, A.; Mégy, R.; Bartenlian, B.; Lourtioz, J.-M.; Dagens, B. Integration of Short Gold Nanoparticles Chain on SOI Waveguide Toward Compact Integrated Bio-Sensors. Opt. Express 2012, 20, 17402. Maier, S. Plasmonics: Fundamentals and Applications; Springer: New York, NY, 2007. Law, S.; Podolskiy, V.; Wasserman, D. Towards Nano-Scale Photonics with Micro-Scale Photons: the Opportunities and Challenges of Mid-Infrared Plasmonics. Nanophotonics 2013, 2, 103-130. Rodrigo, D.; Limaj, O.; Janner, D.; Etezadi, D.; de Abajo, F. J. G.; Pruneri, V.; Altug, H. Mid-Infrared Plasmonic Biosensing with Graphene. Science 2015, 349, 165-168. Low, T.; Avouris, P. Graphene Plasmonics for Terahertz to Mid-Infrared Applications. ACS Nano 2014, 8, 1086-1101. 
Schnell, M.; Alonso-Gonzalez, P.; Arzubiaga, L.; Casanova, F.; Hueso, L. E.; Chuvilin, A.; Hillenbrand, R. Nanofocusing of Mid-Infrared Energy with Tapered Transmission Lines. Nature Photonics 2011, 5, 283-287. Osawa, M. Surface-Enhanced Infrared Absorption. Topics Appl. Phys. 2001, 81, 163-187. Adato, R.; Yanik, A. A.; Amsden, J. J.; Kaplan, D. L.; Omenetto, F. G.; Hong, M. K.; Erramilli, S.; Altug, H. Ultra-Sensitive Vibrational Spectroscopy of Protein Monolayers with Plasmonic Nanoantenna Arrays. Proc. Natl. Acad. Sci. USA 2009, 106, 19227-19232. Wu, C.; Khanikaev, A. B.; Adato, R.; Arju, N.; Yanik, A. A.; Altug, H.; Shvets, G. Fano-Resonant Asymmetric Metamaterials for Ultrasensitive Spectroscopy and Identification of Molecular Monolayers. Nature Mater. 2012, 11, 69-75. Brown, L. V.; Zhao, K.; King, N.; Sobhani, H.; Nordlander, P.; Halas, N. J. Surface-Enhanced Infrared Absorption Using Individual Cross Antennas Tailored to Chemical Moieties. J. Am. Chem. Soc. 2013, 135, 3688-3695. Brown, L. V.; Yang, X.; Zhao, K.; Zheng, B. Y.; Nordlander, P.; Halas, N. J. Fan-Shaped Gold Nanoantennas Above Reflective Substrates for Surface-Enhanced Infrared Absorption (SEIRA). Nano Lett. 2015, 15, 1272-1280. Huck, C.; Neubrech, F.; Vogt, J.; Toma, A.; Gerbert, D.; Katzmann, J.; H'’artling, T.; Pucci, A. Surface-Enhanced Infrared Spectroscopy Using Nanometer-Sized Gaps. ACS Nano 2014, 8, 4908-4914. Chen, X.; Cirací, C.; Smith, D. R.; Oh, S.-H. Nanogap-Enhanced Infrared Spectroscopy with Template-Stripped Wafer-Scale Arrays of Buried Plasmonic Cavities. Nano Lett. 2015, 15, 107-113. Neubrech, F.; Huck, C.; Weber, K.; Pucci, A.; Giessen, H. Surface-Enhanced Infrared Spectroscopy Using Resonant Nanoantennas. Chem. Rev. 2017, 117, 5110-5145. Chen, Y.; Lin, H.; Hu, J.; Li, M. Heterogeneously Integrated Silicon Photonics for the Mid-Infrared and Spectroscopic Sensing. ACS Nano 2014, 8, 6955-6961. Yoo, D.; Nguyen, N.-C.; Martín-Moreno, L.; Mohr, D. A.; Carretero-Palacios, S.; Shaver, J.; Peraire, J.; Ebbesen, T. W.; Oh, S.-H. High-Throughput Fabrication of Resonant Metamaterials with Ultrasmall Coaxial Apertures via Atomic Layer Lithography. Nano Lett. 2016, 16, 2040-2046. Yoo, D.; Mohr, D. A.; Vidal-Codina, F.; John-Herpin, A.; Jo, M.; Kim, S.; Matson, J.; Caldwell, J. D.; Jeon, H.; Nguyen, N.-C.; Martín-Moreno, L.; Peraire, J.; Altug, H.; Oh, S.-H. High-Contrast Infrared Absorption Spectroscopy via Mass-Produced Coaxial Zero-Mode Resonators with Sub-10 Nm Gaps. Nano Lett. 2018, 18, 1930-1936. Baida, F. I.; Van Labeke, D. Light Transmission by Subwavelength Annular Aperture Arrays in Metallic Films. Opt. Commun. 2002, 209, 17-22. Fan, W.; Zhang, S.; Minhas, B.; Malloy, K. J.; Brueck, S. R. J. Enhanced Infrared Transmission Through Subwavelength Coaxial Metallic Arrays. Phys. Rev. Lett. 2005, 94, 033902. Orbons, S. M.; Roberts, A. Resonance and Extraordinary Transmission in Annular Aperture Arrays. Opt. Express 2006, 14, 12623-12628. de Waele, R.; Burgos, S. P.; Polman, A.; Atwater, H. A. Plasmon Dispersion in Coaxial Waveguides From Single-Cavity Optical Transmission Measurements. Nano Lett. 2009, 9, 2832-2837. Catrysse, P. B.; Fan, S. Understanding the Dispersion of Coaxial Plasmonic Structures Through a Connection with the Planar Metal-Insulator-Metal Geometry. Appl. Phys. Lett. 2009, 94, 231111. Baida, F. I.; Belkhir, A.; Van Labeke, D.; Lamrous, O. Subwavelength Metallic Coaxial Waveguides in the Optical Range: Role of the Plasmonic Modes. Phys. Rev. B 2006, 74, 205419. Li, D.; Gordon, R. 
Electromagnetic Transmission Resonances for a Single Annular Aperture in a Metal Plate. Phys. Rev. A 2010, 82, 041801(R). Silveirinha, M.; Engheta, N. Theory of Supercoupling, Squeezing Wave Energy, and Field Confinement in Narrow Channels and Tight Bends Using $\epsilon$-Near-Zero Metamaterials. Phys. Rev. Lett. 2007, 76, 245109. Argyropoulos, C.; Chen, P.-Y.; D’Aguanno, G.; Engheta, N.; Alù, A. Boosting Optical Nonlinearities in $\epsilon$-Near-Zero Plasmonic Channels. Phys. Rev. B 2012, 85, 045129. Espinosa-Soria, A.; Martínez, A.; Griol, A. Experimental Measurement of Plasmonic Nanostructures Embedded in Silicon Waveguide Gaps. Opt. Express 2016, 24, 9592-9601. Im, H.; Bantz, K. C.; Lindquist, N. C.; Haynes, C. L.; Oh, S.-H. Vertically Oriented Sub-10-nm Plasmonic Nanogap Arrays. Nano Lett. 2010, 10, 2231-2236. Chen, X.; Park, H.-R.; Pelton, M.; Piao, X.; Lindquist, N. C.; Im, H.; Kim, Y. J.; Ahn, J. S.; Ahn, K. J.; Park, N.; Kim, D.-S.; Oh, S.-H. Atomic Layer Lithography of Wafer-Scale Nanogap Arrays for Extreme Confinement of Electromagnetic Waves. Nature Commun. 2013, 4, 2361. Huck, C.; Vogt, J.; Sendner, M.; Hengstler, D.; Neubrech, F.; Pucci, A. Plasmonic Enhancement of Infrared Vibrational Signals: Nanoslits Versus Nanorods. ACS Photonics 2015, 2, 1489-1497. Limaj, O.; Etezadi, D.; Wittenberg, N. J.; Rodrigo, D.; Yoo, D.; Oh, S.-H.; Altug, H. Infrared Plasmonic Biosensor for Real-Time and Label-Free Monitoring of Lipid Membranes. Nano Lett. 2016, 16, 1502-1508. Chandler-Horowitz, D.; Amirtharaj, P. M. High-Accuracy, Midinfrared (450 $\text{cm}^{-1}$ $\leq$ $\omega$ $\leq$ 4000 $\text{cm}^{-1}$) Refractive Index Values of Silicon. J. Appl. Phys. 2005, 97, 123526. Olmon, R. L.; Slovick, B.; Johnson, T. W.; Shelton, D.; Oh, S.-H.; Boreman, G. D.; Raschke, M. B. Optical Dielectric Function of Gold. Phys. Rev. B 2012, 86, 235147. Kischkat, J.; Peters, S.; Gruska, B.; Semtsiv, M.; Chashnikova, M.; Klinkm'’uller, M.; Fedosenko, O.; Machulik, S.; Aleksandrova, A.; Monastyrskyi, G.; Flores, Y.; Ted Masselink, W. Mid-Infrared Optical Properties of Thin Films of Aluminum Oxide, Titanium Dioxide, Silicon Dioxide, Aluminum Nitride, and Silicon Nitride. Appl. Opt. 2012, 51, 6789. Graf, R. T.; Koenig, J. L.; Ishida, H. Optical Constant Determination of Thin Polymer Films in the Infrared:. Appl. Spectrosc. 1985, 39, 405-408. Hu, Z. G.; Prunici, P.; Patzner, P.; Hess, P. Infrared Spectroscopic Ellipsometry of N-Alkylthiol (C$_{5}$-C$_{18}$) Self-Assembled Monolayers on Gold. J. Phys. Chem. B 2006, 110, 14824-14831.
Supporting Information
======================
{width="7in"}
![\[figS2\] (a) Spectra of an array version of the nanorod structure used. Here, the rod length is 500 nm, the width is 275 nm, the period in the x-direction is 1 $\mu$m, and the period in the y-direction is 475 nm. (b) Corresponding electric field map exhibiting similar field enhancement as the waveguide-integrated device.](FigureS2.png){width="3.3in"}
{width="7in"}
---
abstract: 'In this paper, we study the communication problem from rovers on Mars’ surface to Mars-orbiting satellites. We first justify that, to a good extent, the rover-to-orbiter communication problem can be modelled as communication over a $2 \times 2$ X-channel with the network topology *varying* over time. For such a fading X-channel where transmitters are only aware of the time-varying topology but not the time-varying channel state (i.e., no CSIT), we propose coding strategies that *code across topologies*, and develop upper bounds on the sum degrees-of-freedom (DoF) that are shown to be tight under certain patterns of the topology variation. Furthermore, we demonstrate that the proposed scheme approximately achieves the ergodic sum-capacity of the network. Using the proposed coding scheme, we numerically evaluate the ergodic rate gain over a time-division-multiple-access (TDMA) scheme for Rayleigh and Rice fading channels. We also numerically demonstrate that with practical orbital parameters, a 9.6% DoF gain, as well as a more than 11.6% throughput gain, can be achieved for a rover-to-orbiter communication network.'
author:
- 'Songze Li, David T.H. Kao, and A. Salman Avestimehr, [^1] [^2] [^3]'
bibliography:
- 'ref.bib'
title: |
Rover-to-Orbiter Communication in Mars:\
Taking Advantage of the Varying Topology
---
Mars, Rover-to-Orbiter Communication, X-channel, Varying Topologies, Coding Across Topologies.
Introduction
============
As an increasing amount of science and engineering data is collected by rovers on Mars’ surface and sent back to Earth, efficient rover-to-Earth communication is of primary importance. Originally, each Mars exploration rover communicated to the deep space network (DSN) on Earth directly in a point-to-point fashion. However, the data rate suffered from limited rover transmission power, large path loss and occlusion in line-of-sight. Currently, much more data is sent to Earth via the relaying of Mars orbiting satellites (orbiters). For example, each of the 2004 Mars Exploration Rovers (MER) returned well over 150 Mbits of data to Earth per Mars solar day (sol), 92% of which were relayed by two Mars orbiters. That was 5 times the communication rate of the 1996 Mars Pathfinder lander, which was only capable of direct-to-earth (DTE) communication [@statman2004coding]. The dramatic data rate increase enabled more frequent usage of data-rich applications such as continuous sensing and monitoring, video recording, and high-resolution, three-color panoramas.
Current Mars orbiters employ decode-and-forward (DF) relaying strategies: they decode the messages from rovers and re-encode them for transmission to Earth. On the rover-to-orbiter proximity link, time-division-multiple-access (TDMA), which allows one rover to talk to one orbiter at a time, has been used to avoid interference. Although TDMA requires low communication overhead and little coordination between communication entities, it severely limits the overall communication rate. In this paper, we are interested in the following question: within the DF framework (although other joint-decoding schemes [@avestimehr2011; @lim2011noisy; @nazer2011] may achieve higher data rates), how can we increase the rover-to-orbiter communication rate without significantly increasing the system complexity and communication overhead?
To answer this question, we first note that the existence of a line-of-sight (LOS) communication link from a rover to an orbiter varies over time. For example, the Mars Reconnaissance Orbiter (MRO) appears in the line-of-sight of a rover located close to the Mars north pole 12 times per sol, for a duration of 8 minutes each time [@taylor2006]. Consequently, for a system comprised of multiple rovers and orbiters, the *topology* of the communication network, determined by which rovers can see which orbiters, varies over time. We also note that communicating topological information to the rovers requires little communication overhead (one bit of feedback per rover-orbiter pair at each time), and is more practical than providing the rovers with full channel state information.
Even with only knowledge of network topology at transmitters, recent results [@vahid2013; @sun2013; @issa2013; @gherekhloo2013; @GCS; @chen14] have demonstrated the potential communication rate gains by *coding across topologies* (CAT) for the interference channel, X-channel, and broadcast channel with varying topologies. Given the time-varying property of the rover-to-orbiter network topology, our goal in this paper is to understand the fundamental impact of coding across topologies on the performance of the rover-to-orbiter communication network. As a first step, we justify that a 2-rover, 4-orbiter communication problem can be modelled as a $2\times 2$ X-channel with varying topologies. We also assume that, due to practical constraints (e.g., limited coordination), no channel state information is available at transmitters other than the network topology (i.e., no CSIT).
For a $2 \times 2$ Gaussian X-channel with varying topologies and no CSIT besides the network topology, we identify two *coding opportunities* where we may exploit the time-varying topologies by coding across them, and we characterize the sum-DoF. The key idea of the achievable scheme is to exhaustively create coding opportunities in a way that maximizes the DoF gain. Conversely, we develop upper bounds on the sum-DoF that are tight under certain patterns of the topology variation. In addition to the DoF results, we also demonstrate that our coding scheme achieves the ergodic sum-capacity to within a constant gap for uniform-phase, Rayleigh and Rice fading X-channels with varying topologies.
Finally, we numerically demonstrate the rate gains of our proposed scheme over the rate achieved by TDMA in Rayleigh and Rice fading channels with varying topologies. We also demonstrate that for a 2-rover, 4-orbiter communication network, applying the proposed coding scheme can increase the DoF by 9.6%, and the throughput by at least 11.6%.
Problem setting and statement of main results
=============================================
To study the rover-to-orbiter communication problem, we begin by justifying that we may model the problem as communication over a $2 \times 2$ Gaussian X-channel with varying topologies.
Currently, the Mars landing rovers (e.g., Mars Science Laboratory (MSL) Curiosity rover [@makovsky2009]) send exploration data to Earth by relaying through two Mars orbiters: MRO and Mars Odyssey Orbiter [@makovsky2002]. However, additional orbiters, including the Mars Express spacecraft and the recently launched Mars Atmosphere and Volatile Evolution (MAVEN) Orbiter, are also capable of relaying. We assume, according to [@taylor2006], that a rover is able to establish a communication link to an orbiter if and only if the orbiter is within the line-of-sight of the rover, which is defined as:
An orbiter is said to be within the line-of-sight (LOS) of a rover if the elevation angle it forms with the surface tangent plane of the rover is at least a prescribed minimum. $\hfill \square$
We then simulated a system comprised of 2 rovers and 4 orbiters over one sol (approximately 24 hours), and recorded the instantaneous network topologies determined by the active links.
{width="48.00000%"}
Our simulation indicates that, interestingly, 40% (sum of the time fractions of the topologies listed in Table \[tab:statis\]) of the time when *any* communication is possible, the network topology is made up of three or more rover-to-orbiter LOS links. In all of these “complicated” topologies (listed in Table \[tab:statis\]), only 2 rovers and 2 orbiters are relevant. Furthermore, because all orbiters serve as relays to forward decoded messages to Earth, a rover’s message will reach Earth as long as any single orbiter can decode the message. Since 1) the topology rarely includes more than two orbiters, 2) the identity of the rover is irrelevant, and 3) we wish to maximize the data rate, it is sufficient to study the sum-capacity of the $2 \times 2$ X-channel, where each rover communicates messages to each of the two orbiters.
Additionally, we emphasize that the network topology varies over time in a non-trivial manner. This is evidenced by observing that, during one sol, a good fraction of time (ranging from 1% to 11%) is spent in each of the topologies listed in Table \[tab:statis\]. Moreover, the time spent in each topology in Table \[tab:statis\] is comparable to the time spent in other topologies with only one or two LOS communication links. Therefore, to capture the time-varying properties of the topology of the aforementioned $2\times 2$ X-channel, all non-degenerate topologies should be incorporated into the problem formulation as possible scenarios. To this end, a $2 \times 2$ Gaussian X-channel with the topology varying among all possibilities (except for the case where no LOS link exists) becomes the natural choice to model the rover-to-orbiter communication problem.
Problem Formulation and Definitions
-----------------------------------
A $2 \times 2$ Gaussian X-channel with varying topologies consists of two transmitters and two receivers, where each transmitter has an independent message to send to each of the two receivers. The network topology is defined by a binary vector $\mathbf{c}=(c_{11},c_{12},c_{21},c_{22})^T$, where $c_{ij}$, $i,j \in \{1,2\}$, is equal to 1 if and only if there is a LOS (defined in Definition 1) communication link from Transmitter $j$ to Receiver $i$, $c_{ij}=0$ otherwise. We index all considered topologies in Fig. \[fig:top\], and let $\mathcal{A} =\{s_1,s_2,s_3,s_4,m_1,m_2,b_1,b_2,z_1,z_2,z_3,z_4,i_1,i_2,f\}$ denote the set of all topology indices. We assume that over the course of communication (consisting of $N$ time slots), the network topology changes randomly with an arbitrary distribution over the possibilities in Fig. \[fig:top\], and we denote the fraction of time spent in Topology $a$ as $\lambda_a$, for all $a \in \mathcal{A}$. Because we focus only on time instances where at least one communication link exists, $\sum\limits_{a \in \mathcal{A}} \lambda_a =1$.
![Possible topologies for a $2\times2$ X-channel. Each topology diagram consists of 4 nodes with 2 transmitter nodes on the left and 2 receiver nodes on the right. A directed edge from a transmitter node to a receiver node exists if and only if there is a LOS communication link from the transmitter to the receiver.[]{data-label="fig:top"}](topologyJ.pdf){width="45.00000%"}
Given the varying topology, the output signal at Receiver $j$ in time slot $n$ is, for $i,j \in \{1,2\}$: $$Y_j(n) = \sum\limits_{i: c_{ji}(n)=1} h_{ji}(n)X_i(n) + Z_j(n),$$ where $\mathbf{c}(n)$ is the topology in time slot $n$, globally known at all transmitters and receivers, $X_i(n) \in \mathbb{C}$ is the input symbol of Transmitter $i$, $h_{ji}(n) \in \mathbb{C}$ is the channel coefficient from Transmitter $i$ to Receiver $j$, and $Z_j(n)\sim \mathcal{N}(0,1)$ is the complex additive white Gaussian noise at Receiver $j$. We assume that the channel coefficients, $h_{ji}(n)$, are drawn identically and independently across indices $i,j$ from a bounded, continuous distribution. Each transmitter is subject to the same power constraint $P$, i.e., $\mathbb{E}\{|X_i(n)|^2\} \leq P$, $i = 1,2$.
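To make the input-output relation above concrete, the following is a minimal Python sketch of one time slot of the model. The specific topology matrix (a Z-type pattern; the exact index-to-pattern mapping of Fig. \[fig:top\] is an assumption here), the $\mathcal{CN}(0,1)$ fading distribution, and the chosen input symbols are illustrative assumptions rather than part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def receive(X, c, h, noise_std=1.0):
    """Received symbols (Y_1, Y_2) for one time slot of the 2x2 X-channel.

    X : length-2 complex inputs (X_1, X_2)
    c : 2x2 binary topology matrix, c[j, i] = 1 iff the link from
        Transmitter i to Receiver j is in line-of-sight
    h : 2x2 complex channel matrix, h[j, i] from Transmitter i to Receiver j
    """
    Z = noise_std * (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    return (c * h) @ X + Z

# Example: a Z-type topology (Rx1 hears both transmitters, Rx2 hears only Tx2)
c = np.array([[1, 1],
              [0, 1]])
h = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
P = 10.0
X = np.sqrt(P / 2) * np.array([1 + 1j, 1 - 1j])   # symbols meeting E|X_i|^2 <= P (assumed)
print(receive(X, c, h))
```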
We denote the collection of channel coefficients over $N$ time slots as ${{\boldsymbol h}^N = \{h_{11}(n),h_{12}(n),h_{21}(n),h_{22}(n)\}_{n=1}^N}$, and we assume that the transmitters are not aware of the values of ${\boldsymbol h}^N$, while the receivers know the values of ${\boldsymbol h}^N$ perfectly. Note that this assumption of no CSIT beyond the network topology was also referred to as *minimal CSIT* in the topological interference management (TIM) problem defined in [@jafar13].
Over $N$ time slots, message $W_{ji}\in \{1,\ldots,2^{NR_{ji}}\}$, $i,j =1,2$, is communicated from Transmitter $i$ to Receiver $j$ at rate $R_{ji}$. The rate tuple $(R_{11},R_{12},R_{21},R_{22})$ is achievable if the probability of decoding error for every message vanishes as $N \rightarrow \infty$. The capacity region $\mathcal{C}$ is the union of all achievable rate vectors, and the sum-capacity is defined as $$\begin{aligned}
C_{\Sigma} \overset{\Delta}{=} \underset{(R_{11},R_{12},R_{21},R_{22})\in \mathcal{C}}{\text{max}} R_{11}+R_{12}+R_{21}+R_{22}. \end{aligned}$$
We also define the sum degrees-of-freedom (DoF) as $d_{\Sigma} \overset{\Delta}{=} \lim \limits_{P \rightarrow \infty} \dfrac{C_\Sigma}{\log P}$.
Main Results
------------
Our first result characterizes the sum-DoF of the $2\times 2$ Gaussian X-channel with the topology varying symmetrically as defined below:
A $2 \times 2$ Gaussian X-channel with the topology varying across the ones in Fig. \[fig:top\], is said to have *symmetric* topology variation if $\lambda_{z_1}+\lambda_{z_4} = \lambda_{z_2}+\lambda_{z_3}$. $\hfill \square$
Notice that “symmetric topology variation” is a property of the fractions of time the system spends in the different topologies, not a property of the individual topologies. $\hfill \square$
The sum-DoF of the $2 \times 2$ Gaussian X-channel with symmetric topology variation is $$\begin{aligned}
d_{\Sigma} =& 1+\lambda_{i_1}+\lambda_{i_2} \nonumber \\
+& \text{min}\big\{\lambda_{z_1}+\lambda_{z_4}, \lambda_{z_1}+\lambda_{z_3}+\lambda_f, \lambda_{z_2}+\lambda_{z_4}+\lambda_f\big\}.\label{eq:DoF}\end{aligned}$$
Other than the two interference-free topologies $i_1$ and $i_2$ (each of which achieves a sum-DoF of 2), all of the topologies are individually limited to a sum-DoF of 1 without CSIT [@JS2008]. Hence, when coding within each topology separately, the maximal achievable sum-DoF is $1+\lambda_{i_1}+\lambda_{i_2}$, and the last term in (\[eq:DoF\]) represents the DoF gain from coding across topologies. $\hfill \square$
A similar problem is solved in [@sun2013] with 4 admissible topologies $z_1$, $z_3$, $i_1$ and $f$. Incorporating more topologies into the problem creates new opportunities for the system to benefit from coding across topologies (see Section III). $\hfill \square$
Without the symmetric constraint on the topology variation, we have the following lower and upper bounds on the sum-DoF:
The sum-DoF $d_{\Sigma}$ of a $2 \times 2$ Gaussian X-channel, with the network topology varying over time across the topologies listed in Fig. \[fig:top\], is bounded as $$\begin{aligned}
d_{\Sigma} \geq & 1+\lambda_{i_1}+\lambda_{i_2}+ \min \{\lambda_{z_1}+\lambda_{z_4},\lambda_{z_2}+\lambda_{z_3}, \nonumber\\
&\lambda_{z_1}+\lambda_{z_3}+\lambda_{f}, \lambda_{z_2}+\lambda_{z_4}+\lambda_{f}\}, \label{eq:Alower}\\
d_{\Sigma} \leq & 1+\lambda_{i_1}+\lambda_{i_2}+\min \{ \frac{\lambda_{z_1}+\lambda_{z_2}+\lambda_{z_3}+\lambda_{z_4}}{2}, \nonumber \\
&\lambda_{z_1}+\lambda_{z_3}+\lambda_{f},\lambda_{z_2}+\lambda_{z_4}+\lambda_{f} \}. \label{Aupper}\end{aligned}$$
In general, the lower and upper bounds in Theorem 2 do not match. However, as demonstrated in Table \[tab:gap\] below, the average gap between the lower and the upper bounds, evaluated over 10000 randomly chosen $\{\lambda_{z_1},\lambda_{z_2},\lambda_{z_3},\lambda_{z_4},\lambda_f\}$, is rather small compared to the achieved sum-DoF in (\[eq:Alower\]). $\hfill \square$
$\lambda_{z_1}\!+\!\lambda_{z_2}\!+\!\lambda_{z_3}\!+\!\lambda_{z_4}\!+\!\lambda_f$ Maximum Gap Average Gap
------------------------------------------------------------------------------------- ------------- -------------
0.2 0.0887 0.0174
0.5 0.2164 0.0436
0.8 0.3515 0.0702
: Gaps between lower and upper bounds of the sum-DoF of an X-channel with varying topologies[]{data-label="tab:gap"}
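The sketch below evaluates the lower and upper bounds of Theorem 2 and estimates the bound gap by random sampling, in the spirit of Table \[tab:gap\]. Since the text does not specify how the random fraction vectors were drawn, sampling them uniformly on the simplex (via a Dirichlet distribution) is an assumption, so the resulting numbers are only indicative.

```python
import numpy as np

rng = np.random.default_rng(1)

def dof_bounds(lz1, lz2, lz3, lz4, lf, li1=0.0, li2=0.0):
    """Lower and upper bounds on the sum-DoF from Theorem 2."""
    base = 1 + li1 + li2
    lower = base + min(lz1 + lz4, lz2 + lz3, lz1 + lz3 + lf, lz2 + lz4 + lf)
    upper = base + min((lz1 + lz2 + lz3 + lz4) / 2, lz1 + lz3 + lf, lz2 + lz4 + lf)
    return lower, upper

def gap_statistics(total, n_samples=10000):
    """Mean and maximum (upper - lower) gap when lz1+...+lz4+lf = total,
    with the five fractions drawn uniformly on the simplex (assumed)."""
    gaps = []
    for _ in range(n_samples):
        lam = rng.dirichlet(np.ones(5)) * total
        lower, upper = dof_bounds(*lam)
        gaps.append(upper - lower)
    return float(np.mean(gaps)), float(np.max(gaps))

for total in (0.2, 0.5, 0.8):
    mean_gap, max_gap = gap_statistics(total)
    print(f"sum={total}: average gap {mean_gap:.4f}, maximum gap {max_gap:.4f}")
```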
Using the proposed coding scheme in Theorem 2 (described in Section III and Appendix II) on the simulated system whose topology variation is specified in Table I achieves a DoF gain of 9.47% over the TDMA scheme. To see this:
1. We apply the proposed coding scheme respectively to the X-channel containing the 5 topologies at the top of Table I and another X-channel containing the 5 topologies at the bottom of Table I.
2. Then we use the normalized time fractions in Table I to calculate the DoF gains over TDMA (i.e., the last term in (\[eq:Alower\])) in each of the X-channels. That is 4.21% for the top X-channel and 5.26% for the bottom X-channel.
3. We sum up these two DoF gains and compare with the DoF achieved by the TDMA scheme (=1 because $\lambda_{i_1}+\lambda_{i_2} = 0$ for both X-channels in the simulation results) to obtain an overall DoF gain of 9.47%. $\hfill \square$
When channel coefficients are also i.i.d. varying over time, the following result characterizes the ergodic sum-rate achieved by our proposed coding scheme:
For the $2 \times 2$ Gaussian X-channel with symmetric topology variation (defined in Definition 2) and i.i.d. (across space and time) channel coefficients, we define ${\phi \overset{\Delta}{=} \text{min}\{|\lambda_{z_1}\!-\!\lambda_{z_2}|, \lambda_f\}}$, and an ergodic sum-rate $R_\Sigma$ is achievable if $$\begin{aligned}
R_{\Sigma} \leq & A\left[\sum \limits_{k=1}^4 \lambda_{s_k} + 2(\lambda_{i_1}+\lambda_{i_2})+\lambda_{b_1}+\lambda_{b_2}\right] \nonumber \\
&+ \!B \big[\lambda_{m_1}\!+\!\lambda_{m_2}\!+\!|\lambda_{z_1}\!-\!\lambda_{z_2}|\!+\!|\lambda_{z_3}\!-\!\lambda_{z_4}|\!+\!\lambda_f -3\phi \big] \nonumber\\
&+ \!C\left[\text{min}\{\lambda_{z_1},\lambda_{z_2}\} + \text{min}\{\lambda_{z_3},\lambda_{z_4}\}\right] +D\phi,\end{aligned}$$ where $$\begin{aligned}
A \!=& \mathbb{E}\left\{\log(1+|h_{11}|^2P)\right\}, \\
B \!=& \mathbb{E}\left\{\log(1+|h_{11}|^2P+|h_{12}|^2P)\right\},\\
C \!=& \mathbb{E}\left\{\log\left(\left|\mathbf{I} + P \mathbf{V}\mathbf{V}^H\right|\right)\right\} \\
&+ \mathbb{E}\left\{\log\left(1+\frac{|h_{21}(2)|^2 P}{1+|h_{22}(2)|^2\slash |h_{22}(1)|^2}\right)\right\},\\
D \!=& 2\mathbb{E}\left \{\! \log \!\! \left(\frac{\Bigg | \begin{bmatrix}1 & 0\\ 0 & 1+|h_{12}(3)|^2 \slash |h_{12}(1)|^2
\end{bmatrix}\!\!+\! P \mathbf{U}\mathbf{U}^H\Bigg |}{1+|h_{12}(3)|^2 \slash |h_{12}(1)|^2}\right)\! \right\},\\
\mathbf{V} \!=& \begin{bmatrix}
h_{11}(1) & h_{12}(1)\\ 0 & h_{12}(2)
\end{bmatrix},\quad
\mathbf{U} \!= \begin{bmatrix}
h_{12}(2) & h_{11}(2) \\ 0 & h_{11}(3) \end{bmatrix},\end{aligned}$$ with all expectations taken over random channel coefficients.
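The constants in Theorem 3 are expectations over the fading distribution and can be estimated numerically. Below is a minimal Monte Carlo sketch for the first two constants, $A$ and $B$, under Rayleigh fading with unit variance; the transmit power $P$ and the use of base-2 logarithms (rates in bits per channel use) are illustrative assumptions, and $C$ and $D$ can be estimated analogously by sampling the corresponding channel matrices.

```python
import numpy as np

rng = np.random.default_rng(2)

def rayleigh_h(n):
    """i.i.d. CN(0,1) channel coefficients (Rayleigh-fading amplitudes)."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

P = 100.0                      # illustrative transmit power (assumed)
n = 200_000
h11, h12 = rayleigh_h(n), rayleigh_h(n)

# Monte Carlo estimates of the expectations A and B in Theorem 3
A = np.mean(np.log2(1 + np.abs(h11) ** 2 * P))
B = np.mean(np.log2(1 + np.abs(h11) ** 2 * P + np.abs(h12) ** 2 * P))
print(f"A ~ {A:.3f} bits/use, B ~ {B:.3f} bits/use")
```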
Furthermore, we demonstrate via the following corollary of Theorem 3 that our coding scheme approximately achieves the ergodic sum-capacity of the $2\times 2$ Gaussian X-channel with symmetric topology variation, for uniform-phase, Rayleigh and Rice fading channel coefficients.
The ergodic sum-capacity of the $2\times 2$ Gaussian X-channel with symmetric topology variation and i.i.d. (across space and time) channel coefficients is achievable to within a constant gap for each of the following channel distributions, provided $\lambda_{z_1} = \lambda_{z_2}$ or $\lambda_f \leq |\lambda_{z_1}-\lambda_{z_2}|$:
- $h_{ik} = e^{j\theta_{ik}}$, and $\theta_{ik}$ is uniformly distributed over $[0,2\pi)$, $i,k = 1,2$,
- Rayleigh fading with $\mathbb{E}\{h_{ik}\}=0$ and $\mathbb{E}\{|h_{ik}|^2\}=1$, $i,k=1,2$,
- Rice fading with Rice factor $K_r=1$.
We present the achievable scheme of Theorem 1 in Section III and demonstrate that the proposed scheme is indeed optimal by proving the converse of Theorem 1 in Section IV and Appendix I. In Appendix II we provide a sketch of the proof of Theorem 2, which extends the results of Theorem 1 to the general asymmetric topology variation setting. Finally, Theorem 3 and Corollary 1 are proved in Section V to demonstrate the rate performance of the proposed scheme for fading channels.
Achievability of Theorem 1 {#sec:achieve}
==========================
A *coding opportunity* is a combination of topologies, coding across which achieves a DoF gain over coding in each topology separately. In this section, we first identify two coding opportunities for the $2 \times 2$ X-channel with varying topologies, and then state our achievable scheme, which optimally creates and exploits the two coding opportunities to achieve the sum-DoF of Theorem 1. For ease of exposition, we will ignore noise terms in the DoF analysis. We also note that, by assumption, all channel coefficients are non-zero with probability 1 at any time.
Coding Opportunity 1: $\{z_1,z_2\}$, $\{z_3,z_4\}$ {#sec:coding1}
--------------------------------------------------
We illustrate in Fig. \[fig:CO1\] a two-phase transmission strategy for the topology combination $\{z_1,z_2\}$ to achieve a sum-DoF of $\frac{3}{2}$, and note that the same scheme can be applied to topology combination $\{z_3,z_4\}$ by relabelling the transmitters and receivers.
![Illustration of a $\frac{3}{2}$-DoF coding scheme for topology combination $\{z_1,z_2\}$. Receiver 1 decodes $X_1(1)$ and $X_2(1)$ from two linearly independent combinations $(L_1,L_1')$ of $(X_1(1),X_2(1))$, and Receiver 2 decodes $X_1(2)$ from two linearly independent combinations $(L_2,L_2')$ of $(X_1(2),X_2(1))$.[]{data-label="fig:CO1"}](z1z2.pdf){width="45.00000%"}
Receiver 1 uses $\left(L_1,L_1'\right)$ to decode $X_1(1)$ and $X_2(1)$ (always decodable because all channel gains are almost surely non-zero); Receiver 2 uses $L_2$ to cancel $X_2(1)$ from $L_2'$, and then decodes $X_1(2)$. In total, 3 symbols are decoded in 2 time slots, achieving a sum-DoF of $\frac{3}{2}$.
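As a sanity check of this two-phase scheme, the short sketch below simulates it with random channel coefficients and verifies that both receivers recover their symbols (noise is ignored, as in the DoF analysis above). The specific link patterns of topologies $z_1$ and $z_2$ are inferred from the decoding description and Figure \[fig:CO1\] rather than taken from Fig. \[fig:top\], so they should be read as an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def cn():
    """One sample of a standard complex Gaussian CN(0, 1)."""
    return (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# Information symbols: X1(1), X2(1) intended for Receiver 1; X1(2) for Receiver 2.
x11, x21, x12 = cn(), cn(), cn()

# Slot 1 (z_1-type topology, link Tx1 -> Rx2 absent -- inferred):
h11_1, h12_1, h22_1 = cn(), cn(), cn()
L1 = h11_1 * x11 + h12_1 * x21          # Receiver 1
L2 = h22_1 * x21                        # Receiver 2

# Slot 2 (z_2-type topology, link Tx1 -> Rx1 absent; Tx2 repeats X2(1)):
h12_2, h21_2, h22_2 = cn(), cn(), cn()
L1p = h12_2 * x21                       # Receiver 1
L2p = h21_2 * x12 + h22_2 * x21         # Receiver 2

# Receiver 1: solve two independent equations for X1(1), X2(1).
A1 = np.array([[h11_1, h12_1], [0, h12_2]])
x11_hat, x21_hat = np.linalg.solve(A1, np.array([L1, L1p]))

# Receiver 2: cancel X2(1) using L2, then recover X1(2).
x12_hat = (L2p - (h22_2 / h22_1) * L2) / h21_2

print(np.allclose([x11_hat, x21_hat, x12_hat], [x11, x21, x12]))  # -> True
```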
Coding Opportunity 2: $\{z_2, z_4, f\}$, $\{z_1, z_3, f\}$ {#sec:coding2}
----------------------------------------------------------
Next, we illustrate in Fig. \[fig:CO2\] a three-phase transmission strategy for the topology combination $\{z_2,z_4,f\}$ (and $\{z_1,z_3,f\}$ by relabelling the receivers) to achieve a sum-DoF of $\frac{4}{3}$.
Receiver 1 uses $L_1$ to cancel $X_2(1)$ from $L_1''$, obtaining a residual signal $\tilde{L}_1''$, and decodes $X_1(2)$ and $X_2(2)$ using $L_1'$ and $\tilde{L}_1''$. Receiver 2 uses $L_2'$ to cancel $X_1(2)$ from $L_2''$, obtaining a residual signal $\tilde{L}_2''$, and decodes $X_{1}(1)$ and $X_2(1)$ using $L_2$ and $\tilde{L}_2''$. Overall, 4 symbols are decoded in 3 time slots, achieving a sum-DoF of $\frac{4}{3}$.
![Illustration of a $\frac{4}{3}$-DoF coding scheme for topology combination $\{z_2,z_4,f\}$. Receiver 1 decodes $X_1(2)$ and $X_2(2)$ from 3 linearly independent combinations $(L_1,L_1',L_1'')$ of $(X_2(1),X_1(2),X_2(2))$, and Receiver 2 decodes $X_1(1)$ and $X_2(1)$ from 3 linearly independent combinations $(L_2,L_2',L_2'')$ of $(X_1(1),X_2(1),X_1(2))$.[]{data-label="fig:CO2"}](z2z4f.pdf){width="45.00000%"}
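A similar noise-free sanity check can be written for the three-phase scheme. Again, the exact link patterns of $z_2$, $z_4$, and $f$, as well as the retransmission choices in the third slot, are inferred from the decoding description above and should be read as assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def cn():
    """One sample of a standard complex Gaussian CN(0, 1)."""
    return (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# Symbols: X1(1), X2(1) intended for Receiver 2; X1(2), X2(2) for Receiver 1.
x11, x21, x12, x22 = cn(), cn(), cn(), cn()

# Slot 1 (z_2-type topology, Tx1 -> Rx1 absent):
h12_1, h21_1, h22_1 = cn(), cn(), cn()
L1 = h12_1 * x21
L2 = h21_1 * x11 + h22_1 * x21

# Slot 2 (z_4-type topology, Tx2 -> Rx2 absent):
h11_2, h12_2, h21_2 = cn(), cn(), cn()
L1p = h11_2 * x12 + h12_2 * x22
L2p = h21_2 * x12

# Slot 3 (fully connected topology f; Tx1 repeats X1(2), Tx2 repeats X2(1)):
h11_3, h12_3, h21_3, h22_3 = cn(), cn(), cn(), cn()
L1pp = h11_3 * x12 + h12_3 * x21
L2pp = h21_3 * x12 + h22_3 * x21

# Receiver 1: cancel X2(1) via L1, then solve for X1(2) and X2(2).
x12_hat = (L1pp - (h12_3 / h12_1) * L1) / h11_3
x22_hat = (L1p - h11_2 * x12_hat) / h12_2

# Receiver 2: cancel X1(2) via L2', then solve for X2(1) and X1(1).
x21_hat = (L2pp - (h21_3 / h21_2) * L2p) / h22_3
x11_hat = (L2 - h22_1 * x21_hat) / h21_1

print(np.allclose([x11_hat, x21_hat, x12_hat, x22_hat], [x11, x21, x12, x22]))  # -> True
```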
The scheme for this coding opportunity was introduced for a binary fading interference channel, in which the channel links are either “on” or “off” [@vahid2013]. The same concept was used in [@sun2013] to improve the sum-DoF of the two-user interference channel and X-channel with alternating topology. Finally in [@issa2013], the authors demonstrated that for the two-hop interference channel, the best vector-linear strategy at the intermediate relays is to vary relaying coefficients over time, creating end-to-end topologies equivalent to the three used in coding opportunity 2. $\hfill \square$
Coding Scheme for $2 \times 2$ X-channel with Varying Topologies
----------------------------------------------------------------
Equipped with the transmission strategies for the two coding opportunities, we proceed to present our achievable scheme that focuses on the symmetric topology variation as defined in Definition 2. The general idea of the achievable scheme is to create and utilize the two coding opportunities in a greedy manner to maximize the overall sum-DoF.
First, we code in all instances of interference-free topologies $i_1$ and $i_2$ separately at DoF $2$, delivering a total of $2(\lambda_{i_1}+\lambda_{i_2})N$ symbols. Then we split the description of the rest of the coding scheme into the following two cases:
*Case 1: $\lambda_{z_1} \leq \lambda_{z_2}$ and $\lambda_{z_3} \leq \lambda_{z_4}$*
i. We pair each $z_1 (z_3)$ instance with a $z_2(z_4)$ instance until $z_1 (z_3)$ is exhausted, obtaining $(\lambda_{z_1}+\lambda_{z_3})N$ instances of coding opportunity 1. For each paired $\{z_1,z_2\}$, we perform the scheme in Section \[sec:coding1\], delivering $3\lambda_{z_1}N$ symbols. Similarly, another $3\lambda_{z_3}N$ symbols are delivered by performing the coding opportunity 1 scheme on all $\{z_3,z_4\}$ pairs.
ii. We pair one of $(\lambda_{z_2}-\lambda_{z_1})N$ surplus $z_2$ instances with one of $(\lambda_{z_4}-\lambda_{z_3})N$ surplus $z_4$ instances, and one of $\lambda_fN$ $f$ instances until one of them is exhausted, obtaining $\theta N$ instances of coding opportunity 2, where $\theta \overset{\Delta}{=}\text{min}\{\lambda_{z_2}-\lambda_{z_1},\lambda_{z_4}-\lambda_{z_3},\lambda_f\}$. Applying the $\frac{4}{3}$-DoF scheme in Section \[sec:coding2\] to each coding opportunity 2 instance, we deliver $4\theta N$ symbols.
iii. Lastly, for each of $(\lambda_{z_2}\!-\!\lambda_{z_1}\!-\theta)N$ surplus $z_2$ instances, $(\lambda_{z_4}\!-\!\lambda_{z_3}\!-\!\theta)N$ surplus $z_4$ instances, $(\lambda_{f}\!-\!\theta)N$ surplus $f$ instances, and all the remaining topology instances, we code them separately, delivering 1 symbol in each of them.
Thus we achieve the overall sum-DoF: $$\begin{aligned}
&3(\lambda_{z_1}\!+\!\lambda_{z_3}) \!+\! 4\theta \!+\! (\lambda_{z_2}\!-\!\lambda_{z_1}\!-\!\theta) \!+\! (\lambda_{z_4}\!-\!\lambda_{z_3}\!-\!\theta) \!+\! (\lambda_f \!-\!\theta) \nonumber \\
&+\! \sum\limits_{k=1}^{4}\lambda_{s_k} \!+\! \sum\limits_{k=1}^{2}(\lambda_{m_k}\!+\!\lambda_{b_k}\!+\! 2\lambda_{i_k})\nonumber\\
=& 1+ \lambda_{i_1}+\lambda_{i_2}+\text{min}\{\lambda_{z_2}+\lambda_{z_3},\lambda_{z_1}+\lambda_{z_3}+\lambda_f\}.\end{aligned}$$
![Coding scheme applied to an example realization of time-varying network topologies for Case 1. Coding opportunity 1 is colored red for $\{z_1,z_2\}$, and yellow for $\{z_3,z_4\}$; Coding opportunity 2 is colored green, and all topologies coded separately are left blank. Notice that in this example, no further instance of Topology $z_1$ or $z_3$ occurs.[]{data-label="fig:coding_str1"}](coding_strategy1.pdf){width="48.00000%"}
*Case 2: $\lambda_{z_1} > \lambda_{z_2}$ and $\lambda_{z_3} > \lambda_{z_4}$* The achievable scheme is similar to the first case, except that the topology instances left surplus after Step i. are now those of $z_1$ and $z_3$, and the instances of combination $\{z_1,z_3,f\}$ are exhaustively consumed using the scheme for coding opportunity 2. The achieved sum-DoF in this case is $1\!+\! \lambda_{i_1} \!+\! \lambda_{i_2}\!+\!\text{min}\{\lambda_{z_1}+\lambda_{z_4},\lambda_{z_2}+\lambda_{z_4}+\lambda_f\}$.
Therefore, in general, the achieved sum-DoF $d_{\Sigma}$ when the topology varies symmetrically (${\lambda_{z_1}+\lambda_{z_4} = \lambda_{z_2}+\lambda_{z_3}}$) is $$\begin{aligned}
&1\! + \! \lambda_{i_1} \!+ \!\lambda_{i_2} \nonumber\\
&+\!\text{min}\bigg\{ \lambda_{z_1}\!+\!\lambda_{z_4},\text{min}\{\lambda_{z_1}\!,\lambda_{z_2}\}\!+\!\text{min}\{\lambda_{z_3}\!,\lambda_{z_4}\}\!+\!\lambda_f \bigg\}\nonumber\\
=& 1 \!+\! \lambda_{i_1}\!+\!\lambda_{i_2} \!+\! \text{min}\big\{\!\lambda_{z_1}\!+\!\lambda_{z_4},\lambda_{z_1}\!+\!\lambda_{z_3}\!+\!\lambda_f, \lambda_{z_2}\!+\!\lambda_{z_4}\!+\!\lambda_f \! \big\}.\label{eq:achieved_DoF}\end{aligned}$$
As mentioned in Remark 2, when the variation of the topology is symmetric, the overall DoF gain by coding across topologies is $\min\big\{\!\lambda_{z_1}\!+\!\lambda_{z_4},\lambda_{z_1}\!+\!\lambda_{z_3}\!+\!\lambda_f, \lambda_{z_2}\!+\!\lambda_{z_4}\!+\!\lambda_f \! \big\}$. From the above performance analysis of our proposed scheme, we find that the DoF gain due to coding opportunity 1 is $\min\{\lambda_{z_1},\lambda_{z_2}\}+\min\{\lambda_{z_3},\lambda_{z_4}\} = \min\{\lambda_{z_1}+\lambda_{z_3},\lambda_{z_2}+\lambda_{z_4}\}$, and the DoF gain due to coding opportunity 2 is $\min\{|\lambda_{z_1}-\lambda_{z_2}|,|\lambda_{z_3}-\lambda_{z_4}|,\lambda_f\}$. $\hfill \square$
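For concreteness, the bookkeeping of the greedy construction can be summarized by the small Python helper below (a hypothetical illustration, not part of the scheme itself); for a symmetric variation it reproduces (\[eq:achieved\_DoF\]), and its two terms are exactly the DoF gains attributed above to the two coding opportunities.

```python
# Hypothetical helper: achieved sum-DoF of the greedy scheme for a symmetric
# topology-fraction vector lam (keys follow the topology names in the text).
def achieved_sum_dof(lam):
    co1 = min(lam['z1'], lam['z2']) + min(lam['z3'], lam['z4'])   # opportunity 1
    co2 = min(abs(lam['z1'] - lam['z2']),
              abs(lam['z3'] - lam['z4']), lam['f'])               # opportunity 2
    return 1 + lam['i1'] + lam['i2'] + co1 + co2

# Example with a symmetric variation (z1 + z4 == z2 + z3).
lam = dict(s1=.05, s2=.05, s3=.05, s4=.05, m1=.05, m2=.05, b1=.05, b2=.05,
           z1=.05, z2=.10, z3=.05, z4=.10, i1=.10, i2=.10, f=.10)
assert abs(sum(lam.values()) - 1) < 1e-12
print(achieved_sum_dof(lam))  # 1.35 = 1 + i1 + i2 + min{z1+z4, z1+z3+f, z2+z4+f}
```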
Converse of Theorem 1
=====================
To prove the optimality of the achieved sum-DoF in (\[eq:achieved\_DoF\]), we will show in this section that the sum-DoF $d_{\Sigma}$ is bounded as $$\begin{aligned}
d_{\Sigma} & \leq 1+ \lambda_{i_1} + \lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4},\label{eq:bond1}\\
d_{\Sigma} & \leq 1+ \lambda_{i_1} + \lambda_{i_2}+\lambda_{z_1}+\lambda_{z_3}+\lambda_f,\label{eq:bond2}\\
d_{\Sigma} & \leq 1+ \lambda_{i_1} + \lambda_{i_2}+\lambda_{z_2}+\lambda_{z_4}+\lambda_f.\label{eq:bond3}\end{aligned}$$
We point out that taking the minimum of these three bounds results in the expression in (\[eq:achieved\_DoF\]); therefore, proving these three bounds establishes the converse. We will present the proof of (\[eq:bond1\]) in this section and defer the proofs for (\[eq:bond2\]) and (\[eq:bond3\]) to Appendix I.
We define the length-$N$ output vector at Receiver $j$ as $Y_j^{N}$, $j = 1,2$, and the sub-vector of $Y_{j}^N$ received when the network is in Topology $a$ as $Y_{j,a}^{N}$. The received vector at Receiver 1 may thus be expressed as $Y_{1}^N = \big(\{Y_{1,s_k}^N\}_{k=1}^4,Y_{1,m_1}^N, Y_{1,m_2}^N, Y_{1,b_1}^N, Y_{1,b_2}^N, \{Y_{1,z_k}^N\}_{k=1}^4,Y_{1,i_1}^N,$ $Y_{1,i_2}^N, Y_{1,f}^N\big)$.
Further, for any subset of topology indices, $\mathcal{S}\subseteq\mathcal{A}$, $Y_{j,\mathcal{S}}^{N} \overset{\Delta}{=} \{Y_{j,a}^N: a \in \mathcal{S}\}$. In particular, we define the set of topologies $\mathcal{B}\subseteq\mathcal{A}$ as $\mathcal{B} \overset{\Delta}{=} \mathcal{A} \backslash \{s_1,s_2,s_3,s_4,m_1,m_2\}$.
Before beginning the proofs of the individual bounds, we make one last general observation: in the two broadcast topologies $b_1$, $b_2$ and the fully-connected topology $f$, because the transmitters only know the network topology and the channel coefficients are i.i.d. across space, $Y^N_{1,a}$ and $Y^N_{2,a}$ are statistically equivalent for $a \in \mathcal{E} \triangleq\{b_1,b_2,f\}$. Hence, if a receiver can decode its desired messages using the signals $Y_{2,\mathcal{E}}^N$, it can also do so using $Y_{1,\mathcal{E}}^N$. That is, without any rate loss, Receiver 2 should be able to decode $(W_{21},W_{22})$ using $Y_{2}^N \!\!=\!\! \left(\{Y_{2,s_k}^N\}_{k=1}^4, Y_{2,m_1}^N, Y_{2,m_2}^N,\{Y_{2,z_k}^N\}_{k=1}^4, Y_{2,i_1}^N, Y_{2,i_2}^N, Y_{1,\mathcal{E}}^N\right)$.
We start the proof of the bound in (\[eq:bond1\]) with the following two steps:
*1. Channel Enhancements:* We enhance the system without any rate loss by allowing the two transmitters to cooperate at infinite rate. As a result, the system is converted into a two-user MISO broadcast channel with message $W_j$ (with rate $R_j$) intended for Receiver $j$, $j=1,2$.
*2. Virtual Receivers:* We introduce two virtual receivers, labeled $\tilde{1}$ and $\tilde{2}$, which are statistically indistinguishable from Receiver 1 and 2 respectively.
Specifically at Receiver $\tilde{1}$, $Y_{\tilde{1},a} =Y_{1,a}$ when $a \neq z_1,z_4$. When the topology at time $n$ is $z_1$ or $z_4$, $$\label{eq:outz1z4}
\begin{aligned}
Y_1(n) &= h_{11}(n)X_1(n) + h_{12}(n)X_2(n) + Z_1(n),\\
Y_{\tilde{1}}(n) &= h_{\tilde{1}1}(n)X_1(n) + h_{\tilde{1}2}(n)X_2(n) + Z_{\tilde{1}}(n).
\end{aligned}$$
At Receiver $\tilde{2}$, $Y_{\tilde{2},a}=Y_{2,a}$ when $a \neq z_2,z_3$. When the topology at time $n$ is $z_2$ or $z_3$, $$\label{eq:outz2z3}
\begin{aligned}
Y_2(n) &= h_{21}(n)X_1(n) + h_{22}(n)X_2(n) + Z_2(n),\\
Y_{\tilde{2}}(n) &= h_{\tilde{2}1}(n)X_1(n) + h_{\tilde{2}2}(n)X_2(n) + Z_{\tilde{2}}(n).
\end{aligned}$$
In (\[eq:outz1z4\]) and (\[eq:outz2z3\]), for $j \in \{1,2\}$, $(h_{\tilde{j}1}(n), h_{\tilde{j}2}(n))$ is identically distributed as, and independent of, $(h_{j1}(n),h_{j2}(n))$, and $Z_j(n)$ and $Z_{\tilde{j}}(n)$ are i.i.d. Gaussian with zero mean and unit variance.
We denote the collection of channel coefficients over $N$ time slots (including the channel coefficients at the virtual receivers) as $\tilde{{\boldsymbol h}}^N \overset{\Delta}{=} \left\{h_{ji}(n): j \in \{1,2,\tilde{1},\tilde{2}\}, i \in \{1,2\}\right\}_{n=1}^N$, and recall the assumption that the values of $\tilde{{\boldsymbol h}}^N$ are not known at the transmitters, but are perfectly known at Receiver $j$, $j=1,2,\tilde{1},\tilde{2}$. We also note that due to the statistical equivalence of $Y_j^N$ and $Y_{\tilde{j}}^N$, if $R_j$ is achievable at Receiver $j$, then Receiver $\tilde{j}$ can also decode $W_j$.
In Topologies $z_1$ and $z_4$ ($z_2$ and $z_3$), given the signals at Receivers $1$ and $\tilde{1}$ ($2$ and $\tilde{2}$) and the channel coefficients, the signals at Receiver $2$ ($1$) can be approximately reconstructed: $$\begin{aligned}
h(Y_{2,a}^N|Y_{1,a}^N,Y_{\tilde{1},a}^N,\tilde{{\boldsymbol h}}^N) &\leq No(\log P), \; a = z_1, z_4, \label{eq:recon2}\\
h(Y_{1,a}^N|Y_{2,a}^N,Y_{\tilde{2},a}^N,\tilde{{\boldsymbol h}}^N ) &\leq No(\log P), \; a = z_2, z_3.\label{eq:recon1}\end{aligned}$$
We prove the case when the topology is $z_1$ ($a=z_1$ in (\[eq:recon2\])). The proofs for the other 3 cases follow similarly, and are omitted here.
We recall that if the topology at time $n$ is $z_1$, then by (\[eq:outz1z4\]), $Y_1(n)$ and $Y_{\tilde{1}}(n)$ almost surely provide two noisy linearly independent equations of $X_1(n)$ and $X_2(n)$. More specifically, $$\begin{bmatrix}
Y_1(n) \\ Y_{\tilde{1}}(n) \\ Y_2(n)
\end{bmatrix} = \begin{bmatrix}
h_{11}(n) & h_{12}(n) \\ h_{\tilde{1}1}(n) & h_{\tilde{1}2}(n) \\ 0 & h_{22}(n)
\end{bmatrix} \begin{bmatrix}
X_1(n) \\X_2(n)
\end{bmatrix} + \begin{bmatrix}
Z_1(n) \\ Z_{\tilde{1}}(n) \\Z_2(n)
\end{bmatrix},$$ where $\begin{bmatrix}
h_{11}(n) & h_{12}(n) \\ h_{\tilde{1}1}(n) & h_{\tilde{1}2}(n)
\end{bmatrix}$ is invertible almost surely.
$$\begin{aligned}
&h(Y_2(n)|Y_1(n),Y_{\tilde{1}}(n),\tilde{{\boldsymbol h}}^N\!) \\
= & h \bigg(\!\!Y_2(n)\!-\!\!\begin{bmatrix}
0 & h_{22}(n)
\end{bmatrix}\!\!\!\begin{bmatrix}
h_{11}(n) & h_{12}(n) \\ h_{\tilde{1}1}(n) & h_{\tilde{1}2}(n)
\end{bmatrix}^{-1}\!\!\begin{bmatrix}
Y_1(n) \\ Y_{\tilde{1}}(n)
\end{bmatrix} \!\! \Bigg| \\
&Y_1(n),\!Y_{\tilde{1}}(n),\tilde{{\boldsymbol h}}^N \!\! \bigg)\\
\leq & h \bigg(\!\! Z_2(n)\!-\!\begin{bmatrix}
0 & h_{22}(n)
\end{bmatrix}\!\!\begin{bmatrix}
h_{11}(n) & h_{12}(n) \\ h_{\tilde{1}1}(n) & h_{\tilde{1}2}(n)
\end{bmatrix}^{-1}\!\!\begin{bmatrix}
Z_1(n) \\ Z_{\tilde{1}}(n)
\end{bmatrix}\!\!\Bigg|\tilde{{\boldsymbol h}}^N \!\!\bigg)\\
=& o(\log P),\end{aligned}$$
and thus $$\begin{aligned}
h(Y_{2,z_1}^N|Y_{1,z_1}^N,Y_{\tilde{1},z_1}^N,\tilde{{\boldsymbol h}}^N\!)&\! \leq \!\!\!\!\!\!\!\sum \limits_{\substack{n \in \text{ time slots}\\\text{in Topology } z_1}} \!\!\!\!\!\! h(Y_2(n)|Y_1(n),Y_{\tilde{1}}(n),\tilde{{\boldsymbol h}}^N\! )\\
&\leq No(\log P).\end{aligned}$$
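The zero-forcing argument behind (\[eq:recon2\]) can also be illustrated numerically: in the sketch below (hypothetical channel gains and an illustrative power value $P$ chosen only for this example), Receiver 2's observation in Topology $z_1$ is rebuilt from the signals at Receivers $1$ and $\tilde{1}$, and the reconstruction error involves only the unit-variance noises and therefore does not grow with $P$.

```python
import numpy as np

# Illustrative check of the reconstruction argument in Topology z1.
rng = np.random.default_rng(2)
P = 1e4                                         # illustrative power value
x = np.sqrt(P / 2) * rng.standard_normal(2)     # (X1(n), X2(n))
H = rng.standard_normal((2, 2))                 # rows: Receivers 1 and 1~, cols: Tx 1, 2
h22 = rng.standard_normal()                     # only link into Receiver 2 in Topology z1
z = rng.standard_normal(3)                      # unit-variance noises Z1, Z1~, Z2

y1, y1t = H @ x + z[:2]
y2 = h22 * x[1] + z[2]

# Zero-force (X1, X2) from (Y1, Y1~) and rebuild Receiver 2's observation.
y2_hat = np.array([0.0, h22]) @ np.linalg.solve(H, np.array([y1, y1t]))

# The reconstruction error depends only on the noises, not on P.
print(abs(y2 - y2_hat))
```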
In the rest of the proof, we first derive two upper bounds on $R_1$ at Receiver $1$ and $\tilde{1}$ respectively, then combine the two bounds and use Lemma 1 to establish an upper bound on $2R_1$. We then repeat the same steps at Receiver $2$ and $\tilde{2}$ to obtain an upper bound on $2R_2$. Lastly, we sum up the upper bounds for $2R_1$ and $2R_2$ to arrive at an upper bound for the sum-rate, yielding (\[eq:bond1\]) as an upper bound for the sum-DoF.
At Receiver 1, we apply Fano’s inequality while treating $\tilde{{\boldsymbol h}}^N$ as a part of the output signals: $$\begin{aligned}
N&R_1 \overset{(a)}{\leq} I(W_1;Y_1^N|\tilde{{\boldsymbol h}}^N) + N\epsilon_N \nonumber\\
=& I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+\! I(W_1;\{Y_{1,s_k}^N\}_{k=1}^4, Y_{1,m_1}^N, Y_{1,m_2}^N|Y_{1,\mathcal{B}}^N,\tilde{{\boldsymbol h}}^N)\!+\! N\epsilon_N \nonumber\\
= & I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \!+\! h(\{Y_{1,s_k}^N\}_{k=1}^4, Y_{1,m_1}^N, Y_{1,m_2}^N|Y_{1,\mathcal{B}}^N,\tilde{{\boldsymbol h}}^N)\nonumber\\
&-\! h(\{Y_{1,s_k}^N\}_{k=1}^4, Y_{1,m_1}^N, Y_{1,m_2}^N|W_1,Y_{1,\mathcal{B}}^N,\tilde{{\boldsymbol h}}^N)\!+\! N\epsilon_N\\
\overset{(b)}{\leq} &I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \!+\! \sum\limits_{k=1}^4 h(Y_{1,s_k}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+\! h(Y_{1,m_1}^N|\tilde{{\boldsymbol h}}^N)\!+\! h(Y_{1,m_2}^N|\tilde{{\boldsymbol h}}^N)\!+\! N\epsilon_N \label{eq:BCR1}\\
\leq & I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+ N(\lambda_{s_1}+\lambda_{s_4}+\lambda_{m_1})(\log P+o(\log P)) + N\epsilon_N,\label{eq:fano1}\end{aligned}$$ where (a) holds since $\tilde{{\boldsymbol h}}^N$ is independent of $W_1$, and (b) holds because, given that $\left(X_1^N,X_2^N\right)$ is a function of $(W_1,W_2)$, $$\begin{aligned}
&h(\{Y_{1,s_k}^N\}_{k=1}^4,Y_{1,m_1}^N,Y_{1,m_2}^N|W_1,Y_{1,\mathcal{B}}^N,\tilde{{\boldsymbol h}}^N) \\
\geq& h(\{Y_{1,s_k}^N\}_{k=1}^4,Y_{1,m_1}^N,Y_{1,m_2}^N|W_1,W_2,Y_{1,\mathcal{B}}^N,\tilde{{\boldsymbol h}}^N)\\
=& \! h(\{Y_{1,s_k}^N\}_{k=1}^4,Y_{1,m_1}^N,Y_{1,m_2}^N|W_1,W_2,X_1^N,X_2^N,Y_{1,\mathcal{B}}^N,\tilde{{\boldsymbol h}}^N)\\
=&\! N\left(\sum \limits_{k=1}^4\lambda_{s_k}+\lambda_{m_1}+\lambda_{m_2}\right)\log(\pi e) \geq 0.\end{aligned}$$
Similarly at Receiver $\tilde{1}$, $$\begin{aligned}
NR_1 \leq & I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \nonumber\\
&+ N(\lambda_{s_1}+\lambda_{s_4}+\lambda_{m_1})(\log P+o(\log P))+N\epsilon_N. \label{eq:fano1t}\end{aligned}$$
Combining (\[eq:fano1\]) and (\[eq:fano1t\]), we have $$\begin{aligned}
2NR_1 \leq & I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)\!+\! I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \nonumber\\
&+\! 2N(\lambda_{s_1}\!+\!\lambda_{s_4}\!+\!\lambda_{m_1})(\log P\!+\!o(\log P))\!+\! N\epsilon_N.\label{eq:R1}\end{aligned}$$
We can further bound the terms $I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)$ and $I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)$ in (\[eq:R1\]) as follows: $$\begin{aligned}
&I(W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
=& I(W_1;\{Y_{1,z_k}^N\}_{k=1}^4,\! Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+\!I(W_1;Y_{1,i_1}^N,\! Y_{1,i_2}^N|\{Y_{1,z_k}^N\}_{k=1}^4,\! Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N)\nonumber\\
\leq &I(W_1;\{Y_{1,z_k}^N\}_{k=1}^4,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+ h(Y_{1,i_1}^N,\! Y_{1,i_2}^N|\{Y_{1,z_k}^N\}_{k=1}^4,\! Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N)\\
\leq & I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N\!) \!\!+\!\! N(\lambda_{i_1}\!\!+\!\lambda_{i_2})(\log P \!+\! o(\log P))\nonumber \\
&+I(W_1;Y_{1,z_1}^N,\!Y_{1,z_4}^N|Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N) \\
\leq & I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N\!) \!+\!\! N(\lambda_{i_1}\!\!+\!\lambda_{i_2})(\log P \!\!+\!\! o(\log P)) \nonumber \\
&+ h(Y_{1,z_1}^N,\!Y_{1,z_4}^N|Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N)\nonumber\\
&-\!h(Y_{1,z_1}^N,\!Y_{1,z_4}^N|W_1, \!Y_{1,z_2}^N\!, Y_{1,z_3}^N\!, Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N)\\
\leq & N(\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!\lambda_{z_1}\!+\!\lambda_{z_4})(\log P \!+\! o(\log P))\nonumber \\
&+I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N)\nonumber \\ &-h(Y_{1,z_1}^N,Y_{1,z_4}^N|W_1,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N),\label{eq:w1y1b}\end{aligned}$$
and similarly, $$\begin{aligned}
&I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
\leq & N(\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!\lambda_{z_1}\!+\!\lambda_{z_4})(\log P \!+\! o(\log P)) \nonumber \\
&+\! I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N)\nonumber \\
&-h(Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N|W_1,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N).\label{eq:w1ty1b}\end{aligned}$$
Summing up the two bounds in (\[eq:w1y1b\]) and (\[eq:w1ty1b\]), we find $$\begin{aligned}
&I (W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)+ I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)\nonumber\\
\leq & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+2I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N)\nonumber \\
&-h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N|W_1,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N)\\
\leq & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ 2I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\nonumber \\
&-h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\\
\overset{(a)}{\leq} & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ 2I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+ h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N) \nonumber \\
&-h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\nonumber\\
& + h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,W_2,\tilde{{\boldsymbol h}}^N) \nonumber \\
&-h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,W_2,\tilde{{\boldsymbol h}}^N)\\
= & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ 2I (W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+ I(W_2;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\nonumber \\
&-I(W_2;Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\\
= & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ I(W_1,W_2;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
& + I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&- I(W_2;Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\label{eq:EUint3}\\
\leq & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber\\
&+ h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) + I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&- I(W_2;Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N) \\
\leq &N\big[2(\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!\lambda_{z_1}\!+\!\lambda_{z_4})\!+\!\lambda_{z_2}\!+\!\lambda_{z_3}\!+\!\lambda_{b_1}\!+\!\lambda_{b_2}\!+\!\lambda_f\big]\nonumber \\
&(\log P + o(\log P))+I\left(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N\right|\tilde{{\boldsymbol h}}^N) \nonumber \\
&\!\!-\!\! I(W_2;Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{2,z_1}^N,Y_{2,z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N) \nonumber\\
&\!\!+\!\!\underbrace{I(W_2;Y_{2,z_1}^N,Y_{2,z_4}^N|W_1,Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N\!)}_{\leq h\left(Y_{2,z_1}^N|Y_{1,z_1}^N,Y_{\tilde{1},z_1}^N,\tilde{{\boldsymbol h}}^N\right) + h\left(Y_{2,z_4}^N|Y_{1,z_4}^N,Y_{\tilde{1},z_4}^N,\tilde{{\boldsymbol h}}^N\right)},\label{eq:lemma1}\end{aligned}$$ where Step (a) results from the observations: $$\begin{aligned}
&h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,W_2,\tilde{{\boldsymbol h}}^N) \nonumber \\
=& N\left[2(\lambda_{z_1}+\lambda_{z_4})+\lambda_{b_1}+\lambda_{b_2}+\lambda_f\right]\log(\pi e),\\
&h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,W_2,\tilde{{\boldsymbol h}}^N) \nonumber \\
=& N\left[\lambda_{z_2}+\lambda_{z_3}+\lambda_{b_1}+\lambda_{b_2}+\lambda_f\right]\log(\pi e),\end{aligned}$$ and $\lambda_{z_1} +\lambda_{z_4} =\lambda_{z_2}+\lambda_{z_3}$, due to symmetry of the topology variation.
Then by virtue of Lemma 1, (\[eq:lemma1\]) becomes $$\begin{aligned}
&I (W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)+ I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)\nonumber\\
\leq &N\left[2(\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!\lambda_{z_1}\!+\!\lambda_{z_4})\!+\!\lambda_{z_2}\!+\!\lambda_{z_3}\!+\!\lambda_{b_1}\!+\!\lambda_{b_2}\!+\!\lambda_f\right]\nonumber \\
&\left(\log P + o(\log P)\right)+I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N | \tilde{{\boldsymbol h}}^N) \nonumber \\
&-I(W_2;Y_{2,z_1}^N,Y_{2,z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N )+ N o(\log P)\nonumber\\
\overset{(b)}{\leq} & N\left[2(\lambda_{i_1}+\lambda_{i_2})+3(\lambda_{z_1}+\lambda_{z_4})+\lambda_{b_1}+\lambda_{b_2}+\lambda_{f}\right]\nonumber \\
&\left(\log P+o(\log P)\right)+ I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_2,\tilde{{\boldsymbol h}}^N )\nonumber \\
&- I(W_2;Y_{2,z_1}^N, Y_{2,z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N )+No(\log P),\label{eq:w1y1bw1y1t}
\vspace{-2 mm}\end{aligned}$$ where Step (b) is due to the symmetric topology variation and the fact that $W_1$ and $W_2$ are independent. Similarly for Receivers 2 and $\tilde{2}$, we have $$\begin{aligned}
2NR_2
\leq & I(W_2;Y^N_{1,\mathcal{E}},Y_{2,\mathcal{B}\backslash\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N )+ I(W_2;Y^N_{1,\mathcal{E}},Y_{\tilde{2},\mathcal{B}\backslash\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N )\nonumber\\
& \!+\! 2N(\lambda_{s_2}+\lambda_{s_3}+\lambda_{m_2})(\log P+o(\log P)) \!+\! N\epsilon_N, \label{eq:R2}\end{aligned}$$ where $$\begin{aligned}
I&(W_2;Y^N_{1,\mathcal{E}},Y_{2,\mathcal{B}\backslash\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N )+ I(W_2;Y^N_{1,\mathcal{E}},Y_{\tilde{2},\mathcal{B}\backslash\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N )\nonumber\\
\leq & N\left[2(\lambda_{i_1}+\lambda_{i_2})+3(\lambda_{z_1}+\lambda_{z_4})+\lambda_{b_1}+\lambda_{b_2}+\lambda_{f}\right]\nonumber \\
&\left(\log P+o(\log P)\right) \nonumber + I(W_2;Y_{2,z_1}^N, Y_{2,z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N )\nonumber \\
&- I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_2,\tilde{{\boldsymbol h}}^N )+No(\log P).\label{eq:w2y1e}\end{aligned}$$
Adding both sides of (\[eq:R1\]) and (\[eq:R2\]), and by (\[eq:w1y1bw1y1t\]) and (\[eq:w2y1e\]), we arrive at $$\begin{aligned}
2&N(R_1+R_2)\nonumber \\
\leq & 2N\bigg[2(\lambda_{i_1}+\lambda_{i_2})+3(\lambda_{z_1}+\lambda_{z_4})+\lambda_{b_1}+\lambda_{b_2}+\lambda_{f}\bigg]\nonumber \\
&(\log P+o(\log P))\!+\!No(\log P)\!+\! N \epsilon_N \nonumber\\
& +2N\left(\sum\limits_{k=1}^4 \lambda_{s_k} + \lambda_{m_1} + \lambda_{m_2}\right)(\log P+o(\log P)) \nonumber\\
=& 2N\left(1+\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4}\right)(\log P+o(\log P))\nonumber \\
&+No(\log P)+N \epsilon_N\label{eq:sum1}.\end{aligned}$$
The last equality is a result of evaluating the total probability of topological states: $\sum\limits_{k=1}^4 \lambda_{s_k}+\sum\limits_{k=1}^2(\lambda_{m_k}+\lambda_{b_k}+\lambda_{i_k})+2(\lambda_{z_1}+\lambda_{z_4})+\lambda_f =1$. Dividing both sides of (\[eq:sum1\]) by $2N \log P$ and letting both $N$ and $P$ go to infinity, we obtain the sum-DoF upper bound in (\[eq:bond1\]).
In [@chen14] and [@tandon2013], similar techniques were utilized to upper bound the DoF in the context of the fading MISO broadcast channel with alternating (varying) CSIT. $\hfill \square$
Ergodic sum-capacity of $2 \times 2$ Gaussian X-channel with symmetric topology variation
=========================================================================================
In this section, we explore the performance of our CAT-based scheme at finite SNR. We start with the proof of Theorem 3, by establishing an ergodically achievable sum-rate for the $2 \times 2$ Gaussian X-channel with symmetric topology variation. We then demonstrate the approximate optimality of our coding scheme by proving Corollary 1.
Achievability (Proof of Theorem 3)
----------------------------------
The achievable scheme follows the approach described in Section III, which maximizes the sum-DoF. Before proceeding to present the ergodic sum-rate achieved by individual topologies and coding opportunities, we recall the assumption that channel coefficients are i.i.d. across space and time, transmitters know the network topology but not the actual values of the channel coefficients, and the receivers know both topology and all of the channel coefficients perfectly. The rate expressions $A$, $B$, $C$, $D$ are as defined in the statement of Theorem 3.
### Topologies that jointly achieve $A$
Across all instances of Topologies $s_1$, $s_4$, $b_1$, $b_2$, $s_2$ and $s_3$, using point-to-point codes, one of the two transmitters communicates to Receiver 1 in $s_1$, $s_4$, $b_1$ and $b_2$, and to Receiver $2$ in $s_2$ and $s_3$, achieving an ergodic rate equal to $A$. In all instances of Topologies $i_1$ and $i_2$, each transmitter communicates with the receiver towards which it has a communication link, achieving an ergodic sum-rate $2A$.
### Topologies that jointly achieve $B$
Across all instances of Topologies $m_1$ and $z_1$ (or $z_4$), and/or $f$ that were surplus from creating coding opportunity 1 or 2, transmitters use Gaussian codebooks with rates dictated by the ergodic sum-capacity of the fading multiple access channel from the two transmitters to Receiver 1, achieving a sum-rate $B$. Similarly, $B$ can be achieved across instances of Topology $m_2$ and surplus $z_2$ (or $z_3$).
### Coding Opportunity 1 achieves $C$
Over instances of combination $\{z_1,z_2\}$ (same for $\{z_3,z_4\}$), we encode 3 sub-messages, each using a Gaussian codebook, and employ the two-phase scheme described in Section \[sec:coding1\].
The received signals in the first time slot are: $$\begin{aligned}
Y_1(1) &= h_{11}(1)X_1(1)+h_{12}(1)X_2(1)+Z_1(1), \\
Y_2(1) &=h_{22}(1)X_2(1)+Z_2(1).
\end{aligned}$$
The received signals in the second time slot are: $$\begin{aligned}
Y_1(2) &=\! h_{12}(2)X_2(1)\!+\!Z_1(2), \\
Y_2(2) &=\! h_{21}(2)X_1(2)\!+\!h_{22}(2)X_2(1)\!+\!Z_2(2).
\end{aligned}$$
With many instances of $\left[Y_1(1) \;\; Y_1(2)\right]^T$, Receiver 1 uses MIMO decoding of the codewords associated with $X_1(1)$ and $X_2(1)$ to achieve a sum-rate of $\mathbb{E}\left\{\log\left(\left|\mathbf{I} + P \mathbf{V} \mathbf{V}^H\right|\right)\right\}$.
On the other hand, in *each* instance of coding opportunity 1, Receiver 2 uses $Y_2(1)$ to cancel $X_2(1)$ from $Y_2(2)$, resulting in the residual signal ${h_{21}(2)X_1(2) \!+\! Z_2(2)\!-\!\dfrac{h_{22}(2)}{h_{22}(1)}Z_2(1)}$. Using the residual signals, Receiver 2 can decode the sub-message associated with $X_1(2)$ at rate $\mathbb{E}\left\{\log\left(1+\dfrac{|h_{21}(2)|^2 P}{1+|h_{22}(2)|^2\slash |h_{22}(1)|^2}\right)\right\}$.
Adding up achieved rates at two receivers, we obtain the sum-rate of $C$ bits per instance of coding opportunity 1, as desired.
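The two rate terms that make up $C$ can be estimated by Monte Carlo simulation; the sketch below assumes i.i.d. Rayleigh-faded (circularly-symmetric complex Gaussian) coefficients and assumes that the matrix $\mathbf{V}$ of Theorem 3 is the effective $2\times 2$ channel from $(X_1(1),X_2(1))$ to $\left[Y_1(1)\;Y_1(2)\right]^T$; this form of $\mathbf{V}$ is inferred from the scheme above, since Theorem 3 is stated elsewhere.

```python
import numpy as np

rng = np.random.default_rng(3)
P, trials = 100.0, 100_000

def cn(size):  # i.i.d. CN(0,1) coefficients (Rayleigh-faded magnitudes)
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

h11_1, h12_1, h22_1 = cn(trials), cn(trials), cn(trials)   # slot 1 (Topology z1)
h12_2, h21_2, h22_2 = cn(trials), cn(trials), cn(trials)   # slot 2 (Topology z2)

# Assumed effective matrix from (X1(1), X2(1)) to [Y1(1); Y1(2)] at Receiver 1.
V = np.zeros((trials, 2, 2), dtype=complex)
V[:, 0, 0], V[:, 0, 1], V[:, 1, 1] = h11_1, h12_1, h12_2
r1 = np.mean(np.log2(np.real(np.linalg.det(
    np.eye(2) + P * V @ np.conj(np.swapaxes(V, 1, 2))))))

# Receiver 2: interference cancellation inflates the effective noise variance.
snr2 = np.abs(h21_2) ** 2 * P / (1 + np.abs(h22_2) ** 2 / np.abs(h22_1) ** 2)
r2 = np.mean(np.log2(1 + snr2))

print(f"C estimate: {r1 + r2:.2f} bits per coding-opportunity-1 instance")
```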
### Coding Opportunity 2 achieves $D$
Over the instances of topology combination $\{z_2,z_4,f\}$ (same for $\{z_1,z_3,f\}$), we create random Gaussian codebooks for 4 sub-messages, and apply the three-phase scheme in Section \[sec:coding2\].
The received signals in the first time slot are: $$\begin{aligned}
Y_1(1) &= h_{12}(1)X_2(1)\!+\!Z_1(1), \\
Y_2(1) &= h_{21}(1)X_1(1)\!+\! h_{22}(1)X_2(1)\!+\!Z_2(1).
\end{aligned}$$ The received signals in the second time slot are: $$\begin{aligned}
Y_1(2) &=\! h_{11}(2)X_1(2)\!+\!h_{12}(2)X_2(2)\!+\!Z_1(2), \\
Y_2(2) &=\! h_{21}(2)X_1(2)\!+\!Z_2(2).
\end{aligned}$$ The received signals in the third time slot are: $$\begin{aligned}
Y_1(3) &=\! h_{11}(3)X_1(2) \!+\! h_{12}(3)X_2(1)\!+\!Z_1(3), \\
Y_2(3) &=\! h_{21}(3)X_1(2) \!+\! h_{22}(3)X_2(1)\!+\!Z_2(3).
\end{aligned}$$
In each instance of coding opportunity 2, Receiver 1 uses $Y_1(1)$ to completely cancel $X_2(1)$ from $Y_1(3)$, yielding a residual signal ${\tilde{Y}_1(3) = h_{11}(3)X_1(2)+Z_1(3)\!-\!\frac{h_{12}(3)}{h_{12}(1)}Z_1(1)}$. Performing MIMO decoding on many instances of $\left[Y_1(2) \; \; \tilde{Y}_1(3)\right]^T$, Receiver 1 can decode the sub-messages associated with $X_1(2)$ and $X_2(2)$ at rate $r \!=\!\mathbb{E}\left\{\!\log\!\!\left(\!\frac{\Bigg|\begin{bmatrix}1 & 0\\ 0 & 1+|h_{12}(3)|^2 \slash |h_{12}(1)|^2
\end{bmatrix}\!+\! P \mathbf{U}\mathbf{U}^H\Bigg|}{1+|h_{12}(3)|^2 \slash |h_{12}(1)|^2}\right)\!\!\right\}$.
Receiver 2 uses $Y_2(2)$ to completely cancel $X_1(2)$ from $Y_2(3)$, and decodes the sub-messages associated with $X_1(1)$ and $X_2(1)$ at the same ergodic rate as Receiver 1.
Therefore, an ergodic sum-rate $2r \!=\! D$ bits per coding opportunity 2 instance is achieved.
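As for coding opportunity 1, the rate $r$ (and hence $D=2r$) can be estimated by Monte Carlo simulation; the sketch below assumes i.i.d. Rayleigh-faded coefficients and assumes that $\mathbf{U}$ of Theorem 3 is the effective matrix from $(X_1(2),X_2(2))$ to $\left[Y_1(2)\;\tilde{Y}_1(3)\right]^T$ with noise covariance $\text{diag}(1, 1+|h_{12}(3)|^2/|h_{12}(1)|^2)$; again, this form is inferred from the scheme above rather than quoted from Theorem 3.

```python
import numpy as np

rng = np.random.default_rng(4)
P, trials = 100.0, 100_000

def cn(size):
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

h12_1 = cn(trials)                        # slot 1 (Topology z2)
h11_2, h12_2 = cn(trials), cn(trials)     # slot 2 (Topology z4)
h11_3, h12_3 = cn(trials), cn(trials)     # slot 3 (Topology f)

# Assumed effective matrix from (X1(2), X2(2)) to [Y1(2); Y~1(3)] at Receiver 1.
U = np.zeros((trials, 2, 2), dtype=complex)
U[:, 0, 0], U[:, 0, 1], U[:, 1, 0] = h11_2, h12_2, h11_3

g = np.abs(h12_3) ** 2 / np.abs(h12_1) ** 2     # noise inflation from cancelling X2(1)
K = np.zeros((trials, 2, 2))
K[:, 0, 0], K[:, 1, 1] = 1.0, 1.0 + g           # effective noise covariance diag(1, 1+g)

M = K + P * U @ np.conj(np.swapaxes(U, 1, 2))
r = np.mean(np.log2(np.real(np.linalg.det(M)) / (1.0 + g)))
print(f"r estimate: {r:.2f} bits, so D = 2r ~ {2 * r:.2f} per opportunity-2 instance")
```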
### Overall Ergodic Sum-Rate {#overall-ergodic-sum-rate .unnumbered}
In $N$ time slots, our proposed coding scheme creates $\text{min}\{\lambda_{z_1},\lambda_{z_2}\}N + \text{min}\{\lambda_{z_3},\lambda_{z_4}\}N$ instances of coding opportunity 1, and $\text{min}\{|\lambda_{z_1}-\lambda_{z_2}|,\lambda_f\}N$ instances of coding opportunity 2. Letting $\phi \overset{\Delta}{=}\text{min}\{|\lambda_{z_1}-\lambda_{z_2}|,\lambda_f\}$, we also have $\left(|\lambda_{z_1}-\lambda_{z_2}|-\phi\right)N$ instances of Topology $z_1$ (or $z_2$), $\left(|\lambda_{z_3}-\lambda_{z_4}|-\phi\right)N$ instances of Topology $z_3$ (or $z_4$), and $(\lambda_f-\phi)N$ instances of Topology $f$ surplus from creating the two coding opportunities. Therefore, applying the coding scheme described above, we achieve the ergodic sum-rate $R_{\Sigma}$: $$\begin{aligned}
R_{\Sigma} =& A\left[\sum \limits_{k=1}^4 \lambda_{s_k} + 2(\lambda_{i_1}+\lambda_{i_2})+\lambda_{b_1}+\lambda_{b_2}\right] \nonumber \\
&+ \!B \left[\lambda_{m_1}\!+\!\lambda_{m_2}\!+\!|\lambda_{z_1}\!-\!\lambda_{z_2}|\!+\!|\lambda_{z_3}\!-\!\lambda_{z_4}|\!+\!\lambda_f -3\phi \right] \nonumber\\
&+ \!C\left[\text{min}\{\lambda_{z_1},\lambda_{z_2}\} + \text{min}\{\lambda_{z_3},\lambda_{z_4}\}\right] +D\phi.\label{eq:ergodic_rate}\end{aligned}$$
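The bookkeeping in (\[eq:ergodic\_rate\]) can be packaged as a small helper; the code below is a hypothetical sketch in which $A$, $B$, $C$, $D$ are supplied as precomputed numbers (the values in the example call are placeholders, not rates computed from Theorem 3).

```python
# Hypothetical helper assembling R_Sigma from the topology fractions and the
# precomputed ergodic rates A, B, C, D of Theorem 3.
def ergodic_sum_rate(lam, A, B, C, D):
    phi = min(abs(lam['z1'] - lam['z2']), lam['f'])
    w_A = (lam['s1'] + lam['s2'] + lam['s3'] + lam['s4']
           + 2 * (lam['i1'] + lam['i2']) + lam['b1'] + lam['b2'])
    w_B = (lam['m1'] + lam['m2'] + abs(lam['z1'] - lam['z2'])
           + abs(lam['z3'] - lam['z4']) + lam['f'] - 3 * phi)
    w_C = min(lam['z1'], lam['z2']) + min(lam['z3'], lam['z4'])
    return A * w_A + B * w_B + C * w_C + D * phi

# Example call with placeholder rate values.
lam = dict(s1=.05, s2=.05, s3=.05, s4=.05, m1=.05, m2=.05, b1=.05, b2=.05,
           z1=.05, z2=.10, z3=.05, z4=.10, i1=.10, i2=.10, f=.10)
print(ergodic_sum_rate(lam, A=5.0, B=6.0, C=10.0, D=13.0))
```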
Proof of Corollary 1
--------------------
The proof proceeds in three steps. First, we develop 3 upper bounds on the ergodic sum-capacity. Second, we compare the upper bounds with the achievable rate $R_{\Sigma}$ in (\[eq:ergodic\_rate\]) to identify the gaps between them. Lastly, we evaluate the gaps with uniform-phase, Rayleigh and Rice faded channel coefficients respectively.
### Upper Bounds
We begin by proving 3 upper bounds $U_1$, $U_2$ and $U_3$ on the ergodic sum-capacity $C_{\Sigma}$ such that $C_{\Sigma} \leq \text{min}\{U_1,U_2,U_3\}$. The derivations of $U_1$, $U_2$ and $U_3$ are carried out along the same line of reasoning as in the proofs of (\[eq:bond1\]), (\[eq:bond2\]) and (\[eq:bond3\]) respectively in the converse for Theorem 1. The key difference is that we carefully evaluate or bound all $o(\log P)$ terms, and then take expectation over random channel coefficients to arrive at: $$\begin{aligned}
U_1 =& A\left[\sum \limits_{k=1}^4 \lambda_{s_k} + 2(\lambda_{i_1}+\lambda_{i_2})+ \lambda_{b_1} +\lambda_{b_2}+\lambda_{z_1}+\lambda_{z_4}\right]\nonumber \\
&+B\left[\lambda_{m_1} + \lambda_{m_2} + \sum \limits_{k=1}^4 \lambda_{z_k} + \lambda_f\right] + E(\lambda_{z_1}+\lambda_{z_4}) \nonumber \\
&+\log(\pi e)\Bigg[\sum\limits_{k=1}^4{\lambda_{s_k}}+\lambda_{m_1}+\lambda_{m_2} \nonumber \\
&+\! 2(\lambda_{i_1}+\lambda_{i_2})+2(\lambda_{z_1}+\lambda_{z_4})\Bigg], \label{eq:EU1}\\
U_2 = &A\left[\sum \limits_{k=1}^4 \lambda_{s_k} + 2(\lambda_{i_1}+\lambda_{i_2})+ \lambda_{b_1} +\lambda_{b_2}+\lambda_{z_1}+\lambda_{z_3} \right]\nonumber \\
&+ B\left[\lambda_{m_1} + \lambda_{m_2} + \sum \limits_{k=1}^4 \lambda_{z_k} +2\lambda_f\right] + F(\lambda_{z_2}+\lambda_{z_4}) \nonumber \\
&+ \log(\pi e) \left[1+\lambda_{i_1}\!+\! \lambda_{i_2}\!+\! 2(\lambda_{z_1}\!+\! \lambda_{z_4}) \!+\! \lambda_f\right], \label{eq:EU2}\\
U_3 =& A\left[\sum \limits_{k=1}^4 \lambda_{s_k} + 2(\lambda_{i_1}+\lambda_{i_2})+ \lambda_{b_1} +\lambda_{b_2}+\lambda_{z_2}+\lambda_{z_4} \right] \nonumber \\
&+ B\left[\lambda_{m_1} + \lambda_{m_2} + \sum \limits_{k=1}^4 \lambda_{z_k} +2\lambda_f\right] + F(\lambda_{z_1}+\lambda_{z_3})\nonumber \\
&+ \log(\pi e) \left[1+\lambda_{i_1}+\lambda_{i_2}+2(\lambda_{z_1}\!+\!\lambda_{z_4}) \!+\! \lambda_f\right],\label{eq:EU3}\\
\text{where}\nonumber \\
E \!\!=\!\mathbb{E}\!&\left\{\!\log\!\left(\!\! 1\!+\!\!\dfrac{|h_{22}h_{\tilde{1}1}|^2\!+\!|h_{11}h_{22}|^2}{|h_{11}h_{\tilde{1}2}|^2\!+\!|h_{12}h_{\tilde{1}1}|^2\!-\!2Re(h_{11}h_{\tilde{1}1}^*h_{\tilde{1}2}h_{12}^*)}\!\right)\!\!\right\}\!\!, \nonumber \\
F \!\!= \!\mathbb{E}\!&\left\{\log\left(1+\dfrac{|h_{12}|^2}{|h_{22}|^2}\right)\right\}, \nonumber\end{aligned}$$ and $(h_{\tilde{1}1},h_{\tilde{1}2})$ is identically distributed as, and independent of, $(h_{11},h_{12})$.
### Gaps Between $R_\Sigma$ and Upper Bounds
Next, we compare $U_1$, $U_2$ and $U_3$ with $R_{\Sigma}$ under the assumption that the topology variation is symmetric ($\lambda_{z_1}+\lambda_{z_4} = \lambda_{z_2}+\lambda_{z_3}$). In particular, we consider two cases: 1) $\lambda_{z_1} =\lambda_{z_2}$, and 2) $\lambda_f \leq |\lambda_{z_1}-\lambda_{z_2}|$.
Case 1): When $\lambda_{z_1}=\lambda_{z_2}$, and thus $\lambda_{z_3}=\lambda_{z_4}$, $$\begin{aligned}
&C_{\Sigma} \!-\! R_{\Sigma} \leq U_1 \!-\! R_{\Sigma} \nonumber \\
= &(A\!+\!2B\!-\!C\!+E)(\lambda_{z_1}\!+\!\lambda_{z_4}) \!+\! \log(\pi e) \Bigg[\sum\limits_{k=1}^4{\lambda_{s_k}}\nonumber \\
&+\lambda_{m_1}+\lambda_{m_2}+2(\lambda_{i_1}+\lambda_{i_2})+2(\lambda_{z_1}+\lambda_{z_4})\Bigg].\label{eq:gap1}\end{aligned}$$
Case 2): $U_2$ and $U_3$ are compared with $R_{\Sigma}$ respectively in each of following sub-cases:
i\. When $\lambda_{z_1}<\lambda_{z_2}$, $\lambda_{z_3}<\lambda_{z_4}$, and $\lambda_f \leq \lambda_{z_2}-\lambda_{z_1}$, $$\begin{aligned}
&C_{\Sigma} \!-\!\! R_{\Sigma}\leq U_2 -R_{\Sigma} \nonumber \\
= & (A+2B-C)\left(\lambda_{z_1}+\lambda_{z_3}\right)+(4B-D)\lambda_f+\!F(\lambda_{z_2}\!+\!\lambda_{z_4}) \nonumber\\
&\!+\!\log(\pi e)\left[1\!+\!\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!2(\lambda_{z_1}\!+\!\lambda_{z_4})\!+\!\lambda_f\right].\label{eq:gap2_1}\end{aligned}$$
ii\. When $\lambda_{z_1}> \lambda_{z_2}$, $\lambda_{z_3} > \lambda_{z_4}$, and $\lambda_f \leq \lambda_{z_1}-\lambda_{z_2}$, $$\begin{aligned}
&C_{\Sigma} \!-\! R_{\Sigma}\leq U_3 -R_{\Sigma} \nonumber \\
= & (A+2B-C)\left(\lambda_{z_2}+\lambda_{z_4}\right)+(4B-D)\lambda_f+ \!F(\lambda_{z_1}\!+\!\lambda_{z_3}) \nonumber \\
&+\log(\pi e)\left[1\!+\!\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!2(\lambda_{z_1}\!+\!\lambda_{z_4})\!+\!\lambda_f\right].\label{eq:gap2_2}\end{aligned}$$
### Gap Evaluation
In the last step, we evaluate the obtained gaps of $C_{\Sigma}-R_{\Sigma}$ with the uniform-phase fading, Rayleigh fading and Rice fading respectively.
*Uniform-Phase Fading*
Because $|h_{ji}|^2=1$ for all $j,i$, $F=1$. We also have $$\begin{aligned}
A+2B-C =& \log(1\!+\!P)\!-\!\log\left(1\!+\!\dfrac{P}{2}\right)\!+\!2\log(1+2P) \nonumber \\
&-\! \log(P^2\!+\!3P\!+\!1) \nonumber\\
=& \log\dfrac{2P+2}{P+2} + \log\dfrac{4P^2+4P+1}{P^2+3P+1} \leq 3, \label{eq:A2BC}\end{aligned}$$ and $$\begin{aligned}
4B-D =& 2\left[2\log(2P+1)-\log\left(\dfrac{P^2}{2}+\dfrac{5P}{2} +1\right)\right] \nonumber \\
=& 2 \log\dfrac{8P^2+8P+2}{P^2+5P+2} \leq 6, \label{eq:4BD}\end{aligned}$$
Since $(h_{11}h_{\tilde{1}1}^*h_{\tilde{1}2}h_{12}^*)$ has magnitude 1 and its phase, mod $2\pi$, is uniformly distributed over $[0,2\pi)$, we have $E = \dfrac{1}{2\pi}\int_0^{2\pi} \log\left( 1+\dfrac{1}{1-\cos \theta} \right)\,d\theta \approx 1.9$.
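This value is easy to reproduce numerically; the short sketch below evaluates the integral with a simple midpoint rule (base-2 logarithm, as used for all rates here).

```python
import numpy as np

# Midpoint-rule evaluation of E for uniform-phase fading (base-2 logarithm).
theta = (np.arange(200_000) + 0.5) * 2 * np.pi / 200_000
E_val = np.mean(np.log2(1 + 1 / (1 - np.cos(theta))))
print(E_val)   # approximately 1.90; F = log2(1 + 1) = 1 since all |h_ji| = 1
```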
When $\lambda_{z_1} = \lambda_{z_2}$, $\lambda_{z_3} = \lambda_{z_4}$, by (\[eq:gap1\]) and (\[eq:A2BC\]), $$\begin{aligned}
C_{\Sigma} \!-\! R_{\Sigma} \leq&(3+1.9)(\lambda_{z_1}\!+\!\lambda_{z_4}) \!+\! \log(\pi e)\Bigg[\sum\limits_{k=1}^4{\lambda_{s_k}} \nonumber \\
&+\lambda_{m_1}+\lambda_{m_2}+2(\lambda_{i_1}+\lambda_{i_2})+2(\lambda_{z_1}+\lambda_{z_4})\Bigg] \nonumber\\
\leq& 4.9(\lambda_{z_1}+\lambda_{z_4})+ \log(\pi e)+\log(\pi e)(\lambda_{i_1}+\lambda_{i_2}) \nonumber\\
=&\log(\pi e) + \frac{4.9}{2}\left[\lambda_{i_1}+\lambda_{i_2}+2(\lambda_{z_1}+\lambda_{z_4})\right]\nonumber \\
&+\left(\log(\pi e)-\frac{4.9}{2}\right)(\lambda_{i_1}+\lambda_{i_2})\nonumber \\
& \leq 2\log(\pi e)\leq 6.2.\end{aligned}$$
When $\lambda_f \leq |\lambda_{z_1}-\lambda_{z_2}|$, we can assume, WLOG, that $\lambda_{z_1} > \lambda_{z_2}$; then by (\[eq:gap2\_2\]), (\[eq:A2BC\]) and (\[eq:4BD\]), $$\begin{aligned}
C_{\Sigma} -R_{\Sigma} \leq & 3(\lambda_{z_2}\!+\!\lambda_{z_4})+6\lambda_f + \lambda_{z_1}+\lambda_{z_3}+2\log(\pi e) \nonumber \\
\leq &4(\lambda_{z_1}+\lambda_{z_3})+6.2 \leq 10.2.\end{aligned}$$
*Rayleigh Fading*
For i.i.d. Rayleigh fading, i.e., $h_{ik}$ are identically and independently Rayleigh distributed with $\mathbb{E}\{h_{ik}\}=0$, $\mathbb{E}\{|h_{ik}|^2\}=1$, for all $i \in \{1,\tilde{1},2\} $ and $k \in \{1,2\}$, we numerically evaluate the expectation of the expression $\log\!\left(\!\! 1\!+\!\dfrac{|h_{22}h_{\tilde{1}1}|^2\!+\!|h_{11}h_{22}|^2}{|h_{11}h_{\tilde{1}2}|^2\!+\!|h_{12}h_{\tilde{1}1}|^2\!-\!2Re(h_{11}h_{\tilde{1}1}^*h_{\tilde{1}2}h_{12}^*)}\!\right)$ to obtain an upper bound on $E$: $E \leq 1.457$. Similarly, we numerically evaluate the expectation of $\log\left(1+\dfrac{|h_{12}|^2}{|h_{22}|^2}\right)$ to obtain an upper bound on $F$: $$F = \int_{0}^{\infty} \!\!\! \int_{0}^{\infty}4xy \: e^{-(x^2+y^2)} \log\left(1+\dfrac{x^2}{y^2}\right) dx dy \leq 1.443.$$ We also numerically find that for arbitrary value of $P$, $A+2B-C \leq 4.35$, and $4B-D\leq 8.71$. From these, we calculate bounds on the gap in the same way as for the uniform-phase fading, and we have $$C_{\Sigma}-R_{\Sigma} \leq \begin{cases} 6.2 \text{ bits/sec/Hz}, & \lambda_{z_1} = \lambda_{z_2}\\
12 \text{ bits/sec/Hz}, & \lambda_f \leq |\lambda_{z_1}-\lambda_{z_2}| \end{cases}.$$
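The constants $E$ and $F$ used above can be cross-checked by Monte Carlo simulation; the sketch below draws i.i.d. $\mathcal{CN}(0,1)$ coefficients (so that their magnitudes are Rayleigh with unit second moment) and estimates both expectations for comparison against the bounds $E \leq 1.457$ and $F \leq 1.443$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000

def cn(size):  # i.i.d. CN(0,1): magnitudes are Rayleigh with unit second moment
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

h11, h12, h22, ht1, ht2 = (cn(n) for _ in range(5))   # ht* stand in for h_{~1,*}

num = np.abs(h22 * ht1) ** 2 + np.abs(h11 * h22) ** 2
den = (np.abs(h11 * ht2) ** 2 + np.abs(h12 * ht1) ** 2
       - 2 * np.real(h11 * np.conj(ht1) * ht2 * np.conj(h12)))  # = |h11*ht2 - h12*ht1|^2
E_est = np.mean(np.log2(1 + num / den))
F_est = np.mean(np.log2(1 + np.abs(h12) ** 2 / np.abs(h22) ** 2))
print(E_est, F_est)   # to be compared against the bounds 1.457 and 1.443
```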
*Rice Fading*
For i.i.d. Rice fading with Rice factor $K_r$, the channel coefficient $h_{ik}$, for all $i \in \{1,\tilde{1},2\} $ and $k \in \{1,2\}$, can be written as $h_{ik} = h_{Re} + jh_{Im}$ where $j^2=-1$, $h_{Re}\sim\mathcal{N}(\sqrt{K_r},\frac{1}{2})$ and $h_{Im}\sim\mathcal{N}(0,\frac{1}{2})$. For $K_r=1$, we numerically evaluate the terms $A+2B-C$, $4B-D$, $E$ and $F$, and find that for arbitrary value of $P$, $A+2B-C$ and $4B-D$ are upper bounded by 4.18 and 8.4 respectively. Also we have that $E \!=\! \mathbb{E} \! \left\{\!\log\!\left(\!\! 1\!\!+\!\!\dfrac{|h_{22}h_{\tilde{1}1}|^2\!+\!|h_{11}h_{22}|^2}{|h_{11}h_{\tilde{1}2}|^2\!+\!|h_{12}h_{\tilde{1}1}|^2\!-\!2Re(h_{11}h_{\tilde{1}1}^*h_{\tilde{1}2}h_{12}^*)}\!\!\right)\!\!\right\} \!\!\leq\! 1.67$, and $F = \mathbb{E}\left\{\log\left(1+\dfrac{|h_{12}|^2}{|h_{22}|^2}\right)\right\} \leq 1.39$. Applying these upper bounds to the derived gaps in (\[eq:gap1\]), (\[eq:gap2\_1\]) and (\[eq:gap2\_2\]), we have for i.i.d. Rice fading with Rice factor $K_r=1$, $$C_{\Sigma}-R_{\Sigma} \leq \begin{cases} 6.2\textup{ bits/sec/Hz}, &\lambda_{z_1}=\lambda_{z_2} \\ 11.77\textup{ bits/sec/Hz}, & \lambda_f \leq |\lambda_{z_1}-\lambda_{z_2}|\end{cases}.$$
We emphasize here that our scheme is particularly well-suited to the high SNR regime since the gap becomes negligible with respect to the rate (i.e., ergodic capacity $\gg$ 10 bits/sec/Hz).
Numerical Results
=================
Having identified two coding opportunities where coding across topologies provides DoF gains over the TDMA scheme, i.e., 50% for coding opportunity 1 and 33.3% for coding opportunity 2, we now ask how much rate gain we can actually obtain from each of the coding opportunities. In the first part of this section, we numerically evaluate the ergodic rate gains for Rayleigh and Rice fading channels in each of the coding opportunities. In the second part, we return to the rover-to-orbiter communication problem, and evaluate the DoF/throughput gains by coding across topologies for a 2-rover, 4-orbiter communication network.
Ergodic Rate Gains of Coding Opportunities
------------------------------------------
We numerically evaluate the ergodic rate gains over TDMA (one transmitter communicates with one receiver in each time slot) in both coding opportunities for the following i.i.d. fading channels: i. Rayleigh fading: $\mathbb{E}\{h_{ij}\}=0$, $\mathbb{E}\{|h_{ij}|^2\}=1$. ii. Rice fading: $\mathbb{E}\{h_{ij}\}=\sqrt{K_r}$, $\mathbb{E}\{|h_{ij}|^2\}=K_r+1$, where $K_r$ denotes the Rice factor.
### Coding Opportunity 1
We assume that half of the topology instances are $z_1$, and the other half are $z_2$. We apply the proposed CAT scheme in Section \[sec:coding1\] to all $\{z_1,z_2\}$ combinations to numerically evaluate the ergodic sum-rate in (\[eq:ergodic\_rate\]), for the case $\lambda_{z_1}=\lambda_{z_2}=\frac{1}{2}$.
We see from Fig. \[fig:co1\_rate\] that a substantial rate gain of at least 40% can be achieved by the proposed CAT scheme over TDMA, for all channel distributions and transmit power constraints. As $P$ increases, the ergodic rate gain converges to the DoF gain of 50%.
### Coding Opportunity 2
We assume that one third of the topology instances are $z_2$, another one third are $z_4$ and the remaining one third are $f$. We apply the proposed CAT scheme in Section \[sec:coding2\] to all $\{z_2,z_4,f\}$ combinations to numerically evaluate the ergodic sum-rate in (\[eq:ergodic\_rate\]), for the case $\lambda_{z_2}=\lambda_{z_4}= \lambda_{f}=\frac{1}{3}$.
We see from Fig. \[fig:co2\_rate\] that a rate gain of at least 22% can be achieved by the proposed CAT scheme over TDMA, for all channel distributions and transmit power constraints. As $P$ increases, the ergodic rate gain converges to the DoF gain of 33.3%.
Furthermore, we find that for both coding opportunities, when the transmit power constraint $P$ is small (e.g., less than 8 dB for coding opportunity 1 and less than 10 dB for coding opportunity 2), coding across topologies is more beneficial for the channels with many reflection/scattering multipath signal components. However, as $P$ increases, the rate gain by coding across topologies is more significant for the channels with a dominant signal component.
Performance Evaluation of a 2-rover, 4-orbiter System
-----------------------------------------------------
For a 2-rover, 4-orbiter communication network, we numerically evaluate the DoFs and throughputs achieved by TDMA and our proposed scheme, to see how much performance gain our scheme can provide in practical settings.
### System Settings {#system-settings .unnumbered}
We use the satellite orbital parameters drawn from MRO and Odyssey. We assume that the four ascending nodes, which specify the longitudes of the orbits of the four orbiters, are uniformly spaced, and the two rovers are placed at the same latitude.
We assume 500 K system temperature and 800 kHz system bandwidth. We employ the free space propagation model for the rover-to-orbiter communication link with path loss exponent 2. Each rover has transmit power of 10 W.
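For orientation, a rough single-link budget under these settings can be computed as below; the carrier frequency, antenna gains and slant range used in this sketch are illustrative assumptions and are not taken from our simulation setup.

```python
import math

# Rough single-link budget (assumed UHF carrier, 0 dBi antennas, 400 km slant range).
k_B  = 1.380649e-23            # Boltzmann constant, J/K
T, B = 500.0, 800e3            # system temperature (K) and bandwidth (Hz)
P_tx = 10.0                    # rover transmit power (W)
f_c  = 401e6                   # assumed carrier frequency (Hz)
d    = 400e3                   # assumed rover-to-orbiter slant range (m)

lam   = 3e8 / f_c
fspl  = (4 * math.pi * d / lam) ** 2        # free-space path loss (exponent 2)
P_rx  = P_tx / fspl                         # received power with 0 dBi antennas
noise = k_B * T * B
snr   = P_rx / noise

print(f"SNR ~ {10 * math.log10(snr):.1f} dB, "
      f"AWGN single-link capacity ~ {B * math.log2(1 + snr) / 1e6:.2f} Mbit/s")
```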
We find that, as shown in Fig. \[fig:DoF\_gain\], the DoF gain achieved by coding across topologies varies around 9% across rover distances ranging from 400 km to 800 km, indicating that approximately 20% of the topology instances participate in one of the coding opportunities.
Fig. \[fig:rate\_gain\] indicates that the proposed CAT scheme achieves at least an 11.6% throughput gain over TDMA for all rover distances. We notice that the throughput gain consists of two parts: 1) the multiple-access rate gain from allowing two rovers to communicate with one orbiter simultaneously, and 2) the rate gain from coding across topologies. When the rover distance increases from 400 km to around 650 km, the throughput gain monotonically decreases even though the DoF gain increases by 1%. This is because the multiple-access rate gain drops more quickly within this distance range. After 650 km, the rate gain from coding across topologies starts to dominate, boosting the overall throughput gain while the multiple-access rate gain keeps decreasing.
Concluding Remarks and Future Work
==================================
In this paper, we modeled the communication problem from Mars rovers to Mars satellites as an X-channel with varying topologies, and conducted a theoretical analysis of the impact of coding across topologies on the DoF and the communication rate. For a $2 \times 2$ X-channel with varying topologies, we characterized the exact sum-DoF when the topology varies symmetrically and derived lower and upper bounds on the sum-DoF for the general asymmetric setting. Further, we numerically demonstrated the DoF and throughput gains provided by our proposed coding scheme over the baseline TDMA scheme that is currently in use. A natural extension of this work is to investigate methods/techniques (by either designing better coding schemes or proving a tighter upper bound) to characterize the exact sum-DoF without the symmetric constraint on the topology variation. Another interesting future direction is to explore the scenario with more than 4 orbiters (which is likely to happen in the future) to identify potential new coding opportunities that further increase the DoF/throughput.
Appendix I\
Proof of upper bounds in (\[eq:bond2\]) and (\[eq:bond3\]) {#appendix-i-proof-of-upper-bounds-in-eqbond2-and-eqbond3 .unnumbered}
==========================================================
Recall that the output signals at Receiver 2 in Topology $a \in \mathcal{E}=\{b_1,b_2,f\}$, $Y_{2,a}^N$, are statistically equivalent to those received at Receiver 1, i.e., $Y_{1,a}^N$, and Receiver 2 decodes $(W_{21},W_{22})$ using ${Y_{2}^N \!\!=\!\! \left(\{Y_{2,s_k}^N\}_{k=1}^4, Y_{2,m_1}^N, Y_{2,m_2}^N,\{Y_{2,z_k}^N\}_{k=1}^4, Y_{2,i_1}^N, Y_{2,i_2}^N, Y_{1,\mathcal{E}}^N\right)}$.
To prove that (\[eq:bond2\]) is an upper bound for the sum-DoF, we first use Fano’s inequality at Receiver 2 and Receiver 1 respectively to establish upper bounds on $(R_{21}+R_{22})$ and $(R_{11}+R_{12})$. Then we simply add up these two bounds to obtain (\[eq:bond2\]) as an upper bound on the sum-DoF.
At Receiver 2, by Fano’s inequality and the fact that $W_{11}$ is independent of $(W_{21},W_{22})$, $$\begin{aligned}
&N (R_{21}+R_{22})\leq I(W_{21},W_{22};Y_2^N|W_{11},{\boldsymbol h}^N)+N\epsilon_N \nonumber\\
= & h(Y_2^N|W_{11},{\boldsymbol h}^N)\!\!-\!\! h(Y_2^N|W_{21},W_{22},W_{11},{\boldsymbol h}^N) \!\!+\!\! N\! \epsilon_N\\
\leq & h(Y_2^N|W_{11},{\boldsymbol h}^N) \!-\! h(Y_{1,b_2}^N|W_{21},W_{22},W_{11},{\boldsymbol h}^N)\nonumber \\
&-\! h(Y_{2,z_2}^N|W_{21},W_{22},W_{11},Y_{1,b_2}^N,{\boldsymbol h}^N) \!+\! N \epsilon_N \label{eq:y2b2z2} \\
\overset{(a)}{=} & h(Y_2^N|W_{11},{\boldsymbol h}^N) - h(Y_{1,b_2}^N|W_{22},{\boldsymbol h}^N)\nonumber \\
&-h(Y_{2,z_2}^N|W_{21},W_{22},W_{11},X_{1,z_2}^N,Y_{1,b_2}^N,{\boldsymbol h}^N)\!+\! N\epsilon_N \\
\overset{(b)}{=} & h(Y_2^N|W_{11},{\boldsymbol h}^N) - h(Y_{1,b_2}^N|W_{22},{\boldsymbol h}^N) \nonumber\\
&-h\left((h_{22}X_2\!+\!Z_2)_{z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N\right) + N \epsilon_N \label{eq:EU2int1}\\
\overset{(c)}{\leq} & h(Y_2^N|W_{11},{\boldsymbol h}^N) - h(Y_{1,b_2}^N|W_{22},{\boldsymbol h}^N)+ No(\log P) \nonumber \\
&- h(Y_{1,z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N) + N\epsilon_N \\
\leq & h(Y_{2,s_2}^N|{\boldsymbol h}^N) \!+\! h(Y_{2,s_3}^N|{\boldsymbol h}^N)\!+\! h(Y_{2,m_2}^N|{\boldsymbol h}^N)\!+\! h(Y_{1,f}^N|{\boldsymbol h}^N) \nonumber \\
&+ \sum\limits_{k=1}^{2} h(Y_{2,i_k}^N|{\boldsymbol h}^N)\!\!+\! h(Y_{1,b_2}^N|{\boldsymbol h}^N)\!+\! \sum\limits_{k \neq 4}h(Y_{2,z_k}^N|{\boldsymbol h}^N) \nonumber \\
&\!+\!\! h(Y_{1,b_1}^N|W_{11},{\boldsymbol h}^N\!)\!\!+\! h(Y_{2,z_4}^N|W_{11},Y_{1,b_1}^N,{\boldsymbol h}^N)\!+\! N\! o(\log P)\nonumber\\
&- h(Y_{1,b_2}^N|W_{22},{\boldsymbol h}^N)\!-\! h(Y_{1,z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N)\!+\! N\epsilon_N, \label{eq:receiver2} \end{aligned}$$
Recall that $Y_{2,z_2}^N$ is the sub-vector of $Y_2^N$ in Topology $z_2$, and here $(h_{22}X_2\!+\!Z_2)_{z_2}^N$ denotes the sub-vector of the vector $\big\{h_{22}(n)X_2(n)+Z_2(n)\big\}_{n=1}^N$ in Topology $z_2$. Steps (a) and (b) result because $X_2^N$ is independent of $(W_{21},W_{11})$, and $X_1^N$ is a function of $(W_{21},W_{11})$.
Step (c) holds because given $(h_{22}X_2\!+\!Z_2)_{z_2}^N$, $$\begin{aligned}
h&\left(Y_{1,z_2}^N|(h_{22}X_2\!+\!Z_2)_{z_2}^N,{\boldsymbol h}^N\right) \nonumber\\
\leq & \sum_{\substack{n \in \text{time slots}\\
\text{in Topology } z_2}} \!\!\!\!h \bigg(Y_1(n)\!-\! \dfrac{h_{12}(n)\left(h_{22}(n)X_2(n)\!+\! Z_2(n)\right)}{h_{22}(n)} \bigg| \nonumber \\
& h_{22}(n)X_2(n) \!+\! Z_2(n),{\boldsymbol h}^N\bigg)\nonumber\\
\leq & \sum_{\substack{n \in \text{time slots}\\
\text{in Topology } z_2}} \!\!\! \mathbb{E}\left\{\log \left[\pi e\left(1+\left|\frac{h_{12}(n)}{h_{22}(n)}\right|^2\right)\right]\right\}\!=\! No(\log P),\label{eq:EU2int3}\end{aligned}$$ and hence, $$\begin{aligned}
-&h\left((h_{22}X_2\!+\!Z_2)_{z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N\right)\nonumber\\
=& h\left(Y_{1,z_2}^N|(h_{22}X_2\!+\!Z_2)_{z_2}^N, W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N\right)\nonumber \\
&-h\left((h_{22}X_2\!+\!Z_2)_{z_2}^N, Y_{1,z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N\right)\nonumber\\
\leq & No(\log P)\!-\! h\left(Y_{1,z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N\right). \label{eq:EU2int2}\end{aligned}$$
Similarly at Receiver 1, $$\begin{aligned}
N &(R_{11}+R_{12}) \nonumber\\
\leq & h(Y_1^N|W_{22},{\boldsymbol h}^N) \!-\! h(Y_{1,b_1}^N|W_{11},W_{12},W_{22},{\boldsymbol h}^N)\nonumber \\
&-\!h(Y_{1,z_4}^N|W_{11},W_{12},W_{22},Y_{1,b_1}^N,{\boldsymbol h}^N)\!+\!\! N\!\epsilon_N \label{eq:y1b1z4}\\
\leq & h(Y_{1,s_1}^N|{\boldsymbol h}^N) \!+\! h(Y_{1,s_4}^N|{\boldsymbol h}^N) \!+\! h(Y_{1,m_1}^N|{\boldsymbol h}^N)\!+\! h(Y_{1,f}^N|{\boldsymbol h}^N) \nonumber \\
&+ \sum\limits_{k=1}^{2} h(Y_{1,i_k}^N|{\boldsymbol h}^N)\!+\! h(Y_{1,b_1}^N|{\boldsymbol h}^N) + \sum\limits_{k \neq 2}h(Y_{1,z_k}^N|{\boldsymbol h}^N) \nonumber \\
&\!+\! h(Y_{1,b_2}^N|W_{22},{\boldsymbol h}^N)\!+\! h(Y_{1,z_2}^N|W_{22},Y_{1,b_2}^N,{\boldsymbol h}^N)\!+\! N \!o(\log P) \nonumber\\
&-h(Y_{1,b_1}^N|W_{11},{\boldsymbol h}^N\!)\!-\! \! h(Y_{2,z_4}^N|W_{11},Y_{1,b_1}^N,{\boldsymbol h}^N)\!+\! N \epsilon_N. \label{eq:receiver1}\end{aligned}$$
Summing up (\[eq:receiver2\]) and (\[eq:receiver1\]), we have $$\begin{aligned}
&N (R_{21}+R_{22}+R_{11}+R_{12})\nonumber\\
\leq& N \!\bigg[\!\sum\limits_{k=1}^{4}(\lambda_{s_k}\!\!+\!\lambda_{z_k}) \!+\! \sum\limits_{k=1}^{2}\left(\lambda_{b_k}\!+\!\lambda_{m_k}\!+\!2\lambda_{i_k}\right)\!+\! \lambda_{z_1} \!\!+\!\lambda_{z_3} \!\!+\!2\lambda_f \!\bigg]\nonumber \\
&(\log P \!+\! o(\log P))\!+\!\! No(\log P)\!+\!\! N \epsilon_N\nonumber\\
=& N(1+\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1} +\lambda_{z_3}+\lambda_{f})(\log P + o(\log P))\nonumber \\
&+ No(\log P)\!+\! N\epsilon_N\label{eq:sumz13}.\end{aligned}$$
Normalizing both sides of (\[eq:sumz13\]) by $N \log(P)$, and letting both $P$ and $N$ go to infinity, we obtain (\[eq:bond2\]).
Replacing $W_{11}$ with $W_{12}$, $Y_{1,b_2}^N$ with $Y_{1,b_1}^N$, and $Y_{2,z_2}^N$ with $Y_{2,z_3}^N$ in (\[eq:y2b2z2\]), we obtain another bound for $N(R_{21}+R_{22})$. Replacing $W_{22}$ with $W_{21}$, $Y_{1,b_1}^N$ with $Y_{1,b_2}^N$, and $Y_{1,z_4}^N$ with $Y_{1,z_1}^N$ in (\[eq:y1b1z4\]), we have another bound for $N(R_{11}+R_{12})$. Adding these two new upper bounds yields the sum-DoF upper bound in (\[eq:bond3\]).
Appendix II\
Sketch of the Proof of Theorem 2 {#appendix-ii-sketch-of-the-proof-of-theorem2 .unnumbered}
================================
Achievability {#achievability .unnumbered}
-------------
The achievable scheme of Theorem 2 for the general $2 \times 2 $ X-channel with varying topologies (not necessarily symmetric) further extends the scheme of the symmetric setting presented in Section III-C to include the following 2 additional cases:
- Case 3: $\lambda_{z_1} \leq \lambda_{z_2}$ and $\lambda_{z_3} > \lambda_{z_4}$,
- Case 4: $\lambda_{z_1} > \lambda_{z_2}$ and $\lambda_{z_3} \leq \lambda_{z_4}$.
Employing the same coding scheme as in Section III-C to exhaustively utilize the instances of coding opportunity 1 and then the instances of coding opportunity 2 for Cases 1 and 2, we can achieve the following sum-DoFs for the general asymmetric setting:
- Case 1: $\lambda_{z_1} \leq \lambda_{z_2}$ and $\lambda_{z_3} \leq \lambda_{z_4}$ $$\begin{aligned}
d_{\Sigma} \geq& 1+\lambda_{i_1}+\lambda_{i_2} \nonumber \\
&+\min\{\lambda_{z_1}+\lambda_{z_4},\lambda_{z_2}+\lambda_{z_3},\lambda_{z_1}+\lambda_{z_3}+\lambda_f\},\end{aligned}$$
- Case 2: $\lambda_{z_1} > \lambda_{z_2}$ and $\lambda_{z_3} > \lambda_{z_4}$ $$\begin{aligned}
d_{\Sigma} \geq& 1+\lambda_{i_1}+\lambda_{i_2} \nonumber \\
&+\min\{\lambda_{z_1}+\lambda_{z_4},\lambda_{z_2}+\lambda_{z_3},\lambda_{z_2}+\lambda_{z_4}+\lambda_f\},\end{aligned}$$
For Case 3 and 4, after all possible instances of coding opportunity 1 are utilized (as in Step i of the scheme in Section III-C), delivering $3\min\{\lambda_{z_1},\lambda_{z_2}\}N+3\min\{\lambda_{z_3},\lambda_{z_4}\}N$ symbols, there is no coding opportunity 2 to take advantage of and all the remaining topologies are coded separately to deliver one symbol at each channel use. Therefore, the achieved sum-DoF is $1+\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4}$ for Case 3 and $1+\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_2}+\lambda_{z_3}$ for Case 4.
Overall, the achieved sum-DoF without the symmetric constraint is: $$\begin{aligned}
&1+\lambda_{i_1}+\lambda_{i_2} \nonumber \\
&+\min\{\lambda_{z_1}\!+\! \lambda_{z_4},\lambda_{z_2}\!+\!\lambda_{z_3},\lambda_{z_1}\!+\!\lambda_{z_3}\!+\!\lambda_f,\lambda_{z_2}\!+\!\lambda_{z_4}\!+\!\lambda_f\}.\end{aligned}$$
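The case analysis above can be condensed into the following Python sketch (a hypothetical helper, not part of the proof), which applies the greedy construction in all four cases and, as a consistency check, agrees with the closed-form expression for randomly drawn topology fractions.

```python
import random

# Greedy construction of the general (possibly asymmetric) lower bound.
def achieved_sum_dof_general(lam):
    co1 = min(lam['z1'], lam['z2']) + min(lam['z3'], lam['z4'])
    d1, d2 = lam['z1'] - lam['z2'], lam['z3'] - lam['z4']
    if d1 <= 0 and d2 <= 0:          # Case 1: surplus z2, z4 paired with f
        co2 = min(-d1, -d2, lam['f'])
    elif d1 > 0 and d2 > 0:          # Case 2: surplus z1, z3 paired with f
        co2 = min(d1, d2, lam['f'])
    else:                            # Cases 3 and 4: no opportunity-2 triples left
        co2 = 0.0
    return 1 + lam['i1'] + lam['i2'] + co1 + co2

def closed_form(lam):
    return 1 + lam['i1'] + lam['i2'] + min(
        lam['z1'] + lam['z4'], lam['z2'] + lam['z3'],
        lam['z1'] + lam['z3'] + lam['f'], lam['z2'] + lam['z4'] + lam['f'])

keys = ['s1', 's2', 's3', 's4', 'm1', 'm2', 'b1', 'b2',
        'z1', 'z2', 'z3', 'z4', 'i1', 'i2', 'f']
for _ in range(1000):
    v = [random.random() for _ in keys]
    lam = {k: x / sum(v) for k, x in zip(keys, v)}
    assert abs(achieved_sum_dof_general(lam) - closed_form(lam)) < 1e-12
print("greedy construction matches the closed-form expression")
```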
Upper Bound {#upper-bound .unnumbered}
-----------
The converse of Theorem 2 includes the following 3 upper bounds on the sum-DoF of the general asymmetric setting:
$$\begin{aligned}
d_{\Sigma} &\leq 1+\lambda_{i_1}+\lambda_{i_2}+ \frac{\lambda_{z_1}+\lambda_{z_2}+\lambda_{z_3}+\lambda_{z_4}}{2}, \label{eq:Abond1}\\
d_{\Sigma} &\leq 1+\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_3}+\lambda_{f},\label{eq:Abond2}\\
d_{\Sigma} &\leq 1+\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_2}+\lambda_{z_4}+\lambda_{f},\label{eq:Abond3}\end{aligned}$$
where (\[eq:Abond2\]) and (\[eq:Abond3\]) are the same as (\[eq:bond2\]) and (\[eq:bond3\]) respectively and hold regardless of whether the topology variation is symmetric (see proofs in Appendix I). Here we demonstrate how the upper bound (\[eq:Abond1\]) is derived.
Following the same steps as proving the upper bound (\[eq:bond1\]) for the symmetric setting until (\[eq:w1ty1b\]), we have for Receivers 1 and $\tilde{1}$: $$\begin{aligned}
&I (W_1;Y_{1,\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)+ I(W_1;Y_{\tilde{1},\mathcal{B}}^N|\tilde{{\boldsymbol h}}^N)\nonumber\\
\leq & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P))\nonumber \\
&+2I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N)\nonumber \\
&-h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N|W_1,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N,\tilde{{\boldsymbol h}}^N)\\
= & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+\! 2I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N\!)\nonumber \\
&+ h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N\!)\nonumber \\
&-h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\\
\leq & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ 2I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+ h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N) \nonumber \\
&-h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\nonumber\\
& + h(Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,W_2,\tilde{{\boldsymbol h}}^N)\nonumber \\
&-h(Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,W_2,\tilde{{\boldsymbol h}}^N)\nonumber\\
= & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ 2I (W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+ I(W_2;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\nonumber \\
&-\! I(W_2;Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N\!)\nonumber\\
\leq & 2N(\lambda_{i_1}+\lambda_{i_2}+\lambda_{z_1}+\lambda_{z_4})(\log P + o(\log P)) \nonumber \\
&+ I(W_1,W_2;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&+I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N) \nonumber \\
&- I(W_2;Y_{1,z_1}^N,Y_{1,z_4}^N,Y_{\tilde{1},z_1}^N,Y_{\tilde{1},z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N)\\
\overset{(a)}{\leq} & N\left[2(\lambda_{i_1}\!+\!\lambda_{i_2}\!+\!\lambda_{z_1}\!+\!\lambda_{z_4})\!+\!\lambda_{z_2}\!+\!\lambda_{z_3}\!+\!\lambda_{b_1}\!+\!\lambda_{b_2}\!+\!\lambda_f\right]\nonumber \\&\left(\log P+o(\log P)\right)+ I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_2,\tilde{{\boldsymbol h}}^N )\nonumber \\
&- I(W_2;Y_{2,z_1}^N, Y_{2,z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N )+No(\log P),\label{eq:Aint1}\end{aligned}$$ where (a) holds following the same derivation from (\[eq:EUint3\]) to (\[eq:w1y1bw1y1t\]) in Section IV except that no symmetric constraint is imposed on the topology variation.
Similarly for Receivers 2 and $\tilde{2}$, we have $$\begin{aligned}
I&(W_2;Y^N_{1,\mathcal{E}},Y_{2,\mathcal{B}\backslash\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N )+ I(W_2;Y^N_{1,\mathcal{E}},Y_{\tilde{2},\mathcal{B}\backslash\mathcal{E}}^N|\tilde{{\boldsymbol h}}^N )\nonumber\\
\leq & N\left[2(\lambda_{i_1}\!+\! \lambda_{i_2}\!+\! \lambda_{z_2}\!+\! \lambda_{z_3})\!+\! \lambda_{z_1}\!+\! \lambda_{z_4}\!+\! \lambda_{b_1}\!+\! \lambda_{b_2}\!+\! \lambda_{f}\right]\nonumber \\
&\left(\log P+o(\log P)\right) \nonumber + I(W_2;Y_{2,z_1}^N, Y_{2,z_4}^N,Y_{1,\mathcal{E}}^N|W_1,\tilde{{\boldsymbol h}}^N ) \nonumber \\
&- I(W_1;Y_{1,z_2}^N,Y_{1,z_3}^N,Y_{1,\mathcal{E}}^N|W_2,\tilde{{\boldsymbol h}}^N )+No(\log P).\label{eq:Aint2}\end{aligned}$$
Adding both sides of (\[eq:R1\]) and (\[eq:R2\]) in Section IV, and by (\[eq:Aint1\]) and (\[eq:Aint2\]), we arrive at $$\begin{aligned}
2&N(R_1+R_2) \nonumber \\
\leq & N\!\left[2 \! \left( \! 2 \lambda_{i_1} + 2\lambda_{i_2} + \sum\limits_{k=1}^4 \lambda_{z_k}+\lambda_{b_1}+\lambda_{b_2}+\lambda_f \! \right) \!+\! \sum\limits_{k=1}^4 \lambda_{z_k}\right] \nonumber \\
&(\log P+o(\log P))\!+\!No(\log P)\!+\! N \epsilon_N\nonumber\\
& +2N\left(\sum\limits_{k=1}^4 \lambda_{s_k} + \lambda_{m_1} + \lambda_{m_2}\right)(\log P+o(\log P)) \nonumber\\
=& N \! \left[2\left(1+\lambda_{i_1}+\lambda_{i_2}\right)+ \sum\limits_{k=1}^4 \lambda_{z_k}\right] \! (\log P+o(\log P))\nonumber \\
& + \! No(\log P)\!+\! N \epsilon_N\label{eq:Aint3}.\end{aligned}$$
Dividing both sides of (\[eq:Aint3\]) by $2N \log P$ and letting both $N$ and $P$ go to infinity, we obtain the sum-DoF upper bound in (\[eq:Abond1\]).
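As a rough numerical illustration (the snippet and the $\lambda$ values in it are ours, purely hypothetical, and not part of the original derivation), the sum-DoF bound just obtained, read off from (\[eq:Aint3\]), is $d_{\Sigma}\leq 1+\lambda_{i_1}+\lambda_{i_2}+\tfrac{1}{2}\sum_{k=1}^{4}\lambda_{z_k}$, which can be compared with the bound in (\[eq:Abond3\]):

```python
# Illustrative only: hypothetical topology fractions (lambda values), not from the paper.
l_i1, l_i2 = 0.10, 0.10
l_z1, l_z2, l_z3, l_z4 = 0.05, 0.05, 0.05, 0.05
l_f = 0.02

# bound read off from (eq:Aint3) after dividing by 2N log P
bound_1 = 1 + l_i1 + l_i2 + (l_z1 + l_z2 + l_z3 + l_z4) / 2
# bound (eq:Abond3)
bound_3 = 1 + l_i1 + l_i2 + l_z1 + l_z3 + l_f
print(bound_1, bound_3, min(bound_1, bound_3))  # 1.3 1.32 1.3 -> the smaller bound is the binding one
```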
Acknowledgement {#acknowledgement .unnumbered}
===============
The authors would like to thank Dr. Dariush Divsalar at JPL for useful discussions.
[^1]: This work was partly supported by JPL R&TD grant RSA-1517455, and NSF grants CAREER 1408639, CCF-1408755, NETS-1419632, EARS-1411244, and ONR award N000141310094. Materials in this paper were presented in part at the IEEE International Symposium on Information Theory, Hong Kong, June 2015.
[^2]: S. Li and A.S. Avestimehr are with the Department of Electrical Engineering, University of Southern California, Los Angeles, CA, 90089, USA (e-mail: [email protected]; [email protected]).
[^3]: This research work was completed while D.T.H. Kao was with the University of Southern California (e-mail: [email protected]). D.T.H. Kao is now with Google Inc.
|
---
abstract: |
The aim of this article is to describe necessary and sufficient conditions for simplicity of Ore extension rings, with an emphasis on differential polynomial rings. We show that a differential polynomial ring, $R[x;\operatorname{id}_R,\delta]$, is simple if and only if its center is a field and $R$ is $\delta$-simple. When $R$ is commutative we note that the centralizer of $R$ in ${R[x;\sigma,\delta]}$ is a maximal commutative subring containing $R$ and, in the case when $\sigma=\operatorname{id}_R$, we show that it intersects every non-zero ideal of ${R[x;\operatorname{id}_R,\delta]}$ non-trivially. Using this we show that if $R$ is $\delta$-simple and maximal commutative in ${R[x;\operatorname{id}_R,\delta]}$, then ${R[x;\operatorname{id}_R,\delta]}$ is simple. We also show that under some conditions on $R$ the converse holds.\
[**Keywords**]{}: Ore extension rings, maximal commutativity, ideals, simplicity\
[**Mathematical Subject classification 2010**]{}: 16S32,16S36,16D25
author:
- 'Johan [Ö]{}inert[^1]'
- 'Johan Richter[^2]'
- 'Sergei D. Silvestrov[^3]'
title: Maximal commutative subrings and simplicity of Ore extensions
---
Introduction
============
A topic of interest in the field of operator algebras is the connection between properties of dynamical systems and algebraic properties of crossed products associated with them. More specifically, the question of when a certain canonical subalgebra is maximal commutative and has the ideal intersection property, i.e. each non-zero ideal of the algebra intersects the subalgebra non-trivially. For a topological dynamical system $(X,\alpha)$ one may define a crossed product C\*-algebra $C(X) \rtimes_{\tilde{\alpha}} {\mathbb{Z}}$ where $\tilde{\alpha}$ is an automorphism of $C(X)$ induced by $\alpha$. It turns out that the property known as topological freeness of the dynamical system is equivalent to $C(X)$ being a maximal commutative subalgebra of $C(X) \rtimes_{\tilde{\alpha}} {\mathbb{Z}}$ and also equivalent to the condition that every non-trivial closed ideal has a non-zero intersection with $C(X)$. An excellent reference for this correspondence is [@TomiyamaBook]. For analogues, extensions and applications of this theory in the study of dynamical systems, harmonic analysis, quantum field theory, string theory, integrable systems, fractals and wavelets see [@ArchbordSpielberg; @CarlsenSS; @DutkayJorg3; @DutkayJorg4; @MR2195591ExelVershik; @MACbook3; @OstSam-book; @TomiyamaBook].
For any class of graded rings, including gradings given by semigroups or even filtered rings (e.g. Ore extensions), it makes sense to ask whether the ideal intersection property is related to maximal commutativity of the degree zero component. For crossed product-like structures, where one has a natural action, it further makes sense to ask how the above mentioned properties of the degree zero component are related to properties of the action.
These questions have been considered recently for algebraic crossed products and Banach algebra crossed products, both in the traditional context of crossed products by groups as well as generalizations to graded rings, crossed products by groupoids and general categories in [@deJeu; @LundstromOinert; @OinertSimplegroupgradedrings; @OinertLundstrom; @OinertLundstrom2; @OinertSS1; @OinertSS2; @OinertSS3; @OinertSS4; @OinertSS5; @SSD1; @SSD2; @SSD3; @SvT].
Ore extensions constitute an important class of rings, appearing in extensions of differential calculus, in non-commutative geometry, in quantum groups and algebras and as a uniting framework for many algebras appearing in physics and engineering models. An Ore extension of $R$ is an overring with a generator $x$ satisfying $xr=\sigma(r)x+\delta(r)$ for $r \in R$ for some endomorphism $\sigma$ and a $\sigma$-derivation $\delta$.
This article aims at studying the centralizer of the coefficient subring of an Ore extension, investigating conditions for the simplicity of Ore extensions and demonstrating the connections between these two topics.
Necessary and sufficient conditions for a differential polynomial ring (an Ore extension with $\sigma = \operatorname{id}_R$) to be simple have been studied before. An early paper by Jacobson [@Jacobson] studies the case when $R$ is a division ring of characteristic zero. His results are generalized in the textbook [@CozzensFaith Chapter 3] in which Cozzens and Faith prove that if $R$ is a ${\mathbb{Q}}$-algebra and $\delta$ a derivation on $R$, then $R[x;\operatorname{id}_R, \delta]$ is simple if and only if $\delta$ is a so-called outer derivation and the only ideals invariant under $\delta$ are $\{0\}$ and $R$ itself. In his PhD thesis [@Jordan], Jordan shows that if $R$ is a ring of characteristic zero with a derivation $\delta$, then $R[x;\operatorname{id}_R,\delta]$ is simple if and only if $R$ has no non-trivial $\delta$-invariant ideals and $\delta$ is an outer derivation. In [@Jordan], Jordan also shows that if ${R[x;\operatorname{id}_R,\delta]}$ is simple, then $R$ has zero or prime characteristic, and he gives necessary and sufficient conditions for ${R[x;\operatorname{id}_R,\delta]}$ to be simple when $R$ has prime characteristic. (See also [@Jordan2].)
In [@CozzensFaith] Cozzens and Faith also prove that if $R$ is a commutative domain, then $R[x;\operatorname{id}_R,\delta]$ is simple if and only if the subring of constants, $K$, is a field (the constants are the elements in the kernel of the derivation) and $R$ is infinite-dimensional as a vector space over $K$. In [@Goodearl82 Theorem 2.3] Goodearl and Warfield prove that if $R$ is a commutative ring and $\delta$ a derivation on $R$, then $R[x;\operatorname{id}_R,\delta]$ is simple if and only if there are no non-trivial $\delta$-invariant ideals (implying that the ring of constants, $K$, is a field) and $R$ is infinite-dimensional as a vector space over $K$.
McConnell and Sweedler [@McConnell] study simplicity criteria for smash products, a generalization of differential polynomial rings.
Conditions for a general Ore extension to be simple have been studied in [@LamLeroy] by Lam and Leroy. Their Theorem 5.8 says that $S=R[x;\sigma,\delta]$ is non-simple if and only if there is some $R[y;\sigma',0]$ that can be embedded in $S$. See also [@LamLeroy Theorem 4.5] and [@JainLamLeroy Lemma 4.1] for necessary and sufficient conditions for $R[x;\sigma,\delta]$ to be simple. In [@CozzensFaith Chapter 3] Cozzens and Faith provide an example of a simple Ore extension $R[x;\sigma,\delta]$ which is not a differential polynomial ring.
If one has a family of commuting derivations, $\delta_1,\ldots, \delta_n$, one can form a differential polynomial ring in several variables. The articles [@Malm; @Posner; @Voskoglou] consider the question of when such rings are simple. In [@Hauger] a class of rings with a definition similar, but not identical, to the definition of differential polynomial rings in this article is studied, and a characterization of when they are simple is obtained.
None of the articles cited have studied the simplicity of Ore extensions from the perspective pursued in this article. In particular, for differential polynomial rings the connection between maximal commutativity of the coefficient subring and simplicity of the differential polynomial ring (Theorem \[MainResult\]) appears to be new, as well as the result that the centralizer of the center of the coefficient subring has the ideal intersection property (Proposition \[IdealIntersection\]). We also show that a differential polynomial ring is simple if and only if its center is a field and the coefficient subring has no non-trivial ideals invariant under the derivation (Theorem \[MainResult2\]). In Theorem \[SimpleOre\] we note that simple Ore extensions over commutative domains are necessarily differential polynomial rings, and hence can be treated by the preceding characterization.
In Section \[sec:defnotOreExt\], we recall some notation and basic facts about Ore extension rings used throughout the rest of the article. In Section \[sec:centralizerMaxcom\], we describe the centralizer of the coefficient subring in general Ore extension rings and then use this description to provide conditions for maximal commutativity of the coefficient subring. These conditions of maximal commutativity of the coefficient subring are further detailed for two important classes of Ore extensions, the skew polynomial rings and differential polynomial rings in Subsections \[subsec:skewpolringscentermaxcom\] and \[subsec:Differentialpolringscentermaxcom\]. In Section \[sec:simplicity\], we investigate when an Ore extension ring is simple and demonstrate how this is connected to maximal commutativity of the coefficient subring for differential polynomial rings (Subsection \[subsec:simplicityDifferentialPolrings\]).
Ore extensions. Definitions and notations {#sec:defnotOreExt}
=========================================
Throughout this paper all rings are assumed to be unital and associative, and ring morphisms are assumed to respect multiplicative identity elements.
For general references on Ore extensions, see [@Goodearl2004; @McConnellRobson; @RowenRing1]. For the convenience of the reader, we recall the definition. Let $R$ be a ring, $\sigma : R \to R$ a ring endomorphism (not necessarily injective) and $\delta : R \to R$ a $\sigma$-derivation, i.e. $$\delta(a+b)=\delta(a)+\delta(b)
\quad \text{ and } \quad
\delta(ab) = \sigma(a) \delta(b) + \delta(a)b$$ for all $a,b\in R$.
The Ore extension ${R[x;\sigma,\delta]}$ is defined as the ring generated by $R$ and an element $x \notin R$ such that $1,x, x^2, \ldots$ form a basis for ${R[x;\sigma,\delta]}$ as a left $R$-module and all $r \in R$ satisfy $$\label{MultiplicationRule}
x r = \sigma(r)x + \delta(r).$$
Such a ring always exists and is unique up to isomorphism (see [@Goodearl2004]). Since $\sigma(1)=1$ and $\delta(1 \cdot 1) = \sigma(1) \cdot \delta(1) +\delta(1) \cdot 1$, we get that $\delta(1)=0$ and hence $1_R$ will be a multiplicative identity for ${R[x;\sigma,\delta]}$ as well.
If $\sigma= \operatorname{id}_R$, then we say that ${R[x;\operatorname{id}_R,\delta]}$ is a *differential polynomial ring*. If $\delta \equiv 0$, then we say that ${R[x;\sigma,0]}$ is a *skew polynomial ring*. The reader should be aware that throughout the literature on Ore extensions the terminology varies.
An arbitrary non-zero element $P \in {R[x;\sigma,\delta]}$ can be written uniquely as $P = \sum_{i=0}^n a_i x^i$ for some $n \in {\mathbb{Z}}_{\geq 0}$, with $a_i \in R$ for $i \in \{0,1,\ldots, n\}$ and $a_n \neq 0$. The *degree* of $P$ will be defined as $\deg(P):=n$. We set $\deg(0) := -\infty$.
A $\sigma$-derivation $\delta$ is said to be *inner* if there exists some $a \in R$ such that $\delta(r) = ar-\sigma(r)a$ for all $r \in R$. A $\sigma$-derivation that is not inner is called *outer*.
Given a ring $S$ we denote its center by $Z(S)$ and its characteristic by $\operatorname{char}(S)$. The *centralizer* of a subset $T\subseteq S$ is defined as the set of elements of $S$ that commute with every element of $T$. If $T$ is a commutative subring of $S$ and the centralizer of $T$ in $S$ coincides with $T$, then $T$ is said to be a *maximal commutative* subring of $S$.
The centralizer and maximal commutativity of $R$ in ${R[x;\sigma,\delta]}$ {#sec:centralizerMaxcom}
==========================================================================
In this section we shall describe the centralizer of $R$ in the Ore extension ${R[x;\sigma,\delta]}$ and give conditions for when $R$ is a maximal commutative subring of ${R[x;\sigma,\delta]}$. We start by giving a general description of the centralizer and then derive some consequences in particular cases.\
In order to proceed we shall need to introduce some notation. We will define functions $\pi_k^l: R \to R$ for $k,l \in {\mathbb{Z}}$. We define $\pi_0^0 = \operatorname{id}_R$. If $m,n$ are non-negative integers such that $m>n$, or if at least one of $m,n$ is negative, then we define $\pi_m^n \equiv 0$. The remaining cases are defined by induction through the formula $$\pi_m^n= \sigma \circ \pi_{m-1}^{n-1} + \delta \circ \pi_m^{n-1}.$$ These maps turn out to be useful when it comes to writing expressions in a compact form. We find by a straightforward induction that for all $n \in {\mathbb{Z}}_{\geq 0}$ and $r \in R$ we may write $$x^n r = \sum_{m=0}^{n} \pi_m^n(r) x^m.$$
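As a small illustration (the snippet and the test element are ours, not part of the original text), the recursion for $\pi_m^n$ and the identity $x^n r = \sum_m \pi_m^n(r)x^m$ can be checked symbolically for a concrete choice of $(\sigma,\delta)$; the sketch below assumes the $q$-deformed data appearing further down in Example \[qWeyl\] ($R=k[y]$, $\sigma(p)(y)=p(qy)$, $\delta(p)=(\sigma(p)-p)/(\sigma(y)-y)$), and compares the left-hand side, computed by repeated use of the rule $x\cdot(a x^k)=\sigma(a)x^{k+1}+\delta(a)x^k$, with the right-hand side.

```python
# Illustrative sketch (ours): check x^n r = sum_m pi_m^n(r) x^m for the q-deformed data
# R = Q(q)[y], sigma(p)(y) = p(q*y), delta(p) = (sigma(p) - p)/(sigma(y) - y).
import sympy as sp

y, q = sp.symbols('y q')

def sigma(p):
    return sp.expand(p.subs(y, q*y))

def delta(p):
    return sp.cancel((p.subs(y, q*y) - p) / (q*y - y))

def pi(m, n):
    """The map pi_m^n : R -> R, defined by the recursion in the text."""
    if m == 0 and n == 0:
        return lambda r: r
    if m < 0 or n < 0 or m > n:
        return lambda r: sp.Integer(0)
    return lambda r: sp.expand(sigma(pi(m - 1, n - 1)(r)) + delta(pi(m, n - 1)(r)))

def mul_x_left(element):
    """Left-multiply an element (dict: x-power -> coefficient in R) by x,
    using the rule x * (a x^k) = sigma(a) x^{k+1} + delta(a) x^k."""
    out = {}
    for k, a in element.items():
        out[k + 1] = sp.expand(out.get(k + 1, 0) + sigma(a))
        out[k] = sp.expand(out.get(k, 0) + delta(a))
    return out

r, n = y**3 + 2*y, 3
lhs = {0: r}
for _ in range(n):          # compute x^n * r in R[x; sigma, delta]
    lhs = mul_x_left(lhs)
rhs = {m: pi(m, n)(r) for m in range(n + 1)}
print(all(sp.simplify(lhs.get(m, 0) - rhs[m]) == 0 for m in range(n + 1)))   # True
```

Other test elements $r$ and other degrees $n$ give the same agreement, as the straightforward induction predicts.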
\[CommutantR\] $\sum_{i=0}^n a_i x^i \in {R[x;\sigma,\delta]}$ belongs to the centralizer of $R$ in ${R[x;\sigma,\delta]}$ if and only if $$r a_i = \sum_{j=i}^{n} a_j \pi_i^j(r)$$ holds for all $i \in \{0,\ldots,n\}$ and all $r \in R$.
For an arbitrary $r\in R$ we have $r \sum_{i=0}^{n} a_i x^i = \sum_{i=0}^{n} r a_i x^i$ and $$\begin{aligned}
\sum_{i=0}^{n} a_i x^i r = \sum_{i=0}^n a_i \sum_{j=0}^{i} \pi_j^i(r) x^j = \sum_{i=0}^{n} \sum_{j=0}^{n} a_i \pi_j^i(r) x^j = \\
\sum_{j=0}^n \sum_{i=0}^{n} a_i \pi_j^i(r) x^j = \sum_{i=0}^n \sum_{j=0}^n a_j \pi_i^j(r) x^i.\end{aligned}$$ By equating the expressions for the coefficient in front of $x^i$, for $i\in \{0,\ldots,n\}$, the desired conclusion follows.
The above description of the centralizer of R holds in a completely general setting. We shall now use it to obtain conditions for when $R$ is a maximal commutative subring of the Ore extension ring.
Note that if $R$ is commutative, then the centralizer of $R$ in ${R[x;\sigma,\delta]}$ is also commutative, hence a maximal commutative subring of ${R[x;\sigma,\delta]}$. Indeed, take two arbitrary elements $\sum_{i=0}^n c_i x^i$ and $\sum_{j=0}^m d_j x^j$ in the centralizer of $R$ and compute $$\begin{aligned}
\left( \sum_{i=0}^n c_i x^i \right) \left( \sum_{j=0}^m d_j x^j \right) = \sum_{j=0}^m d_j \left( \sum_{i=0}^n c_i x^i \right) x^j = \sum_{j=0}^m \sum_{i=0}^n d_j c_i x^{i+j} =\\
\sum_{i=0}^n \sum_{j=0}^m c_i d_j x^j x^i = \sum_{i=0}^n c_i \left( \sum_{j=0}^m d_j x^j \right) x^i = \left( \sum_{j=0}^m d_j x^j\right ) \left( \sum_{i=0}^n c_i x^i \right). \end{aligned}$$
\[MaxCommWithDelta\] Let $R$ be a commutative ring. If for every $n \in \mathbb{Z}_{>0}$ there is some $r \in R$ such that $\sigma^n(r)-r$ is a regular element, then $R$ is a maximal commutative subring of ${R[x;\sigma,\delta]}$. In particular, if $R$ is a commutative domain and $\sigma$ is of infinite order, then $R$ is maximal commutative.
Suppose that $P = \sum_{k=0}^n a_k x^k$ is an element of degree $n>0$ which commutes with every element of $R$. Let $r$ be an element of $R$ such that $\sigma^n(r)-r$ is regular. By Proposition \[CommutantR\] and the commutativity of $R$, we get that $r a_n = \sigma^n(r) a_n$ or equivalently $(\sigma^n(r)-r)a_n=0$. Since $\sigma^n(r)-r$ is regular this implies $a_n=0$, which is a contradiction. This shows that $R$ is a maximal commutative subring of ${R[x;\sigma,\delta]}$.
\[qWeyl\] Let $k$ be an arbitrary field of characteristic zero and let $R:=k[y]$ be the polynomial ring in one indeterminate over $k$.
Define $\sigma(y)=qy$ for some $q\in k \setminus \{0,1\}$. Then for any $p(y) \in k[y]$ we have $\sigma(p(y))=p(qy)$ and $\sigma$ is an automorphism of $R$. Define a map $\delta : R \to R$ by $$\delta(p(y)) = \frac{\sigma(p(y))-p(y)}{\sigma(y)-y} = \frac{p(qy)-p(y)}{qy-y}$$ for $p(y) \in k[y]$. One easily checks that $\delta$ is a well-defined $\sigma$-derivation of $R$. The ring $k[y][x;\sigma, \delta]$ is known as the *$q$-Weyl algebra* or the *$q$-deformed Heisenberg algebra* [@HSbook]. If $q$ is not a root of unity, then by Proposition \[MaxCommWithDelta\], $k[y]$ is maximal commutative. If $q$ is a root of unity of order $n$, then $x^n$ and $y^n$ are central and in particular $R$ is not maximal commutative.
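The claim that $\delta$ is a $\sigma$-derivation is also easy to check symbolically; the following small sketch (ours, purely illustrative, with arbitrarily chosen test polynomials) verifies the identity $\delta(ab)=\sigma(a)\delta(b)+\delta(a)b$.

```python
# Illustration (ours): verify delta(ab) = sigma(a)*delta(b) + delta(a)*b for the q-deformed data
# sigma(p)(y) = p(q*y), delta(p) = (p(q*y) - p(y))/(q*y - y) of Example qWeyl.
import sympy as sp

y, q = sp.symbols('y q')
sigma = lambda p: sp.expand(p.subs(y, q*y))
delta = lambda p: sp.cancel((p.subs(y, q*y) - p) / (q*y - y))

a = 3*y**4 + y      # arbitrary test polynomials
b = y**2 - 5
lhs = delta(sp.expand(a*b))
rhs = sp.expand(sigma(a)*delta(b) + delta(a)*b)
print(sp.simplify(lhs - rhs) == 0)   # True
```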
Example \[WeylAlgebra\] demonstrates that infiniteness of the order of $\sigma$ in Proposition \[MaxCommWithDelta\] is not a necessary condition for $R$ to be a maximal commutative subring.
Skew polynomial rings {#subsec:skewpolringscentermaxcom}
---------------------
Many of the formulas simplify considerably if we take $\delta \equiv 0$, and as a consequence we can say more about maximal commutativity of $R$ in $R[x;\sigma,0]$.
\[MaxCommSkewPol\] Let $R$ be a commutative domain and ${R[x;\sigma,0]}$ a skew polynomial ring. $R$ is a maximal commutative subring of ${R[x;\sigma,0]}$ if and only if $\sigma$ is of infinite order.
One direction is just a special case of Proposition \[MaxCommWithDelta\]. If $n \in {\mathbb{Z}}_{>0}$ is such that $\sigma^n=\operatorname{id}_R$, then the element $x^n$ commutes with each $r \in R$ since $x^n r = \sigma^n(r) x^n = r x^n$.
With the same notation as in Example \[qWeyl\], form the ring $k[y][x;\sigma,0]$. It is known as the *quantum plane*. By Proposition \[MaxCommSkewPol\], $k[y]$ is a maximal commutative subring if and only if $\sigma$ is of infinite order, which is the same as saying that $q$ is not a root of unity. If $q$ is a root of unity of order $n$, then it is easy to see that $x^n$ and $y^n$ will belong to the center, hence $R$ is not a maximal commutative subring.
The following example shows that the conclusion of Proposition \[MaxCommSkewPol\] is no longer valid if one removes the assumption that $R$ is a commutative domain.
\[Ex\_MaxComm\] Let $R$ be the ring $\mathbb{Q}^\mathbb{N}$ of functions from the non-negative integers to the rationals. Define $\sigma : R \to R$ such that, for any $f\in R$, we have $\sigma(f)(0)=f(0)$ and $\sigma(f)(n) = f(n-1)$ if $n>0$. Then $\sigma$ is an injective endomorphism. But $d_0$, the characteristic function of $\{ 0 \}$, satisfies $d_0(n) (\sigma(f)(n)-f(n))=0$ for all $f \in R$ and $n \in {\mathbb{N}}$. Thus, as in the proof of Proposition \[MaxCommSkewPol\], it follows that the element $d_0 x$ of ${R[x;\sigma,0]}$ commutes with every element of $R$.
Differential polynomial rings {#subsec:Differentialpolringscentermaxcom}
-----------------------------
We shall now direct our attention to the case when $\sigma=\operatorname{id}_R$. The following useful lemma appears in [@Goodearl2004 p. 27].
\[CommutationRuleDiffRing\] In ${R[x;\operatorname{id}_R,\delta]}$ we have $$x^n r = \sum_{i=0}^n \binom{n}{i} \delta^{n-i}(r) x^i$$ for any non-negative integer $n$ and any $r\in R$.
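For a concrete sanity check (ours, not from [@Goodearl2004]; the Weyl-algebra data $R=\mathbb{Q}[y]$, $\delta=\frac{d}{dy}$ and the test element are arbitrary choices), the formula of Lemma \[CommutationRuleDiffRing\] can be verified by multiplying by $x$ repeatedly on the left:

```python
# Sanity check (ours, Weyl-algebra data R = Q[y], delta = d/dy) of
# the formula x^n r = sum_i binom(n, i) delta^{n-i}(r) x^i.
import sympy as sp

y = sp.symbols('y')
delta = lambda p: sp.diff(p, y)

def mul_x_left(element):
    # x * (a x^k) = a x^{k+1} + delta(a) x^k, since sigma = id here
    out = {}
    for k, a in element.items():
        out[k + 1] = sp.expand(out.get(k + 1, 0) + a)
        out[k] = sp.expand(out.get(k, 0) + delta(a))
    return out

r, n = y**4 + y, 3
lhs = {0: r}
for _ in range(n):          # compute x^n * r
    lhs = mul_x_left(lhs)
rhs = {i: sp.binomial(n, i) * sp.diff(r, y, n - i) for i in range(n + 1)}
print(all(sp.simplify(lhs.get(i, 0) - rhs[i]) == 0 for i in range(n + 1)))   # True
```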
We will make frequent reference to the following lemma, which is an easy consequence of Lemma \[CommutationRuleDiffRing\].
\[commutantLemma\] Let $q= \sum_{i=0}^n q_i x^i \in {R[x;\operatorname{id}_R,\delta]}$ and $r \in Z(R)$. The following assertions hold:
1. if $n=0$, then $rq-qr=0$;
2. if $n \geq 1$, then $rq-qr$ has degree at most $n-1$ and $(n-1)$:th coefficient $-nq_n \delta(r)$;
3. $xq-qx= \sum_{i=0}^n \delta(q_i) x^i$.
The following proposition gives some sufficient conditions for $R$ to be a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$. Note that in the special case when $R$ is commutative and $\sigma= \operatorname{id}_R$, an outer derivation is the same as a non-zero derivation.
\[MaxCommutativityDiffRing\] Let $R$ be a commutative domain of characteristic zero. If the derivation $\delta$ is non-zero, then $R$ is a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$.
Suppose that $R$ is not a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$. We want to show that $\delta$ is zero. By our assumption, there is some $n \in {\mathbb{Z}}_{>0}$ and some $q = b x^n + a x^{n-1} + [\text{lower terms}]$ with $a,b \in R$ and $b\neq 0$ such that $rq-qr=0$ for all $r\in R$. By Lemma \[commutantLemma\] and the commutativity of $R$, we get $ rq -qr = (-nb\delta(r))x^{n-1} + [\text{lower terms}].$
Hence, for any $r\in R$ we have $nb\delta(r)=0$, which yields $n\delta(r)=0$ since $R$ is a commutative domain and $b\neq 0$, and thus $\delta(r)=0$ since $R$ has characteristic zero.
Let $k$ be a field of characteristic $p >0$ and let $R=k[y]$. If we take $\delta$ to be the usual formal derivative, then we note that $x^p$ is a central element in ${R[x;\operatorname{id}_R,\delta]}$. This shows that the assumption on the characteristic of $R$ in Proposition \[MaxCommutativityDiffRing\] can not be relaxed.
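A quick computational check of this remark (illustrative only; $p=5$ and $r=y^7$ are arbitrary choices of ours): by Lemma \[CommutationRuleDiffRing\], $x^p r - r x^p=\sum_{i<p}\binom{p}{i}\delta^{p-i}(r)x^i$, and every coefficient appearing there is divisible by $p$, so $x^p$ is central in characteristic $p$.

```python
# Illustration (ours; p = 5 and r = y^7 are arbitrary): every lower-order coefficient of
# x^p r - r x^p = sum_{i<p} binom(p, i) delta^{p-i}(r) x^i is divisible by p, since
# p | binom(p, i) for 0 < i < p, and delta^p(r) is a product of p consecutive integers
# times a power of y, hence also divisible by p.
import sympy as sp

y = sp.symbols('y')
p, r = 5, y**7
for i in range(p):
    coeff = sp.binomial(p, i) * sp.diff(r, y, p - i)
    assert all(c % p == 0 for c in sp.Poly(coeff, y).all_coeffs())
print("x^p commutes with r modulo", p)
```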
Simplicity conditions for ${R[x;\sigma,\delta]}$ {#sec:simplicity}
================================================
Now we proceed to the main topic of this article. We investigate when ${R[x;\sigma,\delta]}$ is simple and demonstrate how this is related to maximal commutativity of $R$ in ${R[x;\sigma,\delta]}$.
In any skew polynomial ring $R[x;\sigma,0]$, the ideal generated by $x$ is proper and hence skew polynomial rings can never be simple. In contrast, there exist simple skew Laurent rings (see e.g. [@JordanLaurent]).
\[innerderivationnonsimple\] If $\delta$ is an inner derivation, then ${R[x;\sigma,\delta]}$ is isomorphic to a skew polynomial ring and hence not simple (see [@Goodearl92 Lemma 1.5]).
We are very interested in finding an answer to the following question.
\[question\_injective\] Let $R[x;\sigma,\delta]$ be a general Ore extension ring where $\sigma$ is, a priori, not necessarily injective. Does the following implication always hold? $$R[x;\sigma,\delta] \text{ is a simple ring} \Longrightarrow \sigma \text{ is injective}.$$
So far, we have not been able to find an answer in the general situation. However, it is clear that the implication holds in the particular case when $\delta(\ker \sigma) \subseteq \ker \sigma$, for example when $\sigma$ and $\delta$ commute.
The following unpublished partial answer to the question has been communicated by Steven Deprez (see [@MathOverflowQuestion]).
\[Steven\] Let $R$ be a commutative and reduced ring. If ${R[x;\sigma,\delta]}$ is simple, then $\sigma$ is injective.
Suppose that $\sigma$ is not injective. Take $a \in \ker(\sigma) \setminus \{ 0\}$. Since $R$ is reduced and $a\neq 0$, we have $a^k \neq 0$ for all $k \in {\mathbb{Z}}_{>0}$. Define $I= \{ p \in {R[x;\sigma,\delta]}\mid \exists k \in {\mathbb{Z}}_{>0} : pa^k =0 \}.$ It is clear that $I$ is a left ideal of ${R[x;\sigma,\delta]}$, and a right $R$-module. It is non-zero since it contains $ax-\delta(a)$. Since $a$ is not nilpotent, $I$ does not contain $1$. If we show that $I$ is closed under right multiplication by $x$, then we have shown that it is a non-trivial ideal of ${R[x;\sigma,\delta]}$. Take any $p \in I$ and $k$ such that $pa^k=0$. We compute $$(px)a^{k+1} = p(xa)a^k = p(\sigma(a)x+\delta(a))a^k = p\delta(a)a^k=0.$$ This shows that $px \in I$.
Lemma 1.3 in [@Goodearl92] implies as a special case the following.
\[SigmaDeltaExtension\] If $R$ is a commutative domain, $k$ its field of fractions, $\sigma$ an injective endomorphism of $R$ and $\delta$ a $\sigma$-derivation of $R$, then $\sigma$ and $\delta$ extend uniquely to $k$ as an injective endomorphism and a $\sigma$-derivation, respectively.
Using Proposition \[Steven\] we are able to generalize a result proved by Bavula in [@Bavula], using his technique.
\[SimpleOre\] If $R$ is a commutative domain and ${R[x;\sigma,\delta]}$ is a simple ring, then $\sigma =\operatorname{id}_R$.
By Proposition \[Steven\], $\sigma$ must be injective.
Let $k$ be the field of fractions of $R$. By Lemma \[SigmaDeltaExtension\], $\sigma$ and $\delta$ extend uniquely to $k$. ${R[x;\sigma,\delta]}$ can be seen as a subring of $k[x;\sigma,\delta]$. If $\sigma \neq \operatorname{id}_R$, then there is some $\alpha \in R$ such that $\sigma(\alpha)-\alpha \neq 0$. For every $\beta \in k$ we have $\delta(\alpha \beta) = \delta(\beta \alpha)$. Hence, for every $\beta \in k$ the following three equivalent identities hold. $$\begin{aligned}
\sigma(\alpha) \delta(\beta) +\delta(\alpha)\beta = \sigma(\beta)\delta(\alpha)+\delta(\beta)\alpha \Leftrightarrow (\sigma(\alpha)-\alpha)\delta(\beta) = (\sigma(\beta)-\beta)\delta(\alpha) \\
\Leftrightarrow \delta(\beta) = \frac{\delta(\alpha)}{\sigma(\alpha)-\alpha}(\sigma(\beta)-\beta).\end{aligned}$$ Hence $\delta$ is an inner $\sigma$-derivation of $k$. This implies that $k[x;\sigma,\delta]$ is not simple since it is isomorphic to a skew polynomial ring. Letting $I$ be a non-zero proper ideal of $k[x;\sigma,\delta]$, one can easily check (by clearing denominators) that $I \cap {R[x;\sigma,\delta]}$ is a non-zero proper ideal of ${R[x;\sigma,\delta]}$, which is a contradiction.
An example in [@CozzensFaith Chapter 3] shows that Theorem \[SimpleOre\] need not hold if $R$ is a non-commutative domain.
An ideal $J$ of $R$ is said to be *$\sigma$-$\delta$-invariant* if $\sigma(J)\subseteq J$ and $\delta(J)\subseteq J$. If $\{0\}$ and $R$ are the only $\sigma$-$\delta$-invariant ideals of $R$, then $R$ is said to be *$\sigma$-$\delta$-simple*.
The following necessary condition for ${R[x;\sigma,\delta]}$ to be simple is presumably well-known but we have not been able to find it in the existing literature. For the convenience of the reader, we provide a proof.
\[SimpleInvariant\] If ${R[x;\sigma,\delta]}$ is simple, then $R$ is $\sigma$-$\delta$-simple.
Suppose that $R$ is not $\sigma$-$\delta$-simple and let $J$ be a non-trivial $\sigma$-$\delta$-invariant ideal of $R$. Let $A={R[x;\sigma,\delta]}$. Consider the set $I=JA$ consisting of finite sums of elements of the form $ja$ where $j\in J$ and $a\in A$. We claim that $I$ is a non-trivial ideal of $A$, and therefore ${R[x;\sigma,\delta]}$ is not simple.
Indeed, $I$ is clearly a right ideal of $A$, but it is also a left ideal of $A$. To see this, note that for any $r\in R$, $j\in J$ and $a\in A$ we have $rja \in I$ and by the $\sigma$-$\delta$-invariance of $J$ we conclude that $xja = \sigma(j)xa+\delta(j)a \in I$. By repeating this argument we conclude that $I$ is a two-sided ideal of $A$. Furthermore, $I$ is non-zero, since $A$ is unital and $J$ is non-zero, and it is proper: writing each $a\in A$ as $\sum_k r_k x^k$ with $r_k\in R$ shows that every element of $I$ has all of its coefficients in $J$, so if we had $1\in I$ we would get $1 \in J$, which is a contradiction.
One can always, even if ${R[x;\sigma,\delta]}$ is simple, find a (non-zero) left ideal $I$ of ${R[x;\sigma,\delta]}$ such that $I\cap R = \{0\}$. Take some $n \in {\mathbb{Z}}_{>0}$ and let $I$ be the left ideal generated by $1-x^n$. This left ideal clearly has the desired property.
\[CenterField\] Recall that the center of a simple ring is a field.
\[CenterIntersection2\] If $R$ is a domain and $\delta$ is non-zero, then $r \in R$ belongs to $Z({R[x;\sigma,\delta]})$ if and only if the following two assertions hold:
1. $\delta(r)=0$;
2. $r \in Z(R)$.
Since every element of $Z({R[x;\sigma,\delta]})$ commutes with $x$ and with each $r' \in R$, it is clear that conditions (i) and (ii) are necessary for $r \in R$ to belong to $Z({R[x;\sigma,\delta]})$. They are also sufficient once we show that they imply $\sigma(r)=r$, since then $xr=\sigma(r)x+\delta(r)=rx$ and $r$ commutes with both $x$ and $R$.
Now, suppose that (i) and (ii) hold. Since $\delta$ is non-zero there is some $b$ such that $\delta(b) \neq 0$. We compute $\delta(rb)$ and $\delta(br)$ which must be equal since $r \in Z(R)$. A calculation yields $$\begin{aligned}
\delta(br) =& \sigma(b) \delta(r)+\delta(b) r = r \delta(b), \\
\delta(rb) =& \sigma(r) \delta(b) +\delta(r) b = \sigma(r) \delta(b).\end{aligned}$$ So $(\sigma(r)-r)\delta(b) =0$. This implies that $\sigma(r)=r$.
\[SimpleCenter\] Let $R$ be a domain and $\sigma$ injective. The following assertions hold:
1. ${R[x;\sigma,\delta]}\setminus R$ contains no invertible element;
2. if ${R[x;\sigma,\delta]}$ is simple, then the center of ${R[x;\sigma,\delta]}$ is contained in $R$ and consists of those $r \in Z(R)$ such that $\delta(r)=0$.
\(i) This follows by the same argument as in [@McConnellRobson Theorem 1.2.9(i)]. (ii) This follows from (i), Remark \[CenterField\] and Lemma \[CenterIntersection2\], since $\delta$ must be non-zero (otherwise ${R[x;\sigma,\delta]}$ would be a skew polynomial ring, which is never simple).
Differential polynomial rings {#subsec:simplicityDifferentialPolrings}
-----------------------------
We shall now focus on the case when $\sigma = \operatorname{id}_R$.
Note that for a derivation $\delta$ on $R$ we have the *Leibniz rule*: $$\delta^n(rs)=\sum_{i=0}^n \binom{n}{i} \delta^{n-i}(r) \delta^i(s)$$ for $n\in {\mathbb{Z}}_{\geq 0}$ and $r,s\in R$.
\[IdealIntersection\] $I \cap Z(R)' \neq \{ 0\}$ holds for any non-zero ideal $I$ of ${R[x;\operatorname{id}_R,\delta]}$, where $Z(R)'$ denotes the centralizer of $Z(R)$ in ${R[x;\operatorname{id}_R,\delta]}$.
Let $I$ be an arbitrary non-zero ideal of ${R[x;\operatorname{id}_R,\delta]}$. Take $a \in I \setminus \{0\}$ such that $n:=\deg(a)$ is minimal. If $n=0$, then $a\in R\subseteq Z(R)'$ and we are done. Otherwise, if $a$ is of degree $n>0$, it follows from Lemma \[commutantLemma\](ii) that $\deg(ra-ar) < \deg(a)$ for every $r\in Z(R)$. Since $ra-ar \in I$, we conclude by the minimality of $\deg(a)$ that $ra-ar=0$ for every $r\in Z(R)$, i.e. $a\in Z(R)'$. Hence $I \cap Z(R)' \neq \{0\}$.
\[MaxCommIdealIntersection\] If $R$ is a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$, then $I \cap R \neq \{0\}$ holds for any non-zero ideal $I$ of ${R[x;\operatorname{id}_R,\delta]}$.
We have seen that if ${R[x;\operatorname{id}_R,\delta]}$ is a simple ring, then its center is a field and $R$ is $\delta$-simple. These necessary conditions are well-known, see e.g. [@Goodearl2004]. We will now show that they are also sufficient and begin with the following lemma.
\[monic\] Let $S={R[x;\operatorname{id}_R,\delta]}$ be a differential polynomial ring where $R$ is $\delta$-simple. For every element $b \in S \setminus \{0 \}$ we can find an element $b' \in S$ such that:
1. $ b' \in SbS$;
2. $\deg(b') = \deg(b)$;
3. $b'$ has $1$ as its highest degree coefficient.
Let $J$ be an arbitrary ideal of ${R[x;\operatorname{id}_R,\delta]}$ and $n$ an arbitrary non-negative integer. Define the following set $$H_n(J) = \{ a \in R \ | \ \exists \, c_0,c_1,\cdots, c_{n-1} \in R : \ ax^n + \sum_{i=0}^{n-1} c_i x^i \in J \},$$ consisting of the $n$:th degree coefficients of all elements in $J$ of degree *at most* $n$.
Clearly, $H_n(J)$ is an additive subgroup of $R$. Take any $r \in R$. If $ax^n +\sum_{i=0}^{n-1} c_i x^i$ belongs to $J$, then so does $ra x^n +\sum_{i=0}^{n-1} r c_i x^{i}$. Thus, $H_n(J)$ is a left ideal of $R$. Furthermore, if $c = ax^n +\sum_{i=0}^{n-1} c_i x^i$ is an element of $J$ then so is $cr$, and it is not difficult to see that $cr$ has degree at most $n$ and that its $n$:th degree coefficient is $ar$. Thus, $H_n(J)$ is also a right ideal of $R$ and hence an ideal.
We claim that $H_n(J)$ is a $\delta$-invariant ideal. Indeed, take any $a \in H_n(J)$ and a corresponding element $ax^n + \sum_{i=0}^{n-1} c_i x^i \in J$. Then we get $$\begin{aligned}
x(ax^n + \sum_{i=0}^{n-1} c_i x^i) -(ax^n + \sum_{i=0}^{n-1} c_i x^i)x
= \delta(a) x^n + \sum_{i=0}^{n-1} \delta(c_i) x^i \in J.\end{aligned}$$ This implies that $\delta(a) \in H_n(J)$ and that $H_n(J) $ is $\delta$-invariant.
Now, take any $b\in S \setminus \{0\}$ and put $n=\deg(b)$. Let $b_n$ denote the $n$:th degree coefficient of $b$. Put $J=SbS$ and note that $b_n \in H_n(SbS)$. Since $R$ is $\delta$-simple and $H_n(SbS)$ is non-zero we conclude that $H_n(SbS)=R$. Thus, $1 \in H_n(SbS)$ and the proof is finished.
We now show that the assumption that $R$ is $\delta$-simple allows us to reach a stronger conclusion than in Proposition \[IdealIntersection\].
\[CenterIdealIntersection\] Let $S={R[x;\operatorname{id}_R,\delta]}$ be a differential polynomial ring where $R$ is $\delta$-simple. Then $I \cap Z(S) \neq \{ 0\}$ holds for every non-zero ideal $I$ of $S$.
Let $I$ be any non-zero ideal of $S$ and choose a non-zero element $b \in I$ of minimal degree $n$. By Lemma \[monic\] we may assume that its highest degree coefficient is $1$. Let us write $b = x^n +\sum_{i=0}^{n-1} c_i x^i$. Since $b_n=1$ we have that $\deg(rb-br) < \deg(b)$ for all $r$ in $R$. Since $rb-br \in I$ it follows from the minimality of $\deg(b)$ that $rb-br=0$, i.e. $b$ commutes with each element in $R$.
Similarly note that $xb-bx \in I$. By Lemma \[commutantLemma\](iii) and the fact that $\delta(1)=0$ we get $\deg(xb-bx) < \deg(b)$ which implies that $xb-bx=0$. Since $R$ and $x$ generate $S$, $b$ must lie in the center of $S$.
We now obtain the promised characterization of when ${R[x;\operatorname{id}_R,\delta]}$ is simple. In [@Hauger] Hauger obtains a similar result for a class of rings that are similar to, but distinct from, the ones studied in the present article. Hauger’s method of proof is also different from ours.
\[MainResult2\] Let $R$ be a ring and $\delta$ a derivation of $R$. The differential polynomial ring ${R[x;\operatorname{id}_R,\delta]}$ is simple if and only if $R$ is $\delta$-simple and $Z({R[x;\operatorname{id}_R,\delta]})$ is a field.
If $R$ is $\delta$-simple and $Z({R[x;\operatorname{id}_R,\delta]})$ is a field then, by Proposition \[CenterIdealIntersection\], ${R[x;\operatorname{id}_R,\delta]}$ is simple. The converse follows from Proposition \[SimpleInvariant\] and Remark \[CenterField\].
A different sufficient condition for ${R[x;\operatorname{id}_R,\delta]}$ to be simple is given by the following.
\[DeltaMaxCommSimple\] If $R$ is $\delta$-simple and a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$, then ${R[x;\operatorname{id}_R,\delta]}$ is a simple ring.
Let $J$ be an arbitrary non-zero ideal of ${R[x;\operatorname{id}_R,\delta]}$. Using the notation of the proof of Lemma \[monic\] we see that $H_0(J)=J \cap R$. By Corollary \[MaxCommIdealIntersection\] and the proof of Lemma \[monic\], it follows that $H_0(J)$ is a non-zero $\delta$-invariant ideal of $R$. By the assumptions we get $H_0(J)=R$, which shows that $1_{{R[x;\operatorname{id}_R,\delta]}} \in J$. Thus, $J={R[x;\operatorname{id}_R,\delta]}$.
Under the assumptions of Proposition \[DeltaMaxCommSimple\], the center of ${R[x;\operatorname{id}_R,\delta]}$ consists of the constants in $R$.
In the following example we verify the well-known fact that the Weyl algebra is simple as an application of Proposition \[DeltaMaxCommSimple\].
\[WeylAlgebra\] Take $R=k[y]$ for some field $k$ of characteristic zero. Let $\sigma=\operatorname{id}_R$ and define $\delta$ to be the usual formal derivative of polynomials. Then $R[x;\sigma,\delta]$ is the Weyl algebra. It is easy to see that $k[y]$ is $\delta$-simple and a maximal commutative subring of the Weyl algebra and thus ${R[x;\sigma,\delta]}$ is simple by Proposition \[DeltaMaxCommSimple\].
Maximal commutativity of $R$ in ${R[x;\operatorname{id}_R,\delta]}$ does not imply $\delta$-simplicity of $R$, as demonstrated by the following example.
Let $k$ be a field of characteristic zero and take $R=k[y]$. Define $\delta$ to be the unique derivation on $R$ satisfying $\delta(y)=y$ and $\delta(c)=0$ for $c\in k$. No element outside of $R$ commutes with $y$ and thus $R$ is a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$. However, $R$ is not $\delta$-simple since $Ry$ is a proper $\delta$-invariant ideal of $R$.
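For a concrete contrast of the two examples above (this check is ours and only illustrates the definition of $\delta$-invariance; it is not a proof of $\delta$-simplicity), one can test whether the proper ideal $(y)\subset k[y]$ is carried into itself by the two derivations:

```python
# Illustration only (ours): membership of delta(y) in the ideal (y) = y*k[y] for the two derivations above.
import sympy as sp

y = sp.symbols('y')
in_ideal_y = lambda p: sp.expand(p).subs(y, 0) == 0   # a polynomial lies in (y) iff it vanishes at y = 0

delta_weyl = lambda p: sp.diff(p, y)        # delta = d/dy, as in Example WeylAlgebra
delta_invariant = lambda p: y*sp.diff(p, y) # the derivation with delta(y) = y from the example above

print(in_ideal_y(delta_weyl(y)))       # False: d/dy(y) = 1 is not in (y), so (y) is not delta-invariant
print(in_ideal_y(delta_invariant(y)))  # True:  delta(y) = y lies in (y), so (y) is delta-invariant
```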
The following lemma appears as [@Jordan Lemma 4.1.3] and also follows from Proposition \[SimpleInvariant\] and Remark \[innerderivationnonsimple\].
\[SimpleDiffRing\] If ${R[x;\operatorname{id}_R,\delta]}$ is simple, then $R$ is $\delta$-simple and $\delta$ is outer.
\[cor\_deltasimple2\] Let $R$ be a commutative domain of characteristic zero. If ${R[x;\operatorname{id}_R,\delta]}$ is simple, then $R$ is a maximal commutative subring of ${R[x;\operatorname{id}_R,\delta]}$.
This follows from Lemma \[SimpleDiffRing\] and Proposition \[MaxCommutativityDiffRing\].
The following proposition follows from [@Jordan Theorem 4.1.4].
\[EquivOreGen\] Let $R$ be a commutative ring that is torsion-free as a module over ${\mathbb{Z}}$. The following assertions are equivalent:
1. $R[x;\operatorname{id}_R, \delta]$ is a simple ring;
2. $R$ is $\delta$-simple and $\delta$ is non-zero.
(i)$\Rightarrow$(ii): This follows from Lemma \[SimpleDiffRing\].\
(ii)$\Rightarrow$(i): Suppose that $R$ is $\delta$-simple and $\delta$ is non-zero. Let $J$ be an arbitrary non-zero ideal of $R[x;\operatorname{id}_R, \delta]$. Choose some $q\in J\setminus \{0\}$ of lowest possible degree, which we denote by $n$. Seeking a contradiction, suppose that $n>0$. By Lemma \[monic\] we may assume that $q$ has $1$ as its highest degree coefficient.
Let $r \in R$ be arbitrary. Lemma \[commutantLemma\](ii) yields $rq -qr = -n\delta(r)x^{n-1} + [\text{lower terms}]$. By minimality of $n$ and the fact that $rq-qr \in J$, we get $rq-qr=0$. Since $R$ is torsion-free, we conclude that $\delta(r)=0$. This is a contradiction and hence $n=0$. Thus, $q=1$ and hence $J=R[x;\operatorname{id}_R,\delta]$.
Example \[ex\_EquivOreGen\] demonstrates that assertion (ii) in Proposition \[EquivOreGen\] does not imply assertion (i) for a general commutative ring $R$.
\[ex\_EquivOreGen\] Let $\mathbb{F}_2$ be the field with two elements and put $R= \mathbb{F}_2 [y]/\langle y^2\rangle $. The ideal of $\mathbb{F}_2 [y]$ generated by $y^2$ is invariant under $\frac{d}{dy}$. Thus, $\frac{d}{dy}$ induces a derivation $\delta$ on $R$ such that $\delta(y)=1$.
$R$ is clearly $\delta$-simple but ${R[x;\operatorname{id}_R,\delta]}$ is not simple. To see this, note that $x^2$ is a central element: by Lemma \[CommutationRuleDiffRing\], $x^2 r = rx^2 + 2\delta(r)x + \delta^2(r) = rx^2$ for every $r\in R$, since $\operatorname{char}(R)=2$ and $\delta^2$ vanishes on $R$. From this it is easy to see that the ideal generated by $x^2$ is proper.
We are now ready to state and prove one of the main results of this article. Note that by Theorem \[SimpleOre\] all simple Ore extensions over commutative domains of characteristic zero are differential polynomial rings.
\[MainResult\] Let $R$ be a commutative domain of characteristic zero. The following assertions are equivalent:
1. $R[x;\operatorname{id}_R, \delta]$ is a simple ring;
2. $R$ is $\delta$-simple and a maximal commutative subring of $R[x;\operatorname{id}_R, \delta]$.
(i)$\Rightarrow$(ii): By Lemma \[SimpleDiffRing\] $\delta$ is non-zero and $R$ is $\delta$-simple. The result now follows from Proposition \[MaxCommutativityDiffRing\].\
(ii)$\Rightarrow$(i): This follows from Proposition \[DeltaMaxCommSimple\].
Acknowledgements {#acknowledgements .unnumbered}
================
The first author was partially supported by The Swedish Research Council (postdoctoral fellowship no. 2010-918) and The Danish National Research Foundation (DNRF) through the Centre for Symmetry and Deformation. The first and second authors were both supported by the Mathematisches Forschungsinstitut Oberwolfach as Oberwolfach Leibniz graduate students.
This research was also supported in part by the Swedish Foundation for International Cooperation in Research and Higher Education (STINT), The Swedish Research Council (grant no. 2007-6338), The Royal Swedish Academy of Sciences, The Crafoord Foundation and the NordForsk Research Network “Operator Algebras and Dynamics” (grant no. 11580). The authors are extremely grateful to David A. Jordan for making the effort to scan [@Jordan] and kindly providing them with a PDF-copy. The authors are also grateful to Steven Deprez, Ken Goodearl and Anna Torstensson for comments on some of the problems and results considered in this paper. The authors would like to thank an anonymous referee for making several helpful comments that led to improvements of this article.
[99]{} Archbold, R. J., Spielberg, J. S., Topologically free actions and ideals in discrete $C^{*}$-dynamical systems, Proc. Edinburgh Math. Soc. (2) [**37**]{}, no. 1, 119–124 (1994)
Bavula, V., The simple modules of the Ore extensions with coefficients from a Dedekind ring. Comm. Algebra 27 (1999), no. 6, 2665–2699.
Carlsen, T. M., Silvestrov, S., On the Exel crossed product of topological covering maps, Acta Appl. Math. **108**, no. 3, 573–583 (2009)
Cozzens, J., Faith, C., Simple Noetherian rings. Cambridge Tracts in Mathematics, No. 69. Cambridge University Press, Cambridge-New York-Melbourne, 1975. xvii+135 pp.
Dutkay, D. E., Jorgensen, P. E. T., Martingales, endomorphisms, and covariant systems of operators in Hilbert space, J. Operator Theory [**58**]{}, no. 2, 269–310 (2007)
Dutkay, D. E., Jorgensen, P. E. T., Silvestrov, S., Decomposition of wavelet representations and Martin boundaries, J. Funct. Anal. **262**, no. 3, 1043–1061 (2012)
Exel, R., Vershik, A., $C^*$-algebras of irreversible dynamical systems, Canad. J. Math. [**58**]{}, no. 1, 39–63 (2006)
Goodearl, K.R., Warfield, R.B., Primitivity in differential operator rings, Math. Z., **180**, no. 4, 503–523 (1982)
Goodearl, K. R., Prime Ideals in Skew Polynomial Rings and Quantized Weyl Algebras, Journal of Algebra 150 (1992), 324-377
Goodearl, K. R., Warfield, R. B., An introduction to noncommutative Noetherian rings. Second edition. London Mathematical Society Student Texts, 61. Cambridge University Press, Cambridge, 2004.
Hauger, G., Einfache Derivationspolynomringe, Arch. Math. (Basel),**29**, no. 5, 491–496 (1977)
Hellström, L., Silvestrov, S., Commuting elements in q-deformed Heisenberg algebras. World Scientific Publishing Co., Inc., River Edge, NJ, 2000.
Jacobson, N., Pseudo-linear transformations, Annals of Math., **38**, 484–507 (1937)
Jain, S.K., Lam, T.Y., Leroy, A., Ore extensions and $V$-domains, Rings, Modules and Representations, Contemp. Math. Series AMS, 480, 263–288 (2009)
de Jeu, M., Svensson, C., Tomiyama, J., On the Banach $*$-algebra crossed product associated with a topological dynamical system, J. Funct. Anal. **262**, no. 11, 4746–4765 (2012)
Jordan, D. A., Ore extensions and Jacobson rings, Ph.D. Thesis, University of Leeds, 1975
Jordan, D. A., Primitive Ore extensions, Glasgow Math. J., **18**, no. 1, 93–97 (1977)
Jordan, D. A., Simple skew Laurent polynomial rings, Comm. Algebra, **12**, no. 1-2, 135–137 (1984)
Lam, T. Y., Leroy, A., Homomorphisms between Ore extensions. Azumaya algebras, actions, and modules (Bloomington, IN, 1990), 83–110, Contemp. Math., 124, Amer. Math. Soc., Providence, RI, 1992.
Lundström, P., Öinert, J., Skew category algebras associated with partially defined dynamical systems, Internat. J. Math. **23**, no. 4, pp. 16 (2012)
McConnell, J. C., Robson, J. C., Noncommutative Noetherian rings, Pure and Applied Mathematics (New York), John Wiley & Sons, Ltd., Chichester, 1987.
McConnell, J. C., Sweedler, M. E., Simplicity of smash products, Proc. London Math. Soc. no. 3, **23**, 251–266, (1971)
Mackey, G. W., Unitary Group Representations in Physics, Probability, and Number Theory, xxviii+402 pp. Second edition. Advanced Book Classics. Addison-Wesley Publishing Company, Advanced Book Program, Redwood City (1989)
Malm, D. R., Simplicity of partial and Schmidt differential operator rings, Pacific J. Math. **132**, no. 1, 85–112 (1988)
MathOverflow, “Simple Ore extensions”,\
http://mathoverflow.net/questions/68233/ Accessed on 2011-11-28.
Öinert, J., Simple group graded rings and maximal commutativity, Operator structures and dynamical systems, 159–175, Contemp. Math., 503, Amer. Math. Soc., Providence, RI, 2009.
Öinert, J., Lundström, P., Commutativity and ideals in category crossed products, Proc. Est. Acad. Sci. **59**, no. 4, 338–346 (2010)
Öinert, J., Lundström, P., The Ideal Intersection Property for Groupoid Graded Rings, Comm. Algebra **40**, no. 5, 1860-1871 (2012)
Öinert, J., Silvestrov, S. D., Commutativity and ideals in algebraic crossed products, J. Gen. Lie Theory Appl. **2**, no. 4, 287–302 (2008)
Öinert, J., Silvestrov, S. D., On a correspondence between ideals and commutativity in algebraic crossed products, J. Gen. Lie Theory Appl. **2**, no. 3, 216–220 (2008)
Öinert, J., Silvestrov, S. D., Crossed product-like and pre-crystalline graded rings, in *Generalized Lie theory in mathematics, physics and beyond*, 281–296, Springer, Berlin, (2009).
Öinert, J., Silvestrov, S. D., Commutativity and ideals in pre-crystalline graded rings, Acta Appl. Math. **108**, no. 3, 603–615 (2009)
Öinert, J., Silvestrov, S., Theohari-Apostolidi, T., Vavatsoulas, H., Commutativity and ideals in strongly graded rings, Acta Appl. Math. **108**, no. 3, 585–602 (2009)
Ostrovskyĭ, V., Samoĭlenko, Yu., Introduction to the Theory of Representations of Finitely Presented $*$-Algebras. I, iv+261 pp. Representations by bounded operators. Reviews in Mathematics and Mathematical Physics, [**11**]{}, pt.1. Harwood Academic Publishers, Amsterdam (1999)
Posner, C. E., Differentiable Simple Rings, Proc. Amer, Math. Soc. **11**, 337–343 (1960)
Rowen, L. H., Ring theory. Vol. I. Pure and Applied Mathematics, 127. Academic Press, Inc., Boston, MA, 1988.
Svensson, C., Silvestrov, S., de Jeu, M., Dynamical Systems and Commutants in Crossed Products, Internat. J. Math. **18**, no. 4, 455–471 (2007)
Svensson, C., Silvestrov, S., de Jeu, M., Connections between dynamical systems and crossed products of Banach algebras by $\mathbb{Z}$, in *Methods of spectral analysis in mathematical physics*, 391–401, Oper. Theory Adv. Appl., 186, Birkhäuser Verlag, Basel, 2009.
Svensson, C., Silvestrov, S., de Jeu, M., Dynamical systems associated with crossed products, Acta Appl. Math. **108**, no. 3, 547–559 (2009)
Svensson, C., Tomiyama, J., On the commutant of $C(X)$ in $C^*$-crossed products by $\mathbb{Z}$ and their representations, J. Funct. Anal. 256 , no. 7, 2367–2386 (2009)
Tomiyama, J., Invitation to $C\sp *$-algebras and topological dynamics, x+167 pp. World Scientific Advanced Series in Dynamical Systems, 3. World Scientific, Singapore (1987)
Voskoglou, M. G., Simple Skew Polynomial Rings, Publ. Inst. Math. (Beograd) (N.S.), **37(51)**, 37–41 (1985)
[^1]: Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark, E-mail: [email protected]
[^2]: Centre for Mathematical Sciences, Lund University, Box 118, SE-22100 Lund, Sweden, E-mail: [email protected]
[^3]: Division of Applied Mathematics, The School of Education, Culture and Communication, M[ä]{}lardalen University, Box 883, SE-72123 V[ä]{}ster[å]{}s, Sweden, E-mail: [email protected]
|
---
author:
- Peng Wang
title: '**[A characterization of the Ejiri torus in $S^{5}$]{}**'
---
[**Abstract**]{}
Ejiri’s torus in $S^5$ is the first example of a Willmore surface which is not conformally equivalent to any minimal surface in any space form. Li and Vrancken classified all Willmore surfaces of tensor product in $S^{n}$ by reducing them to elastic curves in $S^3$, and the Ejiri torus appeared as a special example. In this paper, we first prove that among all Willmore tori of tensor product, the Willmore functional of the Ejiri torus in $S^5$ attains the minimum $2\pi^2\sqrt{3}$. Then we show that all Willmore tori of tensor product are unstable when the co-dimension is large enough. We also show that the Ejiri torus is unstable in $S^5$. Moreover, similar to Li and Vrancken, we classify all constrained Willmore surfaces of tensor product by reducing them to elastic curves in $S^3$. All constrained Willmore tori obtained this way are also shown to be unstable when the co-dimension is large enough.
We conjecture that a Willmore torus having Willmore functional between $2\pi^2$ and $2\pi^2\sqrt{3}$ is either the Clifford torus, or the Ejiri torus (up to some conformal transform).\
[**Keywords:**]{} Willmore functional; Ejiri’s Willmore torus; surfaces of tensor product; elastic curves; constrained Willmore surfaces.\
[**MSC(2000): 53A30, 53C15**]{}
Introduction
============
For a surface $x:M\rightarrow S^n$ with mean curvature vector $\vec{H}$ and Gauss curvature $K$, the Willmore functional can be defined as $$W(x)=\int_M (|\vec{H}|^2-K+1)dM.$$ Here $1$ is the curvature of $S^n$. Since Willmore [@Willmore1965] introduced this functional and his famous conjecture, it has played an important role in the study of global differential geometry, and many new insights and methods have been developed towards the proof of the Willmore conjecture, see for example [@Li-y; @Marques; @Marques2] and the references therein. The Willmore conjecture states that the Clifford torus attains the minimum of the Willmore functional among all tori in $S^3$, which has recently been shown to be true in [@Marques; @Marques2]. As a consequence, the Clifford torus is a stable Willmore torus in $S^3$.
Examples of Willmore surfaces, the critical surfaces of the Willmore functional, also provide many interesting objects in geometry. Minimal surfaces in space forms give important special examples of Willmore surfaces [@Weiner; @Ejiri1988]. In [@Ejiri1982], Ejiri gave the first example of a Willmore surface in $S^5$ which is not conformally equivalent to any minimal surface in any space form. Later, in [@Pinkall1985], Pinkall provided the first examples of Willmore tori in $S^3$ which are different from minimal surfaces in space forms, by means of the famous Hopf map and elastic curves. Using tensor products of curves in spheres, Li and Vrancken derived many new examples of Willmore tori in [@Li-V], including Ejiri’s example as a special case.
In this paper, we show that the Ejiri torus can be viewed as the torus having the lowest Willmore functional among all Willmore tori of tensor product. To be concrete, by a simple observation we prove that for all Willmore tori of tensor product the Willmore functional satisfies $ W(y)\geq2\pi^2\sqrt{3}$, with equality holding if and only if the torus is the Ejiri torus (Theorem \[th-main\]). This result also indicates that the Ejiri torus is stable within the set of tori of tensor product in $S^5$.
The Clifford torus is the first and the only known stable Willmore torus in $S^3$ [@Weiner] (see also [@GLW], [@Palmer]). It is an interesting question whether the Clifford torus is the only stable Willmore torus in $S^n$. Note that if this is true, then the Willmore conjecture follows directly. The instability of the Ejiri torus shown in this paper partially supports this conjecture. To be concrete, we construct a family of tori in $S^7$ containing the Ejiri torus, showing that the Ejiri torus is unstable. Similarly, we show that all Willmore tori derived by tensor product are unstable when the co-dimension is large enough. Moreover, we also find a family of homogeneous tori in $S^5$ which contains the Ejiri torus and in which the Ejiri torus attains the maximal Willmore energy, which also shows the instability of the Ejiri torus.
Noticing that the Willmore functional of the Ejiri torus is very small, it is natural to state the following conjecture, which is true for Willmore tori of tensor product by Theorem \[th-main\]:\
[ **Conjecture.**]{} [*Let $y$ be a Willmore torus $y$ in $S^n$, $n\geq 5$. If the Willmore functional $W(y)$ of $y$ satisfies $$2\pi^2\leq W(y) \leq 2\pi^2\sqrt{3},$$ then either $y$ is conformally equivalent to the Clifford torus with $W(y)=2\pi^2$, or $y$ is conformally equivalent to the Ejiri torus with $W(y)= 2\pi^2\sqrt{3}$.*]{}
Note that $2\pi^2\sqrt{3}\approx 10.88 \pi$ lies between $10\pi$ and $12\pi$. So one can show by a careful discussion that it is not possible to obtain a minimal torus in $\mathbb{R}^n$ with planar ends and with Willmore functional lower than $2\pi^2\sqrt{3}$. In fact, for a minimal torus in $\mathbb{R}^n$ with planar ends to have total curvature $\geq-12\pi$, it must have at most two ends. Then it has to be located in some $\mathbb{R}^4$ and have two ends. It is then not possible to write down a Weierstrass representation via elliptic functions on a torus (see for example [@Bryant1984] and [@Costa] for similar discussions).
We remark also that in [@GL] a $\mathbb{CP}^3$-family of Willmore tori in $S^4$ with Willmore functional $2\pi^2 n$ was constructed. Here $n$ is the largest number of a Pythagorean triple $(p, q, n)\in \mathbb{Z}^3$.
The value distribution of Willmore two-spheres in $S^4$ has been proved to be $4\pi n$, $n\in \mathbb{Z}^+$, in [@Bryant1984; @Montiel], while studies of the value distribution of Willmore tori are still very few. We hope this will stimulate more work in this direction.
We notice that in the construction of variations for the instability of Willmore tori of tensor product, one can keep the conformal structure invariant. So such surfaces are in fact unstable under constrained variations. It is therefore natural to consider constrained Willmore surfaces of tensor product. For such surfaces, we obtain results similar to those for Willmore tori of tensor product.
The paper is organized as follows. We first recall the basic facts about surfaces in $S^n$ in Section 2 and then provide the proof of the main result in Section 3. Then we classify all constrained Willmore surfaces of tensor product in Section 4, which generalizes the result of Li and Vrancken [@Li-V]. Section 5 ends this paper by showing the instability of the Ejiri torus in $S^5$.
Willmore surfaces in $S^n$
==========================
The projective lightlike cone model of $S^n$ maps a point $x\in S^n$ to the point $[X]$ of the projective lightlike cone $Q^n\subset \mathbb{R}P^{n+1}$, where $Q^n=\{[X]\,|\,X=(1,x)\in \mathbb R^{n+2},\ x\in S^n\}.$ Here $X$ is contained in the light cone $\mathcal{C}^{n+1}$ of $\mathbb R^{n+2}$ equipped with the Lorentzian metric $\langle Y,Y\rangle=-y_{0}^2+y_1^2+\cdots+y_{n+1}^2$ for $Y=(y_0,y_1,\cdots,y_{n+1})$; we denote this Lorentzian space by $\mathbb R^{n+2}_1$.
Let $y:M\rightarrow S^n$ be a conformal surface with $M$ a Riemann surface. Let $U\subset M$ be an open subset with complex coordinate $z$. Then $\langle y_{z},y_{z}\rangle=0 \hbox{ and } \langle
y_{z},y_{\bar{z}}\rangle =\frac{1}{2}e^{2\omega}.$ Here $\langle\cdot,\cdot\rangle$ is the standard inner product. Then one has a natural lift of $y$ into $\mathbb R^{n+2}_1$ given by $$Y=e^{-\omega}(1,y): U\rightarrow \mathcal{C}^{n+1} \hbox{ with } |{\rm d}Y|^2=|{\rm d}z|^2,$$ called a canonical lift with respect to $z$. Moreover, there is a natural decomposition $M\times
\mathbb{R}^{n+2}_{1}=V\oplus V^{\perp}$, where $$V={\rm Span}\{Y, Re(Y_z), Im(Y_z),Y_{z\bar{z}}\}$$ is a Lorentzian rank-4 subbundle independent of the choice of $Y$ and $z$. Denote by $V_{\mathbb{C}}=V\otimes \mathbb C$ and $V^{\perp}_{\mathbb{C}}=V^{\perp}\otimes \mathbb C$ the corresponding complexifications. Let $\{Y,Y_{z},Y_{\bar{z}},N\}$ be a frame of $V_{\mathbb{C}}$, with $N\in\Gamma(V)$ uniquely determined by $$\label{eq-N}
\langle N,Y_{z}\rangle=\langle N,Y_{\bar{z}}\rangle=\langle
N,N\rangle=0,\langle N,Y\rangle=-1.$$ Since $Y_{zz}$ is orthogonal to $Y$, $Y_{z}$ and $Y_{\bar{z}}$, there exist a local complex function $c$ and a local section $\kappa\in \Gamma(V_{\mathbb{C}}^{\perp})$ such that $$Y_{zz}=-\frac{c}{2}Y+\kappa.$$ This defines two basic invariants $\kappa$ and $c$, *the conformal Hopf differential* and *the Schwarzian* of $y$ (for more details, see [@BPP]). Let $D$ denote the normal connection on $V_{\mathbb{C}}^{\perp}$ and let $\psi\in
\Gamma(V_{\mathbb{C}}^{\perp})$ be a section of $V_{\mathbb{C}}^{\perp}$. The structure equations are as follows: $$\label{eq-moving}
\left\{\begin {array}{lllll}
Y_{zz}&=&-\frac{c}{2}Y+\kappa, \\
Y_{z\bar{z}}&=&-\langle \kappa,\bar\kappa\rangle Y+\frac{1}{2}N,\\
N_{z}&=&-2\langle \kappa,\bar\kappa\rangle Y_{z}-cY_{\bar{z}}+2D_{\bar{z}}\kappa,\\
\psi_{z}&=&D_{z}\psi-2\langle \psi,D_{\bar{z}}\kappa\rangle Y-2\langle
\psi,\kappa\rangle Y_{\bar{z}},
\end {array}\right.$$ with the following integrability conditions (the conformal Gauss, Codazzi and Ricci equations):
\[eq-integ:1\] $$\begin{aligned}
& c_{\bar{z}}=6\langle
\kappa,D_z\bar\kappa\rangle +2\langle D_z\kappa,\bar\kappa\rangle, \label{eq-integ:1A} \\
&{\rm Im}\left(D_{\bar{z}}D_{\bar{z}}\kappa+\frac{\bar{c}}{2}\kappa\right)=0, \label{eq-integ:1B} \\
& R^{D}_{\bar{z}z}=D_{\bar{z}}D_{z}\psi-D_{z}D_{\bar{z}}\psi =
2\langle \psi,\kappa\rangle\bar{\kappa}- 2\langle
\psi,\bar{\kappa}\rangle\kappa. \label{eq-integ:1C}\end{aligned}$$
We define the Willmore functional and Willmore surfaces as below:
*The Willmore functional* of $y:M\rightarrow S^n$ is defined in terms of the conformal Hopf differential as $$W(y):=2i\int_{M}\langle \kappa,\bar{\kappa}\rangle dz\wedge
d\bar{z}.$$ We call $y$ a *Willmore surface* if it is a critical point of the Willmore functional with respect to all variations of the map $y:M\rightarrow S^n$.
Denote by $\vec{H}$ and $K$ the mean curvature and Gauss curvature of $y$ in $S^n$. Then it is direct to verify that $W(y)=\int_M (|\vec{H}|^2-K+1)dM$, coinciding with the original definition of Willmore functional.
Willmore surfaces can be characterized as [@Bryant1984; @BPP; @Ejiri1988]
\[thm-willmore\] $y$ is a Willmore surface if and only if the conformal Hopf differential $\kappa$ of $y$ satisfies the Willmore condition $$\label{eq-willmore}
D_{\bar{z}}D_{\bar{z}}\kappa+\frac{\bar{c}}{2}\kappa=0.$$
Surfaces of tensor product and the Ejiri torus
==============================================
In [@Li-V], a full description of Willmore surfaces of tensor product was provided by the moving frame method for surfaces in $S^n$ and a discussion of the solutions of a special elastic curve equation. In this section, we will first recall the properties of surfaces of tensor product and then characterize the Ejiri torus of [@Ejiri1982] as the Willmore torus with the lowest Willmore functional, $2\pi^2\sqrt{3}$, among all Willmore tori of tensor product.
Surfaces of tensor product
--------------------------
To begin with, let us collect some basic descriptions of surfaces of tensor product.
Let $\gamma(s):S^1(L)\rightarrow S^n$ be a closed curve with Frenet frame $\{\gamma,\beta_i,0\leq i\leq n-1\}$ as follows: $$\gamma'=\beta_0,\ \beta_0'=k_1\beta_1-\gamma,\ \beta_1'=k_2\beta_2-k_1\beta_0,\ \cdots,\
\beta_{n-1}'=-k_{n-1}\beta_{n-2},$$ where $\langle\gamma,\beta_i\rangle=0,\langle\beta_i,\beta_j\rangle=\delta_{ij}.$
Let $\hat\gamma(\hat{s}):S^1(\hat{L})\rightarrow S^m$ be a closed curve with Frenet frame $\{\hat\gamma,\hat\beta_i,0\leq i\leq m-1\}$ as follows: $$\hat\gamma'=\hat\beta_0,\ \hat\beta_0'=\hat{k}_1\hat\beta_1-\hat\gamma,\ \hat\beta_1'=\hat{k}_2\hat\beta_2-\hat{k}_1\hat\beta_0,\ \cdots,\
\hat\beta_{m-1}'=-\hat{k}_{m-1}\hat\beta_{m-2},$$ where $\langle\hat\gamma,\hat\beta_i\rangle=0,\langle\hat\beta_i,\hat\beta_j\rangle=\delta_{ij}.$
Recall that the tensor product of $x\in \mathbb{R}^n$ and $\hat{x}\in \mathbb{R}^m$ is of the form ([@Chen], [@Li-V]) $$x\otimes\hat{x}:=(x_1\hat{x}_1,\cdots,x_1\hat{x}_m,x_2\hat{x}_1,\cdots,x_2\hat{x}_m,\cdots,x_n\hat{x}_1,\cdots,
x_n\hat{x}_m)\in \mathbb{R}^{nm}.$$ It is direct to derive this lemma ([@Li-V])
[@Li-V] $\langle f\otimes\hat{f},g\otimes\hat{g}\rangle=\langle f, g\rangle\langle\hat{f},
\hat{g}\rangle,$ $\forall f,g\in \mathbb{R}^{n},\ \forall \hat{f},\ \hat{g}\in \mathbb{R}^{m}$.
So we obtain a torus via the tensor product $$y:=\gamma(s)\otimes\hat\gamma(\hat s):T^2\rightarrow S^{(n+1)(m+1)-1}.$$ We also have that $$Y=\left(1,\gamma(s)\otimes\hat\gamma(\hat {s})\right)$$ is a canonical lift of $y$ into $\mathbb{R}^{(n+1)(m+1)+1}_{1}$ with respect to the complex coordinate $z=s+i\hat{s}$. So $$\label{eq-yzz}
\left\{\begin{array}{llll}
Y_z&=&\frac{1}{2}(0,\beta_0\otimes\hat\gamma)-\frac{i}{2}(0,\gamma\otimes\hat\beta_0),\\
Y_{z\bar{z}}&=&\frac{1}{4}k_1(0,\beta_1\otimes\hat\gamma)
+\frac{1}{4}\hat{k}_1(0,\gamma\otimes\hat\beta_1)
-\frac{1}{2}(0,\gamma\otimes\hat\gamma),\\
Y_{zz}&=& \frac{1}{4}k_1(0,\beta_1\otimes\hat\gamma)
-\frac{1}{4}\hat{k}_1(0,\gamma\otimes\hat\beta_1)
-\frac{i}{2}(0,\beta_0\otimes\hat\beta_0).
\end{array}\right.$$ And from the structure equations we have that $$\label{eq-c-wp}
c=4\langle Y_{zz},Y_{z\bar{z}}\rangle =\frac{1}{4}(k_1^2-\hat k_1^2),$$ and $$\label{eq-kappa-wp}\langle\kappa,\bar\kappa\rangle=\langle Y_{zz},Y_{\bar{z}\bar{z}}\rangle=\langle Y_{z\bar{z}},Y_{z\bar{z}}\rangle=\frac{1}{4}+\frac{1}{16}(k_1^2+\hat{k}_1^{2}).$$
We need the following lemma, which can be found in the discussions above Theorem 3.1 on page 146 of [@Langer-Singer-agag](See also [@Langer-Singer1984; @Langer-Singer-BLMS] for more discussions). For the reader’s convenience, we give a proof as below.
\[lemma\] [@Langer-Singer-agag] Let $\gamma(s):[0,L]\rightarrow S^n$ be a $C^2$ closed curve with length $L$ and curvature $k$ in $S^n $. Let $a_0\geq 2$ be a constant. Then $$\mathcal{E}(\gamma):=\int_0^L(k^2+a_0)ds\geq 4\pi\sqrt{a_0-1}.$$ And the equality holds if and only if $\gamma$ is a circle in $S^2 $ with curvature $\sqrt{a_0-2}$.
It is direct to see that $$\mathcal{E}(\gamma)=\int_0^L(k^2+a_0)ds\geq 2\int_0^L\sqrt{k^2+1}\sqrt{a_0-1}ds.$$ Since $\sqrt{k^2+1}$ is the curvature of $\gamma$ in $\mathbb{R}^{n+1}$, by the Fenchel theorem, $$\int_0^L\sqrt{k^2+1}\sqrt{a_0-1}ds\geq2\pi \sqrt{a_0-1},$$ and equality holds in both inequalities if and only if $\gamma$ is a convex closed curve contained in a 2-dimensional plane of $\mathbb{R}^{n+1}$ and $k^2+1=a_0-1$ holds along $\gamma$. In the equality case, $\gamma$ is therefore the intersection of a 2-dimensional plane with $S^n(1)$, hence $k$ is constant, and $k^2+1=a_0-1$, i.e., $k=\sqrt{a_0-2}$.
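For concreteness, the bound of Lemma \[lemma\] can be checked numerically for $a_0=4$ (the case used below), restricted to the family of latitude circles in $S^2$; the following minimal sketch (assuming a standard NumPy setup) uses the closed formulas $k=b/a$ and $L=2\pi a$ for the circle $\left(a\cos \frac{s}{a}, a\sin \frac{s}{a}, b\right)$:

```python
# Minimal numerical illustration of the lemma for a_0 = 4, restricted to the
# family of latitude circles gamma_a(s) = (a cos(s/a), a sin(s/a), b) in S^2,
# with a^2 + b^2 = 1.  For these, the geodesic curvature is k = b/a and the
# length is 2*pi*a; the lemma predicts the minimum 4*pi*sqrt(3) at k = sqrt(2),
# i.e. a = 1/sqrt(3).
import numpy as np

a = np.linspace(0.05, 1.0, 2000)
b = np.sqrt(1.0 - a**2)
k = b / a                          # geodesic curvature of the latitude circle in S^2
E = (k**2 + 4.0) * 2*np.pi*a       # E(gamma) = \int (k^2 + a_0) ds with a_0 = 4

i = np.argmin(E)
print("min E over the family   :", E[i])               # ~ 21.77, close to the bound
print("lower bound 4*pi*sqrt(3):", 4*np.pi*np.sqrt(3))  # ~ 21.77
print("optimal a and k         :", a[i], k[i])          # ~ 0.577 and ~ 1.414
```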
The Ejiri torus
---------------
\[th-main\] Let $y=\gamma\otimes\hat\gamma$ be a Willmore torus of tensor product. Then $$W(y)\geq2\pi^2\sqrt{3},$$ with the equality holding if and only if $y$ is the torus of Ejiri in [@Ejiri1982]: $$\label{eq-ejiri-t}
\sqrt{\frac{1}{3}}\left(\cos \hat s\cos \sqrt{3}s,\sin \hat s\cos \sqrt{3}s,\cos \hat s\sin \sqrt{3}s,\sin\hat s\sin \sqrt{3}s,\sqrt{2}\cos\hat s,\sqrt{2}\sin \hat s\right).$$
Since $y$ is a Willmore torus of tensor product, by Theorem 1 of [@Li-V], one of $\gamma$ and $\hat{\gamma}$ is a great circle in a sphere. Without loss of generality, we assume that $\hat{k}_1=0$. By (\[eq-kappa-wp\]), we have that $$W(y)=4\int_M\langle\kappa,\bar\kappa\rangle ds d\hat{s}=4\int_0^{2\pi}d\hat{s}\int_{\gamma}\left(\frac{1}{4}+\frac{1}{16}k_1^2\right)ds= \frac{\pi}{2}\int_{\gamma}(k_1^2+4)ds.$$ By Lemma \[lemma\], or the discussions above Theorem 3.1 on page 146 of [@Langer-Singer-agag], $$\int_{\gamma}(k_1^2+4)ds\geq 4\pi\sqrt{3}$$ with equality holding if and only if $k_1=\sqrt{2}, $ $k_2\equiv0$. Elementary computation shows that this gives the Ejiri torus (\[eq-ejiri-t\]).
From the proof we obtain the following.

Let $y=\gamma\otimes\hat\gamma$ be a torus of tensor product with $\hat\gamma(\hat{s})=(\cos\hat s, \sin\hat s)$. Then $$W(y)\geq2\pi^2\sqrt{3},$$ with equality holding if and only if $y$ is the Ejiri torus (\[eq-ejiri-t\]).
Note that when $k_1\equiv0$, the image of $y$ is exactly a double cover of the classical Clifford torus, and hence its Willmore functional is $W(y)=4\pi^2$. Below we exhibit a family of tori of tensor product in which the double cover of the classical Clifford torus is the member with the highest Willmore functional, while the limiting surface is the classical Clifford torus. We also remark that for any torus $y$ of tensor product, $W(y)>2\pi^2$ (Proposition \[prop-wy\]).
\[example-inf\] Let $0<a\leq1$ and $a^2+b^2=1, \ b\geq0$. Set $$y_a=\gamma_a(s)\otimes \hat\gamma_a(\hat s), \hbox{ with } \gamma_a(s)=\left(a\cos \frac{s}{a}, a \sin \frac{s}{a} , b\right),\ \hat\gamma_a(\hat s)=\left(a\cos \frac{\hat s}{a}, a \sin \frac{\hat s}{a} , b\right).$$ We see that for all $a\in(0,1]$, $y_a$ is a flat torus in some $S^7(\sqrt{1-b^4})\subset S^8$. Moreover, when $a=1$, $y_a$ is a double cover of the classical Clifford torus in some $S^3(1)\subset S^8$, and when $a\rightarrow 0$, by some scaling, $y_a$ tends to the Clifford torus $(\cos s, \sin s, \cos\hat s, \sin \hat s)$. Concerning the Willmore functional of $y_a$, by (\[eq-kappa-wp\]), we have that $$\begin{split}W(y_a)=4\int_{\hat\gamma_a}\int_{\gamma_a}\left(\frac{1}{4}+\frac{1}{16}k_1^2+\frac{1}{16}\hat k_1^2\right)dsd\hat s = 2\pi^2(1+a^2)>2\pi^2.
\end{split}$$
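The value $W(y_a)=2\pi^2(1+a^2)$ can be checked with a few lines of code, using $k_1=\hat k_1=b/a$ and length $2\pi a$ for each factor circle (a small sketch assuming NumPy):

```python
# Quick numerical check of W(y_a) = 2*pi^2*(1 + a^2) for Example [example-inf]:
# both factors are latitude circles of radius a, so k_1 = khat_1 = b/a, each of
# length 2*pi*a, and by (eq-kappa-wp)
#   W = 4 * (2*pi*a)^2 * (1/4 + (k_1^2 + khat_1^2)/16).
import numpy as np

for a in [0.2, 0.5, 0.8, 1.0]:
    b = np.sqrt(1 - a**2)
    k = b / a
    W = 4 * (2*np.pi*a)**2 * (0.25 + (k**2 + k**2)/16.0)
    print(a, W, 2*np.pi**2*(1 + a**2))   # the last two columns agree
```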
From Theorem \[th-main\], it is natural to conjecture that Ejiri’s torus is a stable Willmore torus in $S^n$. However, the following example shows that this is not true when $n\geq7$.
Let $0<a\leq1$ and $a^2+b^2=1, \ b\geq0$. Set $$\tilde{y}_a=\gamma (s)\otimes \hat\gamma_a(\hat s)$$ with $$\gamma (s)=\left(\sqrt{\frac{ 1 }{ 3}}\cos \sqrt{3}s, \sqrt{\frac{ 1 }{ 3}}\sin \sqrt{3}s, \sqrt{\frac{ 2 }{ 3}} \right),\ \hat\gamma_a(\hat s)=\left(a\cos \frac{\hat s}{a}, a \sin \frac{\hat s}{a} , b\right).$$ We see that for all $a\in(0,1]$, $\tilde{y}_a$ is a flat torus in some $S^7(r)\subset S^8$, $r=\sqrt{1-\frac{2b^2}{3}}$. Moreover, when $a=1$, $\tilde{y}_a$ is the Ejiri torus. Concerning the Willmore functional of $\tilde{y}_a$, by (\[eq-kappa-wp\]), we have that $$\begin{split}W(\tilde{y}_a)&=4\int_{\hat\gamma_a}d\hat s\int_{\gamma}\left(\frac{1}{4}+\frac{1}{16}k_1^2+\frac{1}{16}\hat k_1^2\right)ds= \frac{\pi^2\left(5a^2+1\right)}{a\sqrt{3}}.
\end{split}$$ So $W(\tilde{y}_a)$ is an increasing function of $a$ for $a\in\left[\frac{1}{\sqrt{5}},1\right]$; in particular $W(\tilde{y}_a)<W(\tilde{y}_1)$ for $a$ slightly less than $1$, which means that the Ejiri torus is unstable in $S^7$.
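The formula $W(\tilde{y}_a)=\frac{\pi^2(5a^2+1)}{a\sqrt{3}}$ and its monotonicity are easily verified numerically (a sketch assuming NumPy; here $\gamma$ has $k_1=\sqrt{2}$ and length $\frac{2\pi}{\sqrt{3}}$, while $\hat\gamma_a$ has $\hat k_1=b/a$ and length $2\pi a$):

```python
# Check of W(y~_a) = pi^2 (5 a^2 + 1) / (a sqrt(3)); at a = 1 one recovers the
# Ejiri value 2*pi^2*sqrt(3), and the values decrease as a decreases below 1.
import numpy as np

def W(a):
    b = np.sqrt(1 - a**2)
    k1, L1 = np.sqrt(2.0), 2*np.pi/np.sqrt(3.0)   # the fixed circle gamma
    kh, Lh = b/a, 2*np.pi*a                        # the shrinking circle gammahat_a
    return 4 * L1 * Lh * (0.25 + (k1**2 + kh**2)/16.0)

for a in [1/np.sqrt(5), 0.6, 0.8, 1.0]:
    print(a, W(a), np.pi**2*(5*a**2 + 1)/(a*np.sqrt(3.0)))   # last two columns agree
print("Ejiri value 2*pi^2*sqrt(3):", 2*np.pi**2*np.sqrt(3.0))
```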
Let $y=\gamma\otimes\hat\gamma:S^1(1)\times S^1(\sqrt{\frac{1}{3}})\rightarrow S^5\subset S^7$ be the Ejiri torus (\[eq-ejiri-t\]). Then $y$ is unstable in $S^7$.
Later in Section 5 we will see that the Ejiri torus is unstable in $S^5$.
Instability of tori of tensor product
--------------------------------------
The one-parameter family of tori exhibiting the instability of the Ejiri torus can easily be modified to show that all tori of tensor product are unstable.
\[example-unstable\] Let $y_1=\gamma (s)\otimes \hat\gamma (\hat s)$ be a torus of tensor product as before. Let $0<a\leq1$ and $a^2+b^2=1, \ b\geq0$. Set $$y_a=\gamma_a(\theta)\otimes \hat\gamma_a(\hat \theta),\ \hbox{ with }\ \gamma_a(\theta)=\left(a\gamma\left(\frac{\theta}{a}\right), b\right), \ \hat\gamma_a(\hat \theta)=\left(a\hat\gamma\left(\frac{\hat \theta}{a}\right), b\right).$$ Then it is direct to derive that the curvature $k_{1,a}(\theta)$ of $\gamma_a(\theta)$ satisfies $k_{1,a}^2(\theta)=\frac{1}{a^2}\left(k_1^2\left(\frac{\theta}{a}\right)+b^2\right)$ and the curvature $\hat k_{1,a}(\hat\theta)$ of $\hat\gamma_a(\hat \theta)$ satisfies $\hat k_{1,a}^2(\hat\theta)=\frac{1}{a^2}\left(\hat k_1^2\left(\frac{\hat \theta}{a}\right)+b^2\right)$. By (\[eq-kappa-wp\]) and Lemma \[lemma\], we have that $$\begin{split}
W(y_a)&=\frac{1}{4}\iint_{\gamma_a\otimes\hat\gamma_a}\left(4+\frac{1}{a^2}\left(k_1^2\left(\frac{\theta}{a}\right)+b^2\right)+ \frac{1}{a^2}\left(\hat k_1^2\left(\frac{\hat \theta}{a}\right)+b^2\right)\right)d\theta d\hat{\theta}\\
&=\frac{1}{4}\iint_{\gamma\otimes\hat\gamma}\left(4a^2+2b^2+ k_1^2(s)+ \hat k_1^2(\hat s)\right)dsd\hat{s}
=\frac{1}{4}\iint_{\gamma\otimes\hat\gamma}\left(2+2a^2+ k_1^2(s)+ \hat k_1^2(\hat s)\right)dsd\hat{s}\\
&\leq \frac{1}{4}\iint_{\gamma\otimes\hat\gamma}\left(4+ k_1^2(s)+ \hat k_1^2(\hat s)\right)ds d\hat{s}=W(y_1),\\
\end{split}$$ with equality if and only if $a=1$.
So, ignoring the restriction on the codimension, we obtain the following
\[th-main-3\] Let $y=\gamma\otimes\hat\gamma$ be a (Willmore) torus of tensor product. Then it is unstable.
For Willmore tori of tensor product, we also have a simpler family showing their instability. Let $y_1=\gamma (s)\otimes \hat\gamma (\hat s)$ be a Willmore torus of tensor product, so that $\hat\gamma=(\cos \hat s,\sin \hat s)$. Let $0<a\leq1$ and $a^2+b^2=1, \ b\geq0$. Set $$y_a=\gamma_a(\theta)\otimes \hat\gamma(\hat \theta),\ \hbox{ with }\ \gamma_a(\theta)=\left(a\gamma\left(\frac{\theta}{a}\right), b\right).$$ So we have that $$\begin{split}
W(y_a)=\frac{1}{4}\iint_{\gamma_a\otimes\hat\gamma}\left(4+\frac{1}{a^2}k_1^2\left(\frac{\theta}{a}\right)\right)d\theta d\hat{s}=\frac{1}{4}\iint_{\gamma\otimes\hat\gamma}\left(4a^2+ k_1^2(s)\right)dsd\hat{s}.\\
\end{split}$$ So we have $W(y_a)\leq W(y)$ for all $a\in(0,1]$, with $W(y_a)= W(y)$ if and only if $a=1$. This indicates that all Willmore tori of tensor product are unstable in $S^9$, since by Theorem 1 of [@Li-V] such surfaces must be contained in some $S^7$.
We end this section with some further discussion of the values of the Willmore functional for tori of tensor product. From Example \[example-inf\], we see that there exists a family of tori of tensor product with Willmore functional tending to $2\pi^2$. The following proposition shows that $2\pi^2$ is exactly the infimum of the Willmore functional for tori of tensor product. Note that it is also a corollary of a theorem of Chen [@Chen2], which states that for a closed flat surface $x$ in ${\mathbb{R}}^n$ the Willmore functional satisfies $W(x)\geq2\pi^2$, with equality holding when $x$ is the standard Clifford torus (see also [@Chen-book], [@Hou] for a proof).
\[prop-wy\] Let $y=\gamma\otimes\hat\gamma$ be a torus of tensor product. Then $W(y)> 2\pi^2$.
By (\[eq-kappa-wp\]) and the Fenchel theorem, we have that $$\begin{split}
W(y)&=\frac{1}{4}\iint_{\gamma\otimes\hat\gamma}\left(4+k_1^2+\hat{k}_1^2\right)dsd\hat{s}\\
&>\frac{1}{4}\iint_{\gamma\otimes\hat\gamma}\left(2+k_1^2+\hat{k}_1^2\right)dsd\hat{s}\\
&\geq \frac{1}{2}\int_{\hat\gamma}\sqrt{\hat k_1^2+1}d\hat{s} \int_{\gamma} \sqrt{k_1^2+1}ds\\
&\geq 2\pi^2. \\
\end{split}$$
Constrained Willmore surfaces of tensor product
===============================================
This section determines all constrained Willmore surfaces of tensor product, which generalizes Theorem 1 of [@Li-V]. Specializing to Willmore surfaces, we re-obtain Theorem 1 of [@Li-V] from the point of view of conformal geometry, which also shows that the Ejiri torus is a Willmore torus [@Ejiri1982].
We recall that an immersion from a Riemann surface $M$ into $S^n$ is called a [*constrained Willmore surface* ]{} if it is a critical point of the Willmore functional under all variations fixing the conformal structure of $M$ [@BoPP], [@BPP]. It is known that a surface $y$ is constrained Willmore if and only if there exists some holomorphic function $q$ such that the Hopf differential $\kappa$ and the Schwarzian $c$ satisfy the following equation ([@BoPP], [@BPP]) $$\label{eq-CWillmore}
D_{\bar{z}}D_{\bar{z}}\kappa+\frac{\bar c}{2}\kappa=Re(q\kappa).$$
We retain the notations in Section 2 for a torus $y$ of tensor product. To begin with, set $$\left\{
\begin{split}
&E_j=(0,\beta_j\otimes\hat\gamma),j=2,\cdots,n-1;\\
&F_j=(0,\gamma\otimes\hat\beta_j),j=2,\cdots,m-1;\\
&G_{jk}=(0,\beta_j\otimes\hat\beta_k),j=0,\cdots,n,\ k=0,\cdots,m;\\
&L_1=(0,\beta_1\otimes\hat\gamma)+\frac{k_1}{2}(1,\gamma\otimes\hat\gamma);\\
&L_2=(0,\gamma\otimes\hat\beta_1)+\frac{\hat{k}_1}{2}(1,\gamma\otimes\hat\gamma).\\
\end{split}\right.$$ It is easy to see that they provide an orthogonal basis of the conformal normal bundle of $Y$. By this framing and (\[eq-yzz\]), we have that $$\label{eq-kappa-frame}
\kappa=\frac{k_1}{4}L_1-\frac{\hat{k}_1}{4}L_2-\frac{i}{2}G_{00}.$$ Direct computations show that $$\label{eq-dz}\left\{
\begin{split}
&D_{\bar{z}}L_1=\frac{k_2}{2}E_2+\frac{i}{2}G_{10},\\
&D_{\bar{z}}L_2=\frac{i\hat{k}_2}{2}F_2+\frac{1}{2}G_{01},\\
&D_{\bar{z}}G_{00}=\frac{k_1}{2}G_{10}+\frac{i\hat{k}_1}{2}G_{01},\\
&D_{\bar{z}}G_{10}=-\frac{k_1}{2}G_{00}+\frac{k_2}{2}G_{20}-\frac{i}{2}L_{1}+\frac{i\hat{k}_1}{2}G_{11},\\
&D_{\bar{z}}G_{01}=-\frac{i\hat{k}_1}{2}G_{00}+\frac{i\hat{k}_2}{2}G_{02}-\frac{1}{2}L_{2}+\frac{k_1}{2}G_{11},\\
&D_{\bar{z}}E_2=\frac{k_3}{2}E_3-\frac{k_2}{2}L_1+\frac{i}{2}G_{20},\\
&D_{\bar{z}}F_2=\frac{i\hat{k}_3}{2}F_3-\frac{i\hat{k}_2}{2}L_2+\frac{1}{2}G_{02}.\\
\end{split}\right.$$ So $$\begin{split}
D_{\bar{z}}\kappa&=\frac{k_{1s}}{8}L_1-\frac{i\hat{k}_{1\bar s}}{8}L_2+\frac{k_1}{4}D_{\bar z}L_1-\frac{\hat{k}_1}{4}D_{\bar z}L_2-\frac{i}{2}D_{\bar z}G_{00}\\
&=\frac{k_{1s}}{8}L_1-\frac{i\hat{k}_{1\bar s}}{8}L_2+\frac{k_1k_2}{8} E_2+\frac{ik_1}{8}G_{10}-\frac{i\hat{k}_1\hat{k}_2}{ 8} F_2-\frac{\hat{k}_1}{8}G_{01}-\frac{ik_1}{4}G_{10}+\frac{\hat{k}_1}{4}G_{01}\\
&=\frac{k_{1s}}{8}L_1-\frac{i\hat{k}_{1\bar s}}{8}L_2+\frac{k_1k_2}{8} E_2-\frac{ik_1}{8}G_{10}-\frac{i\hat{k}_1\hat{k}_2}{ 8} F_2+\frac{\hat{k}_1}{8}G_{01}.\\
\end{split}$$ From this and (\[eq-dz\]) we see that $$\langle D_{\bar z}D_{\bar{z}}\kappa, G_{11}\rangle=\langle -\frac{ik_1}{8}D_{\bar z}G_{10}+\frac{\hat{k}_1}{8}D_{\bar{z}}G_{01}, G_{11}\rangle=\frac{k_1\hat{k}_1}{8}.$$ Since $\langle \kappa,G_{11}\rangle=0,$ for any holomorphic function $q$, we have $$\langle D_{\bar z}D_{\bar{z}}\kappa+\frac{\bar{c}}{2}\kappa-Re(q\kappa), G_{11}\rangle=\frac{k_1\hat{k}_1}{8}.$$ The constrained Willmore equation $D_{\bar{z}}D_{\bar{z}}\kappa+\frac{\bar{c}}{2}\kappa=Re(q \kappa)$ then forces $$k_1\hat{k}_1=0.$$ Without loss of generality, assume that $\hat{k}_1=0$, that is, $\hat{\gamma}$ is a great circle. Now we obtain $$c=\frac{k_1^2}{4},\ \kappa=\frac{k_1}{4}L_1-\frac{i}{2}G_{00},\ D_{\bar{z}}\kappa=\frac{k_{1s}}{8}L_1+\frac{k_1k_2}{8} E_2-\frac{ik_1}{8}G_{10}.$$ So $$\label{eq-dzdzk}
\begin{split}
16D_{\bar{z}}D_{\bar{z}}\kappa&= k_{1ss}L_1+(k_{1}k_2)_{s} E_2-ik_{1s}G_{10}+ 2k_{1s}D_{\bar{z}}L_1+2k_1k_2D_{\bar{z}}E_2-2ik_1D_{\bar{z}}G_{10} \\
&= k_{1ss}L_1+(2k_{1s}k_2+k_1k_{2s}) E_2 +k_1k_2(k_3E_3-k_2L_1)+ik_1(k_1G_{00}+iL_{1})\\
&= (k_{1ss}-k_1k_2^2-k_1)L_1+(2k_{1s}k_2+k_1k_{2s}) E_2 +k_1k_2k_3E_3+ik_1^2G_{00}.\\
\end{split}$$ Then the constrained Willmore equation $D_{\bar{z}}D_{\bar{z}}\kappa+\frac{\bar{c}}{2}\kappa=Re(q \kappa)$ now reads $$\left\{
\begin{split}
&k_1''-k_1k_2^2-k_1+\frac{k_1^3}{2}=Re(\frac{qk_1}{4}),\\
&2k_1'k_2+k_1k_2'=0,\\
&0=Re(iq),\\
&k_1k_2k_3=0.\\
\end{split}\right.$$ Since $q$ is holomorphic and $Im(q)=0$, $q$ is a constant real number. Set $q=q_1\in {\mathbb{R}}$. We obtain $$\left\{
\begin{split}
&k_1''-k_1k_2^2-k_1+\frac{k_1^3}{2}=\frac{q_1k_1}{4},\\
&2k_1'k_2+k_1k_2'=0,\\
&k_1k_2k_3=0.\\
\end{split}\right.$$ In summary, we have proved the following theorem.
The tensor product surface $y=\gamma\otimes\hat\gamma$ is a constrained Willmore surface if and only if one of $\gamma$ and $\hat\gamma$ is the great circle $S^1$, say, for example, $
\hat\gamma=S^1$, and the other one, say, $\gamma$, is a curve in some $S^3(1)$ satisfying the following equations $$\label{eq-elastic2}\left\{
\begin{split}
&k_1''-k_1k_2^2-a_0k_1+\frac{k_1^3}{2}=0,\\
&2k_1'k_2+k_1k_2'=0.\\
\end{split}\right.$$ Here $a_0\in {\mathbb{R}}$ is a constant. Moreover, $y$ is Willmore if and only if $a_0=1$.
Note that (\[eq-elastic2\]) is exactly the special case $\lambda=a_0$ and $G=0$ of (1.3) in [@Langer-Singer1984]. We refer to Section 2 of [@Langer-Singer1984] and Sections 3-5 of [@Li-V] for more discussion of the solutions to (\[eq-elastic2\]) and the corresponding constrained Willmore surfaces. We also refer to [@Heller] for another kind of relationship between constrained Willmore surfaces and elastic curves.
It is easy to see that, in this case, taking $k_1$ and $k_2$ to be constant yields all homogeneous constrained Willmore surfaces of tensor product. So we obtain the following.
Let $y=\gamma\otimes\hat\gamma$ be a homogeneous constrained Willmore surface of tensor product. Set $\hat\gamma = \left( \cos \hat s , \sin \hat s\right).$ Then $\gamma$ is of the form $$\label{eq-homo-c-W}
\gamma = \left(a\cos \frac{ s}{\sqrt{a^2+b^2\lambda^2}},a\sin \frac{s}{\sqrt{a^2+b^2\lambda^2 }},b \cos \frac{\lambda s}{\sqrt{a^2+b^2\lambda^2}}, b \sin \frac{\lambda s}{\sqrt{a^2+b^2\lambda^2}}\right),$$ with $$k_1^2=\frac{a^2b^2(\lambda^2-1)^2 }{ (a^2+b^2\lambda^2)^2},~\hbox{
and }\
k_2^2=\frac{ \lambda^2}{ (a^2+b^2\lambda^2)^2} \hbox { when } k_1\neq0.$$
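The curvature formulas quoted above can be confirmed by a finite-difference computation of the Frenet curvatures of the explicit curve (\[eq-homo-c-W\]); a small sketch follows (assuming NumPy; the sample values of $a$ and $\lambda$ are for illustration only):

```python
# Finite-difference check of k_1 and k_2 for the homogeneous curve (eq-homo-c-W):
# k_1 = |gamma'' + gamma| and k_2 = |beta_1' + k_1 * gamma'| by the Frenet
# equations in S^3; the results agree with the closed forms to several digits.
import numpy as np

a, lam = 0.6, 2.3
b = np.sqrt(1 - a**2)
r = np.sqrt(a**2 + b**2*lam**2)

def gamma(s):
    return np.array([a*np.cos(s/r), a*np.sin(s/r),
                     b*np.cos(lam*s/r), b*np.sin(lam*s/r)])

h, s0 = 1e-4, 0.37
d1 = (gamma(s0+h) - gamma(s0-h)) / (2*h)                   # gamma'
d2 = (gamma(s0+h) - 2*gamma(s0) + gamma(s0-h)) / h**2      # gamma''
k1 = np.linalg.norm(d2 + gamma(s0))                        # curvature in S^3

def beta1(s):
    d2 = (gamma(s+h) - 2*gamma(s) + gamma(s-h)) / h**2
    v = d2 + gamma(s)
    return v / np.linalg.norm(v)

b1p = (beta1(s0+h) - beta1(s0-h)) / (2*h)                  # beta_1'
k2 = np.linalg.norm(b1p + k1*d1)

print("k1^2:", k1**2, " closed form:", a**2*b**2*(lam**2-1)**2 / r**4)
print("k2^2:", k2**2, " closed form:", lam**2 / r**4)
```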
This corollary indicates that not all homogeneous tori of tensor product are constrained Willmore surfaces.\
Concerning the stability of constrained Willmore surfaces of tensor product, noticing that in Example \[example-unstable\] the conformal structure of the tori is in fact invariant under the variation, we have the following
\[th-main-2\] Let $y=\gamma\otimes\hat\gamma$ be a constrained Willmore torus of tensor product. Then it is unstable.
Some related results on the stability of constrained Willmore surface can be found in [@KL; @NS].\
Finally, let us go back to Willmore surfaces. Setting $q_1=0$ (i.e. $a_0=1$ in (\[eq-elastic2\])), we re-obtain the theorem due to Li and Vrancken in [@Li-V]
[@Li-V] The tensor product surface $y=\gamma\otimes\hat\gamma$ is a Willmore surface if and only if one of $\gamma$ and $\hat\gamma$ is the great circle $S^1$, say, for example, $
\hat\gamma=S^1$, and the other one, say, $\gamma$, is a curve in some $S^3$ satisfying the following equations $$\label{eq-elastic}\left\{
\begin{split}
&k_1''-k_1k_2^2-k_1+\frac{k_1^3}{2}=0,\\
&2k_1'k_2+k_1k_2'=0.\\
\end{split}\right.$$
Note that the above equations are exactly the equations of free elastic curves in the hyperbolic space $H^{3}(-1)$ [@Langer-Singer1984; @Langer-Singer-BLMS; @Li-V].\
Instability of the Ejiri torus
==============================
Let $$y_{\theta}(s,\hat{s})=\left(a_1\cos(s+\hat{s}), a_1\sin(s+\hat{s}), a_2\cos(s-\hat{s}), a_2\sin(s-\hat{s}),a_3\cos\hat{s}, a_3\sin\hat{s}\right)$$ with $$a_1=\sqrt{\frac{1}{3}}\cos\theta,\ a_2=\sqrt{\frac{1}{3}}\sin\theta,\ \hbox{ and } a_3=\sqrt{\frac{2}{3}}.$$ Then $y_{\theta}$, with $\theta\in[0,\frac{\pi}{2}]$, provides one family of tori with $(s,\hat{s})\in[0,2\pi]\times[0,2\pi]$ in $S^5$. In particular, $y_{\theta}|_{\theta=\frac{\pi}{4}}$ gives the Ejiri torus (up to an isometry of $S^5$). We will show that the Ejiri torus is unstable by computation of the Willmore functional of $y_{\theta}$.
First we use the coordinate changing $$\phi=\frac{1}{\sqrt{3}}(s+\hat{b}_1\hat{s}), \ \ \varphi=\frac{1}{b_3 }\hat{s},$$ such that $$s+\hat{s}=\sqrt{3}\phi+b_1\varphi,\ s-\hat{s}=\sqrt{3}\phi-b_2\varphi. $$ Here $$\ b_1= 2b_3\sin^2\theta , \ b_2= 2b_3\cos^2\theta, \ b_3=\sqrt{\frac{3}{2+\sin^2 2\theta}},\ \hbox { and } \hat{b}_1=1-\frac{b_1}{b_3}.$$ We compute now $$y_{\theta, \phi}=\sqrt{3}\left(-a_1\sin(s+\hat{s}), a_1\cos(s+\hat{s}), -a_2\sin(s-\hat{s}), a_2\cos(s-\hat{s}),0, 0\right),$$ $$y_{\theta, \varphi}= \left(-a_1b_1\sin(s+\hat{s}), a_1b_1\cos(s+\hat{s}), a_2b_2\sin(s-\hat{s}), -a_2b_2\cos(s-\hat{s}),-a_3b_3\sin\hat{s}, a_3b_3\cos\hat{s}\right).$$ It is direct to see that $z=\phi+i\varphi$ is a complex coordinate of $y_{\theta}$ since $$|y_{\theta, \phi}|^2=1,\ |y_{\theta, \varphi}|^2=a_1^2b_1^2+a_2^2b_2^2+a_3^3b_3^2=b_3^2\left(\frac{4\cos^2\theta\sin^2\theta}{3}+\frac{2}{3}\right)=1,\
\langle y_{\theta, \phi},y_{\theta, \varphi}\rangle=a_1^2b_1-a_2^2b_2=0.$$ We also have $$y_{\theta, \phi\phi}=-3\left(a_1\cos(s+\hat{s}), a_1\sin(s+\hat{s}), a_2\cos(s-\hat{s}), a_2\sin(s-\hat{s}),0, 0\right),$$ $$y_{\theta, \varphi\varphi}=-\left(a_1b_1^2\cos(s+\hat{s}), a_1b_1^2\sin(s+\hat{s}), a_2b_2^2\cos(s-\hat{s}), a_2b_2^2\sin(s-\hat{s}),a_3b_3^2\cos\hat{s}, a_3b_3^2\sin\hat{s}\right).$$ So $$|y_{\theta, \phi\phi}|^2=3,\ |y_{\theta, \varphi\varphi}|^2=a_1^2b_1^4+a_2^2b_2^4+a_3^2b_3^4=\frac{b_3^4}{3}(16(\cos^2\theta\sin^8\theta+\cos^8\theta\sin^2\theta)+2),$$ and $$\langle y_{\theta, \phi\phi},y_{\theta, \varphi \varphi}\rangle=3a_1^2b_1^2+3a_2^2b_2^2=b_3^2\sin^2 2\theta.$$ Therefore we obtain $$\begin{split}|\vec{H}|^2+1&=\frac{1}{4}\left(|y_{\theta, \phi\phi}|^2+ |y_{\theta, \varphi\varphi}|^2+2\langle y_{\theta, \phi\phi},y_{\theta, \varphi \varphi}\rangle\right)\\
&=\frac{1}{4}\left(
3+\frac{b_3^4}{3}(16(\cos^2\theta\sin^8\theta+\cos^8\theta\sin^2\theta)+2)+2b_3^2\sin^22\theta\right)\end{split}$$ We have now $$\begin{split}W(y_{\theta})&=\int_M(|\vec{H}|^2+1)dM\\
&=\int_0^{2\pi}\int_0^{2\pi}(|\vec{H}|^2+1)\frac{1}{\sqrt{3}b_3}ds d\hat{s}\\
&=\frac{\pi^2\sqrt{2+\sin^22\theta}}{3}\left(
3+\frac{b_3^4}{3}(2+16(\cos^2\theta\sin^8\theta+\cos^8\theta\sin^2\theta))+2b_3^2\sin^2 2\theta\right)\\
&= \pi^2\sqrt{2+\sin^22\theta} \left(1+\frac{1}{(2+\sin^22\theta)^2}\left(2+4\sin^22\theta-3\sin^42\theta\right)+\frac{2\sin^22\theta}{2+\sin^22\theta}\right)
\end{split}$$ Let $\rho=\sqrt{2+\sin^22\theta}$. So $\sin^22\theta=\rho^2-2$ and $$\begin{split}W(y_{\theta})&= \pi^2\rho \left(1+\frac{1}{\rho^4}\left(2+4(\rho^2-2)-3(\rho^2-2)^2\right)+\frac{2(\rho^2-2)}{\rho^2}\right)\\
&= \pi^2\frac{1}{\rho^3}\left(12\rho^2-18\right)\\
&= 6\pi^2\left(\frac{2}{\rho}-\frac{3}{\rho^3}\right).\\
\end{split}$$ Since $ \frac{2}{\rho}-\frac{3}{\rho^3}$ is an increasing function for $\rho\in[\sqrt{2},\sqrt{3}]$, the Ejiri torus ($\theta=\frac{\pi}{4}$, $\rho=\sqrt{3}$) attains the maximal Willmore functional within this family and is therefore not a local minimizer. Hence we obtain
The Ejiri torus is unstable in $S^5$.
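The computation above can be double-checked numerically: evaluating $W(y_\theta)$ from the expression for $|\vec{H}|^2+1$ and comparing with $6\pi^2\left(\frac{2}{\rho}-\frac{3}{\rho^3}\right)$ confirms that the maximum over $\theta\in[0,\frac{\pi}{2}]$ is attained at $\theta=\frac{\pi}{4}$ with value $2\pi^2\sqrt{3}$ (a minimal sketch assuming NumPy):

```python
# Numerical check of W(y_theta) = 6*pi^2*(2/rho - 3/rho^3), rho^2 = 2 + sin^2(2 theta),
# using the intermediate expression for |H|^2 + 1 and the area element 1/(sqrt(3) b_3).
import numpy as np

def W(theta):
    u = np.sin(2*theta)**2
    b3sq = 3.0/(2.0 + u)
    Hsq1 = 0.25*(3.0 + (b3sq**2/3.0)*(2.0 + 4*u - 3*u**2) + 2*b3sq*u)   # |H|^2 + 1
    return 4*np.pi**2 * Hsq1 * np.sqrt(2.0 + u)/3.0

theta = np.linspace(0, np.pi/2, 1001)
rho = np.sqrt(2 + np.sin(2*theta)**2)
closed = 6*np.pi**2*(2/rho - 3/rho**3)
print("max |W - closed form| :", np.max(np.abs(W(theta) - closed)))   # machine precision
print("argmax theta          :", theta[np.argmax(W(theta))])          # ~ pi/4
print("W(pi/4) vs 2*pi^2*sqrt(3):", W(np.pi/4), 2*np.pi**2*np.sqrt(3.0))
```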
\
[**Acknowledgements**]{}: The author is thankful to Professor Xiang Ma for his many valuable discussions and suggestions. The author is supported by the Project 11201340 of NSFC and the Fundamental Research Funds for the Central Universities.
[10]{}
Bohle, C., Peters, G., Pinkall, U. [*Constrained Willmore surfaces,*]{} Calc. Var. Partial Differential Equations 32 (2008), no. 2, 263-277.
Bryant, R. [*A duality theorem for Willmore surfaces,*]{} J. Diff. Geom. 20 (1984), 23-53.
Burstall, F., Pedit, F., Pinkall, U. [*Schwarzian derivatives and flows of surfaces,*]{} Contemporary Mathematics 308, 39-61, Providence, RI: Amer. Math. Soc., 2002.
Chen, B. Y., [*Differential geometry of tensor product immersions,*]{} Ann. Global Anal. Geom. 11 (1993), 345-359.
Chen, B. Y., [*On the total curvature of immersed manifolds. V. C-surfaces in Euclidean m-space,*]{} Bull. Inst. Math. Acad. Sinica 9 (1981), 509-516.
Chen, B. Y., [*Total mean curvature and submanifolds of finite type,*]{} World Scientific, 1984. xi+352 pp.
Costa, C. J. [*Complete minimal surfaces in ${\mathbb{R}}^3$ of genus one and four planar embedded ends,*]{} Proc. Amer. Math. Soc. 119 (1993), no. 4, 1279-1287.
Ejiri, N. [*A counterexample for Weiner’s open question,*]{} Indiana Univ. Math. J. 31 (1982), no. 2, 209-211.
Ejiri, N. [*Willmore surfaces with a duality in $S^{n}(1)$,*]{} Proc. London Math. Soc. (3), 57(2)(1988), 383-416.
Gouberman A, Leschke K. [*New examples of Willmore tori in $S^4$,*]{} Journal of Physics A: Mathematical and Theoretical, 2009, 42(40): 404010.
Guo, Z., Li, H., Wang, C. P. [*The second variation formula for Willmore submanifolds in $S^n$,*]{} Results in Math. 40 (2001), 205-225.
Heller, L. [*Constrained Willmore tori and elastic curves in 2-dimensional space forms,*]{} Comm. Anal. Geom. 22 (2014), no. 2, 343-369.
Hou, Z. [*The total mean curvature of submanifolds in a Euclidean space*]{}. Michigan Math. J. 45 (1998), no. 3, 497-505.
Kuwert, E., Lorenz, J. [*On the stability of the CMC Clifford tori as constrained Willmore surfaces,*]{} Ann. Global Anal. Geom. 44 (2013), no. 1, 23-42.
Langer, J., Singer, D. [*The total squared curvature of closed curves,*]{} J. Diff. Geom. 20 (1984), 1-22.
Langer, J., Singer, D. [*Curves in the hyperbolic plane and the mean curvatures of tori in 3-space,*]{} Bull. London Math. Soc. 16 (1984), 531-534.
Langer, J., Singer, D.: [*Curve-straightening in Riemannian manifolds*]{}, Ann. Global Anal. Geom. 5 (1987), no. 2, 133-150.
Li, H. Z., Vrancken, L. [*New examples of Willmore surfaces in $S\sp
n$.*]{} Ann. Global Anal. Geom. 23 (2003), no. 3, 205-225.
Li, P., Yau, S. T. [*A new conformal invariant and its applications to the Willmore conjecture and the first eigenvalue of compact surfaces.*]{} Invent. Math. 69 (1982), no. 2, 269-291.
Marques, F., Neves, A. [*Min-Max theory and the Willmore conjecture,*]{} Ann. Math. 179 (2014), no. 2, 683-782.
Marques, F., Neves, A. [*The Willmore conjecture,*]{} Jahresber. Dtsch. Math.-Ver. 116 (2014), no. 4, 201-222. arXiv:1409.7664.
Montiel, S. [*Willmore two-spheres in the four-sphere,*]{} Trans. Amer. Math. Soc. 352 (2000), 4469-4486.
Ndiaye, C., Schätzle, R. [*Explicit conformally constrained Willmore minimizers in arbitrary codimension,*]{} Calc. Var. Partial Differential Equations 51 (2014), no. 1-2, 291-314.
Palmer, B. [*The conformal Gauss map and the stability of Willmore surfaces,*]{} Ann. Global Anal. Geom. 9 (1991), no. 3, 305-317.
Pinkall, U. [*Hopf tori in $S^3$,*]{} Invent. Math. 81 (1985), no. 2, 379-386.
Weiner, J. [*On a problem of Chen, Willmore, et al.,*]{} Indiana Univ. Math. J. 27 (1978), no. 1, 19-35.
Willmore, T. J. [*Note on embedded surfaces,*]{} An. St. Univ. Iasi, s.I.a. Mat. 12B (1965), 493-496.
\
\
Peng Wang,\
Department of Mathematics,\
Tongji University,\
Siping Road 1239, Shanghai, 200092,\
P. R. China.\
[*E-mail address*]{}: [[email protected]]{}
|
---
abstract: 'We calculate the leading order in $\alpha_s$ QCD amplitude for exclusive neutrino and antineutrino production of a $D$ pseudoscalar charmed meson on an unpolarized nucleon. We work in the framework of the collinear QCD approach where generalized parton distributions (GPDs) factorize from perturbatively calculable coefficient functions. We include both $O(m_c)$ terms in the coefficient functions and $O(M_D)$ mass term contributions in the heavy meson distribution amplitudes. We emphasize the sensitivity of specific observables on the transversity quark GPDs.'
author:
- 'B. Pire'
- 'L. Szymanowski'
- 'J. Wagner'
title: 'Exclusive neutrino-production of a charmed meson '
---
Introduction.
=============
The now well established framework of collinear QCD factorization [@fact1; @fact2; @fact3] for exclusive reactions mediated by a highly virtual photon in the generalized Bjorken regime describes hadronic amplitudes using generalized parton distributions (GPDs) which give access to a 3-dimensional analysis [@3d] of the internal structure of hadrons. Neutrino production is another way to access (generalized) parton distributions [@weakGPD; @PS]. Although neutrino induced cross sections are orders of magnitudes smaller than those for electroproduction and neutrino beams are much more difficult to handle than charged lepton beams, they have been very important to scrutinize the flavor content of the nucleon and the advent of new generations of neutrino experiments will open new possibilities. Using them would improve in a significant way future extraction of GPDs from the data [@extraction]. Moreover, charged current neutrino production is mediated by a massive vector boson exchange which is always highly virtual; one is thus tempted to apply a factorized description of the process amplitude down to small values of the momentum transfer $Q^2=-q^2$ carried by the $W^\pm$ boson.
Heavy quark production allows one to extend the range of validity of collinear factorization, the heavy quark mass playing the role of the hard scale. Indeed, the kinematics (detailed below) shows that the relevant scale is $O(Q^2+m_c^2)$. Some data [@data] exist for charm production in medium and high energy neutrino and antineutrino experiments, and some specific channels ($D^0, D^\pm, D^*$) have been identified.
We shall thus write the scattering amplitude $W ~N \to D ~N'$ in the collinear QCD framework as a convolution of leading twist quark and gluon GPDs with a coefficient function calculated in the collinear kinematics taking heavy quark mass effects into account. This will allow a non-vanishing transverse amplitude $W_T ~N \to D ~N'$ with a leading contribution of order $\frac{m_c}{Q^2+m_c^2}$ [@PS], built from the convolution of chiral odd leading twist quark GPDs with a coefficient function of order $m_c$. In order to be consistent, we shall complement these leading terms with the order $\frac{M_D}{Q^2+M_D^2}$ contributions related to mass term in the distribution amplitudes of heavy mesons (see Eq. (\[DA\])).
In this paper we consider the exclusive production of a pseudoscalar $D-$meson through the reactions on a proton (p) or a neutron (n) target: $$\begin{aligned}
\nu_l (k)p(p_1) &\to& l^- (k')D^+ (p_D)p'(p_2) \,,\\
\nu_l (k)n(p_1) &\to& l^- (k')D^+ (p_D)n'(p_2) \,,\\
\nu_l (k)n(p_1) &\to& l^- (k')D^0 (p_D)p'(p_2) \,,\\
\bar\nu_l (k) p(p_1) &\to& l^+ (k') D^-(p_D) p' (p_2)\,,\\
\bar\nu_l (k) p(p_1) &\to& l^+ (k') \bar D^0(p_D) n' (p_2)\,,\\
\bar\nu_l (k) n(p_1) &\to& l^+ (k') D^-(p_D) n' (p_2)\,,\end{aligned}$$ in the kinematical domain where collinear factorization leads to a description of the scattering amplitude in terms of nucleon GPDs and the $D-$meson distribution amplitude, with the hard subprocesses: $$\begin{aligned}
W^+ d \to D^+ d~~~&,&~~~W^+ d \to D^0 u~~~~,~~~~ W^- \bar d \to D^- \bar d~~~~,~~~~ W^- \bar d \to \bar D^0 \bar u\,,\end{aligned}$$ described by the handbag Feynman diagrams of Fig.1 convoluted with chiral-even or chiral-odd quark GPDs, and the hard subprocesses: $$\begin{aligned}
W^+ g \to D^+ g~~~~&,&~~~~ W^- g \to D^- g\,,\end{aligned}$$ convoluted with gluon GPDs (see Fig.2).
![Feynman diagrams for the factorized amplitude for the $ \nu_l N \to l^- D^+ N'$ or the $ \nu_l N \to l^- D^0 N'$ process involving the quark GPDs; the thick line represents the heavy quark. In the Feynman gauge, diagram (a) involves convolution with both the transversity GPDs and the chiral even ones, whereas diagram (b) involves only chiral even GPDs.[]{data-label="Fig1"}](Fig1.pdf){width="80.00000%"}
![Feynman diagrams for the factorized amplitude for the $W^+ N \to D^+ N'$ process involving the gluon GPDs; the thick line represents the heavy quark.[]{data-label="Fig2"}](Dgluon.pdf){width="80.00000%"}
Our kinematical notations are as follows ($m$ and $M_D$ are the nucleon and $D-$meson masses): $$\begin{aligned}
&&q=k-k'~~~~~; ~~~~~Q^2 = -q^2~~~~~; ~~~~~\Delta = p_2-p_1 ~~~~~; ~~~~~\Delta^2=t \,;\nonumber\\
&&q^\mu= -2\xi' p^\mu +\frac{Q^2}{4\xi'} n^\mu ~;~\epsilon_L^\mu(q)= \frac{1}{Q} [2\xi' p^\mu +\frac{Q^2}{4\xi'} n^\mu] ~;~p_D^\mu= 2(\xi-\xi') p^\mu +\frac{M_D^2-\Delta_T^2}{4(\xi-\xi')} n^\mu -\Delta_T^\mu \,; \\
&&p_1^\mu=(1+\xi)p^\mu +\frac{1}{2} \frac{m^2-\Delta_T^2/4}{1+\xi} n^\mu -\frac{\Delta_T^\mu}{2}~~~~;~~~~ p_2^\mu=(1-\xi)p^\mu +\frac{1}{2} \frac{m^2-\Delta_T^2/4}{1-\xi} n^\mu +\frac{\Delta_T^\mu}{2}
\,,\nonumber\end{aligned}$$ with $p^2 = n^2 = 0$ and $p.n=1$. As in the double deeply virtual Compton scattering case [@DDVCS], it is meaningful to introduce two distinct momentum fractions: $$\begin{aligned}
\xi = - \frac{(p_2-p_1).n}{2} ~~~~~,~~~~~\xi' = - \frac{q.n}{2} \,.\end{aligned}$$ Momentum conservation leads to the relation: $$\frac{Q^2}{\xi'}-\frac{\xi(4m^2-\Delta_T^2)}{1-\xi^2} = \frac{M_D^2-\Delta_T^2}{\xi-\xi'} \,.$$ Neglecting the nucleon mass and $\Delta_T$, the approximate values of $\xi$ and $\xi'$ are $$\begin{aligned}
\xi \approx \frac{Q^2+M_D^2}{4p_1.q-Q^2-M_D^2} ~~~~,~~~ \xi' \approx \frac{Q^2}{4p_1.q-Q^2-M_D^2} \,.\end{aligned}$$ To unify the description of the scaling amplitude, we thus define a modified Bjorken variable $$\begin{aligned}
x_B^D \equiv \frac {Q^2+M_D^2}{2p_1.q} \neq x_B \equiv \frac {Q^2}{2p_1.q}\,,\end{aligned}$$ which allows to express $\xi$ and $\xi'$ in a compact form: $$\begin{aligned}
\xi \approx \frac{x_B^D }{2-x_B^D } ~~~~~,~~~~~\xi' \approx \frac{x_B }{2-x_B^D } \,.\end{aligned}$$ The difference $\xi'-\xi = - \frac{p_D .n}{2}$ vanishes (in the strictly collinear case) when one neglects the meson mass. In this case, one gets the usual relations $$\begin{aligned}
Q >>M_D ~~;~~\xi' \approx \xi ~~~~;~~~~ \xi \approx \frac{x_B}{2-x_B} \,.\end{aligned}$$ On the other hand, if the meson mass is the relevant large scale (for instance in the limiting case where $Q^2$ vanishes as in the timelike Compton scattering kinematics [@TCS]) : $$\begin{aligned}
Q^2\to0 ~~;~~\xi' \to 0 ~;~ \xi \approx \frac{\tau}{2-\tau} ~~;~~\tau = \frac{M_D^2}{s_{WN}-m^2}\,.\end{aligned}$$
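As an illustration of these kinematical relations, the following short sketch (plain Python; the value of $M_D$ is indicative) evaluates $x_B$, $x_B^D$, $\xi$ and $\xi'$ for a few values of $Q^2$ at fixed $p_1\cdot q$, showing in particular that $\xi'\to 0$ while $\xi$ stays finite as $Q^2\to 0$:

```python
# Evaluate the skewness variables from x_B and x_B^D, neglecting m and Delta_T.
M_D = 1.87   # GeV, D-meson mass (indicative value)
for Q2, p1q in [(2.0, 10.0), (0.5, 10.0), (1e-6, 10.0)]:   # last line mimics Q^2 -> 0
    xB  = Q2/(2*p1q)
    xBD = (Q2 + M_D**2)/(2*p1q)
    xi, xip = xBD/(2 - xBD), xB/(2 - xBD)
    print(f"Q2={Q2:8.2g}  xB={xB:.5f}  xB_D={xBD:.4f}  xi={xi:.4f}  xi'={xip:.6f}")
# As Q^2 -> 0, xi' -> 0 while xi remains finite, as discussed above.
```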
Distribution amplitudes and GPDs
================================
In the collinear factorization framework, the hadronization of the quark-antiquark pair is described by a distribution amplitude (DA) which obeys a twist expansion and evolution equations. Much work has been devoted to this subject [@heavyDA]. The charmed meson distribution amplitudes are less known than the light meson ones. The limiting behavior of heavy mesons, which gives some constraints on the $B$ meson wave functions, is not very relevant for the charmed case. Here, we shall follow Ref. [@heavyDA2] and include some mass terms which will lead to order $\frac{M_D}{Q^2+M_D^2}$ contributions to the amplitudes; omitting the path-ordered gauge link, the relevant distribution amplitude reads for the pseudo scalar $D^+$ meson : $$\begin{aligned}
\langle D^+(P_D) | \bar c_\beta(y) d_\gamma(-y) |0 \rangle & =&
i \frac{f_D}{4} \int_0^1 dz e^{i(z-\bar z)P_D.y} [(\hat P_D-M_D)\gamma^5]_{\gamma \beta} \phi_D(z)\,,
\label{DA}
\end{aligned}$$ with $z=\frac{P_D^+-k^+}{P_D^+}$ and where $\int_0^1 dz ~ \phi_D(z) = 1$, $f_D= 0.223$ GeV. As usual, we denote $\bar z=1-z$ and $\hat p = p_\mu \gamma^\mu$ for any vector $p$. It has been argued that a heavy-light meson DA is strongly peaked around $z_0 = \frac {m_c}{M_D}$. We will parametrize $\phi_D(z) $ as in Ref. [@heavyDA2], [*i.e.*]{} $\phi_D(z) =6z(1-z)(1+C_D(2z-1))$ with $C_D\approx1.5$, which has a maximum around $z=0.7$.
We define the gluon and quark generalized parton distributions of a parton $q$ (here $q = u,\ d$) in the nucleon target with the conventions of [@MD]. To get the quantitative predictions for the neutrino-production observables, we use for the chirally even GPDs the Goloskokov-Kroll (G-K) model, based on the fits to deeply virtual meson production. Details of the model can be found in [@CEGPD].
With respect to chiral odd twist 2 transversity GPDs, let us remind the reader that they correspond to the tensorial Dirac structure $\bar \psi^q\, \sigma^{\mu\nu} \,\psi^q $. The leading GPD $H_T(x,\xi,t)$ is equal to the transversity PDF in the ($\xi=0~;~ t=0$) limit. The experimental access to these GPDs [@transGPDdef] has been much discussed [@transGPDno; @transGPDacc] but much remains to be done. Models have been proposed [@models] and some lattice calculation results exist [@lattice] for $H_T(x,\xi,t)$ and for the combination $\bar E_T(x, \xi, t) = 2\tilde H_T(x, \xi, t) +E_T(x, \xi, t)$. Since $\tilde E_T(x, \xi, t)$ is odd under $\xi \to -\xi$, most models find it vanishingly small. We will put it to zero in our numerical estimates of the observables. Since we are lacking decisive arguments about the relative sizes of $ \tilde H_T(x, \xi, t)$ and $E_T(x, \xi, t)$, we shall propose three quite extreme yet plausible models based on G-K parametrizations [@Goloskokov:2011rd] for $H_T(x, \xi, t) $ and $ \bar E_T(x, \xi, t)$ :
- [model 1 : $\tilde H_T(x, \xi, t) = 0; E_T(x, \xi, t) = \bar E_T(x, \xi, t) $.]{}
- [model 2 : $\tilde H_T(x, \xi, t) = H_T(x, \xi, t) ; E_T(x, \xi, t) = \bar E_T(x, \xi, t) -2 H_T(x, \xi, t) $.]{}
- [model 3 : $\tilde H_T(x, \xi, t) = - H_T(x, \xi, t) ; E_T(x, \xi, t) = \bar E_T(x, \xi, t) + 2 H_T(x, \xi, t) $.]{}
![Our model for the $x-$dependence of generalized parton distributions : $u-$quark (dashed lines), $d-$quark (dotted lines) and gluon (solid lines), for $\xi=0.2$ and $t= -0.15$ GeV$^2$. []{data-label="GPD"}](GPDs_text.pdf){width="15cm"}
We show on Fig. \[GPD\] the $x-$dependence of the GPDs that we use here for $\xi=0.2$ and $t= -0.15$ GeV$^2$, which are characteristic values for our process in the present experimentally accessible domain.
The transverse amplitude.
=========================
It is straightforward to show that the transverse amplitude vanishes at the leading twist level in the zero quark mass limit. For chiral-even GPDs, this comes from the collinear kinematics appropriate to the calculation of the leading twist coefficient function; for chiral-odd GPDs, this comes from the odd number of $\gamma$ matrices in the Dirac trace. This vanishing is related to the known results for the light meson electroproduction amplitudes [@transGPDno].
To estimate the transverse amplitude, one thus needs to evaluate quark mass effects in the coefficient function and add the part of the heavy meson DA which is proportional to the meson mass.
With respect to the quark mass effects, it has been demonstrated [@Collins:1998rz] that hard-scattering factorization of meson leptoproduction [@fact3] is valid at leading twist with the inclusion of heavy quark masses in the hard amplitude. This proof is applicable independently of the relative sizes of the heavy quark masses and $Q$, and the size of the errors is a power of $\Lambda/ \sqrt{Q^2+M_D^2}$ when $\sqrt{Q^2+M_D^2}$ is the large scale. In our case, this means including the part $\frac{m_c}{k_c^2-m_c^2}$ in the off-shell heavy quark propagator (see the Feynman graph on Fig. 1a) present in the leading twist coefficient function. We of course keep the term $m_c^2$ in the denominator. Adding this part leads straightforwardly to a non-zero transverse amplitude when a chiral-odd transversity GPD is involved. Including the mass term in the heavy meson DA may be a cause for concern, since it is not the only twist 3 component and collinear factorization has not been proven for our process beyond the leading twist 2. We shall nonetheless make the assumption that our evaluation is legitimate. It turns out that this contribution, which we will trace by its proportionality to the meson mass $M_D$, is twice as large as the contribution coming from the inclusion of the quark mass in the coefficient function.
In the Feynman gauge, the non-vanishing $m_c$ or $M_D$ dependent part of the Dirac trace in the hard scattering part depicted in Fig. 1a reads:$$\begin{aligned}
Tr[\sigma^{pi}\gamma^\nu (\hat p_D-M_D) \gamma^5 \gamma^{\nu'} \frac{\hat k_c+m_c}{k_c^2-m_c^2+ i \epsilon}(1+ \gamma^5)\hat \epsilon \frac{-g_{\nu \nu'}}{k_g^2+ i \epsilon} ]
= \frac{2 Q^2}{\xi'} \epsilon_\mu[ \epsilon^{\mu p i n} + i g^{\mu i}_\perp] \frac{m_c-2M_D}{k_c^2-m_c^2+ i \epsilon} \frac{1}{k_g^2 + i \epsilon} \, ,
\end{aligned}$$ where $k_c$ ($k_g$) is the heavy quark (gluon) momentum and $\epsilon$ the polarization vector of the $W-$boson. The fermionic trace vanishes for the diagram shown on Fig. 1b thanks to the identity $\gamma^\rho \sigma^{\alpha \beta}\gamma_\rho = 0$. The denominators of the propagators read: $$\begin{aligned}
&&k_c^2 - m_c^2 +i\epsilon= \frac{Q^2 }{2 \xi'} (x+\xi-2\xi') - m_c^2 +i\epsilon = \frac{Q^2+M_D^2 }{2 \xi} (x+\xi) - Q^2 - m_c^2 +i\epsilon \,,\\
&& k_g^2 +i\epsilon= \bar z [\bar z M_D^2 + \frac{Q^2 + M_D^2}{2 \xi} (x-\xi) +i\epsilon]\, . \nonumber
\end{aligned}$$ The transverse amplitude is then written as ($\tau = 1-i2$): $$\begin{aligned}
T_{T} & = &\frac{- i 2C_q \xi (2M_D-m_c)}{\sqrt 2 (Q^2+M_D^2)} \bar{N}(p_{2}) \left[ {\mathcal{H}}_{T} i\sigma^{n\tau} +\tilde {\mathcal{H}}_{T}\frac{\Delta^{\tau}}{m_N^2} \right. \nonumber \\
&& + {\mathcal E}_{T} \frac{\hat n \Delta ^{\tau}+2\xi \gamma ^{\tau}}{2m_N} - \tilde {\mathcal E}_{T}\frac{\gamma ^{\tau}}{m_N}] N(p_{1}), \end{aligned}$$ with $C_q= \frac{2\pi}{3}C_F \alpha_s V_{dc}$, in terms of transverse form factors that we define as : $$\begin{aligned}
{\cal F }_T=f_{D}\int \frac{\phi_D(z)dz}{\bar z}\hspace{-.1cm}\int \frac{F^d_T(x,\xi,t) dx }{(x-\xi+\beta \xi+i\epsilon) (x-\xi +\alpha \bar z+i\epsilon)},
\label{TFF}
\end{aligned}$$ where $F^d_T$ is any d-quark transversity GPD, $\alpha = \frac {2 \xi M_D^2}{Q^2+M_D^2}$, $\beta = \frac {2 (M_D^2-m_c^2)}{Q^2+M_D^2}$.
![The $Q^2$ dependence of the transverse contribution to the cross section $\frac{d\sigma(\nu N \to l^- N D^+)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$ for a proton (dashed curve and lower band) and neutron (solid curve and upper band) target.[]{data-label="sigma–_dy_D+_on_proton_and_neutron"}](sigma_--_dy_D+_on_proton_and_neutron.pdf){width="80.00000%"}
![The $y$ dependence of the transverse contribution to the cross section $\frac{d\sigma(\nu N \to l^- N D^+)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $Q^2 = 1$ GeV$^2$, $\Delta_T = 0$ and $s=20$ GeV$^2$ for a proton (dashed curve) and neutron (solid curve) target.[]{data-label="sigma–_dy_D+_on_proton_and_neutron_Q2_1_y"}](sigma_--_dy_pnu_to_plDpl_Q_1_t0_y.pdf){width="80.00000%"}
![The $Q^2$ dependence of the quark (dashed curve) contribution compared to the total (quark and gluon, solid curve) longitudinal cross section $\frac{d\sigma(\nu N \to l^- N D^+)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $D^+$ production on a proton (left panel) and neutron (right panel) target for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$.[]{data-label="sigma00_dy_D+_on_proton_neutron_quarks_gluons"}](sigma_00_dy_D+_on_proton_neutron_quarks_gluons.pdf){width="95.00000%"}
![The $y$ dependence of the longitudinal contribution to the cross section $\frac{d\sigma(\nu N \to l^- N D^+)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $Q^2 = 1$ GeV$^2$, $\Delta_T = 0$ and $s=20$ GeV$^2$ for a proton (left panel) and neutron (right panel) target : total (quark and gluon, solid curve) and quark only (dashed curve) contributions.[]{data-label="sigma00_dy_D+_on_proton_neutron_quarks_gluons_y"}](sigma_00_dy_D+_on_proton_neutron_quarks_gluons_y.pdf){width="95.00000%"}
The longitudinal amplitude.
===========================
When there is a change in the baryonic flavor, as in the reaction $\nu_l (k)n(p_1) \to l^- (k')D^0 (p_D)p'(p_2) $, the amplitude does not depend on gluon GPDs. In the other cases, namely $\nu_l (k)p(p_1) \to l^- (k')D^+ (p_D)p'(p_2) $ and $\nu_l (k)n(p_1) \to l^- (k')D^+ (p_D)n'(p_2) $, there is a gluonic contribution coming from the diagrams of Fig. 2.
The quark contribution.
-----------------------
The flavor sensitive electroweak vertex selects the $d \to c$ transition in the case of neutrino production, and the $\bar d \to \bar c$ transition in the case of antineutrino production. The fact that isospin relates the $d-$quark content of the neutron to the $u-$quark content of the proton, gives thus access to the $u-$quark GPDs in the proton when one scatters on a neutron. Moreover, reactions such as $\nu n \to l^- D^0 p$ give access to the neutron $\to$ proton GPDs ($H^{du}, E^{du}$...) which are related to the differences of GPDs in the proton through $H^{du}(x,\xi,t)=H^d(x,\xi,t)-H^u(x,\xi,t)$. The two Feynman diagrams of Fig. 1 contribute to the coefficient function. The contribution of diagram (a) to the hard amplitude involves one heavy quark propagator and thus has a part proportional to $m_c M_D$. The contribution of diagram (b) to the hard amplitude does not involve heavy quark propagator and the mass term in the meson DA does not contribute. The chiral-odd GPDs do not contribute to the longitudinal amplitude since the coefficient function does not depend on any transverse vector.
The vector and axial hard amplitudes (without the coupling constants) read: $$\begin{aligned}
{\cal M}_H^V &=& \left\{ \frac{Tr_a}{D^q_1 D^q_2} + \frac{Tr_b}{D^q_1 D^q_3} \right\} \\
{\cal M}_H^5 &=& \left\{ \frac{Tr_a^5}{D^q_1 D^q_2} + \frac{Tr_b^5}{D^q_1 D^q_3} \right\} \,,\end{aligned}$$ where the propagators are : $$\begin{aligned}
D^q_1 &=& [ (x-\xi)p+ \bar z p_D ]^2+i\epsilon = \bar z ^2 M_D^2+\bar z (x-\xi) \frac{Q^2+M_D^2}{2\xi }+i\epsilon\,,\nonumber\\
D^q_2 &=& [ (x+\xi)p+q ]^2-m_c^2+i\epsilon = \frac{Q^2+M_D^2}{2\xi} (x-\xi+\beta \xi+i\epsilon)\,,\nonumber\\
D^q_3 &=& [ 2 \xi p- \bar z p_D ]^2+i\epsilon = \bar z ^2 M_D^2-\bar z (Q^2+M_D^2)+i\epsilon\,= -\bar z (Q^2+z M_D^2),\end{aligned}$$ and the traces are : $$\begin{aligned}
Tr_a &=& Tr_a^5 = 2\frac{Q^2+M_D^2}{\xi Q} \left[M_D^2-2M_Dm_c+(Q^2+M_D^2)\frac{x-\xi}{2\xi}\right] \,,\nonumber\\
Tr_b &=& Tr_b^5 = - 2\bar z Q^2 \frac{Q^2+M_D^2}{\xi Q}\,.
\end{aligned}$$
The quark contribution to the amplitude is a convolution of chiral-even GPDs $H^d(x,\xi,t), \tilde H^d(x,\xi,t), E^d(x,\xi,t)$ and $\tilde E^d(x,\xi,t)$ and reads: $$\begin{aligned}
T_{L}^q &=& \frac{- iC_q}{2Q} \bar{N}(p_{2}) \left[ {\mathcal{H}}_{L} \hat n - \tilde {\mathcal{H}}_{L} \hat n \gamma^5
+ {\mathcal E}_{L} \frac{i\sigma^{n\Delta}}{2m_N} - \tilde {\mathcal E}_{L} \frac{\gamma ^5 \Delta.n}{2m_N}\right] N(p_{1}),
\end{aligned}$$ with the chiral-even form factors defined by a somewhat more intricate formula than Eq. (\[TFF\]): $$\begin{aligned}
{\cal F }_L=f_{D}\int \frac{\phi_D(z)dz}{\bar z}\hspace{-.1cm}\int dx \frac{F^d(x,\xi,t)}{x-\xi +\alpha \bar z+i\epsilon} \left[\frac{x-\xi +\gamma \xi }{x-\xi+\beta \xi+i\epsilon}+\frac{Q^2}{Q^2+zM_D^2}\right],
\label{LFF}
\end{aligned}$$ with $\gamma = \frac {2M_D(M_D-2m_c)}{Q^2+M_D^2}$, for any chiral even $d-$quark GPD in the nucleon $F^d(x,\xi,t)$.
The gluonic contribution.
-------------------------
The six Feynman diagrams of Fig. 2 contribute to the coefficient function when there is no charge exchange. Note that, contrary to the case of electroproduction of light pseudoscalar mesons, there is no $C-$parity argument to cancel the gluon contribution for the neutrino production of a pseudoscalar charmed meson. The last three diagrams correspond to the first three with the substitution $x\leftrightarrow -x$, and an overall minus sign for the axial case. The contribution of diagrams (a) and (d) to the hard amplitude does not involve any heavy quark propagator and the mass term in the meson DA does not contribute. The contribution of diagrams (b) and (e) to the hard amplitude involves two heavy quark propagators and they thus have a part proportional to $m_c^2$ as well as a part proportional to $m_c M_D$. The contribution of diagrams (c) and (f) involves one heavy quark propagator and they thus have a part proportional to $m_c M_D$. The transversity gluon GPDs do not contribute to the longitudinal amplitude since there is no way to flip the helicity by two units when producing a (pseudo)scalar meson. This will not be the case for the production of a vector meson $D^*$.
The symmetric and antisymmetric hard amplitudes read: $$\begin{aligned}
g_\perp^{ij} {\cal M}_H^S &=& \left\{ \frac{Tr_a^S}{D_1 D_2} + \frac{Tr_b^S}{D_3 D_4} + \frac{Tr_c^S}{D_4 D_5}\right\} + \left\{x\rightarrow -x\right\} \\
i \epsilon_\perp^{ij}{\cal M}_H^{A} &=& \left\{ \frac{Tr_a^A}{D_1 D_2} + \frac{Tr_b^A}{D_3 D_4} + \frac{Tr_c^A}{D_4 D_5} \right\}- \left\{x\rightarrow -x\right\}\end{aligned}$$ where the traces are: $$\begin{aligned}
Tr_a^S &=& \frac{2 \bar z }{Q} g_T^{ij}\left[z M_D^4 +Q^4+Q^2M_D^2(1+ z) - \frac{x+\xi}{2\xi} Q^2 (Q^2+M_D^2)\right]\,,\\
Tr_a^A &=& \frac{2i \bar z \epsilon^{npij}}{Q} \left[z M_D^4 +Q^2M_D^2(1+\bar z) + \frac{x-\xi}{2\xi} Q^2 (Q^2+M_D^2)\right]\,,\\
Tr_b^S &=& \frac{2(Q^2+M_D^2) }{Q} g_T^{ij}\left[- \frac{x+\xi}{2\xi} m_cM_D - z \frac{x-\xi}{2\xi}Q^2 +m_c^2+z \bar z M_D^2 \right]\,,\\
Tr_b^A &=& \frac{2i \epsilon^{npij}}{Q} \left[-z\frac{x-\xi}{2\xi} Q^4 +Q^2M_D^2(1+z) (-z- \frac{x-\xi}{2\xi})+ M_D^4(z^2-z-1- \frac{x+\xi}{2\xi})\right.\nonumber \\
&+& \left. (M_D^2-m_c^2) (M_D^2-Q^2) +M_D(M_D+m_c)(Q^2+M_D^2)\frac{x+\xi}{2\xi} \right]\,,\\
Tr_c^S &=& -\frac{Q^2+M_D^2}{\xi Q}g_T^{ij} \left[ (Q^2+M_D^2) \frac{x^2-\xi^2}{2\xi} +2 z M_D^2(\xi z+x) -M_D(m_c+M_D)(x-\xi+2\xi z) \right] \,,\\
Tr_c^A &=& \frac{2i \epsilon^{npij}}{Q} \left[(z^2-1)Q^2M_D^2 +(\bar z M_D^2-(Q^2+M_D^2)\frac{x+\xi}{2\xi})((1+ z) M_D^2+(Q^2+M_D^2)\frac{x-\xi}{2\xi})\right.\nonumber\\
&+& \left.M_D(m_c+M_D)[(Q^2+M_D^2)\frac{x+\xi}{2\xi} +\bar z(Q^2-M_D^2)]\right]\,,\end{aligned}$$ and the denominators read $$\begin{aligned}
D_1 &=& \bar z [ -z M_D^2 -Q^2 +i \varepsilon] \,,\\
D_2 &=& \bar z [ \bar z M_D^2 +\frac{x-\xi}{2\xi}(Q^2+M_D^2) +i \varepsilon] = \bar z \frac{Q^2+M_D^2}{2\xi} (x-\xi+\alpha\bar z +i\epsilon)\,,\\
D_3 &=& -z Q^2-z \bar z M_D^2- m_c^2 +i \varepsilon = -z(Q^2+M_D^2) +z^2M_D^2-m_c^2 +i\epsilon \,,\\
D_4 &=& z^2 M_D^2 -m_c^2+\frac{z(x-\xi)}{2\xi}(Q^2+M_D^2) +i \varepsilon\,,\\
D_5 &=& \bar z [ \bar z M_D^2 -\frac{x+\xi}{2\xi}(Q^2+M_D^2) +i \varepsilon] = \bar z \frac{Q^2+M_D^2}{2\xi} (-x-\xi+\alpha\bar z +i\epsilon)\,.\end{aligned}$$ The gluonic contribution to the amplitude thus reads: $$\begin{aligned}
T_L^g &=& \frac{ i C_g}{2} \int_{-1}^{1}dx \frac{- 1}{(x+\xi-i\epsilon)(x-\xi+i\epsilon)} \int_0^1 dz f_D \phi_D(z) \cdot \nonumber \\
&& \left[ \bar{N}(p_{2})[H^g\hat n +E^g\frac{i\sigma^{n\Delta}}{2m} ]N(p_{1}) {\cal M}_H^S+\bar{N}(p_{2})[{\tilde H}^g \hat n \gamma^5+{\tilde E}^g\frac{\gamma^5n.\Delta}{2m} ]N(p_{1}) {\cal M}_H^{A} \right] \, \\
&\equiv& \frac{- i C_g}{2Q} \bar{N}(p_{2}) \left[ {\cal H}^g\hat n +{\cal E}^g\frac{i\sigma^{n\Delta}}{2m} +{\tilde {\cal H}}^g \hat n \gamma^5+{\tilde {\cal E}}^g\frac{\gamma^5n.\Delta}{2m} \right] N(p_{1}) \,, \end{aligned}$$ where the last line defines the gluonic form factors ${\cal H}^g$, $\tilde {\cal H}^g$, ${\cal E}^g$, $\tilde {\cal E}^g$ and $C_g= T_f\frac{\pi}{3} \alpha_s V_{dc}$ with $T_f=\frac{1}{2}$ and the factor $\frac{- 1}{(x+\xi-i\epsilon)(x-\xi+i\epsilon)}$ comes from the conversion of the strength tensor to the gluon field. Note that there is no singularity in the integral over $z$ if the DA vanishes like $z \bar z$ at the limits of integration.
![The $Q^2$ dependence of the $<cos~ \varphi> $ (solid curves) and $<sin~ \varphi> $ (dashed curves) moments normalized by the total cross section, as defined in Eq.\[moments\], for $\Delta_T = 0.5$ GeV, $y=0.7$ and $s=20$ GeV$^2$. The three curves correspond to the three models explained in the text, and quantify the theoretical uncertainty of our estimates.[]{data-label="cosPhi-proton"}](CosPhi_SinPhi_D+_proton_y_07_s_20_DeltaT_05.pdf){width="80.00000%"}
![The same $Q^2$ dependence of the $<cos~ \varphi> $ (solid curves) and $<sin~ \varphi> $ (dashed curves) moments as in Fig. \[cosPhi-proton\] but for a neutron target.[]{data-label="cosPhi-neutron"}](CosPhi_SinPhi_D+_neutron_y_07_s_20_DeltaT_05.pdf){width="80.00000%"}
Observables
===========
The differential cross section for neutrino production of a pseudoscalar charmed meson is written as [@Arens]: $$\begin{aligned}
\label{cs}
\frac{d^4\sigma(\nu N\to l^- N'D)}{dy\, dQ^2\, dt\, d\varphi}
= \bar\Gamma
\Bigl\{ ~\frac{1+ \sqrt{1-\varepsilon^2}}{2} \sigma_{- -}&+&\varepsilon\sigma_{00}+ \sqrt{\varepsilon}(\sqrt{1+\varepsilon}+\sqrt{1-\varepsilon} )(\cos\varphi\
{\rm Re}\sigma_{- 0} + \sin\varphi\
{\rm Im}\sigma_{- 0} )\ \Bigr\},\end{aligned}$$ with $y= \frac{p \cdot q}{p\cdot k}$ , $Q^2 = x_B y (s-m^2)$, $\varepsilon \approx \frac{1-y}{1-y+y^2/2}$ and $$\bar \Gamma = \frac{G_F^2}{(2 \pi)^4} \frac{1}{32y} \frac{1}{\sqrt{ 1+4x_B^2m_N^2/Q^2}}\frac{1}{(s-m_N^2)^2} \frac{Q^2}{1-\epsilon}\,, \nonumber$$ where the “cross-sections” $\sigma_{lm}=\epsilon^{* \mu}_l W_{\mu \nu} \epsilon^\nu_m$ are product of amplitudes for the process $ W(\epsilon_l) N\to D N' $, averaged (summed) over the initial (final) hadron polarizations. Integrating over $\varphi$ yields the differential cross section : $$\begin{aligned}
\label{csphi}
\frac{d\sigma(\nu N\to l^- N'D)}{dy\, dQ^2\, dt}
= 2 \pi \bar\Gamma
\Bigl\{ ~\frac{1+ \sqrt{1-\varepsilon^2}}{2} \sigma_{- -}&+&\varepsilon\sigma_{00} \Bigr\}.\end{aligned}$$
We now calculate from $T_{L} $ and $T_{T} $ the quantities $\sigma_{00}$, $\sigma_{--}$ and $\sigma_{-0}$. The longitudinal cross section $\sigma_{00}$ is straightforwardly obtained by squaring the sum of the amplitudes $T^q_{L}+ T^g_{L}$; at zeroth order in $\Delta_T$, it reads : $$\begin{aligned}
\sigma_{00} = \frac{1} { Q^2}\biggl\{ && [\, |C_q{\mathcal{H}}_{L} + C_g{\mathcal{H}}_{g}|^2 + |C_q\tilde{\mathcal{H}}_{L} - C_g\tilde{\mathcal{H}}_{g}|^2 ] (1-\xi^2) +\frac{\xi^4}{1-\xi^2} [\, |C_q\tilde{\mathcal{E}}_{L}-C_g\tilde{\mathcal{E}}_{g} |^2 + |C_q {\mathcal{E}}_{L}+C_g {\mathcal{E}}_{g} |^2] \nonumber \\
&& -2 \xi^2 {\mathcal R}e [C_q{\mathcal{H}}_{L} + C_g{\mathcal{H}}_{g}] [C_q {\mathcal{E}}^*_{L}+C_g {\mathcal{E}}^*_{g}] -2 \xi^2 {\mathcal R}e [C_q\tilde{\mathcal{H}}_{L} - C_g\tilde{\mathcal{H}}_{g}] [C_q \tilde{\mathcal{E}}^*_{L}-C_g \tilde{\mathcal{E}}^*_{g}] \biggr\} .\, \end{aligned}$$ At zeroth order in $\Delta_T$, $\sigma_{--}$ reads: $$\begin{aligned}
\sigma_{--} = \frac{16\xi^2 C_q^2 (m_c-2M_D)^2}{(Q^2+M_D^2)^2}\biggl\{(1-\xi^2)|{\mathcal{H}_T}|^2 + \frac{\xi^2}{1-\xi^2} | { {\mathcal{E'}}_T}|^2 -2\xi \mathcal{R}e [ \mathcal{H}_T { {\mathcal{E'}}_T^{ *}}]\biggr\} \,, \end{aligned}$$ where we denote ${{\mathcal{E'}}_T}=\xi{\mathcal{E}}_T-{\tilde {\mathcal{E}}_T}$.
![The $Q^2$ dependence of the longitudinal contribution to the cross section $\frac{d\sigma(\nu n \to l^- p D^0)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$.[]{data-label="sigma00_dy_D0"}](sigma_00_dy_D0.pdf){width="80.00000%"}
![The $Q^2$ dependence of the transverse contribution to the cross section $\frac{d\sigma(\nu n \to l^- p D^0)}{dy\, dQ^2\, dt }$ (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$. The band corresponds to the three models explained in the text.[]{data-label="sigma–_dy_nnu_to_plD0"}](sigma_--_dy_D0.pdf){width="80.00000%"}
![ The $Q^2$ dependence of the $<cos~ \varphi> $ (solid curves) and $<sin~ \varphi> $ (dashed curves) moments normalized by the total cross section, as defined in Eq.\[moments\], for the reaction $\nu n \to l^- p D^0$ when $\Delta_T = 0.5$ GeV, $y=0.7$ and $s=20$ GeV$^2$ . The three curves correspond to the three models explained in the text.[]{data-label="CosPhi_SinPhi_D0"}](CosPhi_SinPhi_D0.pdf){width="80.00000%"}
The interference cross section $\sigma_{-0}$ vanishes at zeroth order in $\Delta_T$. The first non-vanishing contribution is linear in $\Delta_T/m$ and reads (with $\lambda = \tau^* = 1+i2$): $$\begin{aligned}
\sigma_{-0} &=&
\frac{ \xi \sqrt 2 C_q}{m} \frac{ 2M_D- m_c}{Q(Q^2+M_D^2)} \,
\biggl\{-i \mathcal{H}_T^{*} [C_q\tilde{\mathcal{E}}_{L}-C_g\tilde{\mathcal{E}}_{g}] \xi \epsilon^{pn\Delta \lambda } + i {{\mathcal{E'}}_T^{*}}\epsilon^{p n \Delta \lambda } [C_q\tilde {\mathcal{H}}_{L}-C_g \tilde {\mathcal{H}}_{g}] \nonumber \\
& + & 2 \mathcal{\tilde H}_T^{*} \Delta^\lambda \{C_q{\mathcal{H}}_{L}+C_g{\mathcal{H}}_{g} -\frac{\xi^2}{1-\xi^2} [C_q{\mathcal{E}}_{L}+C_g{\mathcal{E}}_{g}]\} + \mathcal{E}_T ^{*} \Delta^\lambda \{(1-\xi^2) [C_q{\mathcal{H}}_{L}+C_g{\mathcal{H}}_{g}] -\xi^2 [C_q{\mathcal{E}}_{L}+C_g{\mathcal{E}}_{g}]\} \nonumber \\
&-& \mathcal{H}_T^{*} \Delta^\lambda [C_q{\mathcal{E}}_{L}+C_g{\mathcal{E}}_{g}] + {{\mathcal{E'}}_T^{*}} \Delta^\lambda \xi [C_q{\mathcal{H}}_{L}+C_g{\mathcal{H}}_{g} +C_q{\mathcal{E}}_{L}+C_g{\mathcal{E}}_{g} ] \biggr\}. \end{aligned}$$
As discussed in [@PS], the transversity GPDs are best accessed through the moments $<\cos \varphi>$ and $<\sin \varphi>$ defined as
$$\begin{aligned}
<\cos \varphi>&=&\frac{\int \cos \varphi ~d\varphi ~d^4\sigma}{\int d\varphi ~d^4\sigma} \approx K_\varepsilon\, \frac{{\cal R}e \sigma_{- 0}}{\sigma_{0 0}+K_\varepsilon^2 \sigma_{--}} \,, \nonumber \\
<\sin \varphi>&=&\frac{\int \sin \varphi ~d\varphi ~d^4\sigma}{\int d\varphi ~d^4\sigma}\approx K_\varepsilon \frac{{\cal I}m \sigma_{- 0}}{\sigma_{0 0}+K_\varepsilon^2 \sigma_{--}} \,,
\label{moments}
\end{aligned}$$
with $K_\varepsilon =\frac{\sqrt{1+\varepsilon}+\sqrt{1-\varepsilon}}{2 \sqrt{\varepsilon}}$.
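A corresponding sketch for the moments, again with the reduced cross sections treated as given (placeholder) inputs:

```python
import math

def K_eps(eps):
    """Kinematic factor K_epsilon relating sigma_-0 to the azimuthal moments."""
    return (math.sqrt(1 + eps) + math.sqrt(1 - eps)) / (2 * math.sqrt(eps))

def azimuthal_moments(eps, sigma_00, sigma_mm, sigma_m0):
    """<cos phi> and <sin phi> of Eq. (moments); sigma_m0 is complex."""
    K = K_eps(eps)
    denom = sigma_00 + K ** 2 * sigma_mm
    return K * sigma_m0.real / denom, K * sigma_m0.imag / denom

# illustrative placeholder inputs
print(azimuthal_moments(0.55, sigma_00=1.0, sigma_mm=0.1, sigma_m0=0.02 + 0.01j))
```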
$\nu N \to l^- D^+ N$
---------------------
Let us now estimate various cross sections which may be accessed with a neutrino beam on a nucleus. Firstly, $$\begin{aligned}
\nu_l (k)p(p_1) &\to& l^- (k')D^+ (p_D)p'(p_2) \,,\nonumber\\
\nu_l (k)n(p_1) &\to& l^- (k')D^+ (p_D)n'(p_2) \,, \nonumber\end{aligned}$$ allow both quark and gluon GPDs to contribute. Neglecting the strange content of nucleons leads to selecting $d$ quarks in the nucleon, thus accessing the $d$ (resp. $u$) quark GPDs in the proton for the scattering on a proton (resp. neutron) target, after using the isospin relation between the proton and the neutron. The transverse contribution is plotted in Fig. \[sigma–\_dy\_D+\_on\_proton\_and\_neutron\] as a function of $Q^2$ for $y=0.7$ and $\Delta_T=0$. The dependence on $y$ is shown on Fig. \[sigma–\_dy\_D+\_on\_proton\_and\_neutron\_Q2\_1\_y\] for $Q^2=1$ GeV$^2$ and $\Delta_T=0$. The cross section is reasonably flat in $y$ and $Q^2$, so an integration over the regions $0.45 < y <1$ and $0.5 < Q^2 < 3$ GeV$^2$ does not require much care.
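As a hedged illustration of such an integration, a coarse midpoint rule is already adequate for a slowly varying differential cross section; the tabulated $d\sigma/dy\,dQ^2$ used below is a hypothetical flat placeholder, not a model prediction.

```python
def integrate_region(dsig, y_lo=0.45, y_hi=1.0, Q2_lo=0.5, Q2_hi=3.0, n=50):
    """Midpoint-rule integral of a smooth differential cross section dsig(y, Q2)
    over the quoted y and Q^2 region (units follow those of dsig)."""
    dy, dQ2 = (y_hi - y_lo) / n, (Q2_hi - Q2_lo) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            y = y_lo + (i + 0.5) * dy
            Q2 = Q2_lo + (j + 0.5) * dQ2
            total += dsig(y, Q2) * dy * dQ2
    return total

# hypothetical flat cross section of 0.02 pb GeV^-4, just to illustrate the call
print(integrate_region(lambda y, Q2: 0.02))
```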
The longitudinal cross sections dominate the transverse ones, mostly because of the larger values of the chiral-even GPDs, and specifically of the gluonic ones. The relative importance of quark and gluon contributions to the longitudinal cross sections is shown on Fig. \[sigma00\_dy\_D+\_on\_proton\_neutron\_quarks\_gluons\] as a function of $Q^2$ for a specific set of kinematical variables. The $y$ dependence is displayed on Fig. \[sigma00\_dy\_D+\_on\_proton\_neutron\_quarks\_gluons\_y\]. The longitudinal cross section vanishes as $y\to 1$, as is obvious from Eq. (\[cs\]).
Access to the interference term $\sigma_{-0}$ requires a harmonic analysis of the cross section. This gives access to the transversity GPDs in a linear way but requires considering $\Delta_T \ne 0$ kinematics. We show on Fig. \[cosPhi-proton\] the $<\cos\varphi>$ and $<\sin\varphi>$ moments for the proton target and on Fig. \[cosPhi-neutron\] those for the neutron target, for the kinematical point defined by $y=0.7, \Delta_T = 0.5$ GeV and $s=20$ GeV$^2$.
$\nu n \to l^- D^0 p$
---------------------
The reaction $$\begin{aligned}
\nu_l (k)n(p_1) &\to& l^- (k')D^0 (p_D)p(p_2) \,, \nonumber\end{aligned}$$ does not benefit from gluon GPD contributions, but only from the flavor-changing $F_{du} (x, \xi, t) =F_d (x, \xi, t)-F_u(x, \xi, t)$ GPDs ($F$ denotes here any GPD). We show on Fig. \[sigma00\_dy\_D0\] the longitudinal cross section and on Fig. \[sigma–\_dy\_nnu\_to\_plD0\] the transverse one. Notably, this transverse contribution is of the same order of magnitude as the longitudinal one, and it even dominates at large enough $y$. Accessing the chiral-odd transversity GPDs therefore seems feasible in this reaction.
We plot on Fig. \[CosPhi\_SinPhi\_D0\] the $<\cos\varphi>$ and $<\sin\varphi>$ moments defined in Eq. (\[moments\]) for this reaction. These moments are definitely smaller than in the $D^+$ production case.
Antineutrino cross sections: $\bar \nu p \to l^+ \bar D^0 n$ and $\bar \nu N \to l^+ D^- N$
-------------------------------------------------------------------------------------------------
For completeness, let us now present some results for the antineutrino case. Although smaller than the neutrino flux, the antineutrino flux is always sizable, as discussed recently in [@Ren:2017xov].
![The $Q^2$ dependence of the transverse contribution to the cross section $\frac{d\sigma(\bar \nu N \to l^+ N D^-)}{dy\, dQ^2\, dt}$ on a proton (dashed curve) or neutron (solid curve) target (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$. The three curves correspond to the three models explained in the text, and quantify the theoretical uncertainty of our estimates.[]{data-label="anti_sigma–_dy_D-_on_proton_and_neutron"}](anti_sigma_--_dy_D-_on_proton_and_neutron.pdf){width="80.00000%"}
![The $Q^2$ dependence of the longitudinal contribution to the cross section $\frac{d\sigma(\bar \nu N \to l^+ N D^-)}{dy\, dQ^2\, dt}$ on a proton (left panel) or neutron (right panel) target (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$. The total (solid curve) is small with respect to the quark contribution (dashed curve), showing a large cancellation of quark and gluon contributions.[]{data-label="anti_sigma00_dy_D-_on_proton_neutron_quarks_gluons"}](anti_sigma_00_dy_D-_on_proton_neutron_quarks_gluons.pdf){width="80.00000%"}
![The $Q^2$ dependence of the transverse contribution to the cross section $\frac{d\sigma(\bar \nu p \to l^+ n \bar D^0)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$. The three curves correspond to the three models explained in the text, and quantify the theoretical uncertainty of our estimates.[]{data-label="anti_sigma–_D0"}](anti_sigma_--_D0.pdf){width="80.00000%"}
![The $Q^2$ dependence of the longitudinal contribution to the cross section $\frac{d\sigma(\bar \nu p \to l^+ n \bar D^0)}{dy\, dQ^2\, dt}$ (in pb GeV$^{-4}$) for $y=0.7, \Delta_T = 0$ and $s=20$ GeV$^2$.[]{data-label="anti_sigma00_dy_D0"}](anti_sigma_00_dy_D0.pdf){width="80.00000%"}
Going from the neutrino to the antineutrino case essentially leads to a transformation $z \to \bar z$ and $x \to -x$ in the expression of the amplitude. Using the fact that $\phi^{D^-}(\bar z) = \phi^{D^+}( z)$, the amplitudes can be written in terms of the same DA, but taking the GPD as $H(-x, \xi, t), \tilde H(-x, \xi, t), ...$. For obvious reasons, the gluon contributions are the same as for the neutrino case but the quark contributions are quite different since the weak negatively charged current selects $\bar d$-antiquark rather than the $d$-quark contributions. Moreover, there is a relative sign change between the gluon and quark contributions so that the antineutrino cross sections now read (with the same approximations as for the neutrino case): $$\begin{aligned}
\sigma^{\bar \nu}_{00} = \frac{1} { Q^2}\biggl\{ && [\, |C_q{\mathcal{H}}_{L} - C_g{\mathcal{H}}_{g}|^2 + |C_q\tilde{\mathcal{H}}_{L} + C_g\tilde{\mathcal{H}}_{g}|^2 ] (1-\xi^2) +\frac{\xi^4}{1-\xi^2} [\, |C_q\tilde{\mathcal{E}}_{L}+C_g\tilde{\mathcal{E}}_{g} |^2 + |C_q {\mathcal{E}}_{L}-C_g {\mathcal{E}}_{g} |^2] \nonumber \\
&& -2 \xi^2 {\mathcal R}e [C_q{\mathcal{H}}_{L} - C_g{\mathcal{H}}_{g}] [C_q {\mathcal{E}}^*_{L}-C_g {\mathcal{E}}^*_{g}] -2 \xi^2 {\mathcal R}e [C_q\tilde{\mathcal{H}}_{L} + C_g\tilde{\mathcal{H}}_{g}] [C_q \tilde{\mathcal{E}}^*_{L}+C_g \tilde{\mathcal{E}}^*_{g}] \biggr\} ,\, \end{aligned}$$ while $\sigma^{\bar \nu}_{--}$ is identical to the neutrino case: $$\begin{aligned}
\sigma^{\bar \nu}_{--} = \frac{16\xi^2 C_q^2 (m_c-2M_D)^2}{(Q^2+M_D^2)^2}\biggl\{(1-\xi^2)|{\mathcal{H}_T}|^2 + \frac{\xi^2}{1-\xi^2} | { {\mathcal{E'}}_T}|^2 -2\xi \mathcal{R}e [ \mathcal{H}_T { {\mathcal{E'}}_T^{ *}}]\biggr\} \,, \end{aligned}$$ but with the chiral-odd GPDs taken as $H_T(-x, \xi, t), E'_T(-x, \xi, t), ...$ when calculating $\mathcal{H}_T, \mathcal{E'}_T, ...$ We show on Fig. \[anti\_sigma–\_dy\_D-\_on\_proton\_and\_neutron\] the transverse cross section for the production of a $D^-$, which is an order of magnitude larger for the neutron (solid curve and upper band) than for the proton target. On Fig. \[anti\_sigma00\_dy\_D-\_on\_proton\_neutron\_quarks\_gluons\], the longitudinal cross sections for the production of a $D^-$ on a proton and on a neutron show an important partial cancellation of the quark contribution (dashed curve) by the gluon contribution in the total (solid curve) cross section. On Fig. \[anti\_sigma–\_D0\] and Fig. \[anti\_sigma00\_dy\_D0\], we plot the transverse and longitudinal cross sections for the reaction $\bar \nu p \to l^+ \bar D^0 n$, where there is no gluon contribution. The transverse cross section dominates for $Q^2 > 2$ GeV$^2$, making this process the most sensitive probe of the chiral-odd GPDs.
Conclusion
===========
Collinear QCD factorization has allowed us to calculate exclusive neutrino production of $D$-mesons in terms of GPDs. Let us stress that at medium energy this exclusive channel should dominate $D$-meson production. Our study complements previous calculations [@Kopeliovich:2012dr], which were dedicated to the production of light pseudoscalar mesons but omitted gluon contributions.
We have demonstrated that gluon and both chiral-odd and chiral-even quark GPDs contribute in specific ways to the amplitude for different polarization states of the $W$. The $y$-dependence of the cross section allows one to separate the different contributions, and the measurement of the azimuthal dependence, through the moments $<\cos \varphi>$ and $<\sin \varphi>$, singles out the chiral-odd transversity GPD contributions. The flavor dependence, and in particular the difference between $D^+$ and $D^0$ production rates, allows one to test the importance of gluonic contributions. The behaviour of the proton and neutron target cross sections makes it possible to separate the $u$ and $d$ quark contributions.
Experimental data [@data] have already demonstrated the ability to distinguish different channels for charm production in neutrino and antineutrino experiments. The statistics were, however, too low to separate the longitudinal and transverse contributions. Moreover, their analysis was not undertaken in the appropriate modern theoretical framework, in which skewness effects are taken into account. Planned medium- and high-energy neutrino facilities [@NOVA] and experiments such as Miner$\nu$a [@Aliaga:2013uqz] and MINOS+ [@Timmons:2015laq], whose scientific programs are oriented toward the understanding of neutrino oscillations or the discovery of the presently elusive sterile neutrinos, will collect more statistics and will thus allow, without much additional equipment, important progress in the realm of hadronic physics.
#### Acknowledgements. {#acknowledgements. .unnumbered}
We thank Katarzyna Grzelak, Trung Le and Luis Alvarez Ruso for useful discussions and correspondence. This work is partly supported by grant No 2015/17/B/ST2/01838 from the National Science Center in Poland, by the Polish-French collaboration agreements Polonium and COPIN-IN2P3, and by the French grant ANR PARTONS (Grant No. ANR-12-MONU-0008-01).
[99]{}
D. M[ü]{}ller [*et al.*]{}, Fortsch. Phys. [**42**]{}, 101 (1994). X. Ji, Phys. Rev. [**D55**]{}, 7114 (1997); A. V. Radyushkin, Phys. Rev. D [**56**]{}, 5524 (1997). J. C. Collins, L. Frankfurt, M. Strikman, Phys. Rev. D [**56**]{}, 2982 (1997).
M. Burkardt, Phys. Rev. D [**62**]{}, 071503 (2000) \[Phys. Rev. D [**66**]{}, 119903 (2002)\]; J. P. Ralston and B. Pire, Phys. Rev. D [**66**]{}, 111501 (2002); M. Diehl and P. Hagler, Eur. Phys. J. C [**44**]{}, 87 (2005). B. Lehmann-Dronke and A. Schafer, Phys. Lett. B [**521**]{} (2001) 55; C. Coriano and M. Guzzi, Phys. Rev. D [**71**]{} (2005) 053002; P. Amore, C. Coriano and M. Guzzi, JHEP [**0502**]{} (2005) 038; A. Psaker, W. Melnitchouk and A. V. Radyushkin, Phys. Rev. D [**75**]{} (2007) 054001. B. Pire and L. Szymanowski, Phys. Rev. Lett. [**115**]{}, 092001 (2015). doi:10.1103/PhysRevLett.115.092001; B. Pire, L. Szymanowski and J. Wagner, EPJ Web Conf. [**112**]{}, 01018 (2016) doi:10.1051/epjconf/201611201018 M. Guidal, Eur. Phys. J. A [**37**]{}, 319 (2008) Erratum: \[Eur. Phys. J. A [**40**]{}, 119 (2009)\]; M. Guidal, H. Moutarde and M. Vanderhaeghen, Rept. Prog. Phys. [**76**]{}, 066202 (2013); B. Berthou [*et al.*]{}, arXiv:1512.06174 \[hep-ph\]; K. Kumericki and D. Mueller, EPJ Web Conf. [**112**]{}, 01012 (2016); K. Kumericki, S. Liuti and H. Moutarde, Eur. Phys. J. A [**52**]{}, no. 6, 157 (2016). N. Ushida [*et al.*]{} \[Fermilab E531 Collaboration\], Phys. Lett. B [**206**]{}, 375 (1988); S. A. Rabinowitz [*et al.*]{}, Phys. Rev. Lett. [**70**]{}, 134 (1993); P. Vilain [*et al.*]{} \[CHARM II Collaboration\], Eur. Phys. J. C [**11**]{}, 19 (1999); P. Astier [*et al.*]{} \[NOMAD Collaboration\], Phys. Lett. B [**486**]{}, 35 (2000); A. Kayis-Topaksu [*et al.*]{}, New J. Phys. [**13**]{}, 093002 (2011) \[arXiv:1107.0613 \[hep-ex\]\].
M. Diehl, Phys. Rept. [**388**]{} (2003) 41. M. Guidal and M. Vanderhaeghen, Phys. Rev. Lett. [**90**]{}, 012001 (2003). doi:10.1103/PhysRevLett.90.012001
E. R. Berger, M. Diehl and B. Pire, Eur. Phys. J. C [**23**]{} (2002) 675. doi:10.1007/s100520200917
A. Szczepaniak, E. M. Henley and S. J. Brodsky, Phys. Lett. B [**243**]{}, 287 (1990); S. Descotes-Genon and C. T. Sachrajda, Nucl. Phys. B [**650**]{}, 356 (2003); V. M. Braun, D. Y. Ivanov and G. P. Korchemsky, Phys. Rev. D [**69**]{}, 034014 (2004); T. Feldmann, B. O. Lange and Y. M. Wang, Phys. Rev. D [**89**]{}, no. 11, 114001 (2014); V. M. Braun and A. Khodjamirian, Phys. Lett. B [**718**]{}, 1014 (2013). T. Kurimoto, H. n. Li and A. I. Sanda, Phys. Rev. D [**65**]{}, 014007 (2002); T. Kurimoto, H. n. Li and A. I. Sanda, Phys. Rev. D [**67**]{}, 054028 (2003). doi:10.1103/PhysRevD.67.054028
M. Diehl, Eur. Phys. J. C [**19**]{}, 485 (2001). M. Diehl [*et. al.*]{} Phys. Rev. D [**59**]{}, 034023 (1999); J. C. Collins [*et. al.*]{}, Phys. Rev. D [**61**]{}, 114015 (2000).
D. Yu. Ivanov [*et al.*]{}, Phys. Lett. B [**550**]{}, 65 (2002); R. Enberg, B. Pire and L. Szymanowski, Eur. Phys. J. C [**47**]{}, 87 (2006); M. E. Beiyad [*et al.*]{}, Phys. Lett. B [**688**]{}, 154 (2010); R. Boussarie, B. Pire, L. Szymanowski and S. Wallon, JHEP [**1702**]{}, 054 (2017) doi:10.1007/JHEP02(2017)054
S. V. Goloskokov and P. Kroll, Eur. Phys. J. C [**42**]{}, 281 (2005); S. V. Goloskokov and P. Kroll, Eur. Phys. J. C [**50**]{}, 829 (2007); S. V. Goloskokov and P. Kroll, Eur. Phys. J. C [**53**]{}, 367 (2008); P. Kroll, H. Moutarde and F. Sabatie, Eur. Phys. J. C [**73**]{}, no. 1, 2278 (2013).
S. V. Goloskokov and P. Kroll, Eur. Phys. J. A [**47**]{} (2011) 112 \[arXiv:1106.4897 \[hep-ph\]\]. B. Pire, K. Semenov-Tian-Shansky, L. Szymanowski and S. Wallon, Eur. Phys. J. A [**50**]{} (2014) 90; X. Xiong and J. H. Zhang, Phys. Rev. D [**92**]{}, no. 5, 054037 (2015).
M. Gockeler [*et al.*]{} \[QCDSF and UKQCD Collaborations\], Phys. Rev. Lett. [**98**]{}, 222001 (2007).
J. C. Collins, Phys. Rev. D [**58**]{}, 094002 (1998).
see for instance T. Arens, O. Nachtmann, M. Diehl and P. V. Landshoff, Z. Phys. C [**74**]{}, 651 (1997). B. Z. Kopeliovich, I. Schmidt and M. Siddikov, Phys. Rev. D [**86**]{}, 113018 (2012) and D [**89**]{}, 053001 (2014); G. R. Goldstein, O. G. Hernandez, S. Liuti and T. McAskill, AIP Conf. Proc. [**1222**]{} (2010) 248; M. Siddikov and I. Schmidt, arXiv:1611.07294 \[hep-ph\].
M. Diehl [*et al.*]{}, Phys. Lett. B [**411**]{} (1997) 193; A. V. Belitsky, D. Mueller and A. Kirchner, Nucl. Phys. B [**629**]{}, 323 (2002). L. Ren [*et al.*]{} \[Minerva Collaboration\], arXiv:1701.04857 \[hep-ex\]. D. S. Ayres [*et al.*]{} \[NOvA Collaboration\], hep-ex/0503053; see also M. L. Mangano, S. I. Alekhin, M. Anselmino [*et al.*]{}, CERN Yellow Report CERN-2004-002, pp.185-257 \[hep-ph/0105155\].
L. Aliaga [*et al.*]{} \[MINERvA Collaboration\], Nucl. Instrum. Meth. A [**743**]{}, 130 (2014) \[arXiv:1305.5199 \[physics.ins-det\]\]; J. Devan [*et al.*]{} \[MINERvA Collaboration\], Phys. Rev. D [**94**]{}, 112007 (2016), \[arXiv:1610.04746 \[hep-ex\]\]; C. L. McGivern [*et al.*]{} \[MINERvA Collaboration\], Phys. Rev. D [**94**]{}, no. 5, 052005 (2016), \[arXiv:1606.07127 \[hep-ex\]\]; C. M. Marshall [*et al.*]{} \[MINERvA Collaboration\], \[arXiv:1611.02224 \[hep-ex\]\].
A. Timmons, Adv. High Energy Phys. [**2016**]{}, 7064960 (2016) \[arXiv:1511.06178 \[hep-ex\]\].
|
---
abstract: 'We provide an $O(\log \log \opt)$-approximation algorithm for the problem of guarding a simple polygon with guards on the perimeter. We first design a polynomial-time algorithm for building $\eps$-nets of size $O\kern-2pt\left(\frac{1}{\eps}\log \log \frac{1}{\eps}\right)$ for the instances of [[Hitting Set]{}]{} associated with our guarding problem. We then apply the technique of Brönnimann and Goodrich to build an approximation algorithm from this $\eps$-net finder. Along with a simple polygon $P$, our algorithm takes as input a finite set of potential guard locations that must include the polygon’s vertices. If a finite set of potential guard locations is not specified, when guards may be placed anywhere on the perimeter, we use a known discretization technique at the cost of making the algorithm’s running time potentially linear in the ratio between the longest and shortest distances between vertices. Our algorithm is the first to improve upon $O(\log\opt)$-approximation algorithms that use generic net finders for set systems of finite VC-dimension.'
author:
- |
James King\
\
\
- |
David Kirkpatrick\
\
\
bibliography:
- 'art\_gallery\_epsilon-nets.bib'
title: 'Improved Approximation for Guarding Simple Galleries from the Perimeter[^1]'
---
Introduction
============
The art gallery problem
-----------------------
In computational geometry, art gallery problems are motivated by the question, “How many security cameras are required to guard an art gallery?” The art gallery is modeled as a connected polygon $P$. A camera, which we will henceforth call a *guard*, is modeled as a point in the polygon, and we say that a guard $g$ *sees* a point $q$ in the polygon if the line segment $\overline{gq}$ is contained in $P$. We call a set $G$ of points a *guarding set* if every point in $P$ is seen by some $g\in G$. Let $V(P)$ denote the vertex set of $P$ and let $\partial P$ denote the boundary of $P$. We assume that $P$ is closed and non-degenerate so that $V(P) \subset \partial P \subset P$.
We consider the minimization problem that asks, given an input polygon $P$ with $n$ vertices, for a minimum guarding set for $P$. Variants of this problem typically differ based on what points in $P$ must be guarded and where guards can be placed, as well as whether $P$ is simple or contains holes. Typically we want to guard either $P$ or $\partial P$, and our set of potential guards is typically $V(P)$ (vertex guards), $\partial P$ (perimeter guards), or $P$ (point guards). For results on art gallery problems not related to minimization problems we direct the reader to O’Rourke’s book [@orourke87], which is available for free online.
The problem was proved to be NP-complete first for polygons with holes by O’Rourke and Supowit [@orourke83]. For guarding simple polygons it was proved to be NP-complete for vertex guards by Lee and Lin [@lee86]; their proof was generalized to work for point guards by Aggarwal [@aggarwal84]. This raises the question of approximability. There are two major hardness results. First, for guarding simple polygons, Eidenbenz [@eidenbenz98] proved that the problem is APX-complete, meaning that we cannot do better than a constant-factor approximation algorithm unless ${\rm P}={\rm NP}$. Subsequently, for guarding polygons with holes, Eidenbenz [@Eidenbenz01] proved that the minimization problem is as hard to approximate as [[Set Cover]{}]{} in general if there is no restriction on the number of holes. It therefore follows from results about the inapproximability of [[Set Cover]{}]{} by Feige [@Feige98] and Raz and Safra [@Raz97] that, for polygons with holes, it is NP-hard to find a guarding set of size $o(\log n)$. These hardness results hold whether we are dealing with vertex guards, perimeter guards, or point guards.
Ghosh [@ghosh87] provided an $O(\log n)$-approximation algorithm for guarding polygons with or without holes with vertex guards. His algorithm decomposes the input polygon into a polynomial number of cells such that each point in a given cell is seen by the same set of vertices. This discretization allows the guarding problem to be treated as an instance of [[Set Cover]{}]{} and solved using general techniques. This will be discussed further in Section \[sec:setcover\]. In fact, applying methods for [[Set Cover]{}]{} developed after Ghosh’s algorithm, it is easy to obtain an approximation factor of $O(\log\opt)$ for vertex guarding simple polygons or $O(\log h \log \opt)$ for vertex guarding a polygon with $h$ holes.
When considering point guards or perimeter guards, discretization is far more complicated since two distinct points will not typically be seen by the same set of potential guards even if they are very close to each other. Deshpande [@deshpande07] obtained an approximation factor of $O(\log\opt)$ for point guards or perimeter guards by developing a sophisticated discretization method that runs in pseudopolynomial time[^2]. Efrat and Har-Peled [@efrat06] provided a randomized algorithm with the same approximation ratio that runs in fully polynomial expected time; their discretization technique involves only considering guards that lie on the points of a very fine grid.
Our contribution is an algorithm for guarding simple polygons, using either vertex guards or perimeter guards. Our algorithm has a guaranteed approximation factor of $O(\log\log\opt)$ and the running time is polynomial in $n$ and the number of potential guard locations. This is the best approximation factor obtained for vertex guards and perimeter guards. If no finite set of guard locations is given, we use the discretization technique of Deshpande and our algorithm is polynomial in $n$ and $\Delta$, where $\Delta$ is the ratio between the longest and shortest distances between vertices.
Guarding problems as instances of [[Hitting Set]{}]{} {#sec:setcover}
-----------------------------------------------------
### Set Cover and Hitting Set
[[Set Cover]{}]{} is a well-studied NP-complete optimization problem. Given a universe $\mathcal{U}$ of elements and a collection $\mathcal{S}$ of subsets of $\mathcal{U}$, [[Set Cover]{}]{} asks for a minimum subset $\mathcal{C}$ of $\mathcal{S}$ such that $\bigcup_{S\in\mathcal{C}}S = \mathcal{U}$. In other words, we want to cover all of the elements in ${\mathcal{U}}$ with the minimum number of sets from $\mathcal{S}$. In general, [[Set Cover]{}]{} is not only difficult to solve exactly (see, e.g., [@Garey79]) but also difficult to approximate: no polynomial-time approximation algorithm can have a $o(\log n)$ approximation factor unless $\mathrm{P=NP}$ [@Raz97]. Conversely, a simple greedy heuristic[^3] [@chvatal79] for [[Set Cover]{}]{} attains an $O(\log n)$ approximation factor. Another problem, [[Hitting Set]{}]{}, asks for a minimum subset $\mathcal{H}$ of $\mathcal{U}$ such that $S \bigcap \mathcal{H} \neq \emptyset$ for any $S\in{\mathcal{S}}$. Any instance of [[Hitting Set]{}]{} can easily be formulated as an instance of [[Set Cover]{}]{} and vice versa.
### Set Systems of Guarding Problems
Guarding problems can naturally be expressed as instances of [[Set Cover]{}]{} or [[Hitting Set]{}]{}. We wish to model an instance of a guarding problem as an instance of [[Hitting Set]{}]{}. The desired set system $(\mathcal{U},\mathcal{S})$ is constructed as follows. $\mathcal{U}$ contains the potential guard locations. For each point $p$ that needs to be guarded, $S_p$ is the set of potential guards that see $p$, and ${\mathcal{S}}= \{S_p \;|\; p \in P \}$.
### $\eps$-Nets
Informally, if we wish to relax the [[Hitting Set]{}]{} problem, we can ask for a subset of $\mathcal{U}$ that hits all *heavy* sets in $\mathcal{S}$. This is the idea behind $\eps$-nets. For a set system $(\mathcal{U},\mathcal{S})$ and an additive weight function $w$, an $\eps$-net is a subset of $\mathcal{U}$ that hits every set in $\mathcal{S}$ having weight at least $\eps\cdot w(\mathcal{U})$.
It is known that set systems of VC-dimension $d$ admit $\eps$-nets of size $O\kern-2pt\left(\frac{d}{\eps}\log \frac{1}{\eps}\right)$ [@blumer89] and that this is asymptotically optimal without further restrictions [@komlos92]. It is also known that set systems associated with the guarding of simple polygons with point guards[^4] have constant VC-dimension [@kalai97; @valtr98]. Thus when guarding simple polygons we can construct $\eps$-nets of size $O\kern-2pt\left(\frac{1}{\eps}\log\frac{1}{\eps}\right)$ using general techniques. In a polygon with $h$ holes the VC-dimension is $O(\log h)$ [@valtr98] and therefore $\eps$-nets of size $O\kern-2pt\left(\frac{1}{\eps}\log\frac{1}{\eps}\log h\right)$ can be constructed.
Using techniques specific to vertex guarding or perimeter guarding a simple polygon, we are able to break through the general $\Theta\kern-2pt\left(\frac{d}{\eps}\log \frac{1}{\eps}\right)$ lower bound to build smaller $\eps$-nets. This result is stated in the following theorem.
\[thm:main\] For the problem of guarding a simple polygon with vertex guards or perimeter guards, we can build $\eps$-nets of size $O\kern-2pt\left(\frac{1}{\eps}\log \log \frac{1}{\eps}\right)$ in polynomial time.
In Section \[sec:quadratic\] we introduce the basic ideas that allow the construction of $\eps$-nets of size $O(1/\eps^2)$. In Section \[sec:hierarchical\] we give a more complicated, hierarchical technique that lets us construct $\eps$-nets of size $O\kern-2pt\left(\frac{1}{\eps}\log \log \frac{1}{\eps}\right)$.
A similar result for a different problem was recently obtained by Aronov [@aronov09], who proved the existence of $\eps$-nets of size $O\kern-2pt\left(\frac{1}{\eps}\log \log \frac{1}{\eps}\right)$ when ${\mathcal{S}}$ is either a set of axis-parallel rectangles in ${\mathbb{R}}^2$ or axis-parallel boxes in ${\mathbb{R}}^3$.
### Approximating [[Hitting Set]{}]{} with $\eps$-Nets
Brönnimann and Goodrich [@Bronnimann95] introduced an algorithm for using a *net finder* (an algorithm for finding $\eps$-nets) to find approximately optimal solutions for the [[Hitting Set]{}]{} problem. Their algorithm gives weights (initially uniform[^5]) to the elements in ${\mathcal{U}}$. The net finder is then used to find an $\eps$-net for $\eps = 1/2c'$, with $c'$ fixed at a constant between 1 and $2\cdot \opt$. If there is a set in ${\mathcal{S}}$ not hit by the $\eps$-net, the algorithm picks such a set and doubles the weight of every element in it. It then repeats, finding a new $\eps$-net given the new weighting. This continues until the algorithm finds an $\eps$-net that hits every set in ${\mathcal{S}}$. If the net finder constructs $\eps$-nets of size $f(1/\eps)$, their main algorithm finds a hitting set of size $f(4\cdot\opt)$.
Previous approximation algorithms achieving guaranteed approximation factors of $\Theta(\log\opt)$ [@deshpande07; @efrat06] have used this technique, along with generic $\eps$-net finders of size $O\kern-2pt\left(\frac{1}{\eps}\log\frac{1}{\eps}\right)$ for set systems of constant VC-dimension. Instead, we use our net finder from Theorem \[thm:main\] to obtain the following corollary, whose proof is given in Section \[sec:main\].
\[cor:main\] Let $P$ be a simple polygon with $n$ vertices and let $G$ be a set of potential guard locations such that $V(P)\subseteq G \subset \partial P$. Let $T\subseteq P$ be the set of points we want to guard. There is a polynomial-time algorithm that outputs a guarding set for $T$ of size $O(\opt \cdot \log\log \opt)$, where $\opt$ is the size of the minimum subset of $G$ that guards $T$.
The Main Algorithm {#sec:main}
==================
Main algorithm.
---------------
Our main algorithm is an application of that presented by Brönnimann and Goodrich [@Bronnimann95]. Their algorithm provides a generic way to turn a *net finder*, an algorithm for finding $\eps$-nets for an instance of [[Hitting Set]{}]{}, into an approximation algorithm. Along with a net finder we also need a *verifier*, which either states correctly that a set $H$ is a hitting set, or returns a set from ${\mathcal{S}}$ that is not hit by $H$.
For the sake of completeness we present the entire algorithm here. $G$ is the set of potential guard locations and $T$ is the set of points that must be guarded. We first assign a weight function $w$ to the set $G$. When the algorithm starts, each element of $G$ has weight 1. The main idea of the algorithm is to repeatedly find an $\eps$-net $H$ and, if $H$ is not a hitting set (i.e., if it does not see everything in $T$), to choose a point $p \in T$ that is not seen by $H$ and double the weight of every guard that sees $p$.
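A compact sketch of this weight-doubling loop is given below; the helpers `find_eps_net` (the net finder of Theorem \[thm:main\]), `unseen_point` (the verifier discussed later in this section) and `sees` are assumed to be supplied and are not specified here.

```python
def bronnimann_goodrich_round(G, T, c_guess, sees, find_eps_net, unseen_point, max_iters):
    """One run of the weight-doubling loop for a fixed guess c_guess at OPT.

    Assumed (hypothetical) helpers: sees(g, p) -> bool, find_eps_net(G, w, eps) -> set
    of guards, and unseen_point(H, T) -> an unguarded point or None.
    Returns a guarding set, or None if max_iters is exceeded (guess too small)."""
    w = {g: 1.0 for g in G}            # initially uniform weights
    eps = 1.0 / (2.0 * c_guess)
    for _ in range(max_iters):
        H = find_eps_net(G, w, eps)    # hits every set of weight >= eps * w(G)
        p = unseen_point(H, T)
        if p is None:
            return H                   # H sees everything in T
        for g in G:                    # double the weight of every guard that sees p
            if sees(g, p):
                w[g] *= 2.0
    return None
```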
### Bounding the number of iterations.
For now assume we know the value of $\opt$ and we set $\eps = \frac{1}{2\cdot\opt}$. We give an upper bound for the number of doubling iterations the algorithm can perform. Each iteration increases the total weight of $G$ by no more than a multiplicative factor of $\left(1+\eps\right)$ (since the guards whose weight we double have at most an $\eps$ proportion of the total weight). Therefore after $k$ iterations the weight has increased to at most $$|G|\cdot\left(1+\eps\right)^k ~\leq~
|G|\cdot\exp\left({\frac{k}{2\cdot\opt}}\right) ~\leq~
|G|\cdot2^{\left({\frac{3k}{4\cdot\opt}}\right)}~.$$ Let ${\mathcal{H}}\subseteq G$ be an optimal hitting set ( guarding set) of size $\opt$. For an element $h\in {\mathcal{H}}$ define $z_h$ as the number of times the weight of $h$ has been doubled. Since ${\mathcal{H}}$ is a hitting set, in each iteration some guard in ${\mathcal{H}}$ has its weight doubled, so we have $$\sum_{h\in {\mathcal{H}}} z_h \geq k$$ and $$\begin{aligned}
w({\mathcal{H}}) &=& \sum_{h\in{\mathcal{H}}} 2^{z_h}\\
&\geq& \opt\cdot 2^{\left(\frac{k}{\opt}\right)}~~~~~\mbox{(since~} 2^x \mbox{~is a convex function).}\end{aligned}$$ We now have $$\opt\cdot 2^{\left(\frac{k}{\opt}\right)} ~\leq~
w({\mathcal{H}}) ~\leq~
w(G) ~\leq~
|G|\cdot2^{\left({\frac{3k}{4\cdot\opt}}\right)}~,$$ which gives us $$k \leq 4\cdot\opt\cdot\log\left(\frac{|G|}{\opt}\right)~.$$ This bound also tells us that the total weight $w(G)$ never exceeds $\frac{|G|^4}{\opt^3}$.
We must now address the fact that the value of $\opt$ is unknown. We maintain a variable $c'$, which is our guess at the value of $\opt$, starting with $c'=1$. If the algorithm runs for more than $4\cdot c'\cdot\log\left(\frac{|G|}{c'}\right)$ iterations without obtaining a guarding set, this implies that there is no guarding set of size $c'$, so we double our guess. When our algorithm eventually obtains a hitting set, we have $\opt\leq c'\leq 2\cdot \opt$. The hitting set obtained is a $\left(\frac{1}{2c'}\right)$-net built by our net finder. Therefore, using the method from Section \[sec:hierarchical\] to build an $\eps$-net of size $O\kern-2pt\left(\frac{1}{\eps} \log\log \frac{1}{\eps}\right)$, we obtain a guarding set of size $O(\opt\cdot\log\log\opt)$.
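The outer loop that maintains the guess $c'$ can then be sketched as follows, reusing `bronnimann_goodrich_round` from the previous sketch and the per-round iteration bound derived above (logarithms base 2).

```python
import math

def guard_polygon(G, T, sees, find_eps_net, unseen_point):
    """Outer loop: double the guess c' until a round succeeds.  The per-round
    iteration cap follows the 4 * c' * log(|G| / c') bound derived above."""
    c = 1
    while True:
        max_iters = int(4 * c * math.log2(max(2.0, len(G) / c))) + 1
        H = bronnimann_goodrich_round(G, T, c, sees, find_eps_net, unseen_point, max_iters)
        if H is not None:
            return H
        c *= 2
```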
### Verification
The main algorithm requires a verification oracle that, given a set $H$ of guards, either states correctly that $H$ guards $T$ or returns a point $p\in T$ that is not seen by $H$. We can use the techniques of Bose [@bose02] to find the visibility polygon of any guard in $H$ efficiently. It will always be the case that $|H|<n$. Finding the union of visibility polygons of guards in $H$ can be done in polynomial time, as can comparing this union with $T$.
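A possible implementation sketch of this verifier uses the `shapely` library for the polygon set operations and a hypothetical helper `visibility_polygon(P, g)` (e.g., implemented with the techniques of [@bose02]); degenerate uncovered sets of zero area would need separate care.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def verify(P, H, visibility_polygon, tol=1e-9):
    """Verifier sketch: P is a shapely Polygon, H a collection of guard points.
    visibility_polygon(P, g) -> shapely Polygon is an assumed (hypothetical) helper.
    Returns (True, None) if the guards in H see all of P, otherwise (False, p)
    for a witness point p not seen by H (up to the area tolerance tol)."""
    covered = unary_union([visibility_polygon(P, g) for g in H])
    uncovered = P.difference(covered)
    if uncovered.is_empty or uncovered.area <= tol:
        return True, None
    return False, uncovered.representative_point()
```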
Building quadratic nets {#sec:quadratic}
=======================
In this section we show how to build an $\eps$-net using $O(1/\eps^2)$ guards. This result is not directly useful to us, but we use this section to perform the geometric legwork, and hopefully provide some intuition, without worrying about the hierarchical decomposition to be described in Section \[sec:hierarchical\]. It should be clear that these $\eps$-nets can be constructed in polynomial time.
Subdividing the Perimeter.
--------------------------
For the construction both of the $\eps$-nets in this section and those in the next section we will subdivide the perimeter into a number of *fragments*. Fragment endpoints will always lie on vertices, but the weight of a guard location may be split between multiple fragments and a fragment may consist of a single vertex.
The key difference between the construction of the $\eps$-nets in this section and those in the next section is the method of fragmentation. In this section, the perimeter will simply be divided into $m = 4/\eps$ fragments, each having weight $\frac{\eps}{4}w(G)$. For our purposes, $1/\eps$ will always be an integer, so $m$ will always be an integer.
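A sketch of this fragmentation step is given below; guard locations are assumed to be listed in clockwise order with their weights, and a guard's weight is split when a cut falls inside it.

```python
def fragment_perimeter(guards, weights, m):
    """Split clockwise-ordered guard locations into m fragments of equal weight w(G)/m.
    Returns a list of fragments, each a list of (guard, weight share) pairs; a guard's
    weight may be split between consecutive fragments when a cut falls inside it."""
    target = sum(weights) / m
    fragments, current, acc = [], [], 0.0
    for g, w in zip(guards, weights):
        # close fragments as long as this guard's remaining weight fills the current one
        while acc + w >= target - 1e-12 and len(fragments) < m - 1:
            take = target - acc
            current.append((g, take))
            fragments.append(current)
            current, acc, w = [], 0.0, w - take
        if w > 0:
            current.append((g, w))
            acc += w
    fragments.append(current)          # the last fragment takes whatever remains
    return fragments
```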
Placing Extremal Guards.
------------------------
For two fragments $A_i$ and $A_j$ we will place guards at *extreme points of visibility*. Those are the first and last points on $A_i$ seen from $A_j$ and the first and last points on $A_j$ seen from $A_i$. For a contiguous fragment we define the first (resp. last) point of the segment according to the natural clockwise ordering on the perimeter. We use $G(A_i,A_j)$ to denote the set of up to 4 extremal guards placed between $A_i$ and $A_j$.
These extreme points of visibility might not lie on vertices. In fact, it is entirely possible that two fragments $A_i$ and $A_j$ see each other even if no vertex of $A_i$ sees $A_j$ and vice versa. If an extreme point of visibility is not a potential guard location, we will simply not place a guard there. Our proofs, in particular the proof of Lemma \[lem:tangent\_helper\], will only require guards on extreme points of visibility that either lie on vertices or on fragment endpoints.
All Pairs Extremal Guarding.
----------------------------
Our aim in this section is to build an $\eps$-net by placing extremal guards for every pair $(A_i,A_j)$ of fragments. We denote this set of guards with $$S_{AP}=\bigcup_{i\neq j}G(A_i,A_j)~.$$ Note that $|S_{AP}| \leq 4{m\choose 2} = O(1/\eps^2)$. Also note that every fragment endpoint is included in $S_{AP}$.
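Assuming a helper `extremal_guards(A_i, A_j)` returning the (at most four) extreme points of visibility that lie on vertices or fragment endpoints, the construction of $S_{AP}$ is then a simple union over fragment pairs:

```python
from itertools import combinations

def all_pairs_extremal_guards(fragments, extremal_guards):
    """S_AP: union over all fragment pairs of the (at most 4) extremal guards G(A_i, A_j).
    extremal_guards(A_i, A_j) is a hypothetical helper returning the extreme points of
    visibility between the two fragments that lie on vertices or fragment endpoints."""
    S_AP = set()
    for A_i, A_j in combinations(fragments, 2):
        S_AP.update(extremal_guards(A_i, A_j))
    return S_AP                        # |S_AP| <= 4 * C(m, 2) = O(1/eps^2)
```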
\[lem:AP\_fragment\_bound\] Any point not guarded by $S_{AP}$ sees at most 4 fragments.
$S_{AP}$ is an $\eps$-net of size $O(1/\eps^2)$.
For the proof of Lemma \[lem:AP\_fragment\_bound\] we need to present additional properties of the fragments that can be seen by a point. For a point $x$, the fragments seen by $x$ are ordered clockwise in the order they appear on the boundary of $P$. We need to consider lines of sight from $x$, and what happens when a transition is made from seeing one fragment $A_i$ to seeing the next fragment $A_j$. There are three possibilities:
1. $j=i+1$ and $x$ sees the guard at the common endpoint of $A_i$ and $A_j$
2. $A_j$ occludes $A_i$, in which case we say that $x$ has a *left tangent* to $A_j$ (see Figure \[fig:left\_tangent\])
3. $A_i$ was occluding $A_j$, in which case we say that $x$ has a *right tangent* to $A_i$ (see Figure \[fig:right\_tangent\]).
We say a fragment $A$ *owns* a point $x$ if $x$ sees $A$ in a sector of size at least $\pi$. We assume any point $x$ is owned by at most one fragment; if $x$ is a fragment endpoint it will itself be a guard, and otherwise if $x$ is owned by two fragments then only those two fragments can see it.
![\[fig:left\_tangent\]The point $x$ has a left tangent to $A_j$.](figures/left_tangent)
![\[fig:right\_tangent\]The point $x$ has a right tangent to $A_i$.](figures/right_tangent)
![\[fig:cyclic\_tangents\]The point $x$ has no tangent to $A_i$, a left tangent to $A_j$, both a left and right tangent to $A_k$, and a right tangent to $A_\ell$. $A_j$ owns $x$.](figures/cyclic_tangents)
\[lem:tangent\_helper\] Let $A_i$, $A_j$, $A_k$ be fragments that are seen by $x$ consecutively in clockwise order. If $x$ has a left tangent to $A_j$, and the combined angle of $A_j$ and $A_k$ at $x$ is no more than $\pi$, then $x$ sees a guard in $G(A_j,A_k)$. Symmetrically, if $x$ has a right tangent to $A_j$, and the combined angle of $A_i$ and $A_j$ at $x$ is no more than $\pi$, then $x$ sees a guard in $G(A_i,A_j)$.
We can assume w.l.o.g. that $x$ has a left tangent to $A_j$ since the proof of the other case is symmetric. There are now two cases we have to deal with, depending on whether $x$ has a right tangent to $A_j$ (case 1) or a left tangent to $A_k$ (case 2). Define $p_L$ and $p_R$ respectively as the first and last points on $A_j$ seen by $x$. Observe that $x$ must see every vertex on the geodesic between $p_L$ and $p_R$. Let $q$ be the first point on $A_j$ seen from $A_k$. In both cases 1 and 2 (see Figures \[fig:case\_1\] and \[fig:case\_2\]), $q$ must be a vertex of the geodesic between $p_L$ and $p_R$. This can be shown by contradiction; if $q$ lies between consecutive vertices of this geodesic then those two consecutive vertices must also be seen from $A_k$, and one of them comes before $q$.
The restriction that the combined angle of $A_j$ and $A_k$ at $x$ is no more than $\pi$ is necessary to ensure that the geodesic of interest from $A_k$ to $A_j$ does not ‘pass behind’ $x$ to see a point on $A_j$ before $p_L$.
It should be emphasized that, since there is a left tangent to $A_j$, $p_L$ will always be a vertex. Also, if $p_R$ is not a vertex it will not be the first point on $A_j$ seen from $A_k$.
![\[fig:case\_1\]Case 1 in the proof of Lemma \[lem:tangent\_helper\]. The point $x$ has a left tangent to $A_j$ and a right tangent to $A_j$.](figures/tangent_visibility_case_1)
![\[fig:case\_2\]Case 2 in the proof of Lemma \[lem:tangent\_helper\]. The point $x$ has a left tangent to $A_j$ and a left tangent to $A_k$.](figures/tangent_visibility_case_2)
The proof of Lemma \[lem:AP\_fragment\_bound\] is now fairly straightforward.
Let $x$ be a point that sees at least 5 fragments. Assume $x$ is not a fragment endpoint, otherwise it is itself a guard in $S_{AP}$. If we have a directed graph whose underlying undirected graph is a cycle, then either we have a directed cycle or we have a vertex with in-degree 2. By the same principle, either some fragment seen by $x$ has no tangent from $x$, or every fragment seen by $x$ has a left tangent from $x$ (or every one has a right tangent, which can be handled symmetrically).
If a fragment seen by $x$ has no tangent from $x$, call such a fragment $A_0$ and let $A_{-2}$, $A_{-1}$, $A_0$, $A_1$, $A_2$ be fragments seen by $x$ in clockwise order. If the combined angle at $x$ of $A_{-2}$ and $A_{-1}$ is more than $\pi$, the combined angle of $A_1$ and $A_2$ is less than $\pi$. So we can apply Lemma \[lem:tangent\_helper\] with one of the two pairs of fragments to show that $x$ is seen by a guard.
If every fragment seen by $x$ has a left tangent from $x$, then we can apply Lemma \[lem:tangent\_helper\] using two consecutive fragments with a combined angle at $x$ of less than $\pi$.
Before we move on we will prove one more helpful lemma.
\[lem:non-tangent\] The number of fragments seen by an unguarded point $x$ that do not have a tangent from $x$ is at most 1.
Assume the contrary and let $A_0$ and $A_i$ be two such fragments. If one such fragment owns $x$, assume it is $A_0$ and call the next two fragments seen by $x$ in the clockwise direction $A_1$ and $A_2$ respectively. By Lemma \[lem:tangent\_helper\], $x$ is seen by a guard in $G(A_1,A_2)$ so we reach a contradiction. If no such fragment owns $x$ then assume w.l.o.g. that, over the fragments seen by $x$ between $A_0$ and $A_i$ going clockwise, the combined angle at $x$ is less than $\pi$ (if this is not true it must be true going counterclockwise). Again, $x$ is seen by a guard in $G(A_1,A_2)$ so we reach a contradiction.
Hierarchical fragmentation {#sec:hierarchical}
==========================
In the last section we showed how a quadratic number of guards (i.e., $O(1/\eps^2)$) could be placed to ensure that any unguarded point sees at most 4 fragments. In this section we discuss how hierarchical fragmentation can be used to reduce the number of guards required to $O\left(\frac{1}{\eps}\log\log\frac{1}{\eps}\right)$. We will use $S_{HF}$ to denote the guarding set constructed in this section. It should be clear that these $\eps$-nets can be constructed in polynomial time.
We can consider the hierarchy as represented by a tree. At the root there is a single fragment representing the entire perimeter of the polygon. This root fragment is broken up into a certain number of child fragments. Fragmentation continues recursively until a specified depth $t$ is reached. We will set $t={\left\lceil \log\log\frac{1}{\eps} \right\rceil}$. The *fragmentation factor* (equivalently, the branching factor of the corresponding tree) is not constant, but rather depends on both $t$ and the level in the hierarchy. The fragmentation factor generally decreases as the level of the tree increases. Specifically, if $b_i$ is the fragmentation factor at the $i^{\rm th}$ step, we have $$b_i = \left\{
\begin{array}{lcl}
2^{2^{t-1}+1}\cdot 4t\cdot 2^{1-t} \cdot \alpha&,&i=1\\
2^{2^{t-i}+1}&,&1<i\leq t~,
\end{array}
\right.$$ where $\alpha \leq 1$ is a term introduced only to deal with an issue arising from ceilings and double exponentials, namely the fact that $2^{2^{{\left\lceil \log\log1/\eps \right\rceil}}}$ is not in $O(1/\eps)$. $\alpha$ is specified in (\[eqn:alpha\]) later in this section.
If $f_i$ is the total number of fragments after the $i^{\rm th}$ fragmentation step, this gives us $$f_i = \left\{
\begin{array}{ccl}
1 &,&i=0\\
4t\cdot 2^{2^t - 2^{t-i} -t +i +1} \cdot \alpha&,&0<i\leq t \\
4t\cdot 2^{2^t} \cdot \alpha&,&i = t~,
\end{array}
\right.$$ since $$\begin{aligned}
f_i &=& \prod_{j=1}^ib_j \\
&=& 4t\cdot 2^{1-t}\cdot \alpha \cdot \prod_{j=1}^i 2^{2^{t-j}+1} \\
&=& 4t\cdot 2^{1-t+\sum_{j=1}^i (2^{t-j}+1)} \cdot \alpha \\
&=& 4t\cdot 2^{2^t - 2^{t-i} -t +i +1} \cdot \alpha~~.\end{aligned}$$
Our algorithm will place guards at all pairs of *sibling fragments*, i.e., fragments having the same parent fragment. For the purposes of this guard placement, the complement of the parent fragment (i.e., the subset of $G$ outside the parent fragment) will be considered a dummy child fragment. That is, it will be considered a child fragment when placing guards, but not when counting the number of child fragments seen from some point $x$ as in the statement of Corollary \[cor:children\] or in the proof of Lemma \[lem:HF\_fragment\_bound\]. To denote the complement of a fragment $A$ we use $\overline{A}$. Considering $\overline{A}$ to be a child of $A$ when placing guards allows us to consider the children of $A$ as if they were fragments with guards placed for all pairs. For example, we can obtain the following corollary from Lemmas \[lem:AP\_fragment\_bound\] and \[lem:non-tangent\].
\[cor:children\] For an unguarded point $x$ and a fragment $A$, the number of child fragments of $A$ seen by $x$ is at most 3, and at most one of these child fragments does not have a tangent from $x$.
The total number of guards placed will be $$|S_{HF}|
~~\leq~~ 4 \sum_{i=1}^t {b_i+1 \choose 2} f_{i-1}
~~\leq~~ 4 \sum_{i=1}^t b_i^2 f_{i-1}~~.$$ If $t\geq 6$ we have $b_i\leq 2^{2^{t-i}+1}$ for all values of $i$. This gives us $$\begin{aligned}
|S_{HF}| &\leq& 4 \alpha \sum_{i=1}^t 2^{2\left(2^{t-i}+1\right)} \cdot 4t\cdot 2^{2^t - 2^{t-i+1} -t +i} \\
&=& 16t \alpha \sum_{i=1}^t 2^{2^t -t +i +2} \\
&=& 16t \alpha\cdot 2^{2^t-t+3} (2^t-1) \\
&<& 16t \alpha\cdot 2^{2^t+3} \\
&=& 128t \alpha\cdot 2^{2^t}~.\end{aligned}$$
Recall that $t = {\left\lceil \log \log \frac 1 \eps \right\rceil}$. We need to define $\alpha$ in a way that ensures $b_1$ is an integer and that ensures the following two equations hold:
$$\label{eqn:HF_num_guards}
|S_{HF}| ~=~ O\kern-2pt\left(\frac{1}{\eps} \log \log \frac{1}{\eps}\right)$$
$$\label{eqn:HF_small_fragments}
\frac{f_t}{4t} ~\geq~ \frac{1}{\eps}~.$$
To satisfy these three criteria, it suffices to set $$\label{eqn:alpha}
\alpha
~=~ \frac{{\left\lceil 2^{2^{t-1}+1} \cdot 4t \cdot 2^{-t} \cdot 2^{\log(1/\eps) - 2^t} \right\rceil}}{ 2^{2^{t-1}+1}\cdot 4t\cdot 2^{-t} }
~=~ \frac{{\left\lceil 4t\cdot 2^{\log(1/\eps)+1-t-2^{t-1}} \right\rceil}}{ 4t\cdot 2^{2^{t-1}+1-t} }~.$$
We must now provide a generalization of Lemma \[lem:AP\_fragment\_bound\] that works with our hierarchical fragmentation.
\[lem:HF\_fragment\_bound\] Any point not guarded by $S_{HF}$ sees at most $4i$ fragments at level $i$.
Applying this with $i=t$ and using (\[eqn:HF\_num\_guards\]) and (\[eqn:HF\_small\_fragments\]), we get
$S_{HF}$ is an $\eps$-net of size $O\kern-2pt\left(\frac{1}{\eps} \log \log \frac{1}{\eps}\right)$.
Let $x$ be a point that does not see any guard in $S_{HF}$. From the tree associated with the hierarchical fragmentation, we consider the subtree of fragments that see $x$. We define a *branching fragment* as a fragment with multiple children seen by $x$ and we claim that at any level there are at most 2 branching fragments. Corollary \[cor:children\] tells us that any fragment has at most 3 children seen by $x$. At level 1 there are at most 4 fragments seen by $x$, so it follows that the number of fragments seen by $x$ at level $i$ is at most $4i$. We must now prove our claim that there are at most 2 branching fragments at any level.
First we note that a branching fragment either has no tangent from $x$ or owns $x$. To see this, consider a fragment $A$ that has a tangent from $x$ and does not own $x$. Assume w.l.o.g. that $x$ has a left tangent to $A$ and call the point of tangency $p_L$. $x$ must then also have a left tangent to the child fragment $A_0$ of $A$ that contains $p_L$. $A_0$ must be the leftmost child fragment of $A$ seen by $x$. If $x$ sees another child fragment $A_1$ of $A$ to the right of $A_0$, then by Lemma \[lem:tangent\_helper\] it is seen by a guard in $G(A_0,A_1)$.
Consider now the following possibilities for a given fragment $A$.
1. $A$ is not seen by $x$. Clearly $x$ cannot see any child fragments of $A$.
2. $A$ does not own $x$, and $x$ has a tangent to $A$. $A$ then has exactly one child fragment that sees $x$, and this fragment is of type (2).
3. $A$ does not own $x$, and $x$ does not have a tangent to $A$. By Corollary \[cor:children\], $x$ can see at most 3 child fragments of $A$. At most one of these children is of type (3) and all others must be of type (1) or (2).
4. $A$ owns $x$ and has no tangents from $x$; equivalently, $\overline{A}$ has two tangents from $x$. If a child of $A$ owns $x$ it must be the only child of $A$ that sees $x$, and this child is also of type (4). Otherwise, $A$ would have a child fragment $A_i$ that is seen by $x$, does not own $x$, and is adjacent to $\overline{A}$. $x$ would then be seen by a guard in $G(A_i,\overline{A})$. Thus $A$ has at most one child that is not of type (1) or (2).
5. $A$ owns $x$ and has two tangents from $x$. Because $\overline{A}$ is, in a sense, a ‘dummy’ child of type (3), $A$ cannot have a real child of type (3) by the proof of Lemma \[lem:non-tangent\]. Further, if $A$ has a child $A_0$ that owns $x$, this child must also be of type (5). Otherwise assume w.l.o.g. that $A_1$, immediately clockwise from $A_0$, has a left tangent from $x$. Then, using $A_2$ to denote the fragment clockwise from $A_1$ ($A_2$ might be $\overline{A}$), $x$ is seen by $G(A_1,A_2)$.
6. $A$ owns $x$ and has exactly one tangent from $x$ (see Figure \[fig:fruitful\_case\_6\]). We consider how $A$ can have multiple children seen by $x$. Assume w.l.o.g. that $\overline{A}$ has a right tangent. If $A_{-1}$ is the child of $A$ seen by $x$ immediately counterclockwise from $\overline{A}$ then $A_{-1}$ must own $x$, otherwise $x$ is seen by $G(A_{-1},\overline{A})$. If $A_1$ is the child of $A$ seen by $x$ immediately clockwise from $\overline{A}$ then $A_1$ cannot have a tangent from $x$ otherwise $x$ would be seen by $G(\overline{A}, A_1)$. If $x$ can see $A_2$, a child of $A$ between $A_1$ and $A_{-1}$, then $x$ must have two tangents to $A_{-1}$ otherwise it would be seen by $G(A_1,A_2)$.
Therefore if $A$ has more than one child seen by $x$, there must be one of type (3) and one of type (5), plus (possibly) a child of type (2).
![\[fig:fruitful\_case\_5\]The only way a fragment of type (5) can have three children seen by $x$.](figures/fruitful_case_5)
![\[fig:fruitful\_case\_6\]The only way a fragment of type (6) can have three children seen by $x$.](figures/fruitful_case_6)
We call a non-root fragment *fruitful* if it or one of its descendants is a branching fragment. Only fragments of type (3-6) can be fruitful. Only fragments of type (6) can have more than one fruitful child, and they can have at most two fruitful children. No non-root fragment can have a child fragment of type (6). Also, if the root has a child fragment of type (6), the root cannot have a child of type (3). Therefore any level has at most 2 fruitful fragments.
We can now state the following:
- Level 1 has at most 4 child fragments that see $x$, at most 2 of which are fruitful.
- A fruitful fragment has at most 3 child fragments that see $x$, at most 1 of which is fruitful.
- A non-fruitful fragment has at most 1 child fragment that sees $x$.
Therefore any level has at most 2 fruitful fragments and the number of fragments at level $i$ that see $x$ is at most $4i$.
Open problems
=============
- We have obtained a $o(\log\opt)$-approximation factor for vertex guards and perimeter guards. Can the same be done for point guards?
- Can we do better than $O(\log\log\opt)$ for perimeter guards? In particular, can we find a constant factor approximation algorithm to match the hardness of approximation result of Eidenbenz [@eidenbenz98]?
- For simple polygons, the set systems associated with point guards have maximum VC-dimension at least 6 and at most 23 [@valtr98]; it is believed that the true value is closer to the lower end of this range, perhaps even 6 [@kalai97]. The upper bound of 23 holds *a fortiori* for set systems associated with perimeter guards but the lower bound of 6 does not. A lower bound of 4 follows from a trivial modification to an example for monotone chains [@king08]; we can increase this bound to 5 without too much difficulty (see Figure \[fig:vc\_dim\_5\]). Can set systems associated with perimeter guards actually have VC-dimension as high as 6? And can the upper bound of 23 be improved? It seems that improving the upper bound would be easier for perimeter guards than for point guards.
![\[fig:vc\_dim\_5\]A polygon with a set $S$ of 5 points on the perimeter. The points in $S=\{1,2,3,4,5\}$ are marked with circles and labeled with large numbers. Each point in $S$ sees all of $S$, and each guard seeing a subset of $S$ of size 3 or 4 is marked with a cross and labeled with small numbers indicating which points in $S$ it sees. Guards seeing the 16 subsets of $S$ of size 0, 1, or 2 are not shown. Adding these is a simple matter of adding nooks with very small angles of visibility, thus we can construct a polygon with 5 points on the perimeter shattered by $2^5$ perimeter guards. Such a polygon can also be obtained via a fairly straightforward modification of the example of Kalai and Matoušek for point guards [@kalai97].](figures/vc_dim_5)
[^1]: Some of these results appeared in preliminary form as *D. Kirkpatrick. Guarding galleries with no nooks. In Proceedings of the 12th Canadian Conference on Computational Geometry (CCCG’00), pages 43–46, 2000.*
[^2]: It is a pseudopolynomial-time algorithm in that its running time may be linear in the ratio between the longest and shortest distances between two vertices.
[^3]: The heuristic repeatedly picks the set that covers the most uncovered elements.
[^4]: This bound also applies *a fortiori* to perimeter guards and vertex guards.
[^5]: Initial weights can be non-uniform but this is not necessary for our purposes.
|